entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---
http://arxiv.org/abs/2409.03580v1 | 20240905143528 | Enhancing Sensitivity in Ge-Based Rare-Event Physics Experiments through Underground Crystal Growth and Detector Fabrication | ["Dongming Mei"] | physics.ins-det | ["physics.ins-det", "hep-ex"] |
[email protected]
Physics Department, University of South Dakota, Vermillion, SD, 57069
§ ABSTRACT
The cosmogenic production of long-lived isotopes such as ^3H,^55Fe, ^60Co, ^65Zn, and ^68Ge poses a significant challenge as a source of background events in Ge-based dark matter (DM) and neutrinoless double-beta decay (0νββ) experiments. In the pursuit of DM, particularly within the largely unexplored parameter space for low-mass DM, new detector technologies are being developed with extremely low-energy thresholds to detect MeV-scale DM. However, isotopes like ^3H, ^55Fe, ^65Zn, and ^68Ge, produced cosmogenically within the detector material, emerge as dominant backgrounds that severely limit sensitivity in these searches. Similarly, efforts to detect 0νββ, especially under a neutrino normal mass hierarchy scenario, require a sensitivity to the effective Majorana mass of ∼1 meV. Achieving this level of sensitivity necessitates stringent suppression of background signals from isotopes such as ^60Co and ^68Ge, which impose critical detection limits. To reach the targeted sensitivity for these next-generation experiments and to unlock their full discovery potential for both low-mass DM and 0νββ, relocating Ge crystal growth and detector fabrication to underground environments is crucial. This approach is the most effective strategy to significantly reduce the production of these long-lived isotopes, thereby ensuring the experimental sensitivity required for groundbreaking discoveries.
Enhancing Sensitivity in Ge-Based Rare-Event Physics Experiments through Underground Crystal Growth and Detector Fabrication
Dongming Mei
September 9, 2024
============================================================================================================================
§ INTRODUCTION
The search for dark matter (DM) and the quest to observe neutrinoless double-beta decay (0νββ) represent two of the most significant challenges in modern physics. Germanium (Ge)-based detectors have emerged as leading tools in these searches due to their excellent energy resolution and the ability to achieve low background levels <cit.>. However, one of the critical limitations to the sensitivity of these detectors is the production of cosmogenic isotopes, such as tritium (^3H), iron-55 (^55Fe), cobalt-60 (^60Co), zinc-65 (^65Zn), and Ge-68 (^68Ge), which are produced when Ge detectors are exposed to cosmic rays during the crystal growth and detector fabrication processes at the Earth's surface<cit.>.
Cosmogenic isotopes present a significant source of background in Ge-based DM experiments, particularly in the search for low-mass DM candidates, where a vast unexplored parameter space exists for masses in the MeV range. These isotopes can mimic the signal of DM interactions by producing low-energy events, thereby setting a fundamental limit on the sensitivity of the detectors <cit.>. This challenge also arises in 0νββ experiments, where the presence of cosmogenic isotopes such as ^60Co and ^68Ge can produce background events that interfere with the detection of the extremely rare 0νββ decay <cit.>.
To achieve the target sensitivity required for these experiments, it is crucial to mitigate the production of cosmogenic isotopes. State-of-the-art Ge-based experiments, such as SuperCDMS <cit.> for DM searches and LEGEND <cit.> for 0νββ decay, must implement effective strategies to address cosmogenic backgrounds. For instance, SuperCDMS <cit.> not only requires the rapid transportation of detectors from the Earth's surface to minimize exposure but also depends on its exceptional ability to distinguish between electronic recoils (e-recoils) and nuclear recoils (n-recoils). This capability significantly reduces background noise from cosmogenically produced events, thereby enhancing the search for DM with masses greater than ∼1 GeV/c^2. However, in the pursuit of low-mass DM using high-voltage detectors, this e/n recoil discrimination capability diminishes, allowing cosmogenic backgrounds to dominate the sensitivity. Conversely, LEGEND <cit.> imposes stringent surface exposure limits, restricting exposure to just 30 days. This approach effectively limits the contribution of cosmogenic backgrounds to only 20% of the total background budget, thereby preserving the experiment's sensitivity to a Majorana effective mass of ∼10 meV. However, for future experiments beyond LEGEND-1000, which aim to probe the Majorana effective mass down to approximately 1 meV, it will be necessary to reduce this cosmogenic background by an additional factor of 30 <cit.>.
If not the only practical solution, the most effective strategy for achieving the required sensitivity in next-generation Ge-based experiments is to relocate Ge crystal growth and detector fabrication to underground environments. Underground facilities are naturally shielded from cosmic rays by the Earth's crust, significantly reducing the rate of cosmogenic activation <cit.>. By minimizing the exposure of Ge detectors to cosmic radiation, it is possible to suppress the production of long-lived isotopes, thereby enhancing the sensitivity of the detectors to both DM interactions and
0νββ decay events <cit.>.
This paper explores the impact of cosmogenic isotope production on the sensitivity of next-generation Ge-based experiments and discusses the necessity and feasibility of underground environments for Ge crystal growth and detector fabrication to ensure the success of these pivotal searches.
§ IMPACT OF COSMOGENIC ISOTOPE PRODUCTION ON THE SENSITIVITY OF NEXT-GENERATION GE-BASED EXPERIMENTS
Weakly Interacting Massive Particles (WIMPs) <cit.> have long been considered a leading candidate for DM. These particles, with masses thought to be comparable to those of heavy nuclei, interact with atomic nuclei via extremely weak and short-range forces. Although WIMPs are expected to collide with atomic nuclei only very rarely, such collisions would impart significant recoil energy to the nuclei, causing them to recoil at velocities several thousand times the speed of sound <cit.>. Numerous experiments have been designed to directly detect the small recoil energies resulting from WIMP-nucleus interactions <cit.>. The LZ experiment <cit.>, currently collecting data at the Sanford Underground Research Facility (SURF), has provided the best experimental sensitivity to date, ruling out a significant portion of the parameter space previously allowed for WIMP detection (as shown in Figure <ref>).
In the past decade, light, MeV-scale DM <cit.> has emerged as an intriguing alternative DM candidate. Despite considerable recent efforts in searching for DM-electron interactions <cit.>, a large portion of the parameter space for MeV-scale DM remains unexplored <cit.>, as shown in Figure <ref>. Detecting MeV-scale DM requires novel detectors with extremely low-energy thresholds (below 100 eV) and minimal background noise. Ge-based experiments are poised to play a crucial role in the search for sub-GeV and MeV-scale DM.
To demonstrate the existence of Majorana neutrinos, numerous 0νββ experiments have been conducted over the past several decades <cit.>. Despite more than 30 years of dedicated research, 0νββ decay has yet to be observed. The most stringent constraint on the decay half-life, currently at ∼10^26 years, has been set by the KamLAND-Zen <cit.> and GERDA experiments, where GERDA is a Ge-based project <cit.>. Looking forward, the planned ton-scale Ge-based experiment, LEGEND-1000, aims to achieve a sensitivity that surpasses 10^28 years for the 0νββ decay half-life <cit.>.
Cosmogenic production of long-lived radioactive isotopes in Ge during crystal growth and detector fabrication on the Earth's surface has a significant impact on the sensitivity of next-generation Ge-based DM and 0νββ experiments. These isotopes, produced through interactions with cosmic rays, set stringent limits on the discovery potential of these experiments, as observed in several major experimental efforts, including SuperCDMS, the Majorana Demonstrator, and GERDA <cit.>.
For instance, the SuperCDMS experiment has detected cosmogenically produced isotopes such as ^3H, ^55Fe, ^65Zn, and ^68Ge in their detectors, with production rates measured at 74±9, 1.5±0.7, 17±5, and 30±18 atoms/kg/day, respectively <cit.>. These findings show reasonable alignment with Monte Carlo simulations, particularly those conducted by Wei et al. <cit.>, which utilized cross-section data from ACTIVIA 1/2 (as detailed in their Table 3). The simulations predict that these isotopes are primarily produced through neutron interactions with Ge isotopes, using a neutron energy spectrum representative of sea-level conditions. The observed production rates are within a factor of 2 to 3 of the simulated values (34.12/52.37, 3.29/4.10, 19.53/44.19, and 10.25/24.65 for ^3H, ^55Fe, ^65Zn, and ^68Ge, respectively, based on ACTIVIA 1/2 cross-section data), which is encouraging given the significant uncertainties in the interaction cross sections.
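For ease of comparison, the measured and simulated production rates quoted above can be summarized as follows (values restated from the text; all in atoms/kg/day):

Isotope | SuperCDMS measured | Simulated (ACTIVIA set 1) | Simulated (ACTIVIA set 2)
^3H | 74 ± 9 | 34.12 | 52.37
^55Fe | 1.5 ± 0.7 | 3.29 | 4.10
^65Zn | 17 ± 5 | 19.53 | 44.19
^68Ge | 30 ± 18 | 10.25 | 24.65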
The production of cosmogenic isotopes, particularly in surface-based facilities, remains a dominant source of background events in DM searches, potentially overwhelming the rare low-mass DM signals that experiments like SuperCDMS seek to detect. Without mitigation, trace-level production of radioisotopes during surface processing of Ge and Si crystals could severely compromise the sensitivity of future DM experiments <cit.>.
Figure <ref> illustrates the sensitivity of low-mass DM searches using a hypothetical highly sensitive Ge-based detector with internal charge amplification, achieving an exceptionally low-energy threshold down to 0.1 eV <cit.>. The formulas used to generate this figure are detailed in Mei et al. <cit.>. As depicted in the figure, background events produced by ^3H significantly limit the sensitivity, even before accounting for contributions from shorter-lived isotopes such as ^55Fe, ^65Zn, and ^68Ge. This highlights the critical need to mitigate cosmogenic production in order to effectively explore low-mass DM.
The most effective strategy to mitigate cosmogenic isotope production is to relocate Ge crystal growth and detector fabrication to underground facilities. This approach significantly reduces exposure to cosmic rays, particularly muon-induced neutron fluxes, which are reduced by more than five orders of magnitude at the 4850-ft level (4.3 km.w.e.) at SURF <cit.>. Figure <ref> illustrates the substantial reduction in muon-induced neutron flux with depth, demonstrating the potential for underground environments to effectively minimize cosmogenic activation.
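For reference, the depth dependence underlying Figure <ref> is commonly described by the two-exponential parameterization of the total muon flux as a function of vertical overburden h (in km.w.e.) from Mei and Hime <cit.>; the coefficients below are quoted from that work and should be treated as approximate:

I_μ(h) = I_1 exp(-h/λ_1) + I_2 exp(-h/λ_2),

with I_1 ≈ 8.6×10^-6 cm^-2 s^-1, I_2 ≈ 0.44×10^-6 cm^-2 s^-1, λ_1 ≈ 0.45 km.w.e., and λ_2 ≈ 0.87 km.w.e. Evaluated at h ≈ 4.3 km.w.e. (the 4850-ft level at SURF), this gives a muon flux of order a few ×10^-9 cm^-2 s^-1, and the muon-induced neutron flux falls correspondingly, consistent with the more-than-five-orders-of-magnitude suppression quoted above.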
Moreover, the projected background from isotopes like ^60Co and ^68Ge constitutes about 20% of the LEGEND-1000 background budget, highlighting the critical need for underground processing to achieve the necessary sensitivity for future large-scale experiments <cit.>. By performing Ge purification, crystal growth, characterization, and detector fabrication entirely within underground laboratories at the 4850-ft level at SURF, the production of cosmogenically induced isotopes can be rendered negligible, thereby allowing the next generation experiments to reach unprecedented levels of sensitivity.
Figure <ref> presents the projected sensitivity of Ge-based experiments in exploring both normal and inverted mass hierarchies of neutrinos <cit.>, taking into account cosmogenic production on the surface. The equations used to derive this figure are detailed in Mei et al. <cit.>. As shown in the figure, the 20% background contribution from surface cosmogenic production is not a significant issue for a ton-scale experiment like LEGEND-1000. However, it does limit the sensitivity of a 100-tonne-scale experiment aimed at achieving a ∼1 meV Majorana effective mass in the normal mass hierarchy. This indicates that a factor of ∼30 reduction in cosmogenic background is necessary to have discovery potential for 0νββ decay experiments if the decay is mediated by the minimum neutrino mass <cit.>.
The importance of underground facilities for reducing cosmogenic backgrounds has been recognized in the 2022 Snowmass reports on direct DM searches and 0νββ decay <cit.>. These reports underscore the necessity of moving key stages of detector development underground to fully exploit the discovery potential of next-generation experiments.
§ THE FEASIBILITY OF UNDERGROUND GE CRYSTAL GROWTH AND DETECTOR FABRICATION
At the 4850-ft level in an underground environment such as SURF, cosmic muons and muon-induced neutrons are reduced by ∼five orders of magnitude (see Figure <ref>) compared to the Earth's surface, making cosmogenic production at this depth negligible. The long-lived isotopes produced at the Earth's surface in Ge ingots will be largely removed during the Ge purification process through zone refining. To fully realize the potential of this environment, it is necessary to establish a comprehensive underground facility that includes Ge purification, crystal growth, mechanical processing and characterization, and detector fabrication.
Ge is a rare element in the Earth’s crust, with an estimated abundance of approximately 7 parts per million (ppm). Ge is primarily produced as a byproduct of zinc ore processing and through the extraction from coal fly ash. Natural germanium comprises five isotopes: ^70Ge (20.52%), ^72Ge (27.45%), ^73Ge (7.76%), ^74Ge (36.52%), and ^76Ge (7.75%).
As a semiconductor material, Ge has a wide range of industrial applications, including electronics, fiber optic systems, infrared optics, polymerization catalysts, and solar technologies. However, commercially available Ge ingots typically have purity levels ranging from 99.99% to 99.9999%. Even the highest purity level commercially available is insufficient for growing Ge crystals intended for detector production.
Therefore, commercial ingots must undergo further purification to reach an ultra-high purity level of 99.999,999,999,9% to grow a Ge crystal. Only then does a portion of this crystal have the potential to achieve the extreme purity level required to fabricate a Ge detector, with an impurity level of 99.999,999,999,99%.
Ge is purified using a process called zone refining (Figure <ref>), which involves creating a melting zone that travels along the length of the Ge ingot. This process, first developed at Bell Labs in 1954, works by segregating impurities at the boundary between the liquid and solid phases within the melting zone. As the melting zone moves, impurities remain in the liquid phase and are transported to the end of the ingot. After multiple passes, the impurities become concentrated at the far end of the ingot, which is then removed to increase the purity of the remaining Ge.
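To make the segregation picture quantitative, the impurity distribution after a single zone pass is often approximated by the textbook Pfann relation (this formula is not given in the original text and is included here only as a standard illustration):

C(x) = C_0 [1 - (1 - k) exp(-k x/l)],

where C_0 is the initial uniform impurity concentration, k is the effective segregation coefficient of the impurity (k < 1 for impurities that prefer to remain in the melt), x is the distance from the end where the molten zone starts, and l is the length of the molten zone. Because a single pass only redistributes impurities by an amount set by k, many passes are needed before the profile approaches its limiting distribution, which is why the multi-pass procedure described above is essential.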
Through meticulous quality control and the optimization of various parameters, the zone refining process can purify Ge ingots from 99.99% to an extraordinary 99.999,999,999,9%, achieving an eight orders of magnitude reduction in impurities. This process effectively removes cosmogenically produced long-lived isotopes from the Earth's surface, making the purified Ge ingots ideal for Ge crystal growth. After 15 years of research and development (R&D) at the surface lab <cit.>, the University of South Dakota (USD) has established a standardized procedure that consistently achieves a high yield (∼80%) of purified Ge ingots that meet the stringent requirements for crystal growth. This R&D program positions USD as a significant contributor to the global production of qualified Ge crystals, alongside a handful of commercial companies. As a research institution, USD is uniquely positioned to transfer this technology to underground Ge production, paving the way for the next generation of Ge-based experiments.
Large-size Ge crystals are grown using the Czochralski technique, which was developed in the 1930s. Crystal growth is a highly intricate process involving heat, momentum, and mass transport phenomena, along with chemical reactions (e.g., contamination of crystal and melt) and electromagnetic processes (e.g., induction and resistance heating, magnetic stirring, and magnetic brakes). Consequently, the crystal growth process is dynamic, involving phase transformations from liquid to solid. The interface control between the liquid and solid phases occurs on the nanometer scale, while the growth system itself spans approximately a meter in size. This complexity requires the optimization of numerous parameters (10 or more), each with its own set of constraints. As a result, the dynamic nature of crystal growth is challenging to control.
The growth rate and quality of high-purity Ge crystals largely depend on the precise control of the thermal field (heat transfer and temperature profile). However, these control parameters can only be regulated externally, through the geometry of the growth system, gas flow rate and pressure, pulling rate, frequency, and the power of the RF heater. Measurements inside the growth chamber, where temperatures exceed 1000 ^∘C, and the quantitative determination of control parameters are technically challenging.
After 15 years of R&D at the surface lab <cit.>, USD has invented a growth method (patent number 10,125,431) that consistently facilitates the production of detector-grade crystals (99.999,999,999,999,9% purity) on a regular basis (Figure <ref>). This advancement paves the way for growing Ge crystals underground.
Since 2009, the USD group has published research <cit.> demonstrating improvements in the quality of large-size Ge crystals, highlighting our ability to control the parameters necessary for the growth of low-dislocation (3,000–7,000 etch pits/cm^2), large-diameter (∼12 cm), and high-purity Ge single crystals (∼10^10/cm^3) for detector fabrication.
After the purification of Ge ingots, three slices are typically cut along the zone-refined ingots to measure the impurity distribution using the Hall Effect method. This method determines both the impurity level and charge mobility. If the impurity level (<2×10^11/cm^3) and mobility (>3×10^4 cm^2/Vs) meet the requirements for crystal growth, that portion of the zone-refined ingot is selected for Ge crystal growth. At USD, the usable portion of the zone-refined ingots for crystal growth usually ranges between 70% and 80% of the entire ingot.
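For context, the quantities quoted above follow from two standard semiconductor-characterization relations (general textbook formulas, not specific to the USD setup): for a single dominant carrier type, the net carrier (impurity) concentration is n = 1/(e|R_H|) and the Hall mobility is μ_H = |R_H|/ρ, where R_H is the measured Hall coefficient, ρ is the resistivity, and e is the elementary charge. A van der Pauw-style measurement of ρ and R_H on each slice therefore yields both the impurity level and the mobility used in the acceptance criteria.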
Additionally, when Ge crystals are grown, slices are cut from various parts of the crystal, including the neck, shoulder, middle, and end, to measure impurity distribution using a Hall Effect system. If a portion of the grown crystal meets the necessary criteria—(1) an impurity level between 5×10^9/cm^3 and 3×10^10/cm^3, (2) mobility greater than 45,000 cm^2/Vs, and (3) a dislocation density between 100 and 7,000 etch-pits/cm^2—this portion is deemed suitable for making a Ge detector. The crystal is then mechanically processed into the desired geometry for making electrical contacts, necessitating a mechanical processing and crystal characterization lab.
Characterizing grown crystals provides crucial feedback for the growth process and helps determine whether the crystals are of sufficient quality for detector fabrication. With 15 years of R&D experience, USD has established a comprehensive characterization program to assess whether crystals meet the qualifications for detector fabrication. The quality of a single-crystalline Ge crystal is determined through X-ray diffraction and dislocation density measurements using an optical microscope. Figure <ref> shows an example of the dislocation density measured via microscopy for a crystal grown at USD.
Commercially available Ge detectors are traditionally fabricated using lithium diffusion for the n+ contact and boron implantation for the p+ contact. The fundamental principle behind this method is to create a charge barrier through a conventional p-n junction. More recently, scientists at Lawrence Berkeley National Laboratory (LBNL) have developed a bi-polar blocking technology that utilizes amorphous Ge and amorphous Si as detector contacts <cit.>. This innovative technology enables the creation of thin contacts (∼600 nm) on Ge, compared to the much thicker Li-diffused contacts (∼1 mm). One significant advantage of thin contacts is their suitability for fabricating segmented Ge detectors. In collaboration with LBNL, USD has developed the capability to produce thin-contact detectors using sputtering technology. After 7 years of R&D, USD has established a robust detector fabrication process and successfully demonstrated thin-contact technology with USD-grown crystals <cit.>. A detailed description of the fabrication processes used to transform single-crystal Ge or Si boules into functional detectors—including crystal alignment, shaping, polishing, and sensor fabrication—can be found in Ref. <cit.>.
§ CONCLUSION
In summary, cosmogenic isotope production is a significant limiting factor for the sensitivity of future Ge-based DM and 0νββ decay experiments. Relocating the critical processes of crystal growth and detector fabrication to underground environments can substantially reduce these backgrounds, thereby enhancing the experimental sensitivity required for groundbreaking discoveries. The pursuit of DM and 0νββ decay detection demands cutting-edge technologies capable of producing the most sensitive detectors, a challenge that requires decades of dedicated research and development.
The United States is well-positioned to advance Ge crystal growth and detector development, building on technology that originated at LBNL. With the guidance and mentorship of LBNL pioneers, USD has successfully developed its own expertise in Ge crystal growth and detector fabrication. Over nearly 15 years of continuous R&D, supported by the Department of Energy, the National Science Foundation, and the state of South Dakota, USD has refined this technology to the point where the production of high-purity Ge crystals and detectors in an underground environment is now feasible. This progress not only builds on the foundational work at LBNL but also positions the United States at the forefront of next-generation experiments in DM detection and 0νββ decay, where ultra-pure Ge crystals and highly sensitive detectors are crucial.
Additionally, institutions within the SuperCDMS collaboration, such as SLAC and Texas A&M, have developed their own detector fabrication capabilities, further bolstering the United States' leading position in producing cutting-edge detectors for frontier science. Complementing these efforts, scientists at South Dakota Mines have pioneered advanced radon reduction techniques to remove radon from the air, ensuring that detectors fabricated underground have contacts free of radon-daughter plate-out. Together, these advancements position the United States to establish a state-of-the-art underground facility for Ge crystal growth and detector development, meeting the stringent sensitivity requirements of next-generation experiments.
SURF in South Dakota is the only underground lab of its kind in the United States, offering unique access to multiple depths, including 4,850 feet below the surface. This versatility in depth makes SURF an ideal location for establishing an underground Ge crystal growth and detector fabrication facility. The deep underground environment provides exceptional shielding from cosmic rays, significantly reducing the production of cosmogenic isotopes and ensuring the high purity Ge crystals and detectors required for next-generation Ge-based experiments in DM detection and 0νββ decay.
In conclusion, the establishment of an underground crystal growth and detector fabrication facility represents a transformative advancement in the production of ultra-high-purity Ge crystals and the development of next-generation detector technologies. By moving these processes underground, the facility effectively mitigates cosmogenic isotope production, a significant source of background noise generated during the manufacturing of Ge detectors at the Earth's surface. Reducing cosmogenic long-lived isotopes is critical for ensuring the sensitivity and performance of detectors used in next-generation Ge-based DM and 0νββ decay experiments. The integration of crystal growth, mechanical processing, and detector fabrication within a single underground facility provides a controlled environment that supports the precise and high-quality production of detector-grade crystals. As the demand for more sensitive and reliable detectors continues to grow, this underground facility will be pivotal in meeting the stringent requirements of future scientific research, establishing itself as a key asset in the global effort to unlock the mysteries of the universe.
§ ACKNOWLEDGEMENT
This work was supported in part by NSF OISE 1743790, NSF PHYS 2310027, DOE DE-SC0024519, DE-SC0004768, and a research center supported by the State of South Dakota.
99
supercdms R. Agnese et al. (SuperCDMS Collaboration), "First Dark Matter Constraints from a SuperCDMS Single-Charge Sensitive Detector", Phys. Rev. Lett. 121, (2018) 051301.
cdex W. Zhao et al. (CDEX Collaboration), "First results
on low-mass WIMPs from the CDEX-1 experiment at the China Jinping underground laboratory", Phys. Rev. D 88, (2013) 052004.
edelweiss E. Armengaud et al. (EDELWEISS Collaboration), "A search for low-mass WIMPs with EDELWEISS-II heat-and-ionization detectors,” Phys. Rev. D86, 051701 (R) (2012).
gerda M. Agostini et al. (GERDA Collaboration), "Production, characterization and operation of ^76Ge enriched BEGe detectors in GERDA", The European Physical Journal C, 75 (2015) 39.
majorana N. Abgrall et al. (Majorana Collaboration), "The MAJORANA DEMONSTRATOR Calibration System,” Nucl. Instru. Meth. A, 872 ( 2017) 16 -22.
agostini_review_2019 M. Agostini, G. Benato, J. A. Detwiler, "Discovery probability of next-generation neutrinoless double-beta decay experiments", Phys. Rev. D, 96 (2017) 053001.
mei_cosmogenic_2009 D.-M. Mei et al., "Cosmogenic production as a background in searching for rare physics processes," Astroparticle Physics, vol. 31, no. 6, pp. 417-423, 2009.
wei2017 W.-Z. Wei, D.-M. Mei, C. Zhang, “Cosmogenic activation of germanium used for tonne-scale rare event search experiments,” arXiv: 1706.05324; Astroparticle Physics, 96 (2017) 24-31.
cebrian2017 S. Cebrian, Cosmogenic activation of materials, International Journal of Modern Physics A 32 (2017) 1743006.
abgrall_majorana_2014 N. Abgrall et al., "The Majorana Demonstrator Neutrinoless Double-Beta Decay Experiment," Advances in High Energy Physics, vol. 2014, Article ID 365432, 2014.
legend N. Abgrall et al. (LEGEND Collaboration), “The Large Enriched Germanium Experiment for Neutrinoless Double Beta Decay (LEGEND),” arXiv:2107.11462.
mei2024 Dongming Mei, Kunming Dong, Austin Warren, Sanjay Bhattarai, "Impact of recent updates to neutrino oscillation parameters on the effective Majorana neutrino mass in neutrinoless double-beta decay", Phys. Rev. D 110, (2024) 015026.
hehn_cosmogenic_2014 Laura Baudis et al., "Cosmogenic activation of xenon and copper", Eur.Phys.J. C75 (2015) no.10, 485.
aalseth_search_2018 C. E. Aalseth et al., "Search for Neutrinoless Double-Beta Decay in ^76Ge with the Majorana Demonstrator," Physical Review Letters, vol. 120, no. 13, pp. 132502, 2018.
smith_wimps_1990 Gerard Jungman et al., "Supersymmetric Dark Matter", Phys.Rept. 267 (1996) 195-373.
lewin_wimp_nucleus_1996 Lewin, J. D., Smith, P. F., "Review of WIMP Nucleus Scattering," Astroparticle Physics, vol. 6, pp. 87-112, 1996.
agnese_supercdms_2018 Agnese, R., et al., "Results from the Super Cryogenic Dark Matter Search Experiment at Soudan," Physical Review Letters, vol. 120, pp. 061802, 2018.
aprile_xenon1t_2018 Aprile, E., et al., "Dark Matter Search Results from a One Ton-Year Exposure of XENON1T," Physical Review Letters, vol. 121, pp. 111302, 2018.
agnese_supercdms_2019 Agnese, R., et al., "Search for low-mass dark matter with CDMSlite using a profile likelihood fit" Phys. Rev. D, vol. 99, pp. 062001, 2019.
abramoff_darkside_2020 P. Agnes et al., "Search for low mass dark matter in DarkSide-50: the bayesian network approach," European Physical Journal. C, vol. 83, 322, 2023.
akerib_lux_2017 Akerib, D. S., et al. (LUX Collaboration), "Limits on spin-dependent WIMP-nucleon cross section obtained from the complete LUX exposure," Physical Review Letters, vol. 118, pp.251302, 2017.
ahmed_cdms_2010 Ahmed, Z., et al., "Results from the Final Exposure of the CDMS II Experiment," Science, vol. 327, pp. 1619-1622, 2010.
deap M. G. Boulay for the DEAP Collaboration, “DEAP-3600 Dark Matter Search at SNOLAB,” J. Phys.
Conf. Ser. 375, 012027 (2012). arXiv:1203.0604.
pandax Mengjiao Xiao et al. (PandaX Collaboration), “First dark matter search results
from the PandaX-I experiment,” Sci. China Phys. Mech. Astron. 57, 2024 (2014). arXiv:1408.5114.
akerib_lz_2020 Akerib, D. S., et al., "Projected Sensitivity of the LUX-ZEPLIN (LZ) Experiment," Physical Review D, vol. 101, pp. 052002, 2020.
essig_dark_2012 Essig, R., et al., "Direct Detection of Sub-GeV Dark Matter," Physical Review D, vol. 85, pp. 076007, 2012.
lee_thermally_2015 C.M. Ho, R.J. Scherrer, “Limits on MeV Dark Matter from the Effective Number of Neutrinos,” Phys. Rev. D 87(2), 023505 (2013). arXiv:1208.4347.
finkbeiner_mev_dm_2016 G. Steigman, “Equivalent Neutrinos, Light WIMPs, and the Chimera of Dark Radiation,” Phys. Rev. D 87(10), 103517 (2013). arXiv:1303.0049.
agnes_darkside_2018 Z. Y. Zhang et al. (CDEX Collaboration), “Constraints on Sub-GeV Dark Matter–Electron Scattering from the CDEX-10 Experiment,” Phys. Rev. Lett. 129, 221301 (2022). arXiv:2206.04128.
aguilar_fermi_2020 D. S. Akerib et al. (LUX Collaboration), “Results of a Search for Sub-GeV Dark Matter Using 2013 LUX Data,” Phys. Rev. Lett. 122, 131301 (2019). arXiv:1811.11241.
abdelhameed_cresst_2019 G. Angloher et al. “Results on MeV-scale dark matter from a gram-scale cryogenic calorimeter operated above ground,” Eur. Phys. J. C. 77, 637 (2017), arXiv:1707.06749.
armengaud_eres_2020 Liron Barak et al., “SENSEI: Direct-Detection Results on Sub-GeV Dark Matter from a New Skipper CCD,” Phys. Rev. Lett. 125, 171802 (2020).
abrams_dmtpc_2019 A. Aguilar-Arevalo et al., “Constraints on Light Dark Matter Particles Interacting with Electrons from DAMIC at SNOLAB,” Phys. Rev. Lett. 123, 181802 (2019).
agnes_darkside_2021 Q. Arnaud et al. (EDELWEISS Collaboration), “First germanium-based constraints on sub-MeV Dark Matter with the EDELWEISS experiment,” Phys. Rev. Lett. 125, 141301 (2020).
aramaki_prospects_2016 D. W. Amaral et al. (SuperCDMS Collaboration), “Constraints on low-mass, relic dark matter candidates from a surface-operated SuperCDMS single-charge sensitive detector,” Phys. Rev. D 102, 091101 (2020).
an_aboveground_2017 E. Aprile et al. (XENON Collaboration) “Light Dark Matter Search with Ionization Signal in XENON1T,” Phys. Rev. Lett. 123, 251801 (2019).
cao_darkside_2021 P. Agnes et al. (The DarkSide Collaboration), “Constraints on Sub-GeV Dark Matter–Electron Scattering from the DarkSide-50 Experiment,” Phys. Rev. Lett. 121, 111303 (2018).
experiment1 J.B. Albert et al. (EXO Collaboration), "Search for Neutrinoless Double-Beta Decay with the Upgraded EXO-200 Detector" Physical Review Letters, 120 (2018) 072701.
experiment2 K. Alfonso et al. (CUORE Collaboration), "Search for Neutrinoless Double-Beta Decay of ^130Te with CUORE-0", Physical Review Letters, 115 (2015) 102502.
experiment3 D. González-Díaz et al. (NEXT Collaboration), "NEXT-White: Backgrounds and Sensitivity", Journal of High Energy Physics, 2020, Issue 1, p. 189.
experiment4 S. I. Alvis et al. (Majorana Collaboration), "Search for Neutrinoless Double-beta Decay in ^76Ge with 26 kg-yr of Exposure from the Majorana Demonstrator", Physical Review Letters, 120 (2019) 132502.
KamLandZen S. Abe et al. (KamLAND-Zen Collaboration), Phys. Rev. Lett. 130, 051801.
gerda_2018 M. Agostini et al. (GERDA Collaboration), "Final Results of GERDA on the Search for Neutrinoless Double-Beta Decay," Physical Review Letters, vol. 125, pp. 252502, 2020.
legend_2020_1 N. Abgrall et al. (LEGEND Collaboration), "The large enriched germanium experiment for neutrinoless double beta decay (LEGEND)",
AIP Conference Proceedings, Volume 1894, Issue 1, id.020027.
supercdms_2014 R. Agnese et al. (SuperCDMS Collaboration), "Search for Low-Mass Weakly Interacting Massive Particles Using Voltage-Assisted Calorimetric Ionization Detection in the SuperCDMS Experiment" Physical Review Letters, vol. 112 (2014) 041302.
majorana_2015 S. I. Alvis et al. (Majorana Collaboration), "Search for Neutrinoless Double-Beta Decay in ^76Ge with 26 kg yr of Exposure from the Majorana Demonstrator," Physical Review C., 100 (2019) 025501.
gerda_2017 M. Agostini et al. (GERDA Collaboration), "Improved Limit on Neutrinoless Double-Beta Decay of ^76Ge from GERDA Phase II," Physical Review Letters, vol. 120, no. 13, pp. 132503, 2017.
supercdms_isotopes_2016 R. Agnese et al. (SuperCDMS Collaboration), "Production Rate Measurement of Tritium and Other Cosmogenic Isotopes in Germanium with CDMSlite," Astroparticle Physics, vol. 104, pp.1-12, 2019.
mei2018 D.-M. Mei et al., "Direct detection of MeV-scale dark matter utilizing germanium internal amplification for the charge created by the ionization of impurities," Eur. Phys. J. C (2018) 78:187.
mei D.-M. Mei and A. Hime, "Muon-induced background studies for underground laboratories," PRD 73, (2006) 053004.
legend_2020 N. Abgrall et al. (LEGEND Collaboration), "LEGEND-1000 Preconceptual Design Report", arXiv:2107.11462.
snowmass_2022 Snowmass 2021, "2021 Snowmass Report: Direct Dark Matter Searches and Neutrinoless Double-Beta Decay," Snowmass White Paper, Snowmass 2021: https://snowmass21.org.
zone1 G. Yang, Jayesh Govani, Hao Mei, Yutong Guan, Guojian Wang, Mianliang Huang and Dongming Mei, “Investigation of influential factors on the purification of zone-refined germanium ingot,” Crystal Research and Technology, V 49, (2014) 269-275.
zone2 G. Yang, G. Wang, W. Xiang, Y. Guan, Y. Sun, D. Mei, B. Gray, Y.-D. Chan, “Radial and axial impurity distribution in high-purity germanium crystals,” Journal of Crystal Growth,352 (1), 43-46 (2012).
zone3 G. Yang, D. Mei, J. Govani, G. Wang, M. Khizar, “Effect of annealing on contact performance and electrical properties of p-type high purity germanium single crystal,” Applied Physics A, DOI 10.1007/s00339-012-7518-x (2013).
growth1 Guojian Wang, Mark Amman, Hao Mei, Dongming Mei, Klaus Irmscher, Yutong Guan, Gang Yang, “Crystal growth and detector performance of large size high-purity Ge crystals,” arXiV: 1505.01827, Material Science in Semiconductor Processing 39 (2015) 54-60.
growth2 G. Wang, Y. Sun, G. Yang, W. Xiang, Y. Guan, D. Mei, C. Keller and Y.-D. Chan, “Development of large size high-purity germanium crystal growth,” Journal of Crystal Growth, 352 (1), 27-30 (2012).
growth3 G. Wang, Y. Guan, H. Mei, D. Mei, G. Yang, J. Govani, M. Khizar, “Dislocation density control in high-purity germanium crystal growth,” Journal of Crystal Growth, 393 (2014) 54-58.
growth4 G. Wang, Y. Sun, Y. Guan, D. Mei, G. Yang, A. Chiller, B. Gray, “Optical Methods in Orientation of High-Purity Germanium Crystal,” Journal of Crystallization Process and Technology, 3, 60-63 (2013).
growth5 S. Bhattarai et al., "Investigating Influential Parameters for High-Purity Germanium Crystal Growth," Crystals, 2024, 14(2), 177; https://doi.org/10.3390/cryst14020177.
mark Mark Amman, "Optimization of Amorphous Germanium Electrical Contacts and Surface Coatings on High Purity Germanium Radiation Detectors", arXiv:1809.03046.
de1 W.-Z. Wei, X.-H. Meng, Y.-Y. Li, J. Liu, G.-J. Wang, H. Mei, G. Gang, D.-M. Mei, C. Zhang, “Investigation of Amorphous Germanium Contact Properties with Planar Detectors Made from Home-Grown Germanium Crystals,” arXiv: 1909.04111. Journal of Instrumentation, Volume 13, December 2018, P012026.
de2 X.-H. Meng, G.-J. Wang, M.-D. Wagner, H. Mei, W.-Z. Wei, J. Liu, G. Yang, D.-M. Mei, “Fabrication and Characterization of High-Purity Germanium Detectors with Amorphous Germanium Contacts,” arXiv: 1810.05662. Journal of Instrumentation, Volume 14, February 2019, P02019.
de3 W.-Z. Wei, H. Mei, K. Kooi, D.-M. Mei, J. Liu, J.-C. Li, R. Panth, G.-J. Wang, “Development of Planar P-Type Point Contact Germanium Detectors for Low-Mass Dark Matter Searches,” arXiv: 2105.02109. Eur. Phys. J. C 82 (2022) 3, 203.
de4 R. Panth, J. Liu, I. Abt, X. Liu, O. Schulz, W.-Z. Wei, H. Mei, D.-M. Mei, G.-J. Wang, “Characterization of High-Purity Germanium Detectors with Amorphous Germanium Contacts in Cryogenic Liquids,” Eur. Phys. J. C. V 80 (2020) 667, arXiv: 2003.13792.
de5 W.-Z. Wei, R. Panth, J. Liu, H. Mei, D.-M. Mei, G.-J. Wang, “The impact of the charge barrier height on germanium (Ge) detectors with amorphous-Ge contacts for light dark matter searches,” arXiv: 2002.04462. Eur. Phys. J C 80 (2020) 472.
de6 S. Bhattarai, R. Panth, W.-Z. Wei, H. Mei, D.-M. Mei, M.-S. Raut, P. Acharya, and G.-J. Wang, “Investigation of the electrical conduction mechanisms in p-type amorphous (a-Ge) Used as a-Ge contacts for Ge detectors,” arXiv:2002.07707, Eur. Phys. J. C 80 , Article number 950 (2020).
de7 R. Panth, W.-Z. Wei, D.-M. Mei, J. Liu, S. Bhattarai, H. Mei, M. Raut, P. Acharya, K. Kooi, G.-J. Wang, “Implication of the Temperature-Dependent Charge Barrier Height of Amorphous Germanium Contact Detector in Searching for Rare Event Physics,” arXiv: 2101.09322. NIM A 1035 (2022) 166862.
|
http://arxiv.org/abs/2409.02253v1 | 20240903192613 | How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | ["Saeid Asgari Taghanaki", "Joseph Lambourne", "Alana Mongkhounsavath"] | cs.CV | ["cs.CV"] |
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model?
Saeid Asgari Taghanaki, Joseph Lambourne, Alana Mongkhounsavath
September 9, 2024
========================================================================================
§ ABSTRACT
Large foundation models have revolutionized the field, yet challenges remain in optimizing multi-modal models for specialized visual tasks. We propose a novel, generalizable methodology to identify preferred image distributions for black-box Vision-Language Models (VLMs) by measuring output consistency across varied input prompts. Applying this to different rendering types of 3D objects, we demonstrate its efficacy across various domains requiring precise interpretation of complex structures, with a focus on Computer-Aided Design (CAD) as an exemplar field. We further refine VLM outputs using in-context learning with human feedback, significantly enhancing explanation quality. To address the lack of benchmarks in specialized domains, we introduce CAD-VQA, a new dataset for evaluating VLMs on CAD-related visual question answering tasks. Our evaluation of state-of-the-art VLMs on CAD-VQA establishes baseline performance levels, providing a framework for advancing VLM capabilities in complex visual reasoning tasks across various fields requiring expert-level visual interpretation. We release the dataset and evaluation codes at <https://github.com/asgsaeid/cad_vqa>.
§ INTRODUCTION
Large foundation models have revolutionized the AI landscape, providing unparalleled capabilities across various domains <cit.>. Vision-Language Models (VLMs), a subset of these models, integrate visual and textual information, enabling complex tasks such as image captioning, visual question answering, and multi-modal reasoning <cit.>. Despite their impressive performance, a significant challenge remains: extracting the most useful knowledge from these black-box models.
Prompt engineering has seen extensive research and application in large language models, optimizing inputs to elicit more accurate and relevant responses <cit.>. However, the multi-modal nature of VLMs introduces additional layers of complexity. These models must interpret and integrate information from both visual and textual inputs, and the optimal prompting strategy can vary significantly based on the image distribution <cit.>.
Understanding image view distribution is crucial across various domains. In mechanical design, different views of parts and assemblies enhance comprehension of complex structures, aiding design and analysis. In architecture and construction, multiple perspectives of building designs help assess structural integrity and plan activities. In robotics and autonomous driving, diverse viewpoints improve navigation and object manipulation. In surveillance and security, integrating views from multiple cameras enhances monitoring accuracy. In medical imaging, different views of scans like MRI and CT provide comprehensive insights for diagnosing diseases, requiring models to integrate information from various angles.
In this work, we address the challenge of determining which image distributions lead to better outputs from a black-box VLM. Specifically, we focus on scenarios where multiple views of objects are available, such as renderings of images taken under different conditions <cit.>. Given that we often lack information about the data on which the VLM was trained, and do not have access to the model weights, properties, or gradients, traditional methods for assessing model confidence are not applicable <cit.>.
To overcome this, we propose a novel method to measure the confidence of a VLM without requiring access to its internal parameters. Our approach involves analyzing the outputs produced by the model under different image distributions. By systematically evaluating the model's confidence across various distributions, we can infer the image distributions that the VLM "prefers," leading to more reliable and accurate outputs. Our approach is based on the hypothesis that higher consistency in a VLM's outputs, despite variations in input prompts, indicates higher model confidence. This hypothesis is grounded in the principle that a model with a robust internal representation of the input should produce consistent outputs even when the input is paraphrased. This aligns with recent work on self-consistency in language models <cit.> and relates to the concept of model calibration <cit.>.
We also apply in-context learning with human feedback (ICL-HF) to refine and improve VLM outputs. By incorporating expert knowledge through iterative feedback, we demonstrate enhancements in the quality and accuracy of VLM-generated explanations for complex 3D mechanical parts. This process provides valuable insights into the learning dynamics of VLMs in specialized domains.
Building upon these methods, we present CAD-VQA, a new dataset specifically designed to evaluate VLMs' understanding of 3D mechanical parts in Computer-Aided Design (CAD) contexts. This dataset, comprising carefully curated images, questions, and answers, addresses a gap in the field by providing a benchmark for assessing VLM performance in specialized technical domains.
The main contributions of this work are:
1. A novel method for measuring VLM confidence based on output consistency across different image distributions, without access to internal model parameters.
2. An application of in-context learning with human feedback (ICL-HF) to improve VLM performance in the specialized domain of 3D mechanical part analysis.
3. CAD-VQA: A new dataset for evaluating VLMs on CAD-related visual question answering tasks, addressing the lack of benchmarks in this domain.
4. Evaluation of state-of-the-art VLMs on the CAD-VQA dataset, establishing baseline performance levels for future research.
While we acknowledge that high consistency in model outputs could potentially result from model biases or limitations, rather than true confidence, we believe our approach provides a valuable proxy for assessing the reliability of (black-box) VLM outputs across different image distributions. Moreover, the combination of our consistency measurement technique, application of ICL-HF, and the CAD-VQA dataset offers a comprehensive framework for advancing the capabilities of VLMs in specialized visual reasoning tasks.
§ RELATED WORK
Prompt engineering for large language models has been extensively explored, as demonstrated by Reynolds and McDonell <cit.>, Liu et al. <cit.>, and Radford et al. <cit.>. These studies focus on designing effective prompts to elicit desired responses from language models, thereby enhancing their utility in various applications. Recent works such as Gao et al. <cit.>, Lester et al. <cit.>, Wei et al. <cit.>, and Sanh et al. <cit.> have further expanded on prompt engineering techniques, introducing methods like prompt tuning and instruction-based learning. However, prompt engineering for multi-modal models remains relatively underexplored, particularly in the context of image distributions and their impact on model performance.
Prompt engineering for vision-language models. While much work has been done on prompt engineering for language models <cit.>, the extension to multimodal scenarios presents unique challenges. Cho et al. <cit.> proposed a unified framework for vision-language prompt learning, demonstrating the potential of tailored prompts in improving model performance.
The complexity of evaluating black-box models without access to their internal parameters is a well-known challenge. Tsimpoukelli et al. <cit.> investigate multimodal few-shot learning with frozen language models, addressing the difficulties in adapting pre-trained models to new tasks with limited data. Similarly, Chen et al. <cit.> evaluate large language models trained on code, proposing methods to assess model confidence and performance without direct access to model internals. Our work builds on these foundations by addressing the specific challenge of determining preferred image distributions for VLMs. By focusing on scenarios with multiple views of objects, such as renderings under different conditions, we propose a novel approach to measure model confidence and optimize input data for better outputs. This contribution aims to bridge the gap in the existing literature on prompt engineering and evaluation for multi-modal models. Hendricks et al. <cit.> proposed a probing framework to assess the grounding capabilities of VLMs, highlighting the importance of understanding how these models integrate visual and linguistic information. Similarly, Cao et al. <cit.> investigated the inner workings of VLMs, providing insights into their decision-making processes.
The concept of using consistency as a measure of model performance has gained traction in recent years. Xu et al. <cit.> demonstrated that self-consistency can improve chain-of-thought reasoning in language models, which aligns with our approach of using consistency to assess VLM outputs. In the context of vision-language tasks, Frank et al. <cit.> explored the use of consistency in visual question answering, showing how it can be leveraged to improve model accuracy.
The concept of in-context learning with human feedback, which we employ in our study, draws inspiration from recent advancements in reinforcement learning from human feedback (RLHF) <cit.>. While we don't use reinforcement learning directly, the principle of incorporating human feedback to improve model outputs is similar. This approach aligns with broader trends in interactive and iterative learning paradigms <cit.>, as well as methods for fine-tuning language models with human preferences <cit.>. The integration of expert knowledge through feedback mechanisms has also been explored in various domain-specific applications <cit.>.
§ METHOD
In this work, we address the challenge of determining which image distributions lead to better outputs from a black-box VLM. Specifically, we use GPT-4o <cit.> for our experiments, but it can simply be replaced by any other VLM. GPT-4o is currently known for its state-of-the-art performance in integrating visual and textual information. We focus on scenarios where multiple views of objects are available, such as renderings of images taken under different conditions. Given that we often lack information about the data on which the VLM was trained, and do not have access to the model weights, properties, or gradients, traditional methods for assessing model confidence are not applicable.
Given N image distributions {I_1, I_2, …, I_N}, our goal is to determine which distribution leads to better performance when using a black-box Vision-Language Model (VLM), such as GPT-4o <cit.>, where we do not have access to model weights, gradients, or probabilities. To achieve this, we propose a method to measure the consistency of the VLM's outputs across different image distributions.
The underlying hypothesis of this methodology is that higher consistency in the VLM's outputs, despite variations in the textual prompts, indicates higher model confidence. Model confidence refers to the certainty with which a model produces an output given an input. For a VLM, confidence can be understood as the model's ability to generate consistent and reliable outputs despite variations in the input prompts. A robust model should produce similar outputs when presented with semantically equivalent but syntactically different prompts. This robustness indicates that the model has a stable and reliable understanding of the input image, suggesting higher confidence in its outputs.
Let P be the set of paraphrased prompts and I be an image distribution. For a robust and confident model, the outputs O_I, P should exhibit minimal variance. Formally, for paraphrased prompts P_1, P_2, …, P_C, the outputs O_I, P_1, O_I, P_2, …, O_I, P_C should be similar:
Var(O_I, P_1, O_I, P_2, …, O_I, P_C) ≈ 0
Here, variance (Var) is a measure of inconsistency. Lower variance implies higher consistency, which can be interpreted as higher confidence.
Prompt paraphrasing. Given a textual prompt P that aims to extract information from an image, we generate C=3 different paraphrased commands {P_1, P_2, P_3} using the chat version of GPT-4 and manually verify them. These paraphrased commands are designed to maintain the same semantic meaning while varying the phrasing. The full prompt used for paraphrasing can be found in Appendix <ref>.
After generation, we manually review the paraphrases to ensure they meet our criteria for semantic equivalence and diversity. Any paraphrases that deviate too far from the original meaning or don't provide sufficient variation are replaced with manually crafted alternatives.
Collecting VLM outputs. For each image distribution I_n and each paraphrased command P_c, we collect the VLM's output O_n,c:
O_n,c = VLM(I_n, P_c)
where n ∈{1, 2, …, N} and c ∈{1, 2, …, C}. This results in a set of outputs {O_n,1, O_n,2, …, O_n,C} for each image distribution I_n.
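This collection step can be summarized by the following minimal Python sketch. It is only illustrative: query_vlm and the prompt list are hypothetical placeholders standing in for the GPT-4o API call and the rendering pipeline, which are not specified at this level of detail in the text.

def collect_outputs(image_distributions, prompts, query_vlm):
    """Query the VLM once per (distribution, prompt) pair.

    image_distributions: dict mapping a distribution name (e.g. "A") to the
        rendered images of one part under that distribution.
    prompts: list of C paraphrased prompts with the same semantic intent.
    query_vlm: callable (images, prompt) -> text; wraps the black-box VLM.
    """
    outputs = {}
    for name, images in image_distributions.items():
        # O_{n,c} = VLM(I_n, P_c): one free-text answer per paraphrase.
        outputs[name] = [query_vlm(images, p) for p in prompts]
    return outputs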
§.§ Measuring Consistency
We measure the consistency of the outputs for each image distribution using three different methods:
ROUGE and BLEU Scores. We calculate the ROUGE <cit.> score i.e., ROUGE-1, ROUGE-2, ROUGE-L, and BLEU <cit.> scores for the outputs within each image distribution. Let S_n,c be the score between O_n,c and a reference output. The consistency score C_ROUGE/BLEU, n for image distribution I_n is defined as the average score across all paraphrased commands:
C_ROUGE/BLEU, n = 1/C∑_c=1^C S_n,c
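A possible implementation of this score, using the rouge-score package, is sketched below. Since the text does not state which output serves as the reference, this sketch averages pairwise ROUGE-L F-measures over all ordered output pairs within a distribution, which is one reasonable reading of the definition above; BLEU can be handled analogously with nltk.

from rouge_score import rouge_scorer

def rouge_consistency(outputs):
    """Average pairwise ROUGE-L F-measure of the C outputs for one distribution."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    scores = []
    for i in range(len(outputs)):
        for j in range(len(outputs)):
            if i == j:
                continue
            # Treat output i as the reference and output j as the candidate.
            scores.append(scorer.score(outputs[i], outputs[j])["rougeL"].fmeasure)
    return sum(scores) / len(scores)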
BERT Embedding Cosine Similarity. We embed each output O_n,c using a BERT model <cit.> and calculate the cosine similarity between the embeddings. Let BERT(O_n,c) be the embedding of O_n,c. The consistency score C_BERT, n for image distribution I_n is defined as the average cosine similarity between all pairs of embeddings:
C_BERT, n = 2/C(C-1)∑_i=1^C∑_j=i+1^Ccos(BERT(O_n,i), BERT(O_n,j))
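The BERT-based score can be computed as below with the Hugging Face transformers library. The pooling strategy (mean pooling of bert-base-uncased token embeddings) is an assumption; the text does not specify which BERT variant or pooling is used.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    # Mean-pool the last hidden states over non-padding tokens.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state          # (1, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)           # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, dim)

def bert_consistency(outputs):
    """C_BERT: mean cosine similarity over all pairs of the C outputs."""
    embs = [embed(o) for o in outputs]
    sims = [F.cosine_similarity(embs[i], embs[j]).item()
            for i in range(len(embs)) for j in range(i + 1, len(embs))]
    return sum(sims) / len(sims)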
GPT-based Consistency Judgement. We use GPT-4o to act as a judge and provide a consistency score for the outputs within each image distribution. The detailed prompt for consistency judgment is provided in Appendix <ref>.
GPT-4o then provides a consistency score between 0 and 1, where 0 means completely inconsistent and 1 means perfectly consistent. Let G(O_n,1, O_n,2, …, O_n,C) be the consistency score given by GPT-4o. The consistency score C_GPT, n for image distribution I_n is:
C_GPT, n = G(O_n,1, O_n,2, …, O_n,C)
This approach allows us to leverage GPT-4o's natural language understanding capabilities to assess the semantic consistency of the generated descriptions, providing a more nuanced evaluation than purely statistical methods.
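The judging step could be implemented roughly as follows with the OpenAI Python client; the judging prompt shown is a paraphrase of the idea described above, not the exact prompt from the appendix.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def gpt_consistency(outputs, model="gpt-4o"):
    """Ask the VLM itself to rate how consistent the C descriptions are (0 to 1)."""
    numbered = "\n\n".join(f"Description {i + 1}:\n{o}" for i, o in enumerate(outputs))
    prompt = (
        "The following descriptions were generated from the same image with "
        "paraphrased prompts. Rate their semantic consistency on a scale from 0 "
        "(completely inconsistent) to 1 (perfectly consistent). "
        "Reply with a single number.\n\n" + numbered
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(response.choices[0].message.content.strip())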
Determining Preferred Image Distribution. Finally, we determine the preferred image distribution by comparing the consistency scores across all image distributions. The distribution with the highest average consistency score, considering all measurement methods (ROUGE/BLEU, BERT, and GPT-based), is considered the preferred distribution. This approach allows us to identify which image distribution leads to the most consistent and reliable outputs from the VLM.
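One simple way to combine the three metrics is an unweighted mean per distribution, as sketched below; the text does not specify a particular weighting or normalization, so this aggregation rule is an assumption.

def preferred_distribution(scores):
    """scores: dict mapping distribution name -> dict of metric name -> value.

    Example: {"A": {"rouge": 0.41, "bert": 0.88, "gpt": 0.7}, ...}
    Returns the distribution with the highest mean score across metrics.
    """
    mean_scores = {
        name: sum(metrics.values()) / len(metrics)
        for name, metrics in scores.items()
    }
    return max(mean_scores, key=mean_scores.get)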
§.§ Human Expert Rating and Dataset Creation
While consistency is a key indicator of model confidence, it is not sufficient on its own as the responses could be consistently incorrect. Therefore, we involve a mechanical expert to rate the explanations provided by the VLM for each part in different image distributions. The ratings focus on both accuracy and usefulness of the explanations. The overall expert rating results across all samples and explanations (i.e., 25 samples times 3 explanations for each) are summarized in Table <ref>. The criteria for expert ratings include Relevance, Accuracy, Detail, Fluency, and Overall Quality.
We convert the options for these criteria to numerical values between 1 and 5 to calculate the values in Table <ref>. We also ask the human experts to add comments when necessary to provide additional insights.
The relevance and accuracy were evaluated by first analyzing the congruency between the name and the depicted image. A lower rating was assigned if the preliminary assessment revealed a lack of alignment. Subsequently, the rating was adjusted if the name and the content of the text did not align. A higher level of congruity indicated higher accuracy. From there, the contents were assessed for their ability to accurately describe the component design features, characteristics, industry, intended use, etc. The detail evaluation was assessed based on whether the provided data sufficed to conceptualize the design. Fluency was gauged by the grammatical correctness and the coherence of the descriptions. The overall quality was determined by the total of the scores from the indicated categories. While evaluating the different categories, an emerging trend was noticed. If the visual language model correctly identified the object's name, the subsequent details tended to align correctly. However, when the model misidentified the geometry, the details tended to correspond to the wrong item identification. For parts that were highly specialized for assembly, a more general example of industry standards was often indicated, rather than a specific standard as a starting point for further analysis by the end user.
From top-rated explanations, we developed a specialized dataset comprising CAD images paired with questions and answers extracted from the explanations. This dataset is designed to evaluate VLMs on visual question answering (VQA) tasks specific to CAD objects. By grounding our dataset in expert-validated explanations, we provide a reliable benchmark for assessing VLM performance in the CAD domain, bridging the gap between consistency and domain-specific accuracy.
§ CAD-VQA DATASET
We present CAD-VQA (Computer-Aided Design Visual Question Answering), a novel dataset designed to evaluate Vision-Language Models' understanding of 3D mechanical parts in CAD contexts.
§.§ Dataset Creation Process
Building upon the high-quality explanations generated through our iterative process of VLM output and human expert evaluation, we developed a novel dataset for evaluating Vision-Language Models on CAD tasks. The dataset creation process involved the following steps:
Selection of top-rated explanations: We chose explanations for 17 parts that received excellent ratings from human experts.
Question generation: Using Claude 3.5 Sonnet, we generated an initial set of questions based on these top-rated explanations. The questions cover various aspects including part names, geometrical features, assembly features, and functionality.
Visual focus: We designed questions to require analysis of the provided images, ensuring that answers couldn't be derived solely from common knowledge of 3D design.
Comprehensive coverage: A total of 85 multiple-choice questions were created, providing a diverse range of queries about the 17 selected parts.
Quality assurance: We conducted rigorous post-processing to ensure consistency in question style, eliminate errors, and maintain a uniform difficulty level across the dataset.
This dataset addresses a gap in the field of VLM evaluation for CAD applications. Currently, there is a scarcity of publicly available datasets specifically designed to assess VLMs' understanding of 3D mechanical parts and their features. Our dataset, while compact, represents one of the first efforts to create a benchmark for evaluating VLMs in the context of CAD and mechanical engineering.
The uniqueness of this dataset lies in its focus on:
* Specialized vocabulary and concepts from mechanical engineering and CAD
* Visual interpretation of 3D parts from multiple perspectives
* Understanding of both individual part features and their roles in larger assemblies
* Application of domain-specific knowledge to answer questions based on visual input
By providing this dataset, we aim to stimulate further research in improving VLMs' capabilities in specialized technical domains, particularly in the field of mechanical design and engineering.
To illustrate the nature of our CAD-VQA dataset, we provide a few representative examples in Table <ref>. These examples demonstrate the diversity of questions and the necessity of properly analyzing the provided images to correctly answer them.
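To make the format of the entries concrete, the snippet below sketches a hypothetical CAD-VQA item together with a minimal accuracy computation over such items; the field names, identifiers and the four example options (the released questions use ten options) are illustrative assumptions rather than the actual schema.

```python
# Hypothetical CAD-VQA entry and a minimal scoring helper; the schema, the
# identifiers and the four example options (the released questions use ten)
# are illustrative assumptions only.
sample_entry = {
    "question_id": "part_007_q1",
    "images": ["part_007_solid.png", "part_007_assembly.png"],  # Distribution-D-style renders
    "question": "What is the most likely function of the highlighted part?",
    "options": ["A) Bearing housing", "B) Fastener", "C) Gasket", "D) Shaft coupling"],
    "answer": "A",
}

def accuracy(entries, predictions):
    """Fraction of multiple-choice questions answered correctly.

    `predictions` maps a question identifier to the chosen option letter.
    """
    correct = sum(1 for e in entries if predictions.get(e["question_id"]) == e["answer"])
    return correct / len(entries)

print(accuracy([sample_entry], {"part_007_q1": "A"}))  # -> 1.0
```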
§ RESULTS
For our preliminary experiments, we use a relatively small dataset due to the difficulty in scaling the rating process of detailed explanations by mechanical experts. Our dataset consists of 25 3D mechanical parts from the ABC collection <cit.>, each part appearing within a larger assembly context. We evaluate four different image distributions for rendering these parts. Distribution A: Each part is rendered as an individual solid. Distribution B: Each part is rendered in the assembly along with other parts, where the other parts are transparent. Distribution C: Similar to Distribution B but slightly zoomed. Distribution D: A mix of Distributions A, B, and C (two samples from each).
These distributions were chosen to cover a range of contexts, although many other rendering methods are possible. For each part, we generate three different paraphrased prompts aimed at explaining the part's function and significance within the assembly. A sample of how the data looks is shown in Figure <ref>.
Consistency Measurement. We measure the consistency of the outputs using the methods described previously: ROUGE and BLEU scores, BERT embedding cosine similarity, and GPT-based consistency judgment. The results for each image distribution are summarized in Table <ref>. The results indicate that Distribution D, which includes a mix of the different rendering methods, consistently achieves the highest scores in both consistency metrics and expert ratings. This suggests that providing multiple perspectives of the parts helps the VLM generate more accurate and reliable explanations. Additionally, the use of in-context learning with expert feedback shows a noticeable improvement in the quality of the explanations, demonstrating the effectiveness of iterative refinement in enhancing model performance.
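As an illustration of how the consistency score and the subsequent distribution selection can be computed, the sketch below measures the mean pairwise cosine similarity of the three paraphrase responses with a sentence-embedding model and then picks the distribution with the highest average score; the embedding model name is an assumption, and the ROUGE/BLEU and GPT-based judgments would be averaged in alongside it in the same way.

```python
# Minimal sketch of the embedding-based part of the consistency score and of
# the distribution selection; ROUGE/BLEU and the GPT judge are omitted for
# brevity, and the embedding model name is an assumption.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def consistency(responses):
    """Mean pairwise cosine similarity of the VLM answers to the paraphrased prompts."""
    emb = model.encode(responses)
    sims = [
        float(np.dot(emb[i], emb[j]) / (np.linalg.norm(emb[i]) * np.linalg.norm(emb[j])))
        for i, j in combinations(range(len(responses)), 2)
    ]
    return float(np.mean(sims))

def preferred_distribution(responses_by_distribution):
    """responses_by_distribution[dist][part_id] -> list of responses to the 3 paraphrases."""
    mean_scores = {
        dist: float(np.mean([consistency(r) for r in parts.values()]))
        for dist, parts in responses_by_distribution.items()
    }
    return max(mean_scores, key=mean_scores.get), mean_scores
```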
§.§ In-Context Learning with Human Feedback
To further refine the model's performance, we use the expert ratings as feedback for in-context learning. The VLM is shown the expert ratings to learn and correct the explanations that received lower scores. After incorporating this feedback, we re-evaluate the model with human experts to assess improvement. The updated ratings are shown in Table <ref>.
Based on our consistency scores, Distribution D (a mix of single object renders, assembly renders with transparent parts, and zoomed assembly renders) performed best. We apply an in-context learning process to our dataset, using a prompt that provides the model with images, descriptions, and expert ratings for each part. The full in-context learning prompt can be found in Appendix <ref>.
For our current dataset of parts, we provide GPT-4o with a comprehensive prompt containing all parts' information simultaneously: images from Distribution D for each part, descriptions per part, and their corresponding human expert ratings. The model then generates new descriptions for parts based on this extensive in-context learning.
However, for larger datasets where providing all information at once may exceed the model's context length, we suggest two alternative approaches: a Sliding Window Approach and a Sequential Processing Approach. Details of these approaches and a visual comparison can be found in Appendix <ref>.
§.§ Performance of State-of-the-Art VLMs on our CAD-VQA dataset
We evaluated several state-of-the-art Vision-Language Models on our CAD-VQA dataset to establish baseline performance levels. The models tested include Claude-3.5-Sonnet <cit.>, GPT-4o <cit.>, and Gemini-1.5-Pro <cit.>.
Table <ref> presents the accuracy of each model on our dataset:
These results demonstrate that even the most advanced VLMs face significant challenges in accurately interpreting and reasoning about CAD objects. Claude-3.5-Sonnet shows the highest accuracy at 61%, while both GPT-4o and Gemini-1.5-Pro achieve 54% accuracy. These scores, while above random guessing (10% for 10-option multiple choice questions), indicate substantial room for improvement in VLMs' understanding of specialized technical domains like mechanical engineering and CAD.
The performance gap between these models and human experts underscores the need for continued research and development in enhancing VLMs' capabilities in domain-specific visual reasoning tasks.
§ CONCLUSION
Our study addressed the challenge of optimizing image distributions for black-box Vision-Language Models (VLMs). Experimenting with 3D mechanical parts and GPT-4o, we evaluated four image distributions using a novel methodology based on output consistency across paraphrased prompts. The mixed distribution, combining various rendering perspectives, consistently outperformed others, indicating that multiple viewpoints enhance VLM performance in generating accurate explanations. Expert ratings validated these findings and demonstrated the effectiveness of in-context learning with human feedback in improving explanation quality. Building on these insights, we developed CAD-VQA, a new dataset for evaluating VLMs on CAD-related visual question answering tasks. This dataset addresses a gap in the field and provides a benchmark for assessing VLM performance in specialized technical domains.
Our approach of automated consistency checks, followed by expert evaluation, offers a scalable method for assessing VLM outputs. The evaluation of state-of-the-art VLMs on CAD-VQA establishes baseline performance levels, highlighting both the potential and current limitations of VLMs in interpreting specialized visual data. While our experiments focused on CAD applications, this methodology and the principles behind CAD-VQA are broadly applicable to other domains requiring specialized visual interpretation. Future work should explore scaling this approach to diverse fields, applying the dataset creation process to other specialized domains, and investigating the relationship between output consistency and model confidence through comparison with explicit confidence estimation techniques and human evaluations.
§ SUPPLEMENTARY MATERIAL
§.§ Paraphrasing Prompt
The following prompt was used to generate paraphrases for our experiments:
Please generate 3 paraphrases of the following prompt. Each paraphrase should maintain the same core meaning but vary in phrasing and complexity. Ensure a mix of minor variations (e.g., word order changes, synonym substitution) and more significant restructuring. The paraphrases should be diverse enough to test a language model's robustness to input variations, but not so different that they alter the fundamental query.
Original prompt:
“Please analyze the object shown in the image. Note that in some images, the 3D part might appear red when shown in an assembly format, while in others, it might look grey when presented as an individual part. Provide a detailed explanation of the object's name or type, its geometric features and shape, and its likely function or purpose within a larger system or assembly. Be as specific and comprehensive as possible in your description.”
Generate your 3 paraphrases below:
1. [Paraphrase 1]
2. [Paraphrase 2]
3. [Paraphrase 3]
§.§ Consistency Judgment Prompt
The following prompt was used for GPT-based consistency judgment:
You are tasked with evaluating the consistency of multiple descriptions of the same 3D mechanical part. These descriptions were generated by an AI model in response to slightly different prompts about the same image. Your job is to assess how consistent these descriptions are with each other in terms of content, details, and overall interpretation of the part.
Please consider the following aspects:
* Name/Type Consistency: Do all descriptions refer to the part using the same or very similar names/types?
* Geometric Features Consistency: Are the descriptions of the part's shape, size, and key geometric features consistent across all versions?
* Functionality Consistency: Do all descriptions attribute the same or very similar functions or purposes to the part?
* Detail Level Consistency: Is the level of detail provided about the part similar across all descriptions?
* Context Consistency: If the part's position or role within a larger assembly is mentioned, is this consistent across descriptions?
After analyzing the descriptions, please provide:
* A consistency score from 0 to 1, where 0 means completely inconsistent and 1 means perfectly consistent.
* A brief explanation (2-3 sentences) justifying your score.
Descriptions to evaluate:
1. [Description 1]
2. [Description 2]
3. [Description 3]
Your consistency score and explanation:
[Score]:
[Explanation]:
§.§ In-Context Learning with Human Feedback Prompt
The following prompt was used for in-context learning with human feedback:
You are an AI assistant specializing in describing 3D mechanical parts. You will be provided with information for different parts. For each part, you will receive:
1. Five images (various perspectives of the part)
2. Three descriptions of the part
3. Human expert ratings for each description
Analyze this information and generate improved descriptions. Here's the format for each part:
Part 1
[Image 1], ... , [Image 5]
Description 1
[Description text]
Relevance: [ ] Accuracy: [ ] Detail: [ ] Fluency: [ ] Overall: [ ]
Description 2
[Description text]
Relevance: [ ] Accuracy: [ ] Detail: [ ] Fluency: [ ] Overall: [ ]
Description 3
[Description text]
Relevance: [ ] Accuracy: [ ] Detail: [ ] Fluency: [ ] Overall: [ ]
..., Part 25, ...
According to the ratings, generate an improved description that:
* Accurately identifies and names the part
* Describes its geometric features and shape in detail, referencing specific views from the five images
* Explains its likely function or purpose within a larger system or assembly
* Maintains consistency with the high-rated aspects of previous descriptions
* Improves upon areas that received lower ratings
* Integrates information from all provided perspectives
Your new description should aim to maximize all five rating categories: Relevance, Accuracy, Detail, Fluency, and Overall Quality.
Please provide your improved description.
§.§ Alternative Approaches for Large Datasets
For larger datasets where providing all information at once may exceed the model's context length, we suggest two alternative approaches:
These methods allow the model to learn from a substantial amount of context while remaining within practical limits. The Sliding Window Approach processes the data in overlapping batches, while the Sequential Processing Approach passes batches to the model incrementally before generating all descriptions at once.
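A minimal sketch of how the two batching schemes could be organized is given below; the window, stride and batch sizes are illustrative assumptions, and the actual VLM call on each batch is omitted.

```python
# Sketch of the two batching schemes for in-context learning with a limited
# context window; the window, stride and batch sizes are illustrative
# assumptions, and the actual VLM call is omitted.
def sliding_window_batches(parts, window=8, stride=4):
    """Overlapping batches of parts (Sliding Window Approach)."""
    starts = list(range(0, max(len(parts) - window, 0) + 1, stride))
    if starts and starts[-1] + window < len(parts):
        starts.append(len(parts) - window)  # make sure the tail of the list is covered
    return [parts[s:s + window] for s in starts]

def sequential_batches(parts, batch_size=8):
    """Non-overlapping batches passed to the model incrementally
    (Sequential Processing Approach)."""
    return [parts[i:i + batch_size] for i in range(0, len(parts), batch_size)]

parts = [f"part_{i:02d}" for i in range(25)]
print(len(sliding_window_batches(parts)), len(sequential_batches(parts)))
```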
|
http://arxiv.org/abs/2409.02784v1 | 20240904150016 | Thermometry Based on a Superconducting Qubit | [
"Dmitrii S. Lvov",
"Sergei A. Lemziakov",
"Elias Ankerhold",
"Joonas T. Peltonen",
"Jukka P. Pekola"
] | quant-ph | [
"quant-ph"
] |
APS/123-QED
[email protected]
PICO group, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 Aalto, Finland
§ ABSTRACT
We report temperature measurements using a transmon qubit by detecting the populations of its first three levels, after employing sequences of π-pulses and performing projective dispersive readout. We measure the effective temperature of the qubit and characterize its relaxation and coherence times τ_1,2 for three devices in the temperature range 20 – 300 mK. The signal-to-noise ratio (SNR) of the temperature measurement depends strongly on τ_1, which drops at higher temperatures due to quasiparticle excitations, adversely affecting the measurements and setting an upper bound on the dynamic temperature range of the thermometer. The measurement relies on coherent dynamics of the qubit during the π-pulses. The effective qubit temperature follows that of the cryostat closely in the range 100 - 250 mK. We present a numerical model of the qubit population distribution and compare it favorably with the experimental results.
Thermometry Based on a Superconducting Qubit
J. P. Pekola
September 9, 2024
============================================
§ INTRODUCTION
In recent years theoretical proposals for quantum thermometry have attracted a lot of attention due to the prospects of monitoring temperature with minimal invasion, probing properties of quantum systems with high accuracy at extremely low temperatures, and studying open quantum systems and quantum thermodynamics <cit.>. However, the number of conceptually different experimental realizations of quantum thermometers based on coherent quantum objects remains quite limited. For example, quantum thermometry was demonstrated using nuclear magnetic resonance (NMR) techniques for polarization measurements of spin ensembles in solutions at room temperature <cit.> and using coherent quantum N00N-states <cit.>. Generation and transfer of the latter was recently realized on the basis of a superconducting qubit <cit.>. There are also noteworthy experiments on thermometry based on nitrogen-vacancy (NV) colour centres in diamond <cit.>. Another possible platform for quantum thermometers is semiconductor quantum dots, which were used for spectroscopy-type measurements in the sub-kelvin temperature range in Ref. <cit.>, exhibiting good signal-to-noise ratio and fast operation but limited coherence.
Prospects of utilizing superconducting qubits as thermometers are of dual research interest: they can facilitate studies of open quantum systems and interaction between the qubits and their environment, which is vital in the context of building large multiqubit systems such as quantum computers and simulators. On the other hand qubit thermometers can be used for realization of quantum thermometry protocols. Quantum electrodynamics (QED) circuits based on coherent superconducting transmon qubits were primarily used to measure the residual qubit population and the corresponding residual temperature in a number of works, for example in <cit.>. The residual population of a transmon in a 3D cavity was measured by Rabi oscillations on the e-f transition in Ref. <cit.> and correlation measurements in Ref. <cit.>. In Ref. <cit.> another technique of applying sequences of π-pulses was used to measure the effective qubit temperature by observing the population distribution for the three lowest energy states of a transmon in a planar geometry. A different approach was demonstrated in Ref. <cit.>, where thermometry was done by reflection-type spectroscopy measurements of a transmon qubit strongly coupled to a waveguide.
Yet qubits are sensitive to various sources of dissipation and decoherence, and the diverse nature of their environment (see, for example, <cit.>). Therefore in practice, identification of a specific thermal bath, leading to thermalization of the qubit, can be quite challenging; i.e., it may be difficult to identify what temperature the qubit is actually measuring.
In this paper we present a comprehensive and concrete study of employing a superconducting qubit for quantum thermometry at sub-kelvin temperatures. The measurement technique used in our work relies on applying sequences of π-pulses to manipulate the qubit population distribution, requiring sufficient coherence of the qubit, and then performing consecutive qubit state readout, as described in Ref. <cit.>. We also study the temperature dependencies of the qubit relaxation and coherence times. They determine the working range of the thermometer in terms of temperature, and they give constraints on the pulse durations in the measurement. While demonstrating solid thermalization of the qubit to the cryostat stage, the technique allows us to measure the population distribution at temperatures up to 200-220 mK, where the second excited state of the qubit is already significantly populated. This exceeds the applicability range of methods based on consideration of only the lowest qubit levels. At the same time the technique is less dependent on the signal-to-noise ratio (SNR) of the measurement setup. We compare the experimental results with a numerical model of the qubit population measurements and find good qualitative agreement. Within the model we investigate the influence of the measured parameters on the resulting population and effective qubit temperature. Finally, we address the factors limiting the dynamical range of a superconducting qubit thermometer. The paper is organized as follows: in Section <ref> we provide a brief description of a realistic model of a qubit in thermal equilibrium with heat baths. Section <ref> explains the principle of the population distribution measurements. Section <ref> provides a brief description of the samples used in the experiment, discussion of the main experimental results is given in Section <ref>, and finally, limits of the thermometer and ways of mitigating them are discussed in Section <ref>. The paper is followed by appendices providing additional experimental data, details of the numerical simulations, the experimental setup and sample fabrication.
§ QUBIT POPULATION AND EFFECTIVE TEMPERATURE
Let us consider a qubit in thermal equilibrium with a heat bath having temperature T. In this case the qubit is in a thermal state, which means that its density matrix is diagonal and the probabilities p_g and p_e, respectively, to be in the ground |g⟩ or excited state |e⟩ (qubit population) are Boltzmann distributed as
p_e/p_g = e^-ħω_ge/k_B T,
where ħω_ge is the separation between these two states, and k_B is the Boltzmann constant. Thus, by measuring the population ratio of the qubit one obtains the temperature of the thermal bath.
In a more realistic scenario when a qubit is in steady state with respect to several uncorrelated heat baths (with temperatures T^(i), where i enumerates baths), the state of the qubit is still a thermal-like state. But the temperature characterizing this state is not necessarily the temperature T^(i) of any of the thermal baths. Then Eq. (<ref>) could be interpreted as giving an effective qubit temperature T_eff. In this sense, Eq. (<ref>) can be rewritten as a definition of T_eff as
T_eff = (ħω_ge/k_B) [-ln(p_e/p_g)]^-1,
which coincides with the true temperature in equilibrium.
For the case of ohmic heat baths, one can find the energy relaxation rate γ_1 for the qubit towards T_eff using Fermi golden rule approach. The detailed description is given in Appendix <ref>. The qubit energy relaxation rate writes
γ_1 = ∑_i(Γ_↓^(i)+Γ_↑^(i)) = ∑_iγ_1^(i)[2n(T^(i), ω_ge)+1] = ∑_iγ_1^(i) coth[ħω_ge/(2 k_B T^(i))],
where n = [exp(ħω_ge/k_B T) - 1]^-1 is the Bose-Einstein distribution.
The population ratio for the qubit then writes
p_e/p_g=Σ_i Γ_↑^(i)/Σ_i Γ_↓^(i)=Σ_i γ_1^(i) n(T^(i), ω_ge)/Σ_i γ_1^(i)[n(T^(i), ω_ge)+1]
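As a simple illustration of these relations, the sketch below combines several ohmic baths with given temperatures and coupling rates into the steady-state population ratio and the corresponding effective temperature; the example bath parameters are arbitrary.

```python
# Sketch: steady-state population ratio and effective temperature of a qubit
# coupled to several uncorrelated ohmic baths, following the expressions above.
# The example bath temperatures and relative coupling rates are arbitrary.
import numpy as np
from scipy.constants import hbar, k as k_B

def bose(T, omega):
    return 1.0 / np.expm1(hbar * omega / (k_B * T))

def population_ratio(omega_ge, bath_temps, bath_rates):
    up = sum(g * bose(T, omega_ge) for g, T in zip(bath_rates, bath_temps))
    down = sum(g * (bose(T, omega_ge) + 1.0) for g, T in zip(bath_rates, bath_temps))
    return up / down

def effective_temperature(omega_ge, ratio):
    return hbar * omega_ge / (k_B * (-np.log(ratio)))

omega_ge = 2 * np.pi * 6.65e9                                    # rad/s
ratio = population_ratio(omega_ge, [0.020, 0.150], [1.0, 0.3])   # two baths, relative rates
print(effective_temperature(omega_ge, ratio))                    # effective temperature in K
```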
§ POPULATION MEASUREMENTS
In this work we measured the population distribution of several transmon qubits to obtain the temperature using the technique described in <cit.>, which is based on applying different sequences of π-pulses and then performing a dispersive readout of the qubit state. Here we do not assume that the transmon is a two-level qubit, but include the higher levels, especially the third one |f⟩, in the analysis and protocols. We presume that the population of a qubit thermalized at temperature T follows the Maxwell-Boltzmann distribution p_i = exp(-E_i/k_B T) /Z, with the partition function Z = ∑_i=1^3exp(-E_i/k_B T) and the i-th level energy E_i.
A transmon qubit typically has a weak anharmonicity α ( |α| ≪ħω_ge), so that the energy separations ħω_ge and ħω_ef are related by ω_ef = ω_ge + α/ħ≅ω_ge. For example, if a transmon with ω_ge/2π=6.65GHz and α/h = -230MHz is thermalized at a temperature of 300mK, we can estimate the population ratios p_e/p_g ≈ 0.35 and p_f/p_g ≈ 0.12. Thus, it is important to take into account the finite population of the f-level. Restricting ourselves to the three lowest energy states, the density matrix of a thermalized transmon reads
ρ̂=p_g|g⟩⟨g|+p_e|e⟩⟨e|+p_f|f⟩⟨f|.
Let us denote the readout response of a projective measurement of the qubit being in state |i⟩ as φ_i, which we will call “pure state response” further in the text. Then the result φ of the qubit state measurement is a linear combination of the form φ = p_g φ_g+p_e φ_e+p_f φ_f. The π_ge(π_ef)-pulses effectively swap pairs of the population probabilities p_g and p_e (p_e and p_f), which results in getting different readout outcomes, which we denote x_i and y_j, i,j = 0,1,2. An example of such a measurement is shown in Fig. <ref> a, c. We can then write a system of linear equations as
x_0 = p_g φ_g + p_e φ_e + p_f φ_f, (no pulse)
x_1 = p_e φ_g + p_g φ_e + p_f φ_f, (π_ge)
x_2 = p_e φ_g + p_f φ_e + p_g φ_f, (π_geπ_ef)
y_0 = p_g φ_g + p_f φ_e + p_e φ_f, (π_ef)
y_1 = p_f φ_g + p_g φ_e + p_e φ_f, (π_efπ_ge)
y_2 = p_f φ_g + p_e φ_e + p_g φ_f. (π_efπ_geπ_ef)
We can find functional dependencies between p_i, i∈{g,e,f}, and x_j, y_k with j,k = 0,1,2, which are valid for arbitrary pure state responses φ_i. These temperature-dependent quantities are A=(p_g-p_e)/(p_g-p_f), B=(p_e-p_f)/(p_g-p_e) and C=(p_e-p_f)/(p_g-p_f), which can also be expressed through the readout results x_j, y_k (see Table <ref>).
To understand the temperature behaviour of the functions A, B and C, we can take the low temperature limit k_B T ≪ħω_ge, also assuming p_f → 0. Then we can rewrite A ≈ 1 - p_e/p_g, B ≈ C ≈ p_e/p_g, or
A ≈ 1 - exp(-ħω_ge/k_B T),
B ≈ C ≈ exp(-ħω_ge/k_B T).
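The temperature dependence of A, B and C for a fully thermalized transmon follows directly from the Maxwell-Boltzmann populations of the three lowest levels, as in the short numerical sketch below; the frequency and anharmonicity are the example values quoted above.

```python
# Sketch: Maxwell-Boltzmann populations of the three lowest transmon levels and
# the temperature-dependent ratios A, B and C defined above (example parameters).
import numpy as np
from scipy.constants import hbar, k as k_B

f_ge, alpha_h = 6.65e9, -230e6                                    # Hz; anharmonicity alpha/h
E = hbar * 2 * np.pi * np.array([0.0, f_ge, 2 * f_ge + alpha_h])  # energies of |g>, |e>, |f>

def populations(T):
    w = np.exp(-E / (k_B * T))
    return w / w.sum()                                            # p_g, p_e, p_f

def ratios(T):
    p_g, p_e, p_f = populations(T)
    return ((p_g - p_e) / (p_g - p_f),    # A
            (p_e - p_f) / (p_g - p_e),    # B
            (p_e - p_f) / (p_g - p_f))    # C

for T in (0.065, 0.150, 0.300):
    print(T, ratios(T))                   # A -> 1 and B, C -> 0 as T -> 0
```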
§ SAMPLE DESCRIPTION
We used transmon qubits on three different chips I, II and III (which is reflected in the name, e.g. R2-I, R4-I, R3-II, Q2-III) made in different fabrication cycles. All transmon qubits had a fixed transition frequency ω_ge/2π≈6.5GHz or flux-tunable spectrum (ω_ge/2π = 5.5-6.8GHz) and charging energy E_c/h=210-230MHz (see Fig. <ref>) and they were weakly coupled (g/2π = 35-45 MHz) to a coplanar waveguide (CPWG) λ/4 readout resonator with the internal quality factor Q_int∼ 10^5, external quality factor Q_ext≈ 8· 10^4 and fundamental frequency ω_r/2π=4.5-6.0GHz for a dispersive state readout (|Δ_r| = |ω_ge-ω_r| ≫ g).
Experimentally found parameters of the devices are shown in Table <ref>.
The fabrication details are given in Appendix <ref>.
§ RESULTS AND DISCUSSION
In the experiment we measured qubit relaxation and coherence times τ_1 and τ_2, and qubit effective temperature T_eff, while controlling the temperature of the mixing chamber (MXC) stage of the dilution refrigerator. After a certain MXC stage temperature T_MXC was set, we waited for 10 min for thermalization of the sample stage and then performed a series of measurements, consisting of five repetitions of the following measurement sequence: (i) measurement of the MXC stage temperature T_MXC, (ii) population distribution measurement x_i, y_j, (iii) measurement of qubit relaxation time τ_1, (iv) Ramsey oscillations measurement to get τ_2^R and ω_ge and (v) Hahn echo measurement with the decay time τ_2^E. Each of these iterations took approximately 5 min. Qubit excitation pulses were applied via an AC drive antenna, and the readout pulses were sent through a waveguide. The measurement setup is shown in Fig. <ref>.
First we address the temperature dependence of the qubit decay time τ_1, shown in Fig. <ref> a. It stays almost constant in the temperature range 20-150 mK. Above T_MXC≈170mK a noticeable effect of quasiparticle-induced relaxation appears, and already at T_MXC≈250-270mK the decay time drops below 0.5 μs, which is shorter than the readout pulse duration Δ t_RO=1.5-2.0 μs and becomes comparable to the duration of the π-pulse sequences used for the population measurements. To compare τ_1 with the theoretical prediction we use the quasiparticle relaxation model in Ref. <cit.> that gives the relaxation rate for a transmon as
γ_1^qp = (1/π)(ω_p^2/ω_ge) { x_qp√(2Δ/ħω_ge) + 4 e^-Δ/k_B T cosh(ħω_ge/2 k_B T) K_0(ħω_ge/2 k_B T) }
with the plasma frequency ω_p = √(8 E_J E_C)/ħ, equilibrium quasiparticle density x_qp = √(2π k_B T/ Δ) exp(-Δ/ k_B T), modified Bessel function of the second kind K_0 and the superconducting gap Δ. Expression (<ref>) provides a good correspondence with the experimentally observed drop of τ_1 at temperatures above 220 mK for the gap value Δ_Al/e = 180 μV (see Fig. <ref> b). Thus, the sharp exponential quasiparticle-induced suppression of τ_1 restricts the measurement range for Al-based qubits using long pulse sequences. This is clearly demonstrated in the experimental data discussed below.
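A minimal numerical evaluation of this quasiparticle rate, using the gap and transmon parameters quoted in the text, is sketched below; the plasma frequency is approximated through the standard transmon relation ħω_ge ≈ √(8E_J E_C) - E_C, which is an assumption of the sketch.

```python
# Sketch: equilibrium-quasiparticle relaxation rate of a transmon vs temperature,
# evaluating the expression above. The plasma frequency is estimated from
# h*f_ge = sqrt(8 E_J E_C) - E_C (an assumption of this sketch); other values
# follow the text.
import numpy as np
from scipy.constants import hbar, k as k_B, e
from scipy.special import k0

f_ge, E_C_h = 6.65e9, 230e6                 # ge frequency and charging energy E_C/h, in Hz
omega_ge = 2 * np.pi * f_ge
omega_p = 2 * np.pi * (f_ge + E_C_h)        # plasma frequency sqrt(8 E_J E_C)/hbar
Delta = 180e-6 * e                          # superconducting gap, Delta/e = 180 uV

def gamma_1_qp(T):
    x_qp = np.sqrt(2 * np.pi * k_B * T / Delta) * np.exp(-Delta / (k_B * T))
    y = hbar * omega_ge / (2 * k_B * T)
    return (omega_p**2 / (np.pi * omega_ge)) * (
        x_qp * np.sqrt(2 * Delta / (hbar * omega_ge))
        + 4 * np.exp(-Delta / (k_B * T)) * np.cosh(y) * k0(y)
    )

for T in (0.15, 0.20, 0.25, 0.30):
    print(T, gamma_1_qp(T))                 # quasiparticle relaxation rate
```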
As to the dephasing during the π-pulses, typical length Δ t_π of single π-pulses varied between 20 and 220 ns, with the lower limit Δ t_min determined either by the qubit anharmonicity (Δ t_min> h/|α| ≃ 1/200MHz = 5ns) or by the dynamical range of instruments and total line attenuation of the setup (see Fig. <ref>). We present experimental data of the temperature dependence of the dephasing time τ_φ in Fig. <ref> b. The times τ_φ^R and τ_φ^E are extracted from the measurements of Ramsey oscillations and Hahn echo measurements, respectively, as τ_φ^R,E = 1/γ_φ^R,E = 1/(γ_2^R,E - γ_1/2), where the decoherence rates γ_2^R,E describe Ramsey oscillations and Hahn echo experiments. While in the low-temperature limit τ_φ^R,E⪆τ_1, with τ_φ^R exhibiting a clearly visible saturation below 50-70 mK, the dephasing time starts decreasing at lower temperatures than τ_1 and reaches 1-2 μs at 150 mK, becoming difficult to measure at 220-250mK due to the finite ring-up time of the readout resonator ∼ Q_ext/ω_r≃ 1 μs. Similarly to the analysis of the relaxation time, we can estimate influence of quasiparticles on the dephasing, using the approach presented in Ref. <cit.>. The pure dephasing rate
due to tunneling of equilibrium quasiparticles is given by expression
γ_φ = (E_c/πħ)(k_B T/Δ) exp(-Δ/k_B T),
and appears to be negligible in the experimental temperature range. However, we can estimate the dephasing mechanism caused by the Josephson frequency fluctuations due to quasiparticles occupying the Andreev bound states in the tunnel junction of the transmon. The latter writes <cit.>
γ_φ∼ 4 πω_p^2/ω_ge√(x_qp^A/N_e),
where x_qp^A = exp(-Δ/k_B T) is the equilibrium quasiparticle occupation of the Andreev states and N_e = ζ^-1 g_T/2 g_K is the effective number of channels in the junction with the factor ζ^-1∼ 10^3 - 10^5 describing transparency of the junction in the subgap regime <cit.>, g_T = 1/R_n being the junction conductance and the conductance quantum g_K = e^2/h. In our case R_n ≈ 4.4-5.5kΩ, giving g_T/2 g_K ≈ 2.3-2.9, and the dependence (<ref>) for the realistic parameters is shown as dashed lines in Fig. <ref> b.
Population ratios A, B and C can be extracted from the measurement data x_i, y_j by three different methods each, which we denote as A_1, A_2, A_3, B_1, …, C_3 (see Tab. <ref>). Qualitatively speaking, the difference between the methods is the order of the π-pulses in the control sequence. Because of the qubit relaxation and relatively low population of the f-level, the resulting signals can have different signal-to-noise (SNR) ratio, giving us some guidance on how to choose the best one (see Fig. <ref> and discussion in Sec. <ref> and Appendix <ref>).
An example of population measurements in terms of datasets A_1,A_2,… C_3 (see exact expressions in Appendix in Table <ref>) for device R4-I can be seen in Fig. <ref> c. While according to expressions (<ref>), (<ref>), in the low-temperature limit A→1 and B,C → 0, there is a clear saturation at p_e/p_g ≈ 0.024, which agrees with T_eff≈ 85mK, calculated by Eq. (<ref>). Above 170 mK the population readings have increased deviation from the analytical expressions caused by the enhanced relaxation.
Measurements of the qubit effective temperature for devices R2-I, R4-I and Q2-III extracted from A_2 are shown in Fig. <ref> d and demonstrate a linear behaviour in the temperature range 120-230 mK close to one-to-one correspondence with the temperature of the mixing chamber stage T_MXC of the cryostat. At lower temperatures of the cryostat T_eff exhibits saturation, reaching T_eff^0≈ 65-85mK, which coincides with the saturation temperature estimated from the population in Fig. <ref> c. This elevated temperature compared to T_MXC is most probably caused by imperfect shielding against thermal photons and insufficient filtering of the measurement lines. For comparison, T_eff extracted using all nine methods A_1, A_2, A_3, B_1, …, C_3 are shown in Fig. <ref> a. A thorough explanation of the deviations between the experimental data and the expected behaviour above 170 mK in Figs. <ref> c, d requires consideration of the time evolution of the qubit population due to its finite lifetime, which we discuss in the following subsection.
§.§ Comparison of the Experimental Data with the Numerical Model of the Qubit Population Evolution
In order to understand the deviation of the population functions A,B and C from the exact formulas in Tab. <ref> and of the qubit effective temperature T_eff from the cryostat temperature T_MXC at higher temperatures, we implement a numerical model which accounts for the time evolution of the population distribution of a qubit. First, let us discuss important assumptions underlying the model, which are based on the experimental data. We assume that the control sequences of π-pulses are short compared to the qubit dephasing and relaxation times (3Δ t_π < τ_φ, τ_1 ). Then we are interested in tracking the time evolution of the population distribution, described by the diagonal terms of the density matrix. The duration of the qubit state readout Δ t_RO is larger than or of the order of the relaxation time (Δ t_RO≳τ_1).
The model considers population of the three lowest states with finite decay times τ_1^ge and τ_1^ef of the ge- and ef-transitions, respectively.
We begin with a qubit thermalized at a temperature T, setting the initial populations p_g,e,f^(0)(T), and then imitate the application of π_ge- and π_ef-pulses by instantaneously swapping the corresponding level populations with finite efficiencies, designated as δ_ge,ef∈ [0,1] (see Fig. <ref> e). For example, a π_ge-pulse with δ_ge=0.9 means that a qubit state with p_g=1.0, p_e=0.0, p_f=0.0 would be transformed into p_g=0.1, p_e=0.9, p_f=0.0. Then we calculate the time evolution of the population during the readout time, and multiply it by arbitrary pure state responses φ_i. Next we construct the population ratios A, B and C and calculate the effective temperature (see Fig. <ref> b). A detailed description of the model is provided in Appendix <ref>.
The numerical model can qualitatively reproduce the deviation of the qubit population functions A,B and C from the analytical expressions (see Fig. <ref> in App. <ref>), as well as the resulting deviations of the temperatures T_eff extracted with different methods in the higher temperature range, which is presented in Fig. <ref>. This divergence is a consequence of the temperature-dependent qubit decay times τ_1^ge and τ_1^ef, caused by quasiparticles. For the simulation results presented in Fig. <ref> b, we use frequencies ω_ge and ω_ef and the temperature dependent relaxation time τ_1^ge(T)=2π/(γ_1^qp(T)+γ_1^0), which correspond to the sample R4-I shown as the red line in Fig. <ref> b, and τ_1^ef≈τ_1^ge/2, valid for a transmon qubit <cit.>. For ideal π-pulses, the effective temperatures T_A1, T_A2, …, T_C3 extracted from the different ratios A_1,A_2,…,C_3, respectively, demonstrate almost one-to-one behaviour in the temperature range 30-200 mK, where the qubit lifetimes are relatively long. But above 200 mK, when τ_1 is significantly suppressed, there is an increasing deviation between these dependencies. Importantly, this does not indicate a lack of thermalization of the qubit, but rather shows the limits of applicability of the given thermometry protocol. Moreover, in this temperature limit the values T_A1, T_A2, …, T_C3 start to depend on the pure state responses φ_i, which eliminates the main advantage of the protocol. In fact, Eqs. (<ref>) are no longer valid, as the qubit population changes significantly during the readout pulse.
Another source of error in determining the effective temperature of the qubit is the finite efficiency of the pulses. As seen in Fig. <ref> b, the divergence of T_A1, T_A2, …, T_C3 grows monotonically with the temperature and is noticeable already at 100-150 mK, which is a characteristic of the protocol. The plot showing the relative error between the extracted temperature and the qubit thermalization temperature, averaged over the different methods A_1,A_2,…,C_3, is shown in Fig. <ref> in Appendix <ref>.
§.§ Qubit Thermalization
In order to make sure that the qubits used in our measurements thermalize to a single thermal bath, which is the substrate, we analyze the relaxation rate γ_1 and the photon number n(ω_ge, T) as functions of the qubit effective temperature T_eff. For example, if a qubit is in a steady state with several ohmic baths, then from (<ref>) and (<ref>) we can write down the following temperature dependencies for γ_1 and n(ω_ge, T_eff):
γ_1(T) = γ_1^0[ 2 n(ω_ge, T_eff) +1 ] = γ_1^0 coth[ħω_ge/(2k_B T_eff)],
n(ω_ge, T_eff) = γ_1^0/∑_i γ_1^(i) n(ω_ge, T_MXC) + n_0,
where we denote the relaxation rate and the residual qubit population at the base temperature as γ_1^0 and n_0, respectively.
The corresponding experimental data is presented in Fig. <ref>. The photon number dependence of γ_1 (see Fig. <ref> a) exhibits two regimes: the linear growth described by Eq. (<ref>), where the slope ∂γ_1/∂ n is twice the zero-photon offset γ_1^0, and the sharp rise due to quasiparticle relaxation.
The dependence between the photon number n(ω_ge, T_eff), which corresponds to the qubit effective temperature, and n(ω_ge, T_MXC), defined by the cryostat temperature, characterizes the general process of the qubit thermalization (see Fig. <ref> b). In the case of a single thermal bath coupled to the qubit, the dependence would be exactly one-to-one, whereas the presented data show a finite photon population n_0 in the low-temperature limit and a slope slightly less than unity, indicating coupling to additional thermal baths. One of those could be a residual radiation field, as well as a bath of TLSes or nonequilibrium quasiparticles with an effective temperature of 65-85 mK, which sets the lower limit for the population of the qubit at the lowest cryostat temperatures.
§ THERMOMETER LIMITATIONS AND WAYS OF IMPROVEMENT
§.§ Temperature Range
The Al-based transmon thermometers studied in this work showed a relatively narrow working temperature range 60-220 mK. In this subsection we discuss possibilities of widening this regime.
The higher residual temperature of the qubits, T_eff = 65-85 mK, compared to the base cryostat temperature T_base≈ 20 mK is most likely caused by insufficient shielding of the sample stage and insufficient filtering of the measurement lines <cit.>, likely due to nonequilibrium quasiparticles <cit.> or TLSes <cit.>. Recent works demonstrated the possibility of achieving effective qubit temperatures of 30-45 mK <cit.>.
When considering the higher temperature limit of the presented method the following aspects should be taken into account: i) consideration of only the three lowest states is valid until the population of the next d-level is negligible, otherwise, at higher temperatures, the system (<ref>) should be supplemented with appropriate terms having φ_d and p_d; ii) suppression of the qubit relaxation time τ_1 due to growing quasiparticle density above 200 mK; iii) sufficiently long coherence time for the qubit state preparation.
In agreement with the model, the quasiparticle suppression of τ_1 significantly influences the population measurements, which in turn adversely affects the determination of the effective temperature. This is a hard limit due to the exponential growth of the quasiparticle density with temperature.
Another possibility is the employment of materials with a larger superconducting gap to shift the quasiparticle suppression limit to higher temperatures. This might be done by using Nb-based tunnel junctions <cit.> with a critical temperature T_c^Nb = 7.6-8.7 K or by replacing the tunnel junction with a nanowire made of granular Al having critical temperatures T_c^grAl up to 3.15 K <cit.>, which is 2.5 times larger than the critical temperature of bulk aluminium, T_c^Al, bulk = 1.2 K.
The qubit state preparation is also of great importance in the population distribution measurement presented here. In this sense, the qubit coherence time, or more strictly speaking the dephasing time, should be longer than the control pulse sequences (τ_φ≳ 3 Δ t_π). Since the dephasing rate grows with temperature, it limits the duration of the pulse sequences at higher temperatures.
§.§ Signal-to-Noise Ratio and Accuracy
The parameters of the experimental setup can be fine-tuned to improve both the temperature measurement accuracy and the range of the cryostat temperatures, so that it reaches the quasiparticle limit. First of all, improving the π-pulse efficiency directly improves the measurement accuracy. Secondly, as mentioned above, the short lifetime of the qubit is the main restriction of the method. We can divide the qubit state decay into two parts: i) decay during the control sequence and ii) decay during the readout. The first process leads to a change of the qubit population distribution p_i and can be associated with imperfect control pulses, leading to an increased deviation between T_A1, T_A2,…,T_C3. Decay during the readout pulse is defined mainly by the ring-up time of the resonator, and diminishes the distinguishability between the qubit states (i.e. the difference between the φ_i on the I-Q plane), thus reducing the SNR and also distorting the extracted effective temperature. The main method of improving the SNR is optimizing the ratio χ/ κ for the readout resonator, where χ is the dispersive shift and κ is the linewidth, as well as an appropriate readout frequency detuning for a given χ/ κ, which directly affects the pure state responses φ_i <cit.>.
Another factor directly affecting the efficiency of the π-pulses is the temperature-dependent properties (specifically S_21) of the measurement lines of the cryostat and the CPWGs on the chip. In particular, it is known that superconducting CPWG resonators have temperature-dependent Q-factors and fundamental frequencies <cit.>, which can lead to changes in the amplitude of the qubit drive signal. Moreover, the qubit frequency, even for single-junction transmons, can vary slightly with temperature (less than 50 kHz in our case). All these factors can lead to an increased deviation of the π-pulse parameters if a calibration procedure is not applied at every cryostat temperature point.
The minimal temperature measurement error is defined by the Cramer-Rao bound from the quantum Fisher information (QFI) <cit.>, where we can use an expression for the relative error |Δ T/⟨ T⟩|_sm of a single measurement for a two-level thermometer with energy separation ħω_ge and N-fold degenerate excited level as
(Δ T/⟨ T⟩)^2_sm = (N - 1 + e^x_ge)^2/[(N - 1) x_ge^2 e^x_ge],
x_ge=ħω_ge/k_B T.
An analogous expression can be derived for the three-level case considered in our work,
(Δ T/⟨ T⟩)^2_sm = (e^(x_ge+x_gf) + e^x_ge + e^x_gf)^2/{[x_ge^2 e^x_gf + x_gf^2 e^x_ge + (x_ge + x_gf)^2] e^(x_ge+x_gf)},
here x_gf=ħω_gf/k_B T and ω_gf = 2ω_ge - |α|.
Taking into account averaging over N = 2^17 repetitions, one gets the lower bound for the relative error (Δ T/⟨ T⟩)^2 = (Δ T/⟨ T⟩)^2_sm/N ≈ 5.0· 10^-5 or |Δ T/⟨ T⟩| ≈ 0.7 % for ⟨ T⟩ = 65 mK and qubit frequency ω_ge = 7.04 GHz. The noise equivalent temperature NET=√(|Δ T|^2 Δ t_meas) with the measurement time Δ t_meas = 29 s for our setup in the QFI-limited case would be NET_QFI≈ 2.5 mK/√(Hz). These estimates are an order of magnitude lower than the experimental errors. The data shown in Fig. <ref> a has the relative errors |Δ T_A/⟨ T_A ⟩| = 14-42 %, |Δ T_B/⟨ T_B ⟩| = 8.6-9.4 % and |Δ T_C/⟨ T_C ⟩| = 8.7-9.4 % at the base cryostat temperature T_MXC = 22 mK, which corresponds to NET≈ 28 mK/√(Hz).
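The sketch below reproduces these estimates by evaluating the three-level bound and the corresponding noise-equivalent temperature for the parameters quoted above.

```python
# Sketch: QFI (Cramer-Rao) bound on the relative temperature error of the
# three-level transmon thermometer and the corresponding noise-equivalent
# temperature, for the parameters quoted in the text.
import numpy as np
from scipy.constants import h, k as k_B

def rel_error_sq_single(T, f_ge=7.04e9, alpha_h=-230e6):
    x_ge = h * f_ge / (k_B * T)
    x_gf = h * (2 * f_ge + alpha_h) / (k_B * T)
    num = (np.exp(x_ge + x_gf) + np.exp(x_ge) + np.exp(x_gf)) ** 2
    den = (x_ge**2 * np.exp(x_gf) + x_gf**2 * np.exp(x_ge)
           + (x_ge + x_gf) ** 2) * np.exp(x_ge + x_gf)
    return num / den

T, N_avg, t_meas = 0.065, 2**17, 29.0
rel_err = np.sqrt(rel_error_sq_single(T) / N_avg)   # ~0.7 % relative error
net = rel_err * T * np.sqrt(t_meas)                 # ~2.5 mK/sqrt(Hz)
print(rel_err, net)
```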
Notably, if the thermometer has a constant level separation, the relative error grows exponentially at T ≪ħω_ge/k_B
(see Fig. <ref>),
which is, however, not observed experimentally in our case, as the qubit effective temperature saturates at T_eff^0≈ 65-85 mK.
Our system is a three-level quantum thermometer with a constant anharmonic spectrum, which refers to sub-optimal quantum thermometers <cit.>. Knowing the dependence of QFI on the spectrum of the system, it is possible to optimize it for a better performance in the experimentally realistic temperature range by choosing an appropriate level separation and anharmonicity.
§.§ Operation Speed
First of all, the presented method refers to the equilibrium type of quantum thermometry, meaning that the population measurement is performed after the qubit thermalization to the bath, which happens on time scales not shorter than the relaxation time τ_1. In the presented experiment the time delay between the consecutive measurements, which is necessary for the qubit relaxation after the readout and thermalization of the qubit, was ∼ 10 τ_1, which takes up most of the measurement time, and defines the speed of the response of the system. Pulse application and the qubit state readout can be limited to sub-microsecond range. Finally, the total measurement time is defined by the averaging number t_total∼ N_av· 10 τ_1 = 10^6 τ_1, with N_av≃ 10^5, meaning a time scale of seconds or minutes per point, depending on τ_1.
§ CONCLUSION
In this work we experimentally realized equilibrium quantum thermometry of the cryostat temperature, based on the population distribution measurements of the three lowest energy levels of a transmon qubit. The presented technique allowed for temperature measurements in the range of 60-220mK, in which the qubit population follows the Maxwell-Boltzmann distribution. Analysis of temperature dependencies of the qubit relaxation rate and the photon number showed that the qubit was thermalized to the cryostat. The working temperature range was defined by saturation of the effective qubit temperature in the low temperature limit and the quasiparticle suppression of the qubit relaxation time above 200 mK. We compared the experimental data to a numerical model describing the time evolution of the qubit population, showing good agreement. We believe a similar approach could be exploited for quantum thermometry of various on-chip structures and for studies of the coupling of quantum objects to different thermal baths. Finally, this experimental platform allows for exploitation of quantum thermometry algorithms including nonequilibrium techniques for faster measurements and dynamical control of the thermometers for enhanced precision.
The authors are grateful to Gershon Kurizki for fruitful discussions and Yu-Cheng Chang for assistance with measurements and fabrication.
We acknowledge the provision of facilities and technical support by Aalto University at OtaNano - Micronova Nanofabrication Centre and OtaNano - Low Temperature Laboratory.
§ RELAXATION AND EXCITATION RATES FOR COUPLING TO A THERMAL BATH
Let us consider a quantum oscillator based on a resonator with frequency ω_0 = 1/√(LC) and characteristic impedance Z_0=√(L/C)=50 Ω, where L and C are the inductance and capacitance of the resonator, respectively, to which we connect an ohmic bath with resistance R and temperature T. Then the excitation and relaxation rates Γ_↑, ↓ caused by the bath can be written as <cit.>
Γ_↑,↓ = (Z_0/R) [∓ω_0/(1-exp(±ħω_0β))],
where β = 1/k_B T is the inverse temperature and k_B is the Boltzmann constant. For a more general case the resistance R should be replaced by the real part of the bath impedance Z. Thus, we can introduce the quality factor Q=Re Z/Z_0 and the resonator linewidth γ = ω_0/Q, giving us
Γ_↑,↓ = γ [∓1/(1-exp(±ħω_0β))].
This can be further shortened for the sake of notation brevity to
Γ_↓ = γ[ n(ω, T)+1 ],
Γ_↑ = γ n (ω, T),
where n=1/[exp(ħω_0β)-1] is the Bose-Einstein distribution.
The total decay rate in this case would be
Γ_1 = Γ_↓ + Γ_↑ = γ[ 2n(ω, T)+1 ].
Analogous expressions can be readily applied to the qubit-bath interaction (see for example <cit.>).
§.§ Effective Temperature of a Two-Level System
Let us consider a qubit with the ground state |g⟩ and excited state |e⟩, thermalized with a bath at temperature T. Then the population ratio of the qubit levels should follow the detailed balance principle p_e/p_g=e^-ħωβ = Γ_↑/Γ_↓, and <cit.>
p_g = Γ_↓/Γ_↓+Γ_↑,
p_e = Γ_↑/Γ_↓+Γ_↑.
Also, we can write it in terms of the qubit polarization:
⟨σ_z ⟩ = -p_g + p_e = -tanh(ħωβ/2).
Energy relaxation rate is just a sum of the two rates:
γ_1 = 2 π/τ_1 = Γ_↓+Γ_↑.
§.§ Transmon Qubit Coupled to Several Baths
Here we will extend the results from the previous section to the case when a quantum object interacts with a number of uncorrelated baths with different temperatures T^(i) and coupling rates Γ_↑,↓^(i). As in previous section, the only requirement for the baths is that the corresponding rates obey the detailed balance principle. In this case we can just sum up the rates when calculating the level population:
p_g = ∑_iΓ_↓^(i)/∑_i(Γ_↓^(i)+Γ_↑^(i)),
p_e = ∑_iΓ_↑^(i)/ ∑_i(Γ_↓^(i)+Γ_↑^(i)),
p_e/p_g=Σ_iΓ_↑^(i)/Σ_iΓ_↓^(i)=Σ_iγ_1^(i) n(T^(i), ω)/Σ_iγ_1^(i)[n(T^(i), ω)+1]
The qubit energy relaxation rate writes
γ_1 = ∑_i(Γ_↓^(i)+Γ_↑^(i))=∑_iγ_1^(i)[2 n(ω, T^(i))+1].
One can use the same considerations for a resonator (harmonic oscillator). Then again the detailed balance principle should be applied to get p_n+1/p_n = ∑_iΓ_↑^(i)/ ∑_iΓ_↓^(i), which is positive, less than unity and does not depend on n. Thus we can find a number β_eff = 1/k_B T_eff such that p_n+1/p_n = exp(-ħωβ_eff), leading to
n = ∑_iΓ_↑^(i)/∑ _i(Γ_↓^(i)-Γ_↑^(i)) = ∑_iγ_i n(T^(i), ω)/∑_iγ_i= n(T_eff, ω).
Effective resonator temperature is then dependent on the number of photons n:
T_eff^r = (ħω_r/k_B) [ln((n+1)/n)]^-1.
All considerations above can be generalized to the case of an arbitrary number of baths.
§ NUMERICAL MODEL
To describe the dynamics of the qubit population distribution, firstly we neglect effects of the readout resonator, and secondly, restrict ourselves by considering only the three lowest energy states of the transmon, |g⟩, |e⟩, |f⟩.
We assume that the initial qubit state is thermal, which means that its density matrix is diagonal. Then we track the dynamics of only these diagonal terms, characterizing the population distribution, which significantly simplifies the calculations.
We include only sequential decay and excitation processes with rates Γ^ge_↓,↑ and Γ^ef_↓,↑ for g-e and e-f transitions, respectively. Then the free time evolution of the qubit populations can be described by a system of differential equations:
ṗ_f = p_eΓ^ef_↑-p_fΓ^ef_↓,
ṗ_e = p_gΓ^ge_↑ + p_fΓ^ef_↓-p_e(Γ^ge_↓ + Γ^ef_↑),
ṗ_g = p_eΓ^ge_↓ -p_gΓ^ge_↑ ,
where dotted functions denote time derivatives. The system has an analytical solution:
p_f(t) = ζ_f e^α_0 t + η_f e^α_1 t + ξ_f,
p_e(t) = ζ_e e^α_0 t + η_e e^α_1 t + ξ_e,
p_g(t) = ζ_g e^α_0 t + η_g e^α_1 t + ξ_g.
We find the following coefficients:
α_0,1 = 1/2 { -(Γ^ef_↑ + Γ^ef_↓ + Γ^ge_↑ + Γ^ge_↓) ± [(Γ^ef_↑ + Γ^ef_↓ + Γ^ge_↑ + Γ^ge_↓)^2 - 4(Γ^ge_↓Γ^ef_↓ + Γ^ge_↑Γ^ef_↑ + Γ^ge_↑Γ^ef_↓)]^1/2 },
ζ_f = Γ^ef_↑(p_e^(0) - ξ_e) - (Γ^ef_↓ + α_1)(p_f^(0)-ξ_f)/α_0-α_1,
ζ_e = α_0+Γ^ef_↓/Γ^ef_↑ζ_f, ζ_g = Γ^ge_↓/Γ^ef_↑α_0+Γ^ef_↓/α_0+Γ^ge_↑ζ_f,
η_f = (Γ^ef_↓ + α_0)(p_f^(0)-ξ_f) - Γ^ef_↑(p_e^(0) - ξ_e)/α_0-α_1,
η_e = α_1+Γ^ef_↓/Γ^ef_↑η_f, η_g = Γ^ge_↓/Γ^ef_↑α_1+Γ^ef_↓/α_1+Γ^ge_↑η_f,
ξ_f = 1/𝒵Γ^ef_↑Γ^ge_↑/Γ^ef_↓Γ^ge_↓, ξ_e = 1/𝒵Γ^ge_↑/Γ^ge_↓, ξ_g = 1/𝒵,
where the canonical partition function 𝒵=1+Γ^ge_↑/Γ^ge_↓ + Γ^ef_↑Γ^ge_↑/Γ^ef_↓Γ^ge_↓ imposes the normalization condition p_g + p_e + p_f = 1.
The schematic representation of the model is presented in Fig. <ref> c. We prepare the qubit with initial population p_g,e,f^(0) and, for a fully thermalized qubit in the steady state (t→∞), observe residual populations p_i^(∞) = ξ_i. To mimic state manipulation with microwave pulses, we define
pulse matrices acting on a population vector p⃗(t) = (p_g(t), p_e(t), p_f(t))^T:
M_ge = [ 1-δ_ge δ_ge 0; δ_ge 1-δ_ge 0; 0 0 1 ],
M_ef = [ 1 0 0; 0 1-δ_ef δ_ef; 0 δ_ef 1-δ_ef ],
where we introduce a phenomenological efficiency parameter 0≤δ≤ 1 to simulate effects such as variation of pulse length and amplitude, detuning and finite fidelity.
Unlike the real control pulses, these matrices act on the qubit population instantaneously. It is a realistic assumption if the pulse lengths are much smaller than the relaxation times. In this case the π-pulse duration corresponds to the time delay between application of corresponding instantaneous population transformations.
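A compact numerical sketch of this model is given below: the rate equations are integrated during the readout window after instantaneous, finite-efficiency pulse matrices are applied to a thermal initial state; detailed-balance rates for each transition and the example parameter values are assumptions of the sketch.

```python
# Sketch: three-level population evolution with instantaneous finite-efficiency
# pi-pulses, following the rate equations and pulse matrices above.
# Detailed-balance rates and the example parameter values are assumptions.
import numpy as np
from scipy.constants import hbar, k as k_B
from scipy.integrate import solve_ivp

f_ge, alpha_h, T = 6.65e9, -230e6, 0.150          # Hz, Hz, K
gamma_ge, gamma_ef = 1 / 20e-6, 2 / 20e-6         # relaxation rates; tau1_ef ~ tau1_ge/2

def rates(gamma, f):
    n = 1.0 / np.expm1(hbar * 2 * np.pi * f / (k_B * T))
    return gamma * (n + 1.0), gamma * n           # Gamma_down, Gamma_up

Gge_d, Gge_u = rates(gamma_ge, f_ge)
Gef_d, Gef_u = rates(gamma_ef, f_ge + alpha_h)

def rhs(t, p):                                    # p = (p_g, p_e, p_f)
    pg, pe, pf = p
    return [pe * Gge_d - pg * Gge_u,
            pg * Gge_u + pf * Gef_d - pe * (Gge_d + Gef_u),
            pe * Gef_u - pf * Gef_d]

def M_ge(d):                                      # pi_ge pulse with efficiency d
    return np.array([[1 - d, d, 0], [d, 1 - d, 0], [0, 0, 1]])

def M_ef(d):                                      # pi_ef pulse with efficiency d
    return np.array([[1, 0, 0], [0, 1 - d, d], [0, d, 1 - d]])

# thermal initial state, then a pi_ge pi_ef sequence, then decay during readout
w = np.exp(-hbar * 2 * np.pi * np.array([0.0, f_ge, 2 * f_ge + alpha_h]) / (k_B * T))
p0 = w / w.sum()
p_after_pulses = M_ef(0.95) @ (M_ge(0.95) @ p0)
sol = solve_ivp(rhs, (0.0, 2e-6), p_after_pulses, t_eval=[2e-6])  # 2 us readout window
print(sol.y[:, -1])                               # populations at the end of the readout
```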
We studied the influence of the pulse efficiency on the error of the qubit temperature measurement (see Fig. <ref>). Our model predicts a relative error of less than 10% at 200 mK, which is comparable with the experimental data.
§ ERROR ESTIMATION
§.§ Mutual Dependencies
As described in the main text, for thermometry we use the temperature dependent functions A, B and C, each of which can be found in three different ways (see Table <ref>), giving nine different values for the temperature. Though the calculation of these parameters is based on six independent measurements x_j, y_k, only six of the functions A_i, B_i, C_i are mutually independent. This can easily be shown from the relation A_i× B_i=C_i, i ∈{1,2,3}, which also leads to mutual dependencies of the corresponding measurement errors. Indeed, one can get the same relation between the relative errors of A, B and C:
(Δ C/C)^2 = (Δ A/A)^2 + (Δ B/B)^2.
From this equation one can get the relation between the absolute errors:
(Δ C)^2 = (C/A)^2 (Δ A)^2 + (C/B)^2 (Δ B)^2.
In the low temperature limit we assume p_f = 0, p_e≪ p_g ≈ 1. Using Eqs. (<ref>), (<ref>) we can get C/B ≈ 1 and C/A ≈ p_e, and as a result (Δ C)^2 ≈ (Δ B)^2 + p_e^2 (Δ A)^2. If absolute errors for A, B and C are of the same order of magnitude (it is valid for most of the experimental results), absolute and relative errors for B and C are approximately equal: (Δ C)^2 ≈ (Δ B)^2 and |Δ C/C| ≈ |Δ B/B|.
In order to find the relation between the errors of A, B and C and the measured temperature T one can use logarithmic derivatives of the functions A(T), B(T) and C(T). In order to simplify the final expressions, in the low temperature limit expressions we can write
|Δ T/T|_A = (e^x/x)|Δ A| = [(e^x-1)/x] |Δ A/A|,
|Δ T/T|_B = [(e^x-1)^2/(x e^x)] |Δ B| = [(e^x-1)/(x e^x)] |Δ B/B|,
|Δ T/T|_C = (e^x/x)|Δ C| = (1/x)|Δ C/C|,
where x = ħω_ge / k_B T. At low temperatures x →∞, Eqs. (<ref>) and (<ref>) give approximately equal errors, meaning that the relative errors of the temperatures T_B and T_C are similar in the relevant temperature range. That agrees very well with the data presented in this paper (see Sec. <ref> and <ref>) and in Ref. <cit.>.
Note, that although the coefficient between the relative error of T and the relative error of A in Eq. (<ref>) is e^ħω_ge / k_B T times higher than the same coefficients in Eqs. (<ref>) and (<ref>), the A itself is e^ħω_ge / k_B T times higher than B and C, so it means that relative error for T_A could be the same as for T_B and T_C. The exact relation between errors for T_A and for T_B and T_C depends on particular pure state responses and voltage noise, as will be shown in the next subsection.
§.§ Further Error Analysis
One can notice that columns 1, 2 and 3 in Table <ref> have the same structure: A, B and C from one column could be written in a form A = b/c, B = a/b and C = a/c, where a, b and c are the measured voltage differences. For example, in the first column a = x_2 - y_2, b = x_0 - x_1 and c = y_0 - y_1. Also, one can write these voltage differences in the following form: a = (p_e - p_f)Δφ_i, b = (p_g - p_e)Δφ_i, c = (p_g - p_f)Δφ_i. Here Δφ_i = φ_j - φ_k for i,j,k ∈{g,e,f}, i≠ j ≠ k.
One can then make an additional assumption that the variance of a single measurement x_i, y_i is a sum of the voltage noise variance (defined by the measurement setup) and the variance of projective measurements with operator M̂ of the qubit in a thermal state with density matrix ρ̂_th: σ_x,y^2 = Tr(ρ̂_thM̂^2) - Tr(ρ̂_thM̂)^2. Based on this, one can find the absolute errors for the measurement differences a, b, c:
(Δ a)^2 = 2 p_e p_f Δφ_i^2
+ (p_e+p_f)p_g(Δφ_j^2 + Δφ_k^2) + σ_a^2,
(Δ b)^2 = 2 p_e p_g Δφ_i^2
+ (p_e+p_g)p_f(Δφ_j^2 + Δφ_k^2) + σ_b^2,
(Δ c)^2 = (p_g+p_e) p_f Δφ_i^2
+ (p_g+p_f)p_e(Δφ_j^2 + Δφ_k^2) + σ_c^2,
here σ_a^2, σ_b^2, σ_c^2 are the measurement voltage variances, which in general are not the same for different parameters a, b, c and could be found from the measurements. Relative errors for the voltage differences a, b and c could be written as:
(Δ a/a)^2 = 2 p_e p_f/(p_e-p_f)^2 + (p_e+p_f)p_g/(p_e-p_f)^2 F(Δφ_i)
+ σ_a^2/Δφ_i^2 (p_e-p_f)^2,
(Δ b/b)^2 = 2 p_e p_g/(p_g-p_e)^2 + (p_g+p_e)p_f/(p_g-p_e)^2 F(Δφ_i)
+ σ_b^2/Δφ_i^2 (p_g-p_e)^2,
(Δ c/c)^2 = (p_e+p_g) p_f/(p_g-p_f)^2 + (p_g+p_f)p_e/(p_g-p_f)^2 F(Δφ_i)
+ σ_c^2/Δφ_i^2 (p_g-p_f)^2,
where F(Δφ_i) = (Δφ_j^2 + Δφ_k^2)/Δφ_i^2, i≠ j ≠ k. Note that the function F(Δφ_i) reaches its minimum value of 0.5 when Δφ_i = 2Δφ_j = 2Δφ_k, i≠ j≠ k. This gives a condition for the optimal difference of the pure state responses. Importantly, this condition can be satisfied for only one out of the three differences Δφ_i at a time.
In order to find relative errors for the A, B, C, one can use the following expressions:
(Δ A/A)^2 = (Δ b/b)^2 + (Δ c/c)^2,
(Δ B/B)^2 = (Δ a/a)^2 + (Δ b/b)^2,
(Δ C/C)^2 = (Δ a/a)^2 + (Δ c/c)^2.
Using Eqs. (<ref>, <ref>, <ref>) for relative errors for a, b and c, one can substitute it to the equations above and get the final (but extremely bulky) result.
As in a previous subsection, one can find absolute errors for A, B, C in the low-temperature limit (p_f = 0, p_e≪ p_g ≈ 1):
(Δ A)^2 ≈ 2 p_e + F(Δφ_i) + σ_b^2/Δφ_i^2 + σ_c^2/Δφ_i^2,
(Δ B)^2 ≈ p_e F(Δφ_i) + σ_a^2/Δφ_i^2,
(Δ C)^2 ≈ p_e F(Δφ_i) + σ_a^2/Δφ_i^2.
These equations illustrate several important features of errors of the functions A, B and C. Firstly, the relative errors |Δ A/⟨ A ⟩|, |Δ B/⟨ B ⟩| and |Δ C/⟨ C ⟩| grow as 1/√(p_e)∝exp(ħω_ge/2k_B T) when T → 0. Secondly, precision of the method could be limited by voltage noise, as it is for the measurements presented in this paper. Thirdly, in general, measurement precision highly depends on the differences Δφ_i of the pure state responses. On one hand, it gives some room for the measurement protocol optimization in terms of maximization of the Δφ_i^2 and minimization of F(Δφ_i). But, unfortunately, this optimization should be done for each qubit separately. Finally, one can obtain the minimum possible values for the absolute errors in the low temperature limit: (Δ A)^2 ≈ 2 p_e +0.5, (Δ B)^2 ≈ 0.5 p_e, (Δ C)^2 ≈ 0.5 p_e.
To sum up, the precision of the temperature estimates obtained from the A, B and C measurements is limited by voltage noise and depends strongly on the differences Δφ_i of the pure state responses. So, in general, out of the nine possible ways to extract the temperature, there is no unique or universal answer as to which one is optimal. Instead, for every particular sample and measurement setup one needs to compare them and choose the best one.
§ SAMPLE FABRICATION
The fabrication was done by multi-step electron-beam lithography (EBL) in a 100 kV Raith EBPG 5200 EBL system.
First, pristine high-resistive (ρ > 10^4 Ohm·cm) undoped Si ⟨100⟩ 200 mm wafers were RCA-cleaned and magnetron sputtered with 10 nm of Al and 100 nm of Nb without a vacuum break. Then the wafers were spincoated with 400 nm thick layer of AR-P 6200.13 e-beam resist by spinning at 4000 rpm for 60 s and then baked at 160^∘C for 9 min. After that the samples were cleaved into ∼35×35 mm^2 pieces and loaded into the EBL system. Each wafer piece was used for fabrication of nine 7× 7 mm^2 chips. The exposure of the ground plane was done with the dose 350 μC/cm^2, 200 nA beam current and a 50 nm step size.
After that the e-beam resist was developed in AR-P 546-600 for 3 min 30 s and rinsed with IPA for 3 min. The etching of Nb was done in a RIE tool Oxford Instruments Plasmalab 80 Plus. Prior to the etching the empty chamber with a dummy quartz wafer was precleaned with a CF_4 +O_2 process using 100 sccm and 15 sccm flows of CF_4 and O_2 respectively at the pressure 600 mTorr, bias voltage 200 V and RF power 200 W during 10 min. Then the samples were loaded into the chamber and pumped to 1·10^-5 mbar. The RIE etching was done with a SF_6 +Ar process at 20 sccm flow of SF_6 and 10 sccm flow of Ar at the pressure of 15 mTorr, V_bias=360 V and RF power 100 W during 90 s. After that residues of the resist were removed with an O_2 descum process at the 40 sccm flow, pressure 250 mTorr, V_bias=320 V and RF power 150 W.
Then the sample was put into acetone for 4 min, rinsed with IPA and dried with N_2 gun. Removal of the Al stopping layer was done using AZ351B developer during 2 min. Then the sample was thoroughly rinsed with deionised water (DIW) for 10 min and dried by putting in IPA for 5 min with consecutive N_2 drying.
The next step was forming of Josephson junctions (JJs) (see Fig. <ref> c). For that the sample was spincoated by a MMA/PMMA e-beam resist bilayer. A 800 nm thick layer of MMA EL-11 was applied by two steps of spincoating at 4000 rpm and baking at 160^∘C for 2 min. Then a 200 nm layer of PMMA A4 was applied by spinning at 4000 rpm and baking at 160^∘C for 10 min.
Finally, an EBL step was performed to create a suspended mask for the subsequent JJ shadow deposition. The main and undercut doses for the exposure of submicron structures were 950 μC/cm^2 and 150 μC/cm^2, respectively, with a 1 nA beam current and a 4 nm stepsize.
The development of the mask was done in a two-step process: (i)
development in MIBK:IPA 1:3 mixture for 25 s and (ii) in methylglycol:methanol 2:1 mixture for 15 s with a following rinsing in IPA for 30 s.
After that the samples were loaded into an e-beam evaporator Plassys MEB700S2-III UHV for JJ deposition. First, the sample was pumped down to 5·10^-7 mbar in the loadlock and an Ar milling was done to remove the resist residues at Ar pressure of 4·10^-4 mbar with ion-beam gun (IBG) parameters of 60 mA beam current and 200 V beam voltage at angles ± 12^∘ for 30 s at each angle. Then the substrate was moved into the oxidation chamber and pumped to 1·10^-7 mbar. After that the chambers were connected and Ti gettering was performed at a rate of 0.5 Å/s for 5 min. Then the first layer of 25 nm thick Al was deposited at angle -12^∘ at 1 Å/s rate at pressure below 5·10^-8 mbar in the oxidation chamber, after which the evaporation chamber was closed and an oxidation was done at a pressure of 18 mbar during 7 min. The second layer of 45 nm thick Al was deposited at +12^∘ at 1 Å/s rate at a pressure below 1·10^-7 mbar in the oxidation chamber. Finally, another oxidation was performed at pressure 20 mbar during 10 min.
The lift-off procedure was done in hot acetone at 52^∘C
for 3 h, then the sample was rinsed with IPA and dried with a N_2 gun.
The third EBL step was then performed to create patches (see Fig. <ref> c) for a galvanic connection between the JJs and the Nb ground plane. The MMA/PMMA e-beam resist bilayer was applied as described above and after that the sample was loaded into the EBL tool. The patch exposure was done with a dose of 950 μC/cm^2, a 30 nA beam current and a 20 nm stepsize.
The resist was developed by putting into MIBK:IPA 1:3 mixture for 30 s and then into methylglycol:methanol 2:1 for 30 s followed by rinsing in IPA for 30 s.
For the patch deposition we again used the Plassys tool. Analogously to the previous step, the sample in the loadlock was pumped to 5·10^-7 mbar and a stronger Ar milling at zero angle was done for 2 min to remove the surface Nb and Al oxides. The Ar pressure was 4·10^-4 mbar and the IBG parameters were the following: 120 mA beam current, 400 V beam voltage.
Then the sample was moved to the oxidation chamber and pumped to 1·10^-7 mbar, after which a deposition of 150 nm of Al was performed at the rate of 2 Å/s and pressure below 1·10^-7 mbar in the oxidation chamber with consecutive oxidation at 20 mbar during 10 min.
Another lift-off step was done similarly to the one described above.
For the low-temperature measurements the wafers were scribed with a diamond tip and cleaved into separate chips. The chips were mounted on a low-temperature sample holder using BF-6 glue and then wire-bonded using FS Bondtec 5330 with Al wire.
|
http://arxiv.org/abs/2409.03572v1 | 20240905142611 | Extrinsic Principal Component Analysis | [
"Ka Chun Wong",
"Vic Patrangenaru",
"Robert L. Paige",
"Mihaela Pricop Jeckstadt"
] | stat.ME | [
"stat.ME"
] |
Extrinsic Principal Component Analysis
Ka Chun Wong^1, Vic Patrangenaru^1, Robert L. Paige^2, Mihaela Pricop Jeckstadt^3
Florida State University^1, Missouri S&T University^2,
Polytechnic University of Bucharest^3
USA^1,2 and Romania^3
§ ABSTRACT
One develops a fast computational methodology for principal component analysis on manifolds. Instead of estimating intrinsic principal components on an object space with a Riemannian structure, one embeds the object space in a numerical space, and the resulting chord distance is used.
This method helps us analyze high, and theoretically even infinite, dimensional data from a new perspective.
We define the extrinsic principal sub-manifolds of a random object on a Hilbert manifold embedded in a Hilbert space, and the sample counterparts.
The resulting extrinsic principal components are useful for dimension data reduction.
For application, one retains a very small number of such extrinsic principal components for a shape of contour data sample, extracted from imaging data.
Keywords: statistics on manifolds, extrinsic analysis, PCA, extrinsic mean, Kendall planar shape
MSC2020:Primary 62R30, 62H25, 62H35.
§ INTRODUCTION
Principal component analysis (PCA) is a classical tool in multivariate analysis, which plays an important role in dimension reduction.
Traditional principal component analysis, first proposed by Pearson (1901)<cit.>, is a classical dimension reduction method for high dimensional data.
PCA is widely used in multivariate analysis, helping in searching important covariates, visualizing data and so on.
In shape analysis, mean shape and principal component help in extracting the characteristic of the shape space.
D. G. Kendall's (1984)<cit.> groundbreaking paper first considered shapes of planar configurations of k labeled points (landmarks, k-ads) as points on a shape space Σ_2^k that turns out to be homeomorphic to the complex projective space ℂP^k-2.
Statisticians started building methodology for Kendall shape, including Kent (1992)<cit.>, Ziezold (1994)<cit.> and so on.
Huckemann and Ziezold (2006)<cit.> proposed a principal component analysis for Riemannian manifolds based on geodesic distance on the intrinsic metric. That was the beginning of intrinsic PCA on manifolds.
Mardia et. al. (2022)<cit.> advanced research on nested spheres PCA.
For more reference on the subject of PCA on manifolds see <cit.> <cit.> <cit.>.
On the other hand, there is also a discussion of extrinsic means for shape analysis or, more generally, of means on manifolds in Patrangenaru and Ellingson (2016)<cit.>.
In particular, Patrangenaru (1998)<cit.> introduced the term of Veronese-Whitney (VW) extrinsic mean planar Kendall shape in terms of the VW embedding of ℂ P^k-2 into the space S(k-1,ℂ) of selfadjoint (k-1) × (k-1) matrices introduced by Kent(1992)<cit.>.
In depth results on the asymptotic distribution of this VW mean shape and the resulting bootstrap distribution are given in Bhattacharya and Patrangenaru (2005)<cit.>, Bandulasiri et al. (2009)<cit.> and Amaral et al. (2010)<cit.>. Results were extended to infinite dimensional planar shapes of contours in
Ellingson et al (2013)<cit.>, where a discussion on extrinsic mean of a random object on a Hilbert manifold was first considered, and applied to mean Kendall shapes of random contours; as an application, a comparison of the contour of ( the midsection of) the Corpus Callosum (CC) of Albert Einstein with the VW mean CC of senior individuals was given by Qiu et al (2014)<cit.>.
In this paper we propose a method of PCA based on the chord distance on a manifold embedded in an Eucidean space.
This approach via the PCA in the ambient space where the manifold is embedded has the advantage of being faster and conceptually more consistent than the intrinsic PCA, since from the onset the extrinsic principal submanifolds go through the extrinsic mean, unlike the intrinsic approach, which does not assure this basic compatibility, as shown by Huckemann and Ziezold (2006)<cit.>. Our novel approach consists in conducting PCA on the tangent space at the extrinsic mean of the embedded manifold, a method that is more efficient than intrinsic PCA.
Section 2 briefly reviews the notion of dimensionality for object data on manifolds.
In Section 3 we recall basic results on extrinsic means, including their definition, uniqueness and computation.
Section 4 is dedicated to introducing extrinsic principal component analysis and provides some basic related results.
In Section 5, we introduce the reader to Kendall shapes of planar contours. A concrete example of a drastic dimension reduction for Kendall shape of planar contour data extracted from camera images is given here as well.
§ DIMENSIONS AND MANIFOLDS
In data analysis, the first considerations are about the dimensionality of the data; especially when this dimension is high including in image analysis, bio-informatics or functional data analysis.
Basically data dimension is the local number of covariates fully describing the data.
Functional data is assumed to be infinitely dimensional, although it is impossible to measure infinitely many covariates.
When it comes to imaging data, widely available to users, it is more difficult to define "dimension", as it depends on many factors, including RGB and relative position of the observer facing the imaged scene.
The issue of image data dimensionality could be solved only by introducing various concepts of shape.
For example two planar rigid configurations of points have the same Kendall shape if they differ by a direct similarity;
two planar rigid configurations pictured from different remote view points have the same affine shape; and,
two planar rigid configurations pictured from different arbitrary view points have the same projective shape.
3D Kendall shapes, 3D affine shapes or 3D projective shapes are similarly defined.
Different types of shapes of k-ads (configurations of k labeled landmarks) can be represented on corresponding types of shape spaces.
Such object spaces of k-ads are orbifolds -quotients of manifolds by certain pseudo-group actions.
Orbifolds are manifolds or, in general, stratified spaces having a dimension, which is the dimension of the tangent space at a given regular point on the space of orbits.
For example the space of planar Kendall shapes of contours is a Hilbert manifold - ℂ P (ℍ), the projective space of a complex Hilbert space.
The dimension of a manifold is the dimension, over the reals, of the linear space, modeling that manifold.
For example the dimension of the round unit sphere 𝕊^d = {x ∈ℝ^{d+1}: ||x|| = 1} is d,
the dimension of the planar Kendall shape space of k-ads is 2k-4 and the dimension of the projective shape space of k-ads in 3D, is 3k-15 (see Kendall(1984)<cit.>).
In manifold statistics, we always consider the distance between objects. Once that is known, we have to define the notion of random object, or random element, according to Fréchet(1948)<cit.>.
Assume (Ω,𝒜, ℙ) is a probability space and ℬ_ℳ is the Borel σ-algebra on the manifold
ℳ. A random object (r.o.) is a function X:Ω→ℳ, s.t. ∀ B∈ℬ_ℳ, X^-1(B)∈𝒜. The probability measure Q=ℙ_X associated with X is defined via Q(B)=ℙ(X^-1(B)).
There are two main types of distances considered on a manifold ℳ (see Patrangenaru and Ellingson (2016)<cit.>).
One is geodesic distance ρ^g associated with a Riemannian structure g on ℳ.
The other is chord distance ρ_j associated with an embedding j :ℳ→ℝ^N.
A statistical data analysis on a manifold is intrinsic, if the distance considered is a geodesic distance, and, it is extrinsic, if the distance considered is a chord distance.
We can see that the intrinsic metric, even in simple cases such as that of a r.o. on a round sphere, leads to iterative algorithms for computing the intrinsic sample mean, so the calculations become time consuming for the user.
Most of the time, extrinsic data analysis is faster, since the extrinsic mean is obtained immediately by projecting the mean in the ambient space on the image of the embedded manifold j(ℳ). Therefore, whenever one has a choice, it is preferable to work with a chord distance (see Bhattacharya et al(2012)<cit.>).
§ EXTRINSIC MEAN AND EXTRINSIC COVARIANCE MATRIX
In this section, we will introduce the notions of extrinsic mean and of extrinsic covariance matrix of a random object on a manifold, related notations and preliminaries.
A general reference for this section is Patrangenaru and Ellingson(2016)<cit.>.
We will also show how to compute their sample estimates.
To start with, we first focus on extrinsic mean, before we move on to the extrinsic covariance.
Assume (ℳ,ρ) is a complete metric space, with a manifold structure and Q = P_X is a probability measure on ℳ associated with a random object X.
A Fréchet mean is a minimizer of the Fréchet function which is the expected square distance from a point to the random object X
F(x) = ∫ρ^2(x,y)Q(dy).
Consider an embedding j:ℳ→ℝ^N, with the induced chord distance ρ_j(x,y) = ||j(x)-j(y)||.
Assume (ℳ, ρ_j) is a complete metric space such that j(ℳ) is a closed submanifold of ℝ^N.
Then we have the following
Let Q be a probability measure on ℳ with the chord distance ρ_j.
The set of minimizers of ℱ in (<ref>) is called the extrinsic mean set of Q.
If the extrinsic mean set has only one point, that point is called the extrinsic mean and it is labeled
μ_j,E(Q), or μ_E(Q) or μ_j, or μ_E.
To understand the extrinsic mean, it is important to understand about the embedding j.
Assume ρ_0 is the Euclidean distance in ℝ^N. A point
x of ℝ^N such that there is a unique point p in
ℳ for which ρ_0(x,j(ℳ)) = ρ_0(x, j(p)) is called
a j-nonfocal point. A point which is not j-nonfocal is said to be
a j-focal point.
For example, if j(x) = x is the inclusion map, then it is easy to see that the center of the unit sphere 𝕊^N = { x , ||x|| = 1} is the only focal point of j, since ρ_0(O,j(𝕊^N)) = ρ_0(O, j(p)) ∀ p ∈𝕊^N, where O is the origin.
A probability measure Q on ℳ induces a probability
measure j(Q) on ℝ^N.
A probability measure Q on ℳ is
said to be a j-nonfocal probability measure if the mean μ of j(Q) is a
j-nonfocal point.
Let ℱ^c be the set of j-nonfocal points.
A projection P_j : ℱ^c → j(ℳ) is a function y=P_j(x) such that, for any x ∈ℱ^c, y is the unique point with ρ_0(x,j(ℳ)) = ρ_0(x, y).
Assume μ is the mean of j(Q) in ℝ^N.
Then (a) the extrinsic mean set is the set of all points p∈ℳ, with ρ_0(μ,j(p)) = ρ_0 (μ,j(ℳ)) and (b) If μ_j,E(Q) exists then μ exists and is j-nonfocal and μ_j,E(Q) =j^-1(P_j(μ)).
The set of focal points of a submanifold ℳ of
ℝ^N that has no flat points (points of zero curvature) with the induced Riemannian structure, is a closed subset of ℝ^N of Lebesgue
measure 0.
Consider an embedding j:ℳ→ℝ^N.
Assume (x_1,...,x_n) is a sample from a j-nonfocal probability
measure Q on ℳ, and the function p→ (1/n)∑^n_r=1 ||j(p) - j(x_r)||^2 has a unique minimizer on ℳ; this
minimizer is the extrinsic sample mean, obtained as the projection of the mean vector.
From Theorem <ref> the extrinsic sample mean is
given by
x_E := j^-1(
P_j(j(x)))
(Bhattacharya and Patrangenaru(2003)).
Assume Q is a j-nonfocal probability measure on the manifold ℳ and X = { X_1, … , X_n} are i.i.d.r.o.'s from Q.
(a) If the sample mean j(X) is a j-nonfocal point, then the extrinsic sample mean is given by j^-1(P_j(j(X))).
(b) X_E is a consistent estimator of μ_j,E(Q).
To sum up, we defined the extrinsic mean above and provided some results on its existence.
We also discussed the extrinsic sample mean; both notions help us in computing the extrinsic (sample) covariance matrix.
We now turn to the extrinsic covariance matrix.
Assume ℳ is a m dimensional manifold and j:M→^N is an embedding on ℳ such that j(ℳ) is closed in ^N.
Q is a j-nonfocal probability measure on ℳ such that j(Q) has finite moments of order two (or of sufficiently high order as needed).
Assume (X_1,…,X_n) are i.i.d. ℳ-valued random objects with common probability distribution Q.
Recall the extrinsic mean μ_E(Q) = μ_j,E(Q) of the measure Q on the manifold ℳ relative to the embedding j is the Fréchet mean associated with the restriction to j(ℳ) of the Euclidean distance in ℝ^N.
Let μ and Σ be the mean and covariance matrix of j(Q) respectively regarded as a probability measure on ℝ^N.
Let ℱ be the set of focal points of j(ℳ), and let P_j : ℱ^c → j(ℳ) be the projection on j(ℳ).
P_j is differentiable at μ and has the differentiability class of j(ℳ) around any nonfocal point.
In order to evaluate the differential d_μ P_j we consider a special orthonormal frame field that will ease the computations.
Assume p → (f_1(p), … ,f_m(p)) is a local frame field on an open subset of ℳ such that, for each p ∈ M, (d_pj(f_1(p)), … ,d_pj(f_m(p))) are orthonormal vectors in ℝ^N.
A local frame field (e_1(y),e_2 (y) ,…, e_N(y)) defined on an open neighborhood U ⊆ℝ^N is adapted to the embedding j if it is an orthonormal frame field and
(e_r(j(p)) = d_pj(f_r(p)), r = 1, …, m, ∀ p ∈ j^-1(U).
Let e_1,e_2 ,…, e_N be the canonical basis of ℝ^N and assume (e_1(y),e_2 (y) , …, e_N(y)) is an adapted frame field around P_j(μ)=j(μ_E).
Then d_μ P_j(e_b) ∈ T_P_j(μ)j(ℳ) is a linear combination of e_1(P_j(μ)),e_2(P_j(μ)), …,e_m(P_j(μ)):
d_μ P_j (e_b) = ∑_a = 1^m (d_μ P_j(e_b)) · e_a
(P_j(μ))e_a( P_j(μ)), ∀ b = 1, …, N.
By the delta method, n^1 2 (P_j(j(X)) - P_j(μ)) converges weakly to a random vector V having a 𝒩_N(0, Σ_μ) distribution.
Here j(X) = 1/n∑_i=1^nj(X_i) and
Σ_μ = [ ∑^m_a=1 d_μ P_j (e_b) · e_a(P_j
(μ)) e_a (P_j (μ)) ]_b=1,...,NΣ
[ ∑^m_a=1 d_μ P_j (e_b) · e_a (P_j (μ)) e_a(P_j(μ))
]_b=1,...,N^T,
where Σ is the covariance matrix of j(X_1) w.r.t. the canonical basis e_1, …, e_N.
The asymptotic distribution 𝒩_N( 0 , Σ_μ ) is degenerate and can be regarded as a distribution on T_P_j(μ)j(ℳ), since the range of d_μ P_j is a subspace of T_ P_j(μ)j(ℳ). Note that
d_μ P_j (e_b) · e_a(P_j(μ)) =0, for
a = m + 1,…, N.
We provide below a CLT which applies to an arbitrary embedding, leading to pivots that are independent of the chart used.
The tangential component tan(v) of v ∈ℝ^N w.r.t. the basis e_a (P_j
(μ))∈ T_P_j (μ)j(ℳ), a = 1,…, m is given by
tan(v) = (e_1 (P_j (μ))^T v … e_m (P_j (μ))^T v )^T.
Then the random vector (d_μ_Ej)^-1(n^1/2(P_j(j(X))-P_j(μ))) has the following covariance matrix w.r.t. the basis f_1(μ_E),⋯, f_m(μ_E):
Σ_j,E = (e_a(P_j(μ))^TΣ_μe_b(P_j(μ)))_1 ≤ a,b ≤ m =
[ ∑ d_μ P_j (e_b) · e_a (P_j(μ)) ]_a=1,...,mΣ[ ∑ d_μ P_j (e_b) · e_a(P_j (μ))]_a = 1,…, m^T .
The matrix Σ_j,E given by (<ref>) is the extrinsic covariance matrix of the j-nonfocal distribution Q ( of X_1) w.r.t. the basis f_1(μ_E),…, f_m(μ_E).
When j is fixed in a specific context, the subscript j in Σ_j,E may be omitted .
In order to find a consistent estimator of Σ_j,E, note that j(X) is a consistent estimator of μ, d_j(X)P_j converges in probability to d_μ P_j, and e_a(P_j(j(X))) converges in probability to e_a(P_j(μ)) and, further,
S_j,n = n^-1∑ (j(X_r) - j(X))(j(X_r) - j(X))^T
is a consistent estimator of Σ.
It follows that
[ ∑_a=1^m d_j(X) P_j (e_b) ·
e_a(P_j(j(X)))e_a(P_j(j(X)))]S_j,n
[ ∑_a=1^m d_j(X) P_j (e_b) ·
e_a(P_j(j(X)))e_a(P_j(j(X)))]^T
is a consistent estimator of Σ_μ, and tan_P_j(j(X))(v) is a consistent estimator of tan(v).
If we take the components of the bilinear form associated with the matrix (<ref>) w.r.t. e_1(P_j(j(X))),e_2(P_j(j(X))),...,e_m(P_j(j(X))), we get a consistent estimator of Σ_j,E, called the sample extrinsic covariance matrix, given by
S_j,E,n=[[ ∑ d_j(X) P_j (e_b) ·
e_a(P_j(j(X)))]_a=1,...,m]
· S_j,n
[[∑ d_j(X) P_j (e_b) ·
e_a(P_j(j(X)))]_a=1,…,m]^T
§ EXTRINSIC PRINCIPAL COMPONENTS
Principal component analysis seeks a space of lower dimensionality, known as the principal subspace such that the orthogonal projection of the data points onto the subspace maximize the variance of the projected points.
To achieve this goal, we are looking for the eigenvectors of the covariance matrix as the principal components. There we select the largest eigenvalues, since they contribute to most of the variability in the data.
The extrinsic principal components work in a similar way.
But instead of using the covariance matrix of the data set, we use the sample extrinsic covariance matrix.
The extrinsic principal components of the j-nonfocal r.o. X on ℳ w.r.t. the basis f_1(μ_E),…, f_m(μ_E) of the tangent space T_μ_Eℳ are 1D submanifolds of ℳ going through the extrinsic mean, obtained by taking the j-preimage of the intersection, with j(ℳ), of the affine subspace generated by the eigenvectors v_i, i=1,…,m of the matrix Σ_j,E (corresponding to the eigenvalues λ_i, i=1,…,m, listed in descending order) and by the orthocomplement of the tangent space at the extrinsic mean.
Here we assume the eigenvalues are simple.
If the extrinsic covariance has an eigenvalue λ with multiplicity k > 1, we will define in a similar way the extrinsic principal subset of X corresponding to this eigenvalue as follows :
We will take instead the affine subspace generated by the eigenspace of λ and by the orthocomplement of the tangent space of μ_E.
We can consider the principal subspaces generated by the d_μ j images of the first k eigenvectors of Σ_E and by the orthocomplement as an affine subspace of ^N.
This subspace intersects j(ℳ) along a subset which is locally sub-manifold whose j-preimage is the principal sub-manifold of ℳ including the first principal extrinc curves.
We will give some example with different types of data.
The extrinsic sample principal components associated with a random sample x_1,…,x_n are defined by taking, in the above definition, the probability measure Q to be the empirical measure Q̂_n = (1/n)∑_i=1^n δ_x_i.
§.§ Simulated example : Spherical Data
The following example shows the extrinsic principal components of a set of spherical data.
In this example we will illustrate the extrinsic principal components in a graphical way by a simple example.
We pseudo-randomly generate 300 points on a two dimensional unit sphere.
To highlight the result easily, the generated data points are concentrated mainly in one direction.
In this case the projection P_j of any point x ∈ℝ^3∖{0} onto the sphere S^2 is given by P_j(x) = x/||x||.
Results are shown in figure <ref> and table <ref>.
The red great circle in Figure <ref> is the first sample extrinsic principal component and the green circle is the second sample extrinsic principal component. The intercept of these two great circles is the extrinsic sample mean.
As we see, the data mainly distributed along the first principal component.
The first extrinsic principal component explains over 87% of the data in this example.
To obtain the data projected onto the first principal component, we first project the data onto the tangent space at the extrinsic sample mean, T_P_j(j(X))j(ℳ), as shown in Figure <ref>.
Within this tangent space we project the data onto the first principal component (the vector in red color),
and then re-project those data back onto the sphere through the origin.
The result is shown in Figure <ref>.
The projected data (red points) lie on the first principal component on the sphere (the yellow line), each point mapped along the shortest distance.
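A minimal computational sketch of the procedure just described (not the code used to produce the figures; the concentrated toy data below are an assumption) is the following.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=[3.0, 1.0, 0.5], scale=[1.0, 0.4, 0.1], size=(300, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # 300 points on the unit sphere

mu = X.mean(axis=0)
mu_E = mu / np.linalg.norm(mu)                         # extrinsic sample mean P_j(mean)

B = np.linalg.svd(mu_E.reshape(1, 3))[2][1:]           # orthonormal basis of the tangent plane at mu_E
T = (X - np.outer(X @ mu_E, mu_E)) @ B.T               # tangent-plane coordinates (n x 2)

evals, evecs = np.linalg.eigh(np.cov(T, rowvar=False))
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
print("proportion explained by 1st extrinsic PC:", evals[0] / evals.sum())

# project onto the first principal direction in the tangent plane,
# then re-project onto the sphere through the origin
proj = np.outer(T @ evecs[:, 0], evecs[:, 0]) @ B + mu_E
proj /= np.linalg.norm(proj, axis=1, keepdims=True)    # points on the first principal great circle
```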
§ EXTRINSIC PRINCIPAL COMPONENTS FOR SHAPES OF PLANAR CONTOURS
In this section we will focus on shape analysis of planar contours.
We will introduce the corresponding shape space and some statistics on this object space.
We will also give an extrinsic principal component analysis concrete example.
The general reference for this section is Patrangenaru and Ellingson(2016)<cit.>.
Kendall(1984) <cit.> showed that the space of direct similarity shapes of k planar landmarks can be represented as the manifold ℂ P^k-2.
More general, this is extended here to direct similarity shapes of planar contours.
We focus on contours, boundaries of 2D topological disks in the plane.
To keep the data analysis stable, and to assign a unique labeling, we make the generic assumption that across the population there is a unique anatomical or geometrical landmark starting point p_0 on such a contour of perimeter one, so that the label of any other point p on the contour is the "counterclockwise" travel time at constant speed from p_0 to p.
A regular contour γ̃ is regarded as the range of a piecewise differentiable regular arclength parameterized function γ: [0, L] →ℂ, γ(0) = γ(L), that is one-to-one on [0, L).
Two contours γ̃_1, γ̃_2 have the same direct similarity shape if there is a direct similarity S : ℂ→ℂ, such that S(γ̃_1) = γ̃_2.
Two regular contours γ̃_1, γ̃_2 have the same similarity shape if their centered counterparts satisfy to γ̃_2,0=λγ̃_1,0, for some λ∈ℂ\ 0.
Therefore Σ_2^reg, set of all direct similarity shapes of regular contours, is a dense and open subset of P(𝐇), the projective space corresponding to the Hilbert space 𝐇 of all square integrable centered functions from S^1 to ℂ. (see Ellingson et al (2013)<cit.>).
The space P(𝐇) is a Hilbert manifold.
We here introduce the Veronese-Whitney (VW) embedding j:P(𝐇) →ℒ_HS=𝐇⊗𝐇 given by
j([γ]) = (1/||γ||^2) γ⊗γ^*, [γ] ∈ P(𝐇).
The Veronese-Whitney mean ( VW mean) is the extrinsic mean for a random object X = [Γ] on P(𝐇) with respect to the VW embedding.
The VW extrinsic mean is [e_1], where e_1 is the eigenvector corresponding to the largest eigenvalue of E((1/||Γ||^2) Γ⊗Γ^*).
The VW extrinsic sample mean can be computed in a similar way.
Given a VW-nonfocal probability measure Q on P(𝐇), if X_1,…,X_n is a random sample from Γ, then the VW sample mean μ̂_E,n is the projective point of the eigenvector corresponding to the largest eigenvalue of (1/n)∑_i=1^n (1/||X_i||^2) X_i ⊗ X_i^*.
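A minimal sketch of this computation is given below; it assumes the contours are stored as rows of a complex matrix (one sampling point per column) and is not the authors' code. The returned eigenvector represents the VW sample mean shape up to rotation by a unit complex number.

```python
import numpy as np

def vw_sample_mean(Z):
    """VW extrinsic sample mean of planar contours stored as rows of a complex matrix Z (n x k)."""
    Z = Z - Z.mean(axis=1, keepdims=True)              # center each contour
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # scale so that ||X_i|| = 1
    K = (Z.T @ Z.conj()) / Z.shape[0]                  # (1/n) sum_i X_i X_i^*, Hermitian k x k
    evals, evecs = np.linalg.eigh(K)                   # eigenvalues in increasing order
    return evecs[:, -1]                                # eigenvector of the largest eigenvalue
```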
Once we have computed the extrinsic mean, the next step is the extrinsic covariance matrix.
The following result of Prentice (1984) <cit.> is also needed in the sequel.
(Prentice (1984) <cit.>)
Assume [X_i], ||X_i||=1, i=1,...,n, is a random sample from a j-nonfocal probability measure Q on ℝP^N-1.
Then the sample (VW-)extrinsic covariance matrix S_j,E is given by
S_j,E_ab = n^-1 (η_N - η_a)^-1 (η_N - η_b)^-1∑_i (m_a · X_i)(m_b · X_i)(m_N· X_i)^2,
where η_a, a =1,...,N, are eigenvalues of K := n^-1∑^n_i=1 X_i X^t_i in increasing order and m_a, a =1,...,N, are corresponding linearly independent unit eigenvectors.
Here we give a proof of formula (<ref>).
Since the map j is equivariant, w.l.o.g. one may assume that j(X_E)=P_j(j(X)) is a diagonal matrix, X_E = [m_N]=[e_N] and the other unit eigenvectors of j(X)=D are m_a=e_a, ∀ a=1,...,N-1.
We evaluate d_D P_j.
Based on this description of T_[x]ℝP^N-1, one can select in T_P_j(D)j(ℝP^N-1) the orthonormal frame e_a(P_j(D)) = d_[e_N]j(e_a). Note that S(N,ℝ) has the orthobasis F^b_a, b ≤ a, where, for a<b, the matrix F^b_a has all entries zero except for those in the positions (a,b), (b,a), which are equal to 2^{-1/2}; also F^a_a = j([e_a]).
A straightforward computation shows that if η_a, a =1,...,N, are the eigenvalues of D in their increasing order, then d_D P_j(F^b_a) = 0, ∀ b ≤ a < N and d_D P_j(F^N_a) = (η_N - η_a)^-1 e_a(P_j(D)); from this equation it follows that, if j(X) is a diagonal matrix D then the entry S_j,E_ab is given by
S_j,E_ab = n^-1 (η_N - η_a)^-1 (η_N- η_b)^-1∑_i X_i^a X_i^b (X_i^N)^2.
Taking j(X) to be a diagonal matrix and m_a = e_a formula (<ref>) follows.
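A minimal sketch implementing the sample extrinsic covariance formula above (assuming the data are unit vectors in ℝ^N stored as rows of a matrix; not the authors' code) is:

```python
import numpy as np

def sample_extrinsic_covariance(X):
    """Sample (VW-)extrinsic covariance for unit vectors x_i in R^N stored as rows of X (n x N)."""
    n, N = X.shape
    eta, m = np.linalg.eigh(X.T @ X / n)       # eigenvalues of K in increasing order
    scores = X @ m                             # (m_a . x_i) for all a
    w = scores[:, -1] ** 2                     # (m_N . x_i)^2
    S = np.empty((N - 1, N - 1))
    for a in range(N - 1):
        for b in range(N - 1):
            S[a, b] = np.sum(scores[:, a] * scores[:, b] * w) / (
                n * (eta[-1] - eta[a]) * (eta[-1] - eta[b]))
    return S
```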
§.§ Data Driven Example
Here we illustrate an example for extrinsic principal component analysis for planar contour.
Consider the samples of contours of butterfly from Sharvit et al.(1998)<cit.>.
Some samples are shown in Figure <ref>.
There are 16 contours, each having 500 sampling points.
Each sample contour is a 2 × 500 real matrix, each column representing a point of the contour.
We transform each sample into a 1 × 500 complex matrix; as a result, the whole data set is a 16 × 500 complex matrix.
We compute the extrinsic sample mean by using Proposition <ref>.
The resulting mean shape is shown in Figure <ref>; it is smoother than the original sample contours, due to the averaging process. This is always expected when sharp features appear at various locations on individual observations.
We then compute the extrinsic sample covariance matrix by using equation (<ref>).
By applying an eigenvalue decomposition to the extrinsic sample covariance matrix, we can now extract the extrinsic principal components.
Figure <ref> shows the scree plot for the extrinsic PCA associated with this data set.
The first two sample extrinsic principal components explain almost 90% of the variability in this data set.
|
http://arxiv.org/abs/2409.02879v1 | 20240904170600 | On holographic confining QFTs on AdS | [
"Ahmad Ghodsi",
"Elias Kiritsis",
"Francesco Nitti"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2409.02204v1 | 20240903181837 | Moment-type estimators for a weighted exponential family | [
"Roberto Vila",
"Helton Saulo"
] | stat.ME | [
"stat.ME",
"60E05, 62Exx, 62Fxx"
] |
Moment-type estimators for a weighted exponential family
Roberto Vila and Helton Saulo
September 9, 2024
==========================================================================
§ ABSTRACT
In this paper, we propose and study closed-form moment type estimators for a weighted exponential family. We also develop a bias-reduced version of these proposed closed-form estimators using bootstrap techniques. The estimators are evaluated using Monte Carlo simulation. This shows favourable results for the proposed bootstrap bias-reduced estimators.
Keywords. Weighted exponential family · Moment method · Monte Carlo simulation · R software.
Mathematics Subject Classification (2010). MSC 60E05 · MSC 62Exx · MSC 62Fxx.
§ INTRODUCTION
In this work we provide closed-form estimators for the parameters of probability distributions that belongs to the following weighted exponential family <cit.>:
f(x;ψ)
=
(μσ)^μ+1 (σ+δ_ab)Γ(μ+1)
[1+δ_abT(x)] | T'(x)| T(x) exp{-μσ T(x)+μlog(T(x))},
x∈ (0,∞),
where ψ=(μ,σ), μ,σ>0, T:(0,∞)→ (0,∞) is a real strictly monotone twice differentiable function, δ_ab is the Kronecker delta function and T'(x) denotes the derivative of T(x) with respect to x.
The probability function f(x;ψ) in (<ref>) can be interpreted as a mixture of two distributions that belongs to the exponential family, that is,
f(x;ψ) =
σσ+δ_ab
f_1(x)
+
δ_abσ+δ_ab
f_2(x),
x∈ (0,∞), μ, σ>0,
where
f_j(x)
=
(μσ)^μ+j-1Γ(μ+j-1) | T'(x)| T(x) exp{-μσ T(x)+(μ+j-1)log(T(x))},
x∈ (0,∞), j=1,2.
Densities of form f_j(x), j=1,2, have appeared in <cit.> and <cit.>.
If X has density in (<ref>), from (<ref>) and (<ref>) it is simple to show that the random variable X defined as
X≡(1-B)T^-1(Z_1)+BT^-1(Z_2),
has density in (<ref>), where T^-1 denotes the inverse function of T, B∈{0,1} is a Bernoulli random variable with success parameter δ_ab/(σ+δ_ab), independent of Z_1 and Z_2, such that Z_j∼ Gamma(μ+j-1,1/(μσ)), j=1,2, that is, the density function of Z_j is given by
f_Z_j(z)
=
1/(Γ(μ+j-1)[1/(μσ)]^{μ+j-1}) z^{(μ+j-1)-1}exp{-μσ z}, z>0, j=1,2.
Table <ref> <cit.> presents some examples of generators T(x) for use in (<ref>) with a=b.
For a≠ b,
Table <ref> <cit.> provides some examples of generators T(x) for use in (<ref>).
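For simulation purposes, the stochastic representation (<ref>) can be sampled directly. The following sketch (the function name and interface are ours, not from the paper) covers generators of the form T(x)=x^-s, so that T^-1(z)=z^-1/s; delta=1 corresponds to the case a=b and delta=0 to a≠b.

```python
import numpy as np

def rweighted(n, mu, sigma, s=1.0, delta=1, seed=0):
    """Draws via the representation X = (1-B)T^{-1}(Z_1) + B T^{-1}(Z_2), T(x) = x^{-s}."""
    rng = np.random.default_rng(seed)
    B = rng.binomial(1, delta / (sigma + delta), size=n)              # Bernoulli(delta_ab/(sigma+delta_ab))
    Z1 = rng.gamma(shape=mu, scale=1.0 / (mu * sigma), size=n)        # Gamma(mu, scale 1/(mu*sigma))
    Z2 = rng.gamma(shape=mu + 1.0, scale=1.0 / (mu * sigma), size=n)  # Gamma(mu+1, scale 1/(mu*sigma))
    Z = np.where(B == 0, Z1, Z2)
    return Z ** (-1.0 / s)                                            # X = T^{-1}(Z)
```

Sample moments of such draws can be compared with the closed-form expressions derived below.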
There are a number of different methodological proposals in the literature for obtaining closed-form estimators; see, for example, the moment-based type <cit.> and the score-adjusted approaches <cit.>. Closed-form estimators based on the likelihood function have also been suggested. A type of likelihood-based estimator is obtained by considering the likelihood equations of the generalized distribution obtained by a power transformation and taking the baseline distribution as a special case; see for example, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. For other proposals for likelihood-based estimators, the reader is referred to <cit.> and <cit.>.
This paper develops moment-based closed-form estimators for the parameters of probability distributions of the weighted exponential family (<ref>) in the special case where T(x)=x^-s, x>0 and s≠ 0. The main motivation for choosing this type of generator is that it provides closed-form expressions for moments (of functions) of X in (<ref>), which allows finding moment-based estimators for the corresponding parameters. Note that this type of generator includes the T(x) generators of Nakagami, Maxwell-Boltzmann, Rayleigh, gamma, inverse gamma, δ-gamma, Weibull, inverse Weibull, generalized gamma, generalized inverse gamma, chi-squared, scaled inverse chi-squared, weighted Lindley, weighted inverse Lindley, weighted Nakagami and weighted inverse Nakagami; see Tables <ref> and <ref>.
The rest of the paper is structured as follows. In section <ref> we briefly present some preliminary results. In Sections <ref> and <ref>, we describe the newly proposed moment-based estimators and some asymptotic results respectively. In Section <ref>, we perform a Monte Carlo simulation study to assess the
performance of a bootstrap bias-reduced version of the proposed estimators.
§ PRELIMINARY RESULTS
In this section, closed-form expressions for moments (of functions) of X in (<ref>) are provided.
§.§ Moments
If X has density in (<ref>), then, for any measurable function g:(0,∞)→ℝ, we have
𝔼[g(X)]
=
σσ+δ_ab 𝔼
[
g(T^-1(Z_1))
]
+
δ_abσ+δ_ab 𝔼
[
g(T^-1(Z_2))
].
Taking into account the characterization (<ref>) of X and by using the independence of B∼ Bernoulli(δ_ab/(σ+δ_ab)) with Z_1 and Z_2, we obtain
𝔼[g(X)]
=
𝔼
[
g(
(1-B)T^-1(Z_1)+BT^-1(Z_2)
)
]
=
𝔼(1-B)
𝔼
[
g(T^-1(Z_1))
]
+
𝔼(B)
𝔼
[
g(T^-1(Z_2))
]
=
σσ+δ_ab 𝔼
[
g(T^-1(Z_1))
]
+
δ_abσ+δ_ab 𝔼
[
g(T^-1(Z_2))
].
This completes the proof.
If T(x)=x^-s, x>0 and s≠ 0, then the real moments of X are given by
𝔼(X^q)
=
(μσ)^{q/s} [Γ(μ-q/s)/Γ(μ)][σ/(σ+δ_ab) + (δ_ab/(σ+δ_ab))(μ-q/s)/μ],
where μ-q/s>0.
By using Proposition <ref>, with g(x)=x^q and T(x)=x^-s, x>0 and s≠ 0, we have
𝔼(X^q)
=
σσ+δ_ab 𝔼
(
Z_1^-q/s
)
+
δ_abσ+δ_ab 𝔼
(
Z_2^-q/s
).
As Z_j∼ Gamma(μ+j-1,1/(μσ)), j=1,2 and
𝔼(Z^ν)=θ^ν Γ(k+ν)Γ(k),
Z∼ Gamma(k,θ), ν>-k,
from (<ref>), the proof follows.
If T(x)=x^-s, x>0 and s≠ 0, then
𝔼[X^-plog(X)]
=
-1 s Γ(μ+p s) (μσ)^p/sΓ(μ){σσ+δ_ab[
ψ^(0)(μ+p s)-log(μσ)
]
+
δ_abσ+δ_ab μ+p sμ[
ψ^(0)(μ+p s+1)-log(μσ)
]
},
where μ+p s>0.
By using Proposition <ref>, with g(x)=log(x)/x^p and T(x)=x^-s, x>0 and s≠ 0, we have
𝔼[X^-plog(X)]
=
-1 s{σσ+δ_ab 𝔼[Z_1^p/slog(Z_1)]
+
δ_abσ+δ_ab 𝔼[Z_2^p/slog(Z_2)]
}.
As Z_j∼ Gamma(μ+j-1,1/(μσ)), j=1,2 and
𝔼[Z^νlog(Z)]
=
Γ(k+ν)θ^-νΓ(k)
[ψ^(0)(k+ν)-log(1/θ)],
Z∼ Gamma(k,θ),
k>-ν,
from (<ref>), the proof follows.
If T(x)=x^-s, x>0 and s≠ 0, then
𝔼[X^-s log(X^-s)/(1+δ_ab X^-s)]
=
[ψ^(0)(μ+1)-log(μσ)]/(σ+δ_ab).
By using the definition (<ref>) with T(x)=x^-s, x>0 and s≠ 0, and by making the change of variable y=x^-s, note that
𝔼[X^-slog(X^-s) 1+δ_abX^-s]
=
s(μσ)^μ+1 (σ+δ_ab)Γ(μ+1)∫_0^∞log(x^-s)
(x^-s)^μ+1 s+1exp{-μσ x^-s} dx
=
(μσ)^μ+1 (σ+δ_ab)Γ(μ+1)∫_0^∞log(y) y^μexp{-μσ y} dy
=
1σ+δ_ab 𝔼[log(Y)],
where Y∼ Gamma(μ+1,1/(μσ)).
By employing formula in (<ref>) with ν=0, k=μ+1 and θ=1/(μσ), in the last equality,
the proof readily follows.
§.§ Main formulas
By using Proposition <ref> with q=-s, we have
𝔼(X^-s)
=
(1/σ)(1 + (δ_ab/(σ+δ_ab))(1/μ)).
By using Proposition <ref> with p=0, we have
𝔼[log(X)]
=
-1 s{σσ+δ_ab
[
ψ^(0)(μ)-log(μσ)
]
+
δ_abσ+δ_ab
[
ψ^(0)(μ+1)-log(μσ)
]
}
=
-1 s[
ψ^(0)(μ+1)-log(μσ)
-
1μ(
σσ+δ_ab)
],
where in the last line the identity ψ^(0)(z+1)=ψ^(0)(z)+1/z has been used.
By using Proposition <ref> with p=s, we have
𝔼[X^-slog(X)]
=
-1 s 1σ{σσ+δ_ab
[
ψ^(0)(μ+1)-log(μσ)
]
+
δ_abσ+δ_ab μ+1μ
[
ψ^(0)(μ+2)-log(μσ)
]
}
=
1σ{
-1 s[
ψ^(0)(μ+1)-log(μσ)
-
1μ(
σσ+δ_ab)
]
-
1 sμ{δ_abσ+δ_ab
[
ψ^(0)(μ+1)-log(μσ)
]
+
1
}},
where in the last equality we have again used the identity ψ^(0)(z+1)=ψ^(0)(z)+1/z. By (<ref>) and Proposition <ref> note that 𝔼[X^-slog(X)] can be written as
𝔼[X^-slog(X)]
=
1σ{𝔼[log(X)]
-
1 sμ{δ_ab𝔼[X^-slog(X^-s) 1+δ_abX^-s]
+
1
}},
from which we can express μ as follows:
μ
=
(δ_ab𝔼[h_1(X)] + 1)/(σ𝔼[h_2(X)] - 𝔼[h_3(X)]),
where we have adopted the following notations:
h_1(x)≡ x^-s log(x^-s)/(1+δ_ab x^-s),
h_2(x)≡ x^-s log(x^-s),
h_3(x)≡ log(x^-s).
Plugging (<ref>) into (<ref>) and solving for σ gives
σ =
[1 - δ_ab𝔼[h_4(X)] + δ_ab𝔼[h_2(X)]/(δ_ab𝔼[h_1(X)] + 1)]/(2𝔼[h_4(X)])
+
√({1 - δ_ab𝔼[h_4(X)] + δ_ab𝔼[h_2(X)]/(δ_ab𝔼[h_1(X)] + 1)}^2 + 4𝔼[h_4(X)]{δ_ab - δ_ab𝔼[h_3(X)]/(δ_ab𝔼[h_1(X)] + 1)})/(2𝔼[h_4(X)]),
where
h_4(x)≡ T(x)= x^-s.
§ CLOSED-FORM ESTIMATORS
Let {X_i : i = 1,… , n} be a univariate random sample of size n from X having density in (<ref>).
By using the method of moments in (<ref>), the corresponding sample moment to obtain the estimator of σ is
σ
=
[1 - δ_abX_4 + δ_abX_2/(δ_abX_1 + 1) + √({1 - δ_abX_4 + δ_abX_2/(δ_abX_1 + 1)}^2 + 4X_4{δ_ab - δ_abX_3/(δ_abX_1 + 1)})]/(2X_4),
where we have defined
X≡[ X_1; X_2; X_3; X_4 ]
=
(1/n)∑_i=1^n[ h_1(X_i); h_2(X_i); h_3(X_i); h_4(X_i) ],
with h_1,h_2,h_3 and h_4 being as in (<ref>) and (<ref>), respectively.
Plugging (<ref>) in (<ref>) and using the method of moments, the sample moment to obtain the estimator of μ is
μ
=
(δ_abX_1 + 1)/(σX_2 - X_3).
§.§ Case a≠ b
In this case, from (<ref>) and (<ref>), we obtain the new estimators for σ and μ as
σ
=
1/X_4
and
μ
=
1/(σX_2 - X_3),
respectively, where X_2, X_3 and X_4 are as given in (<ref>). Note that, in this case, σ coincides with the maximum likelihood (ML) estimator <cit.>.
§.§ Case a= b
In this case, from (<ref>) and (<ref>), we obtain the new estimators for σ and μ as
σ
=
[1 - X_4 + X_2/(X_1 + 1) + √({1 - X_4 + X_2/(X_1 + 1)}^2 + 4X_4{1 - X_3/(X_1 + 1)})]/(2X_4)
and
μ
=
(X_1 + 1)/(σX_2 - X_3),
respectively, where X_1 (with a=b), X_2, X_3 and X_4 are as given in (<ref>).
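A minimal computational sketch of these closed-form estimators for a generator of the form T(x)=x^-s is given below; the interface is ours, and delta plays the role of δ_ab (delta=1 for a=b and delta=0 for a≠b).

```python
import numpy as np

def moment_estimators(x, s=1.0, delta=1):
    """Closed-form estimators of (mu, sigma) for the family with generator T(x) = x^{-s}."""
    t = x ** (-s)                                     # T(x_i)
    lt = np.log(t)                                    # log T(x_i)
    X1 = np.mean(t * lt / (1.0 + t))                  # used only when a = b (delta = 1)
    X2, X3, X4 = np.mean(t * lt), np.mean(lt), np.mean(t)
    if delta == 0:                                    # case a != b
        sigma_hat = 1.0 / X4
    else:                                             # case a = b
        u = 1.0 - X4 + X2 / (X1 + 1.0)
        sigma_hat = (u + np.sqrt(u ** 2 + 4.0 * X4 * (1.0 - X3 / (X1 + 1.0)))) / (2.0 * X4)
    mu_hat = (delta * X1 + 1.0) / (sigma_hat * X2 - X3)
    return mu_hat, sigma_hat
```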
It is simple to observe that the weighted probability distributions in (<ref>) belong to the exponential family with vector of sufficient statistics given by (T(x),log(T(x)))^⊤.
(T(x),log(T(x)),T(x)log(T(x)),T(x)log(T(x)) 1+δ_abT(x))^⊤,
for
T(x)=x^-s, x>0 and s≠ 0.
By using the maximum likelihood equations corresponding to an extension of the weighted probability model in (<ref>), simple closed-forms for the estimators of μ and σ, denoted by μ_∙ and σ_∙, respectively, were derived in reference <cit.> (case a≠ b) and <cit.> (case a= b). Note that these estimators are not, in fact, the maximum likelihood estimators corresponding to the probability distribution in (<ref>), so we cannot say with certainty that the estimators provided by the moment-type method, μ and σ, are the same as μ_∙ and σ_∙ in the special case T(x)=x^-s, x>0 and s≠ 0. In other words, we cannot apply the results obtained in <cit.>. However, to our surprise, through a simple but laborious calculation and some simulations, we found that estimators μ and σ (in (<ref>) and (<ref>) for case a≠ b, and (<ref>) and (<ref>) for case a= b) coincide with the estimators μ_∙ and σ_∙ obtained in <cit.> (case a≠ b) and <cit.> (case a= b).
§ ASYMPTOTIC BEHAVIOR OF ESTIMATORS
Let X_1,…, X_n be a random sample of size n from the variable X with PDF (<ref>). If we further let X=(X_1,X_2,X_3,X_4)^⊤ and
X≡[ h_1(X); h_2(X); h_3(X); h_4(X) ],
with X_i, i=1,…,4, as given in (<ref>), and h_1,h_2,h_3 and h_4 being as in (<ref>) and (<ref>), respectively.
By applying strong law of large numbers, we have
X a.s.⟶𝔼(X),
where “a.s.⟶” denotes almost sure convergence.
Hence, continuous-mapping theorem <cit.> gives
σ=g_1(X) a.s.⟶
g_1(𝔼(X))
and μ=g_2(X) a.s.⟶
g_2(𝔼(X)),
with
g_1(x_1,x_2,x_3,x_4)
≡
1
-
δ_abx_4
+
δ_abx_2δ_ab
x_1
+
1
+
√({ 1
-
δ_abx_4
+
δ_abxδ_ab
x_1
+
1}^2
+
4x_4
{δ_ab-δ_abx_3δ_ab
x_1
+
1 })
2x_4
and
g_2(x_1,x_2,x_3,x_4)
≡δ_ab
x_1
+
1
g_1(x_1,x_2,x_3,x_4) x_2
-
x_3
.
Furthermore, by Central limit theorem,
√(n)[X-𝔼(X)]𝒟⟶ N_4( 0, Σ),
where Σ denotes the covariance matrix of X and “𝒟⟶” means convergence in distribution.
So, delta method provides
√(n)[
[ μ; σ ]
-
[ g_2(𝔼( X)); g_1(𝔼( X)) ]]
(<ref>)=√(n)[
[ g_2(X); g_1(X) ]
-
[ g_2(𝔼( X)); g_1(𝔼( X)) ]]
𝒟⟶
N_2( 0, AΣ A^⊤),
with A being the partial derivatives matrix defined as
A
=
.
[ ∂ g_2( x)∂ x_1 ∂ g_2( x)∂ x_2 ∂ g_2( x)∂ x_3 ∂ g_2( x)∂ x_4; ∂ g_1( x)∂ x_1 ∂ g_1( x)∂ x_2 ∂ g_1( x)∂ x_3 ∂ g_1( x)∂ x_4 ] |_ x=𝔼( X).
For simplicity of presentation, we do not present the partial derivatives of g_j, j=1,2, here. Analogously to the calculation of 𝔼( X), the second moments of the components of X can be determined which is sufficient to guarantee the existence of the matrix Σ.
The following result shows that for generators of type T(x)=x^-s, x>0 and s 0, the strong consistency property and a Central limit type theorem for the estimators σ and μ are satisfied.
If T(x)=x^-s, x>0 and s 0, then
g_1(𝔼( X))=σ and g_2(𝔼( X))=μ, where g_1 and g_2 are given in (<ref>) and (<ref>), respectively. Moreover, from (<ref>),
√(n)[
[ μ; σ ]
-
[ μ; σ ]]
𝒟⟶
N_2( 0, AΣ A^⊤),
where A was given lines above and Σ is the covariance matrix of X.
The proof follows immediately from (<ref>) and (<ref>), because
g_2(𝔼( X))
=
δ_ab𝔼[h_1(X)]
+
1
σ𝔼[h_2(X)]
-
𝔼[h_3(X)]
(<ref>)=μ
and
g_1(𝔼( X))
=
1
-
δ_ab𝔼[h_4(X)]
+
δ_ab𝔼[h_2(X)]δ_ab𝔼[h_1(X)]
+
1
2𝔼[h_4(X)]
+
√({ 1
-
δ_ab𝔼[h_4(X)]
+
δ_ab𝔼[h_2(X)]δ_ab𝔼[h_1(X)]
+
1}^2
+
4𝔼[h_4(X)]
{δ_ab-δ_ab𝔼[h_3(X)]δ_ab𝔼[h_1(X)]
+
1 })
2𝔼[h_4(X)]
(<ref>)=σ.
Thus completes the proof.
§ SIMULATION STUDY
In this section, we carry out a Monte Carlo simulation study for evaluating the performance of the proposed estimators. Particularly, we evaluate a bias-reduced version of the proposed moment-type estimators, as they are biased <cit.>. For illustrative purposes, we only present the results for the weighted inverse Lindley distribution. Then, by considering the parameters μ=ϕ, σ=λ/ϕ and generator T(x)=x^-1 of the weighted inverse Lindley distribution, given in Table <ref>, from (<ref>) and (<ref>) the bootstrap biased-reduced moment-type estimators for λ and ϕ are given by
λ^* = 2λ - 1/B∑_b=1^Bλ^(b),
ϕ^* = 2ϕ - 1/B∑_b=1^Bϕ^(b),
where λ^(b) and ϕ^(b) are the b-th bootstrap replicates from the b-th bootstrap sample,
λ
=
[1 - X_4 + X_2/(X_1 + 1) + √({1 - X_4 + X_2/(X_1 + 1)}^2 + 4X_4{1 - X_3/(X_1 + 1)})]/(2X_4) · ϕ
and
ϕ
=
(X_1 + 1)/([1 - X_4 + X_2/(X_1 + 1) + √({1 - X_4 + X_2/(X_1 + 1)}^2 + 4X_4{1 - X_3/(X_1 + 1)})]/(2X_4) · X_2 - X_3),
with
X_1 = (1/n)∑_i=1^n [X_i^-1log(X_i^-1)]/(1+X_i^-1),
X_2 = (1/n)∑_i=1^n X_i^-1log(X_i^-1),
X_3 = (1/n)∑_i=1^n log(X_i^-1),
X_4 = (1/n)∑_i=1^n X_i^-1.
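A minimal sketch of the bootstrap bias-reduction step for the weighted inverse Lindley case, reusing the moment_estimators sketch given earlier (interface and defaults are our own assumptions), is:

```python
import numpy as np

def wil_estimates(x):
    """Weighted inverse Lindley case: a = b, T(x) = x^{-1}, with lambda = sigma*phi and phi = mu."""
    mu_hat, sigma_hat = moment_estimators(x, s=1.0, delta=1)   # sketch defined earlier
    return sigma_hat * mu_hat, mu_hat                          # (lambda_hat, phi_hat)

def bootstrap_bias_reduced(x, B=200, seed=0):
    """Bootstrap bias-reduced estimators lambda* and phi* as defined above."""
    rng = np.random.default_rng(seed)
    lam_hat, phi_hat = wil_estimates(x)
    reps = np.array([wil_estimates(rng.choice(x, size=x.size, replace=True)) for _ in range(B)])
    return 2.0 * lam_hat - reps[:, 0].mean(), 2.0 * phi_hat - reps[:, 1].mean()
```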
For assessing the performance of the proposed bootstrap biased-reduced moment-type estimators, we calculated the relative bias (RB) and root mean square error (RMSE), given by
RB(θ^*) = |((1/N)∑_i=1^N θ^{*(i)} - θ)/θ| , RMSE(θ^*) = √((1/N)∑_i=1^N (θ^{*(i)} - θ)^2),
where θ∈{λ,ϕ} and θ^*(i)∈{λ^*(i),ϕ^*(i)} are the true parameter value and its i-th bootstrap bias-reduced estimate, and N is the number of Monte Carlo replications.
The simulation scenario considers the following setting: sample size n ∈{20,50,100,200,400,1000}, ϕ∈{0.5,1,3,5,9}, and λ=1. The number of Monte Carlo replications was N=1,000 and the number of bootstrap replications was B=200. The numerical evaluations were implemented using the R software; see <http://cran.r-project.org>.
Figures <ref> and <ref> show the results of Monte Carlo simulation study to assess the performance of the proposed bootstrap biased-reduced moment-type (MOM) estimators. For comparison purposes, we also considered the results of the classical maximum likelihood estimator (MLE) discussed by <cit.>. Figures <ref> and <ref> show that, as expected, both biases and RMSEs of the estimators approach zero as the sample size increases. However, the bias is much lower for smaller samples for the proposed estimator. Moreover, the RMSE is similar for both estimators. Finally, the results do not seem to be affected by the parameter ϕ.
*Acknowledgements
The research was supported in part by CNPq and CAPES grants from the Brazilian government.
*Disclosure statement
There are no conflicts of interest to disclose.
10
[Bebbington et al., 2007]Bebbington2007
Bebbington, M., Lai, C. D. and Zitikis, R. 2007.
“A flexible Weibull extension.”
Reliability Engineering & System Safety 92:719–726.
[Bernardo and Smith, 1993]Bernardo1993
Bernardo, J. M. and Smith, A. F. M. 1993.
Bayesian Theory,
Wiley.
[Billingsley, 1969]Billingsley1969
Billingsley, P., 1969, Convergence of Probability Measures,
John Wiley & Sons.
[Burr, 1942]Burr1942
Burr, I. W. 1942.
“Cumulative frequency functions.”
Annals of Mathematical Statistics 13(2):215–232.
[Cook, 2008]Cook2008
Cook, J. D. 2008.
Inverse gamma distribution,
Online: <http://www. johndcook. com/inverse gamma.
pdf>, Technical. Report.
[Cheng and Beaulieu, 2001]Cheng-Beaulieu2001
Cheng, J. and Beaulieu, N. C. 2001.
“Maximum-likelihood based estimation of the Nakagami m parameter.”
IEEE Communications Letters 5(3):101–103.
[Cheng and Beaulieu, 2002]Cheng-Beaulieu2002
Cheng, J. and Beaulieu, N. C. 2002.
“Generalized moment estimators for
the Nakagami fading parameter.”
IEEE Communications Letters 6(4):144–146.
[Dagum, 1975]Dagum1975
Dagum, C. 1975.
“A model of income distribution and the conditions of existence of moments of finite order.”
Bulletin of the International Statistical Institute. (Proceedings of the 40th Session of the ISI, Contributed Paper) 46:199–205.
[Davidson and Daniel, 1974]Davidson1974
Davidson, R. R. and Daniel, D. L. 1974.
“Moment-type estimation in the exponential family.”
Communications in Statistics 3(11):1101-1108.
[Dunbar, 1982]Dunbar1982
Dunbar, R. C. 1982.
“Deriving the Maxwell Distribution”.
Journal of Chemical Education 59:22–23.
[Gompertz, 1825]Gompertz1825
Gompertz, B. 1825.
“On the nature of the function expressive of the law of human mortality and on the new model of determining the value of life contingencies.”
Philosophical Transactions of the Royal Society
of London 115:513–585.
[Johnson et al., 1994]Johnson1994
Johnson, N. L., Kotz, S. and Balakrishnan, N. 1994.
Continuous Univariate Distributions.
New York: John
Wiley & Sons.
[Khan et al., 2008]khan2008
Khan, M. S., Pasha, G. R. and Pasha, A. H. 2008.
“Theoretical analysis of inverse Weibull distribution.”
WSEAS Transactions on Mathematics 7(2):30–38.
[Kim and Jang, 2021]Kim2021
Kim, H.-M. and Jang, Y.-H. 2021.
“New closed-form estimators for weighted Lindley distribution.”
Journal of the Korean Statistical Society 50:580–606.
[Kim et al., 2022]Kim2022
Kim, H.-M., Kim, S., Jang, Y.-H. and Zhao, J. 2022.
“New closed-form estimator and its properties.”
Journal of the Korean Statistical Society 51: 47–64.
[Laurenson, 1994]Laurenson1994
Laurenson, D. 1994.
Nakagami Distribution.
Indoor Radio Channel Propagation Modeling by Ray Tracing Techniques.
[Lee and Gross, 1991]Lee1991
Lee, M. and Gross, A. 1991.
“Lifetime distributions under unknown environment.”
Journal of Statistical Planning and Inference
29:137–143.
[Nadarajah and Kotz, 2005]Nadarajah2005
Nadarajah, S. and Kotz, S. 2005.
“On some recent modifications of Weibull distribution.”
IEEE Transactions on Reliability 54:561–562.
[Nascimento et al., 2014]Nascimento2014
Nascimento, A. D. C., Bourguignon, M., Zea, L. M., Santos-Neto, M., Silva, R. B. and Cordeiro, G. M. 2014.
“The gamma extended Weibull family of distributions.”
Journal of Statistical Theory and Applications 13(1):1–16.
[Nawa and Nadarajah, 2023]Nawa2023
Nawa, V. M. and Nadarajah, S. 2014.
“New Closed Form Estimators for the Beta Distribution.”
Mathematics 11:2799.
[Rahman et al., 2014]Rahman2014
Rahman, G., Mubeen, S., Rehman, A. and Naz, M. 2014.
“On k-Gamma and k-Beta Distributions and Moment Generating Functions.”
Journal of Probability and Statistics 2014: Article ID 982013, 6 pages.
[Ramos et al., 2016]RLR2016
Ramos, P. L., Louzada, F. and Ramos, E. 2016.
“An Efficient, Closed-Form MAP Estimator for Nakagami-m Fading Parameter.”
IEEE Communications Letters
20(11):2328–2331.
[Ramos et al., 2018]RLSL2018
Ramos, P. L., Louzada, F., Shimizu, T. K. O., Luiz, A. O. 2018.
“The inverse weighted Lindley distribution: Properties, estimation and an application on a failure time data.”
Communications in Statistics - Theory and Methods 48(10):2372–2389.
[Rayleigh, 1880]Rayleigh1880
Rayleigh, J. W. S. 1880.
“On the resultant of a large number of vibrations of the same pitch and of arbitrary phase.”
Philosophical Magazine Series 5(10):73–78.
[Stacy, 1962]Stacy1962
Stacy, E. W. 1962.
“A Generalization of the Gamma Distribution.”
Annals of Mathematical Statistics 33(3):1187–1192.
[Tamae et al., 2020]Tamae2020
Tamae, H., Irie, K. and Kubokawa, T. 2020.
“A score-adjusted approach to closed form estimators for the gamma and beta distributions.”
Jpn. J. Stat. Data Sci. 3:543–561.
[Vila et al., 2024a]Vila2024
Vila, R., Nakano, E. and Saulo, H. 2024.
“Closed-form estimators for an exponential family derived
from likelihood equations.”
Preprint, Avaliable at <https://arxiv.org/pdf/2405.14509>.
[Vila et al., 2024b]Vila2024b
Vila, R., Nakano, E. and Saulo, H. 2024.
“Novel closed-form point estimators for a weighted exponential family derived from likelihood equations.”
Stat 13(3):e723.
[Xie et al., 2022]Xie2022
Xie, M., Tang, Y. and Goh, T. N. 2002.
“A modified Weibull extension with bathtub-shaped failure rate function.”
Reliability Engineering & System Safety 76(3):279–285.
[Ye and Chen, 2017]YCh2016
Ye, Z-S. and Chen, N. 2017.
“Closed-Form Estimators for the
Gamma Distribution Derived from Likelihood Equations.”
The American Statistician 71: Issue 2.
[Zhao et al., 2021]Zhao2021
Zhao, J., Kim, S. and Kim, H.-M. 2021.
“Closed-form estimators and bias-corrected estimators for the Nakagami distribution.”
Mathematics and Computers in Simulation 185:308–324.
|
http://arxiv.org/abs/2409.02107v1 | 20240903175904 | Highly complex novel critical behavior from the intrinsic randomness of quantum mechanical measurements on critical ground states -- a controlled renormalization group analysis | [
"Rushikesh A. Patil",
"Andreas W. W. Ludwig"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.dis-nn",
"quant-ph"
] |
Department of Physics, University of California, Santa Barbara, CA 93106, USA
§ ABSTRACT
We consider the effects of weak measurements on the quantum critical ground state
of the one-dimensional (a) tricritical and (b) critical quantum Ising model, by measuring in (a) the local energy and in (b) the local spin operator in a lattice formulation. By employing a controlled renormalization group (RG) analysis
we find that
each
problem exhibits highly complex novel scaling behavior, arising from the intrinsically indeterministic (`random') nature of quantum mechanical measurements, which is governed by a measurement-dominated RG fixed point that we study within an ϵ expansion. In the tricritical Ising case (a) we find (i): multifractal scaling behavior of energy and spin correlations in the measured groundstate, corresponding to an infinite hierarchy of independent critical exponents and, equivalently, to a continuum of universal scaling exponents for each of these correlations; (ii): the presence of logarithmic factors multiplying powerlaws in correlation functions, a hallmark of `logarithmic conformal field theories' (CFT); (iii): universal `effective central charges' c^( eff)_n for the prefactors of the logarithm of subsystem size of the nth Rényi entropies, which are independent of each other for different n, in contrast to the unmeasured critical ground state, and (iv): a universal (“Affleck-Ludwig”) `effective boundary entropy' S_eff which we show, quite generally, to be related to the system-size independent part of the Shannon entropy of the measurement record, computed explicitly here to 1-loop order.
–
A subset of these results has so far also been obtained within the ϵ expansion for the measurement-dominated critical point in
the critical Ising case (b).
Highly complex novel critical behavior from the
intrinsic randomness of quantum mechanical measurements on
critical ground states
- a controlled renormalization group
analysis
Andreas W. W. Ludwig
September 9, 2024
=================================================================================================================================================================================
§ INTRODUCTION
Effects of measurements have recently attracted substantial attention especially in the context of measurement-induced quantum phase transitions in deep quantum circuits, and related problems, which exhibit novel universality classes of phase transitions in such non-equilibrium quantum systems
<cit.>.
Another class of quantum systems subjected to measurements was recently introduced in Ref. [GarrattWeinsteinAltman2022], and subsequent
works <cit.>, considering the effects of measurements on one-dimensional quantum critical ground states. Ref. [GarrattWeinsteinAltman2022] considered a Luttinger liquid, and provided a field theory formulation with measurements acting on the one-dimensional zero-time slice in space-time, exhibiting a version of the Kosterlitz-Thouless transitions, while Ref. [WeinsteinSajithAltmanGaratt,YangMaoJian,MurcianoSalaYueMongAlicea] similarly considered several types of measurements, with and without postselection, performed on the ground state of the critical one-dimensional quantum Ising model.[Compare also Refs. LeeJianXu,ZouSangHsieh,Myerson-JainXuHughes,AshidaFurukawaOshikawa
which consider different but related problems of effects of decoherence on 1d critical ground states, which we do not study in the present paper]
The aim of the present paper is to exhibit novel universality classes of critical behavior with highly complex and novel scaling behavior that can emerge when (weak) measurements are performed (without postselection) on quantum critical ground states. In the examples we discuss such critical behavior originates from a measurement-dominated fixed point occurring at a finite measurement strength, which we treat using a controlled
renormalization group (RG) analysis, i.e. an ϵ expansion. Physically, the complexity of the scaling behavior originates from the intrinsic indeterministic (“random”) nature of quantum mechanical measurements. While so-far analytically-based tools for understanding measurement-induced transitions in deep quantum circuits have been largely elusive, problems involving measurements performed on one-dimensional quantum critical ground states are typically simpler technically, and thus more susceptible to a controlled RG analysis, as we demonstrate in the examples we study. Yet, they exhibit similar complex scaling behavior as that in the deep circuits.
The first problem we study, “problem (a)”, consists of measurements with the local energy operator on the ground state of the tricritical quantum Ising model, within a lattice formulation.
After introducing replicas this can be written as the field theory of the (1+1)-d tricritical Ising model in space-time with a perturbation acting solely on the τ=0 equal time-slice describing the measurements. This perturbation is
relevant in the RG sense, and it flows to an infrared fixed point at finite measurement strength which can be controlled within an ϵ expansion. This is analogous to the Wilson-Fisher ϵ=4-d expansion, except that here the dimension of space-time is always two, and a small parameter ϵ is obtained by generalizing the tricritical Ising model to the
tricritical q-state Potts model, and expanding about q=4 where the perturbation becomes marginal [which is in essence, an expansion in the small parameter (4-q)]. This allows for a systematic calculation of all universal scaling properties at the infrared fixed point.
One reflection of the complexity of the finite measurement strength fixed point appears in correlation functions of the spin and energy operator taken in the measured ground state with measurement outcomes m⃗. When these correlation functions are raised to the Nth power and averaged over measurement outcomes with the Born probability, the resulting Nth moments decay, for spin and energy correlations, with independent exponents, one for each moment order N. Thus, associated with each of the two observables (spin and energy) there is an infinite hierarchy of scaling exponents, in contrast to standard critical behavior. This is referred to as multifractal scaling. Since this scaling behavior has its origin <cit.>
in a universal scaling form of the entire probability distribution of the correlation function,
the non-integer N moments also scale, giving rise to a continuous spectrum of scaling dimensions for each of the spin and the energy correlations.
Another exotic feature of the measurement-dominated fixed point that we find is that of a so-called “logarithmic conformal field
theory" <cit.>.
While at RG fixed points associated with conventional critical points
correlation functions decay with powerlaws, here we observe in certain averaged correlation functions a powerlaw multiplied by a logarithm. This arises because at the finite measurement strength fixed point a rescaling of distances does not act diagonally on all observables, but may act in the form of a two-dimensional [in the simplest manifestation] non-diagonalizable “Jordan-form" matrix. When translated into the behavior of averaged correlation functions, this amounts to the presence of the multiplicative logarithm.
The entanglement entropies exhibit further complex universal behavior at the finite measurement strength critical point. While the universal coefficients of the logarithm of subsystem size in the nth Rényi entropy of the unmeasured ground state,
(1/3) c_n,
are all related to the central charge c, i.e. c_n = (c/2)[1 + 1/n],
at the finite measurement critical point the universal coefficients (1/3) c^( eff)_n are all unrelated to each other and have a more complicated n dependence already to first order in ϵ which we calculate, and which is further modified in higher order in ϵ. This represents another hierarchy of independent universal quantities, similar to those encountered in the spin and energy correlation functions discussed above. (Specifically, these are correlation functions of the “n-twist field”, Sect. <ref>.) We furthermore show that the universal quantities c^( eff)_n also appear in the coefficient
of the linear temperature dependence of the extensive measurement averaged Rényi entropies of the full mixed thermal Gibbs state of the system at finite temperature.
The problem of performing measurements on a quantum critical ground state can be viewed as a problem of the unmeasured critical (here conformally invariant) field theory in space-time with a
defect at the zero-time slice, and the finite measurement strength fixed point represents a scale-invariant, in fact conformally invariant defect. After folding along the slice it becomes a boundary condition on the (doubled) unmeasured conformal field theory (CFT). In general, to any
boundary of a CFT is associated a universal constant, the “Affleck-Ludwig” boundary entropy <cit.>. Here we establish quite generally that for problems of measurements on 1d quantum critical ground states, the corresponding universal “effective boundary entropy” S_ eff is the constant, system size independent piece of the Shannon entropy of the measurement record. (We have computed it here explicitly to lowest order in the ϵ expansion.)
It may be viewed as a boundary analog of the
“effective central charge”
at measurement-induced transitions in deep quantum circuits, which arises from the universal finite-size
dependence
of the Shannon entropy of the measurement record of the (bulk) space-time of the circuit <cit.>.
Lastly, we address the problem of weak measurements
performed on the ground state of the critical Ising model
with the Pauli σ̂^z_i operator at lattice sites i, via an extension of the controlled RG analysis and ϵ expansion developed for the tricritical Ising case. We find that these measurements lead to similar complex scaling behavior
governed by another measurement-dominated RG fixed point, occurring at a finite measurement strength. In particular, we obtain (to two-loop order) an infinite hierarchy of independent multifractal critical exponents for the set of measurement averaged moments of the
σ̂^z_i correlation function, leading again to a continuous spectrum of critical exponents and an independent scaling exponent of the typical connected correlation function.
Moreover, we find, in analogy to the tricritical Ising case, independent coefficients 1 3 c^ (eff)_n of the logarithm of subsystem size for the measurement averaged n-th Rényi entropies for different values of n, which we compute to leading order in the ϵ expansion. The presence of multiplicative logarithms in measurement averaged correlation functions (logarithmic CFT features) is currently being studied as well in the Ising case.
The remaining parts of the paper are
structured in the following manner: In Section <ref>, we introduce the O'Brien-Fendley model and discuss its zero temperature phase diagram which has a critical point in the universality class of the tricritical Ising point.
For this quantum tricritical ground state, in Section <ref>, we describe a measurement protocol with explicit Kraus operators
corresponding to weak measurements.
In Section <ref>, we develop a replica field theory to analyze the problem of described weak measurements on the tricritical ground state.
We analyze
the
infrared behavior of the obtained replica field theory using a controlled perturbative RG expansion and
determine
the new `non-trivial' fixed point
in an ϵ-expansion.
In Section <ref>, we determine the long distance behavior of measurement averaged moments of correlation functions for the spin σ̂^z and the energy Ê operator (defined in Section <ref>) and
demonstrate
the logarithmic CFT features of the measurement-dominated fixed point.
In Section <ref>, we calculate the measurement averaged n^th Rényi entanglement entropies and the von Neumann entanglement entropy.
In Section <ref>,
we discuss the Shannon entropy of the measurement record and the relationship of its constant universal part with the `effective boundary entropy'.
In Section <ref>, we discuss the case of Ising critical point under measurements with the σ̂^z spin operator.
Section <ref>
is
reserved for conclusions and discussion of results.
§ THE O'BRIEN-FENDLEY MODEL
A
variety
of
quantum mechanical systems
in one-dimensional space with different microscopic appearance are known to exhibit
a quantum critical point in the universality class of the tricritical Ising model,
see e.g.
Ref. <cit.>.
In
the present paper, we will consider
one such microscopic realization convenient for our purposes,
the O'Brien-Fendley chain introduced in Ref. O'BrienFendley.
The O'Brien-Fendley chain is a 1d quantum chain with spin-1/2
(qubit)
degrees of freedom at each site and is described by the Hamiltonian H
H=H_I+λ_3H_3
H_I=-∑_j(σ̂^z_jσ̂^z_j+1+σ̂^x_j)
H_3=∑_j (σ̂^x_jσ̂^z_j+1σ̂^z_j+2+σ̂^z_jσ̂^z_j+1σ̂^x_j+2)
where
σ̂^a (a = x,y,z) are
the standard Pauli matrices.
Note that at λ_3=0, the Hamiltonian H reduces to the Hamiltonian H_I of
the critical 1d quantum Ising chain.
As seen by inspection, the
term H_3 in the Hamiltonian is invariant under the Kramers-Wannier (K-W) transformation given by
σ̂^z_jσ̂^z_j+1=τ^x_j+1/2
σ̂^x_j=τ^z_j-1/2τ^z_j+1/2.
Since there are no RG relevant K-W self-dual operators
at the Ising critical point,
for sufficiently small λ_3≠ 0 the chain in Eq. <ref>
is described by a K-W invariant line of
second order transitions parametrized by λ_3, all in the Ising universality class.
However, as discussed in Ref. O'BrienFendley, for large enough λ_3 the spectrum becomes gapped, and since the Hamiltonian is self-dual under the K-W transformation,
there is
a line of first order phase transitions
on the phase boundary between the ferromagnetic and the paramagnetic phase.
The phase diagram of the chain along the K-W line
is shown in
Fig. <ref>
(from Ref. O'BrienFendley).
The renormalization group (RG) unstable critical point at λ_3=λ_tc≈ 0.856, which separates the Ising second order phase transition line from the first order phase transition line, lies in the universality class of the tricritical Ising model.
In the present paper, we will consider performing measurements on the ground state of the
tricritical quantum Ising Hamiltonian H of Eq. (<ref>) at λ_3=λ_tc.
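For readers who wish to explore the chain numerically, a minimal exact-diagonalization sketch in Python is given below; it is not part of the analysis of this paper, and the small system size and periodic boundary conditions are assumptions made purely for illustration. It assembles H of Eqs. <ref>-<ref> at λ_3 = λ_tc ≈ 0.856 and extracts the ground state used as the pre-measurement state.

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_at(ops, sites, L):
    # place the listed single-site operators at the listed sites of an L-site chain
    facs = [I2] * L
    for o, s in zip(ops, sites):
        facs[s % L] = o            # periodic boundary conditions (illustrative choice)
    return reduce(np.kron, facs)

def obrien_fendley(L, lam3=0.856):   # lam3 = lambda_tc ~ 0.856, the tricritical value
    H = np.zeros((2**L, 2**L))
    for j in range(L):
        H -= op_at([Z, Z], [j, j + 1], L) + op_at([X], [j], L)                 # H_I
        H += lam3 * (op_at([X, Z, Z], [j, j + 1, j + 2], L)
                     + op_at([Z, Z, X], [j, j + 1, j + 2], L))                 # H_3
    return H

L = 8                                   # very small size: finite-size effects are large
evals, evecs = np.linalg.eigh(obrien_fendley(L))
psi0 = evecs[:, 0]                      # ground state |0>, the pre-measurement state
print("ground state energy:", evals[0])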
§ MEASUREMENT PROTOCOL AND REPLICA TRICK
Coming back first to the ordinary Ising case λ_3=0, the ground state of the critical quantum Ising chain H_I subject to measurements with operators
σ̂^x_j
or σ̂^z_jσ̂^z_j+1 has been investigated in Refs. WeinsteinSajithAltmanGaratt, YangMaoJian and MurcianoSalaYueMongAlicea.
Measurements with σ̂^z_jσ̂^z_j+1 and σ̂^x_j
on the Ising critical ground state
result in the same universal effects <cit.>,
because both operators represent (after subtraction of their expectation values) the energy operator
𝔢
of the Ising critical point
with scaling dimension X_𝔢=1,
up to corrections from subleading operators which are RG-irrelevant. In other words, their connected equal time correlation functions (denoted by a subscript _c)
in the Ising critical ground state, describing the correlations of the subtracted operators, decay asymptotically with the same exponent,
⟨σ̂^z_iσ̂^z_i+1σ̂^z_jσ̂^z_j+1⟩_c∼1/|i-j|^2X_𝔢 and ⟨σ̂^x_iσ̂^x_j⟩_c∼1/|i-j|^2X_𝔢
as |i-j| ≫ 1.
We now move on to the O'Brien-Fendley chain which, as recalled above, has the same underlying
lattice spin-1/2 (qubit) degrees of freedom as the ordinary Ising chain. Therefore, it is natural to study effects of measurements of similar operators for the O'Brien-Fendley chain.
Interestingly, equal-time correlation functions of the same operators
σ̂^z_jσ̂^z_j+1 and σ̂^x_j in the ground state of
the tricritical point of the O'Brien-Fendley chain
decay asymptotically
(after subtraction of the expectation values)
with the critical exponent X_ℰ=
1/5 of the energy scaling operator ℰ of the Ising tricritical point <cit.>,
⟨σ̂^z_iσ̂^z_i+1σ̂^z_jσ̂^z_j+1⟩_c ∼1/|i-j|^2X_ℰand ⟨σ̂^x_iσ̂^x_j⟩_c ∼1/|i-j|^2X_ℰ
as |i-j| ≫ 1.
Unlike in the ordinary transverse field
quantum Ising model, the subdominant contributions to both σ̂^z_jσ̂^z_j+1 and σ̂^x_j are now not RG irrelevant.
Specifically, these two lattice operators
are represented <cit.> by
σ̂^z_jσ̂^z_j+1∼χ(x)≡
A I +Bℰ(x)+
Cℰ'(x) + D ℰ^” +
...,
σ̂^x_j∼χ'(x)≡ A I -B ℰ(x) +
Cℰ'(x) - D ℰ^” +
...,
(A, B, C, D =non-universal constants),
where I is the identity field with A the tricritical expectation value of the corresponding lattice operator,
while ℰ (with scaling dimension X_ℰ=1/5), ℰ' (with scaling dimension X_ℰ'=6/5), and
ℰ^” (with scaling dimension X_ℰ^”=3)
are the energy, the subdominant energy, and the further subleading energy
scaling operators at the
Ising tricritical point,
the first two of which are RG-relevant as bulk operators; the ellipses denote more subleading operators.
Forming
the difference
of σ̂^z_jσ̂^z_j+1 and σ̂^x_j,
we define the operator Ê_j+1/2 given by
Ê_j+1/2≡1/√(2)(σ̂^z_jσ̂^z_j+1-σ̂^x_j),
which changes sign under the K-W duality transformation, Eq. <ref>. Owing to Eq. <ref>,
this operator
is a lattice representation of
the energy scaling operator ℰ of the
tricritical Ising
critical point which is consequently also odd under K-W duality, with corrections from solely RG irrelevant (K-W odd) operators.
For the same reason, the linear combination
Ê'_j+1/2≡1/√(2)(σ̂^z_jσ̂^z_j+1+σ̂^x_j),
with the opposite sign than in
Eq. <ref>, is even under K-W-duality and is (after subtraction of its expectation value) a lattice representation of the subleading energy operator ℰ'(x) which, consequently, is K-W even (together with all occurring subleading operators) [To re-iterate, these results
can be understood by using Kramers-Wannier duality. The O'Brien-Fendley chain is invariant under Kramers-Wannier duality throughout the line in its phase diagram depicted in Fig. <ref>.
The energy scaling operator, ℰ(x), and the subleading energy scaling operator, ℰ'(x) are, respectively, odd and even under the K-W transformation at the tricritical Ising point.
Since σ̂^z_jσ̂^z_j+1-σ̂^x_j is odd under the K-W transformation (see Eq. <ref>), it cannot contain any contribution from the subleading energy field ℰ'(x) in the continuum limit and is given by the energy field ℰ(x) with corrections from solely RG irrelevant (K-W odd) operators.].
Note that Ê_j+1/2 lies on the link of the lattice connecting site at j and j+1, and
(Ê_j+1/2)^2=1/2((σ̂^z_jσ̂^z_j+1)^2+(σ̂^x_j)^2-σ̂^x_jσ̂^z_jσ̂^z_j+1-σ̂^z_jσ̂^z_j+1σ̂^x_j) = 1
implying that
Ê_j+1/2 has eigenvalues ± 1.
We note in passing that, as verified by inspection, the operator Ê'_i+1/2
also squares to the identity and therefore also has eigenvalues ± 1. [Ê'_i+1/2 does not commute with the operator Ê_i+1/2.]
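The algebraic statements above are straightforward to verify numerically; the following small NumPy sketch (illustrative only, on a four-site chain with the operators written out explicitly) checks that Ê_i+1/2 squares to the identity and that Ê operators on distinct even links commute while operators on adjacent links do not.

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron(*ops):
    out = np.array([[1.]])
    for o in ops:
        out = np.kron(out, o)
    return out

# E_{j+1/2} = (sigma^z_j sigma^z_{j+1} - sigma^x_j)/sqrt(2) on a 4-site chain
E01 = (kron(Z, Z, I2, I2) - kron(X, I2, I2, I2)) / np.sqrt(2)   # even link (0,1)
E12 = (kron(I2, Z, Z, I2) - kron(I2, X, I2, I2)) / np.sqrt(2)   # odd  link (1,2)
E23 = (kron(I2, I2, Z, Z) - kron(I2, I2, X, I2)) / np.sqrt(2)   # even link (2,3)

print(np.allclose(E01 @ E01, np.eye(16)))      # True: E^2 = 1, so eigenvalues are +-1
print(np.linalg.eigvalsh(E01))                 # +-1, each 2-fold degenerate on its two-site
                                               # support (times 4 from the spectator sites here)
print(np.allclose(E01 @ E23, E23 @ E01))       # True : operators on distinct even links commute
print(np.allclose(E01 @ E12, E12 @ E01))       # False: operators on adjacent links do not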
In the following, we will describe a protocol for performing measurements on the ground state of the O'Brien-Fendley chain.
§.§ Measuring Ê_i+1/2 on Even Links
Note that the operators Ê_i+1/2 on neighbouring links do not commute with each other,
because σ̂^z_i and σ̂^x_j on the same site do not commute.
However, if we take Ê_i+1/2 operators on alternate links, say even links (those where i is even), all of them commute with each other
(see Fig. <ref>).
Our measurement protocol consists in measuring the operators Ê_i+1/2
on even links. [
We note that the operator Ê_i+1/2 (for an even link i) has support on the two lattice sites i and i+1, and one can check that each of its eigenvalues ± 1 is two-fold degenerate.
This means that there will be two (linearly independent) eigenstates of Ê_i+1/2, which can be chosen orthogonal, that will be associated
with each of the measurement outcomes +1 and -1.
This does not imply that the
post-measurement state
is ambiguous.
The state that results after observing
any set of measurement outcomes is uniquely obtained by acting with the projector (or more generally, in the case of weak measurements, a Kraus operator) corresponding to the eigenspace associated with the measurement outcomes on the `incoming' state before measurement. In the case of interest to us this will be, as we will discuss below, the ground state of the tricritical O'Brien-Fendley chain.
See e.g. Eq. <ref> below. (Measurement operators with two eigenvalues which are not both non-degenerate, have also been discussed in a different context in Ref.
MajidyAgrawalGopalakrishnanPotterVasseurHalpern2023
.)] [We note that performing weak measurements with the operator Ê_i+1/2 only on even links i does not imply that the
system will effectively collapse onto a trivial `staggered' state.
Compare for example with
Refs. WeinsteinSajithAltmanGaratt and YangMaoJian, which consider performing measurements with the Pauli operator σ̂^x_i on all sites i of the critical quantum Ising chain (See Eq. <ref> for the Hamiltonian).
In terms of the Majorana formulation of the quantum Ising chain,
this measurement operator reads
σ̂^x_i=γ̂_2iγ̂_2i+1, where γ̂_2i is a Majorana operator.
Thus measuring σ̂^x_i corresponds to performing measurements only on the even-links of the underlying Majorana chain, however a `staggered' state is not observed <cit.>.
In fact,
even though measurements are performed only on the even links of the Majorana chain,
it has been observed that for Born-rule measurements <cit.> the long-distance critical properties of the system are the same as that of the unmeasured state, i.e. the Ising critical ground state (which is not `staggered').
In the same spirit, in our measurement protocol of our system we also
would not expect to see a trivial `staggered' state even though we are performing measurements only on the even-links of the chain (which are now the physical links of the spin-chain and not of the underlying Majorana chain).
In our case, the critical behaviour of the unmeasured state, however, does get modified dramatically due to the presence of measurements as we will
discuss
in detail in the subsequent sections of the present paper.
– A general proof of the impossibility of obtaining a trivial `staggered' state for our (and also the above) system will follow from Eq. <ref> with Ô_1 :=
Ê_i+1/2
Ê_j+1/2. This equation implies that the expectation value of Ô_1, a two point function, is unmodified by Born-rule measurements and thus exhibits the algebraic decay with distance |i-j| characteristic of the unmeasured (tri-)critical ground state, which would be in contradiction with an exponential decay in a trivial `staggered' state.]
Then on each even link, we can define the (weak-) measurement Kraus
operator,
K̂_i+1/2,± :=
(1 ±λÊ_i+1/2)/√(2(1+λ^2)),
and the measurement operators Ê_i+1/2 at different even links will commute with each other. For notational convenience, we will drop the
lattice-position offsets
of +1/2 from now on and label both
operators as K̂_i and Ê_i, i.e. by just i, which denotes the site at the left end of the link. [
I.e.
K̂_i+1/2,m_i→K̂_i,m_i where m_i=±, and correspondingly for Ê]
When λ=1, the
Kraus operators
in Eq. <ref>
reduce to projection operators
K̂_i, m_i=
1/2 ( 1 + m_iÊ_i)
onto the eigenstates
of Ê_i with eigenvalues m_i=± 1.
The parameter 0 ≤λ≤ 1 controls the `strength' of the measurement. When λ=0 no measurements are performed at all.
When the eigenvalue m_i is measured at site i, the measurement changes a (normalized) quantum state |ψ⟩ to the following (normalized) state after this measurement
|ψ⟩→K̂_i,m_i|ψ⟩ / ||K̂_i,m_i|ψ⟩||.
Each measurement outcome m_i= ± 1 at an even link
i
occurs with `Born-rule' probability
p_B(m_i) = ⟨ψ|(K̂_i,m_i)^†K̂_i,m_i|ψ⟩ = (1 + λ^2+ 2 m_i λ⟨ψ|Ê_i |ψ⟩)/(2(1 + λ^2)),
which depends on the incoming state |ψ⟩.
The measurement operators for each even i satisfy the condition
∑_m_i = ± 1
(K̂_i,m_i)^†K̂_i,m_i = 1_i,
where the right hand side denotes the identity
operator [We note, continuing a previous footnote, that degeneracies of the measurements operators Ê_i
and thus of the Kraus operators does not affect the identity
Eq. <ref>. For a similar situation with degeneracies of the eigenvalues of the measurement operators, see the already previously mentioned Ref.
MajidyAgrawalGopalakrishnanPotterVasseurHalpern2023
].
This ensures the normalization of the Born-rule probabilities p_B(m_i) defined above.
Eq. (<ref>) is referred to as the
POVM [standing for “Positive Operator Valued Measure”] condition.
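As a concrete check of the POVM condition Eq. <ref> and of the normalization of the Born probabilities Eq. <ref>, one may verify both relations in the four-dimensional Hilbert space of a single even link; the sketch below (with an arbitrarily chosen measurement strength λ and a random state standing in for the ground state |0⟩) is purely illustrative.

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

E = (np.kron(Z, Z) - np.kron(X, I2)) / np.sqrt(2)    # E on a single even link (two sites)

lam = 0.3                                            # measurement strength, 0 <= lam <= 1
K = {m: (np.eye(4) + m * lam * E) / np.sqrt(2 * (1 + lam**2)) for m in (+1, -1)}
# at lam = 1 these reduce to the projectors (1 +- E)/2

# POVM condition: sum_m K_m^dagger K_m = 1
print(np.allclose(sum(K[m].conj().T @ K[m] for m in (+1, -1)), np.eye(4)))    # True

# Born probabilities for an arbitrary normalized state (a stand-in for |0>)
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
p = {m: np.real(psi.conj() @ K[m].conj().T @ K[m] @ psi) for m in (+1, -1)}
print(p[+1] + p[-1])                                 # 1.0, by the POVM condition
# the post-measurement state for outcome m is K_m |psi> / sqrt(p[m])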
Let us take the quantum state on which we perform measurements to be the ground state
|0⟩ of the O'Brien-Fendley chain at the tricritical point.
Since the measurement operators Ê_i on the even links i commute with each other, the state obtained after measuring on all even links with measurement outcomes m⃗ := {m_i} is
|Ψ_m⃗⟩= ∏_i∈evenK̂_i, m_i|0⟩ / ||∏_i∈evenK̂_i, m_i|0⟩|| = K̂_m⃗|0⟩/√(⟨0|K̂^†_m⃗K̂_m⃗|0⟩),
where
K̂_m⃗ := ∏_i∈evenK̂_i, m_i,
and
p_0(m⃗)= ⟨0|K̂^†_m⃗K̂_m⃗|0⟩
is
the Born-rule probability to obtain measurement outcomes m⃗= {m_i}.
We will refer to the state obtained upon performing measurements and corresponding to a particular set of measurement outcomes as a `quantum trajectory'.
It will be convenient to write the RHS of Eq. <ref> as
K̂_i, ±=(1/N)exp{±λ̃Ê_i},
where 0≤λ̃ = arctanh(λ)<∞, and N is
a suitable
normalization factor.
Then
we can write the product
in Eq. <ref>
in the following form
K̂_m⃗ = (1/N^{L/2}) exp{λ̃∑_i = even m_iÊ_i},
where L denotes the number of sites.
Let us define the variable t_i s.t.
t_i=λ̃m_i.
Since the measurement outcome
is m_i=± 1, the variable t_i takes on
values ±λ̃.
We can reformulate the measurements by “softening” the
variable t_i=±λ̃
to
take on continuous values -∞ < t_i < +∞ drawn from some
distribution P(t_i) which we take to be symmetric under t_i→ - t_i. Sometimes, it may be convenient to choose a Gaussian distribution whose variance is a measure of the `strength' of measurements λ̃.
The formulation given
in Eqs. <ref>,<ref>
above in terms of discrete measurement outcomes m_i=± 1 is simply
a special case of this where the
distribution P(t_i) is
the
(normalized)
sum of two delta functions peaked at t_i=±λ̃.
It turns out that only the cumulants of the random
variable t_i determined by the distribution P(t_i) [We consider only such distributions P(t_i) for which all cumulants
exist, i.e. are finite.] will enter our formulation below, and the essential physics will turn out to depend only on the second cumulant and will thus be insensitive to other details of the
distribution P(t_i).
This then also covers the case where, with some probability, no measurement is performed at a site, corresponding to
the
symmetric distribution P(t_i) which is a
(normalized) weighted sum of three delta functions, peaked at t_i=0 and at t_i=±λ̃.
The corresponding reformulated Kraus operators
K̂_t⃗≡ (1/(N')^{L/2}) exp{∑_i∈even t_iÊ_i},
t⃗≡{t_i}_i ∈ even,
with a suitable choice of normalization factor N',
satisfy again the required POVM
condition
[
∏_i∈ even (∫_-∞^+∞ dt_i P(t_i) )
]
(K̂_t⃗)^†K̂_t⃗
= 1.
This
follows from Eq. <ref> for any P(t_i) symmetric under t_i → - t_i [the role of m_iλ being played by tanh(t_i)].
§.§ Calculation of Observables and Replica Trick
Consider now a general measurement average (denoted by an `overbar') of the quantum mechanical expectation of N (potentially different) operators
Ô_1, Ô_2, ..., Ô_N, where each of these we
consider here to be a local operator or a product of local operators. We will compute this average using the Born-rule probability distribution
p_0(m⃗), Eq. <ref>.
We will also assume, for now, that
each
operator
𝒪̂_i
commutes
with the Kraus operator K̂_m⃗,
but we will relax this assumption at the end of this section.
Then averaging over measurement outcomes we
obtain the measurement-averaged expectation values
[⟨Ô_1⟩_m⃗ ...
⟨Ô_N⟩_m⃗]
=
∑_m⃗ p_0(m⃗) ⟨0|K̂^†_m⃗Ô_1 K̂_m⃗|0⟩ ... ⟨0|K̂^†_m⃗Ô_N K̂_m⃗|0⟩ / p_0^N(m⃗)
=
lim_R→ 1∑_m⃗(⟨0|K̂^†_m⃗Ô_1 K̂_m⃗|0⟩
...
⟨0|K̂^†_m⃗Ô_N K̂_m⃗|0⟩×
×[⟨0|K̂^†_m⃗K̂_m⃗|0⟩]^R-N)
=
lim_R→ 1∑_m⃗(⟨0|Ô_1K̂^†_m⃗K̂_m⃗|0⟩
...
⟨0|Ô_NK̂^†_m⃗K̂_m⃗|0⟩×
×[⟨0|K̂^†_m⃗K̂_m⃗|0⟩]^R-N).
Note that when N=1
the last factor in the above equation
disappears since (R-N )→ 0 in the required
R → 1
limit. Thus, the average
of a single expectation value of an operator or of a product of operators Ô_1 (such as e.g. those appearing in a 2-point function)
is unaffected by measurements,
[⟨Ô_1⟩_m⃗]=
∑_m⃗⟨0| Ô_1
K̂^†_m⃗K̂_m⃗|0⟩
=
⟨0|Ô_1
|0⟩,
where the last equality follows from the POVM condition,
Eq. <ref>.
Coming back to Eq. <ref>, if we replicate the Hilbert space R times, Eq. <ref> can be written as,
[⟨Ô_1⟩_m⃗ ...
⟨Ô_N⟩_m⃗]
=lim_R→ 1∑_m⃗ ^⊗ R⟨0|𝒪_1^(1)𝒪_2^(2)…𝒪_N^(N) (K̂^†_m⃗K̂_m⃗)^⊗ R|0⟩^⊗ R
=lim_R→ 1Tr(𝒪̂_1^(1)𝒪̂_2^(2)…𝒪̂_N^(N)(|0⟩⟨0|)^⊗ R∑_m⃗(K̂_m⃗^†K̂_m⃗)^⊗ R).
Here,
the trace `Tr' is now performed in the replicated Hilbert space, and superscripts on the operators indicate
which Hilbert space factor, in the R-fold tensor product Hilbert space, they act on.
For the measurement protocol
discussed in subsection <ref>,
after
“softening" the measurement outcomes to take on continuous values,
we can replace 𝐊̂_m⃗ in Eq. <ref> by 𝐊̂_t⃗ in Eq. <ref>, and also replace the sum ∑_m⃗ by the
integral over t_i
as in
Eq. <ref>. Thus, we will make the following substitution in Eq. <ref>
∑_m⃗ (K̂_m⃗^†K̂_m⃗)^⊗ R→ [
∏_i∈ even (∫_-∞^+∞ dt_i P(t_i) ) ] (K̂_t⃗^†K̂_t⃗)^⊗ R.
Moreover, since Ê_i is an hermitian operator,
we have
K_t⃗^†= K_t⃗,
and using Eq. <ref>, we can
write
(K̂_t⃗^†K̂_t⃗)^⊗ R=1/(𝒩')^RLexp{2∑_i= even t_i(∑_a=1^RÊ_i^(a))}.
As discussed in Section <ref>, P(t_i) is a symmetric distribution under t_i→-t_i.
Here {t_i}_i∈
even are independent random variables with joint distribution P̃({t_k})=∏_i
∈
even
P(t_i), and the first and second moments of the distribution are given by
t_i =0,
t_i t_j
= 2 Δ̃δ_i,j.
Here, Δ̃ quantifies the strength of the measurements, and we assume that higher cumulants of P(t_i) vanish; they will be shown not to change the physics at long distances in App. <ref>.
Then using Eq. <ref> and Eq. <ref>
we obtain using the cumulant expansion
[We note that the normalization of the distribution P(t_i) is chosen such that it satisfies Eq. <ref> and hence it is not normalized as a probability distribution. However, multiplying and dividing P(t_i) by an appropriate overall trivial constant we can use the formula for cumulant expansion, which is valid for probability distributions.]
∫_-∞^∞ (∏_i= even d t_i P(t_i))(K̂_t⃗^†K̂_t⃗)^⊗ R∝exp{4Δ̃∑_i=even(∑_a=1^RÊ_i^(a))^2}
∝exp{4Δ̃∑_i=even∑_a≠ b^RÊ_i^(a)Ê_i^(b)}
where in Eqs. <ref> and Eq. <ref>, we have dropped
unimportant overall multiplicative constants and consequently
replaced the equality signs by proportionality signs, and in Eq. <ref> we have used Ê_i^2=1 (see Eq. <ref>).
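The Gaussian average underlying the cumulant expansion above can be checked symbolically; the following short sketch (with P(t) taken to be a Gaussian distribution of vanishing mean and second cumulant 2Δ̃, an illustrative special case) verifies the identity used in the step leading to Eq. <ref>.

import sympy as sp

t = sp.symbols('t', real=True)
S = sp.symbols('S', real=True)          # stands for sum_a E_i^(a); these commute, so a scalar suffices
D = sp.symbols('Delta', positive=True)  # Delta-tilde; the second cumulant of P(t) is 2*Delta

P = sp.exp(-t**2 / (4 * D)) / sp.sqrt(4 * sp.pi * D)          # Gaussian: mean 0, variance 2*Delta
avg = sp.integrate(P * sp.exp(2 * t * S), (t, -sp.oo, sp.oo))
print(sp.simplify(avg - sp.exp(4 * D * S**2)))                # 0: average of exp(2tS) is exp(4*Delta*S^2)
# since (E_i^(a))^2 = 1, S^2 = R + sum_{a != b} E_i^(a) E_i^(b), which yields the
# inter-replica coupling exp{4*Delta * sum_{a != b} E^(a) E^(b)} up to an overall constant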
Using Eqs. <ref>, <ref> and <ref>, we can
then write the measurement averaged moments of expectation values as
[⟨Ô_1⟩_m⃗ ...
⟨Ô_N⟩_m⃗] ∝lim_R→ 1Tr(𝒪̂_1^(1)𝒪̂_2^(2)…𝒪̂_N^(N)×
× (|0⟩ ⟨0|)^⊗ Rexp{4Δ̃∑_i=even∑_a,b=1
a≠ b^RÊ_i^(a)Ê_i^(b)})
In the derivation of Eq. <ref>, we assumed that operators 𝒪̂_i commute with the Kraus operator K̂_m⃗ [For the O'Brien-Fendley chain, we will consider calculating measurement averaged moments of the correlation function for operators σ̂^z_i and Ê_j. As noted earlier, the operators Ê_j at even sites j commute with each other, and hence the operator Ê_j at an even site j also commutes with the Kraus operator K̂_m⃗ (see Eq. <ref>).
Moreover, if we choose the operator σ̂^z_i to lie on an odd sites i, it commutes with Ê_j=1/√(2)(σ̂^z_j+1σ̂^z_j-σ̂^x_j) for all even sites j (see Fig. <ref>) and thus, it also commutes with the Kraus operator 𝐊̂_m⃗.
Therefore, the positions of operators for which we study the correlation functions in this paper can always be slightly “tuned" such that they commute with the Kraus operator 𝐊̂_m⃗].
We will close this section by discussing the case when this assumption is not satisfied.
See also the discussion at the end of App. <ref>.
Since
each operator 𝒪̂_i is either a `local'
operator
or a product of `local' operators, it will commute with most
Kraus operators
K̂_j,m_j in
the product K̂_m⃗=∏_jK̂_j,m_j, and it might not commute with only a few K̂_j,m_j which have support on the same sites j as the operator 𝒪̂_i.
We expect such local commutator terms of operator Ô_i to generically be subleading in scaling dimension [We note that in Ref. <cit.> they have made a similar observation in a related context.], such that the leading order long distance behavior of the measurement averaged moments of the ground state expectation value of
operator
𝒪̂_i is still given by Eq. <ref>.
§ FIELD THEORY REPRESENTATION AND MEASUREMENT RG FIXED POINT
§.§ Field Theory Representation
In field theory language, the ground state density matrix of the O'Brien-Fendley chain at the
Ising
tricritical point can be written
as a path integral [Compare also analogous discussions for different systems in Refs.
<cit.>, <cit.>
.] over the cut cylinder shown in Fig. <ref>,
|0⟩⟨0|= lim_β→∞e^-β H/Z=1/Z∫ D ϕ e^-S_*|ϕ(x,0^-)⟩⟨ϕ(x,0^+)|
S_*= ∫ d τ∫ d x {1/2(∂_xϕ)^2+1/2(∂_τϕ)^2+
g^*_3ϕ^6},
Dϕ= ∏_τ=-β/2^+β/2∏_xdϕ(x,τ),
where
|0⟩ is the ground state of the tricritical Ising Hamiltonian, and
S_* is the effective Landau-Ginzburg
(-Zamolodchikov <cit.>)
fixed point action of the Ising tricritical point,
defined on the 2d space-(imaginary)time
geometry
in Fig. <ref>. [
In the present case of two dimensions there is no meaning to a perturbative study of the Landau-Ginzburg action about the Gaussian theory, since the field ϕ is dimensionless by naive
power-counting, and a non-perturbative tool is needed. This is provided in Ref. Zamolodchikov1986
where it is shown that non-perturbative field identifications following from the exact equations of motion are exactly those obtained from the corresponding unitary minimal model CFT.]
We can insert the path integral representation from Eq. <ref> into Eq. <ref>, and replace the local operators
(or products of local operators)
Ô_i with the corresponding continuum fields 𝒪^(a_i)_i(x,0^-)
in the respective replica copy “a_i".
Following
Eqs. <ref>,
<ref>,
we
can also replace the
measurement operator Ê^(a)_i in Eq. <ref> by the corresponding continuum energy scaling operator
ℰ^(a) in replica copy “a".
The field ℰ is expressed in terms of the Landau-Ginzburg field ϕ by
ℰ(x,τ)= :ϕ^2 :(x,τ),
where `: :' indicates
standard `normal ordering' [
subtraction of the singular terms in the operator product expansion [OPE]]
of the field ϕ^2 <cit.>.
Thus, in
continuum
language,
we obtain the following expression for the averages
[⟨Ô_1⟩_m⃗ ...
⟨Ô_N⟩_m⃗]∝
lim_R→ 1∫_ϕ^(a)(x,0^-)
=ϕ^(a)(x,0^+)[∏_a=1^RD ϕ^(a)] 𝒪_1^(1)𝒪_2^(2)…𝒪_N^(N)
e^- 𝕊
where -𝕊= -∑_a=1^R S_*^(a) + Δ∫_-∞^+∞ dx Φ(x)
Φ(x) :=
∑_a,b=1
a≠ b^Rℰ^(a)(x,0) ℰ^(b)(x,0) and Δ=(constant)×Δ̃.
Due to the trace in Eq. <ref>, the τ=0^- and τ=0^+ boundaries of the cut cylinder in Fig. <ref> will be glued, and the field configurations ϕ^(a)(x,0^-) and ϕ^(a)(x,0^+) are identified for all replica indices “a" as shown in Eq. <ref> [Important differences in the gluing of boundary field configurations could occur if the fields 𝒪_i^(a) cannot be expressed locally in terms of the Landau-Ginzburg field ϕ and its normal ordered higher powers. This issue is addressed in App. <ref>].
A different perspective from which to verify the form of the defect interaction appearing in Eqs. <ref> and <ref> is to consider symmetries of the system, and in particular
Kramers-Wannier duality.
Note that although our ground state |0⟩ is
invariant under Kramers-Wannier duality,
an individual quantum trajectory will generally not be invariant under it.
However, since we average over all measurement outcomes,
we expect this symmetry to be restored in an average sense.
Thus, the K-W symmetry will appear as an
average (“weak") symmetry
of the ensemble of quantum trajectories.
This
implies, in particular,
that although the total replica action (in IR) will be not invariant if we take ℰ^(a)→-ℰ^(a) in a single replica, the action will be invariant if we perform the transformation ℰ^(a)→-ℰ^(a) for all replica indices (a) simultaneously.
The
most
RG relevant
perturbation
supported on the τ=0 time-slice
and
in the presence of
this
average (“weak") symmetry
is of
the
form ℰ^(a)ℰ^(b) for replica indices a≠ b [Note that a term with equal replica indices of form ℰ^(a)ℰ^(a), can be evaluated
using point splitting and the operator product expansion <cit.> (OPE)
ℰ^(a)×ℰ^(a)=I^(a)+ℰ'^(a)
where the operator
ℰ'=
:ℰℰ:
= :ϕ^4:,
in every replica “a",
is irrelevant
as an operator with support on the 1-dimensional τ=0 time slice at the Ising tricritical point.].
Finally, we must consider the sum of terms ℰ^(a)ℰ^(b) over all possible pairs of unequal replica indices for the action to be symmetric under permutation of replica indices.
This gives us Eq. <ref> with Φ(x) in Eq. <ref> back [Due to high scaling dimensions, terms with more than four ℰ^(a) fields and pairwise unequal replica indices are irrelevant under RG at the tricritical Ising
point.
The term with exactly four ℰ^(a) fields (with pairwise unequal replica indices) is relevant at the tricritical Ising point and is less relevant than Φ(x) in Eq. <ref>.
Moreover, we argue in App. <ref> and <ref> that this term is expected to be irrelevant at the new IR
fixed point.
].
§.§ Controlled Perturbative Renormalization Group Analysis
At the
Ising tricritical point,
the scaling dimension of the field ℰ= :ϕ^2: is
X_ℰ =1/5
<cit.>.
Thus, the RG eigenvalue of the coupling constant Δ in Eq. <ref> is y_Δ=1-2X_ℰ=3/5>0,
implying that the perturbation is relevant.
To study the effect of this perturbation, we will use
a perturbative RG
analysis,
controlled by a small parameter ϵ.
To obtain such a small parameter ϵ,
we will consider
the following
generalization
of the
action
in Eq. <ref>,
where
-𝕊= -∑_a=1^R S_*^(a) + Δ∫_-∞^+∞ dx Φ(x)
Φ(x)=∑_a,b=1
a≠ b^Rℰ^(a)(x, 0)ℰ^(b)(x,0).
But now we consider, instead of Eq. <ref>,
the more general fixed point described by the
action
S_*=∫ d τ∫ d x {1/2(∂_xϕ)^2+1/2(∂_τϕ)^2+
g^*_m-1ϕ^2(m-1)},
where
ℰ=:ϕ^m-2:,
and where m≥ 4 is an even integer.
Note that
setting
m=4 in the above equations, we recover the problem at hand, i.e. the
action
and the field ℰ given by Eq. <ref> and <ref>, respectively.
For any
integer m≥ 3 (even or odd), the
fixed point action in Eq. <ref>
describes exactly <cit.>
the
multi-critical points famously known as the m^th
unitary minimal model conformal field theories (CFTs)
of central
charge <cit.>
c(m)=1-6/(m(m+1)).
(The same comment as in
footnote <cit.> applies to this Landau-Ginzburg action.)
We note that for arbitrary integer values m ≥ 3, the operator ℰ in Eq. <ref> is no longer the `energy' field of the Ising multi-critical point described by the Landau-Ginzburg action in
Eq. <ref> (which would be :ϕ^2:).
However, we will restrict ourselves to only even values of m [We will relax this restriction in Sect. <ref>, where we will consider an odd m minimal model, namely the Ising CFT.], and keep using the symbol ℰ for
the field in Eq. <ref>
for the following reason.
For the central charges c(m), Eq. <ref>, with even integer values of m≥ 4, there is another critical model with the same central
charge, in addition to that described by the action in Eq. <ref>.
This is the tricritical q-state Potts model, where the value of q is given <cit.>
by
√(q) = 2 cos(π/m), m≥ 4 ( even).
When q=2, this is of course the tricritical Ising model, which is described by the Landau-Ginzburg action
in Eq. <ref>
above, but for other values of the number q of
Potts states in Eq. <ref>, e.g. for q=3,
it is a
slightly different theory than the one in Eq. <ref>,
with the same central charge <cit.>.
This will not be of relevance for the observables of interest to us, which
turn out to be present in both theories (see also below). For example and of particular interest to us, when
m≥ 4 is even, the operator ℰ from Eq. <ref> is precisely the same operator as the energy (`thermal') operator in the
tricritical q-state Potts model of the same central charge <cit.>. (In CFT language, that operator is the so-called Kac-Table primary operator φ_1,2 which has the scaling dimension listed in Eq. <ref> below.)
When m→∞, the value of q approaches q=4, describing the q=4 state
tricritical Potts model, which turns out to be the same as the critical (ordinary) q=4-state Potts model at central charge c=1 <cit.>.
Moreover, for even m, all operators that appear when performing repeated operator product expansions of ℰ with itself are operators present simultaneously
in both, the
tricritical q-state Potts model and the Landau-Ginzburg multicritical point described by Eq. <ref> [As already mentioned, the energy operator of the tricritical q-state Potts model is the so-called Kac-Table operator φ_1,2, and under repeated OPEs with itself, it generates the set of Kac-Table operators φ_1,n, all of which are common to both critical systems. (This set of operators forms an operator algebra closed under the operator product expansion.)],
and all correlation functions of an arbitrary number of ℰ operators are exactly the same in both systems.
Since, as we will discuss shortly, we will be interested in computing the RG equation (beta function) for the coupling constant Δ in the generalized model
Eq. <ref>
for even values of m,
which is
uniquely
determined [in a given RG scheme] (to arbitrary loop order) by the set of the correlation functions of an arbitrary number of ℰ operators (which, as just mentioned, are the same for both systems), we can use either the Landau-Ginzburg
formulation of Eq. <ref> or equivalently the tricritical q-state Potts model formulation, both yielding the same result for this RG
equation [See also the discussion in App. <ref>.].
Specifically, we will proceed as follows. We are interested in the properties of the
replica
field theory in Eq.
<ref> – <ref>
when m=4, describing the effects of the quantum mechanical measurements on the tricritical Ising ground state, as described in the previous sections. We will study the generalization of this field theory to large even values of the parameter m which, as already mentioned, provides an expansion parameter ϵ that is small when m (even) is large. This is a pure field theory problem.
We will find that for large even values of m, the field theory in
Eq. <ref>
will exhibit a fixed point at a non-vanishing value
Δ_* of the coupling constant Δ, controlled by the parameter ϵ=3/(m+1), small when the even integer m is large.
At this fixed point, we compute universal properties (including critical exponents) of a variety of observables
perturbatively controlled by the small parameter ϵ. This is the same logic as in the familiar Wilson-Fisher ϵ=
4-d expansion in dimensions d smaller than 4. In contrast, here we always remain in 2=(1+1)
dimensions, but we vary the central charge c(m) by varying the even integer m [This is an expansion in √((3/2)(1-c(m)))
about c(m)=1, which can equivalently be viewed (as discussed above) as an expansion in
(3/(2π))√(4-q) of the tricritical q-state Potts model about the q=4 state Potts model.]. (This type of ϵ-expansion within conformal perturbation theory in two dimensions was first performed in Refs. <cit.>
and subsequently used in many works.).
This approach allows us to establish that at the
finite-Δ_* fixed point the system has an
extremely rich universal scaling behavior (to be discussed in subsequent sections), which we can access in a controlled manner perturbatively in ϵ (in the sense of an ϵ-expansion).
Physically, this
rich scaling behavior originates, as
m → 4, from the intrinsic
randomness resulting from the
indeterministic outcomes of the quantum mechanical measurements performed on the ground state of the Ising tricritical point.
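For orientation, the simple arithmetic below (illustrative only) tabulates, for a few even values of m, the number of Potts states q from Eq. <ref>, the central charge c(m) from Eq. <ref>, and the expansion parameter ϵ=3/(m+1); the tricritical Ising case corresponds to m=4, q=2, c=7/10 and ϵ=3/5, and ϵ becomes small only as m→∞ (q→4).

import numpy as np

for m in (4, 6, 8, 10, 20, 100):
    q = (2 * np.cos(np.pi / m))**2      # sqrt(q) = 2 cos(pi/m)
    c = 1 - 6 / (m * (m + 1))           # central charge of the m-th minimal model
    eps = 3 / (m + 1)                   # expansion parameter
    print(f"m={m:4d}   q={q:6.3f}   c={c:6.4f}   eps={eps:6.4f}")
# m=4: q=2, c=0.7 (tricritical Ising), eps=0.6; eps is genuinely small only for large m (q -> 4)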
We now proceed to discuss the RG equation for the coupling constant Δ in Eq. <ref>.
For an arbitrary even integer
m≥ 4,
the scaling dimension of ℰ in Eq. <ref> (with action S_* in Eq. <ref>) is <cit.>,
X_ℰ=1/2 - 3/(2(m+1)).
Thus, the RG eigenvalue of the coupling constant Δ
is
y_Δ= 1-X_Δ =1 - 2X_ℰ = 3/(m+1) ≡ ϵ
which is greater than zero, implying that the perturbation is relevant and we will flow away from the
unperturbed fixed point at Δ=0 for any given
m.
To obtain the 1-loop RG equation for the coupling constant Δ, we will need the OPE of the operator
Φ(x) (from Eq. <ref>
with ℰ from Eq. <ref>) with itself (see for example
Refs. JLCardy_1986RGOPE,LUDWIG198797,cardy_1996,LUDWIGWIESE).
For any conformal minimal model with even m≥ 4 in Eq. <ref>, the fusion rule [As already recalled in footnote <cit.>, the operator ℰ is the Kac-Table operator ℰ=φ_1,2]
of ℰ=:ϕ^m-2: with itself is given by,
ℰ×ℰ=I+ℰ'
where ℰ'=:ϕ^2m-4: is another scaling field in the m^th
Landau-Ginzburg theory
and it is irrelevant on
the 1-dimensional τ=0 time slice,
for any m≥4 [The scaling dimension of ℰ' in
the m^th
minimal model is 2(m-1)/(m+1)>1 for any even m≥4 (see <cit.>)
].
In tricritical q-state Potts language where, as mentioned above, ℰ is the (leading)
energy operator, ℰ' is simply the subleading energy operator.
From Eq. <ref>,<ref>, one obtains the OPE
Φ(x_1) Φ(x_2) ∼ (b/|x_1 - x_2|) Φ(x_2) +… ,
where
b = 4 (R-2),
and
the
ellipsis indicates fields which are irrelevant when supported on the τ=0 time-slice
for m≥ 4 (which includes the m=4 Ising tricritical case, Eq. <ref>), and can be ignored.
To 1-loop order, the RG equation is then given by
<cit.>,
<cit.>,
<cit.>
dΔ/dℓ = y_ΔΔ + b Δ^2 + O(Δ^3).
Thus, when the number of replicas is R<2, there is a new fixed point at a non-vanishing positive value
Δ_*= -(1/b) y_Δ + O(y_Δ^2) = ϵ/(4(2-R)) + O(ϵ^2) >0,
ϵ=3/(m+1).
We are interested in the limit R→ 1 relevant for measurements
(satisfying the Born rule)
<cit.>,
and
since Δ describes a second cumulant, we are interested in a non-negative value Δ≥ 0.
Finally, we also note that the RG analysis for the replica action
from Eq. <ref>
(with action S_* in Eq.
<ref> and m≥ 4)
has been
performed to two loop order [in a dimensional regularization [by ϵ=3/(m+1)] RG scheme, with minimal subtraction of poles in ϵ]
in Ref. <cit.>, and from
this analysis we
obtain the following fixed point coupling up to second order in ϵ,
Δ_*= ϵ/(4(2-R)) + ϵ^2/(4(2-R)^2) + O(ϵ^3),
a result that will be used further below.
Note that the 1-loop results in Eq. <ref> and Eq. <ref> match, as they should, because the 1-loop contribution b to the RG equation is independent of the RG scheme used.
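As a numerical illustration (keeping in mind that at m=4 one has ϵ=3/5, which is not small, so that the truncated series is indicative only), the fixed-point coupling of Eqs. <ref> and <ref> in the Born-rule limit R→1 evaluates as follows, in a short Python sketch:

def delta_star(eps, R=1.0, loops=2):
    # truncations of the one- and two-loop fixed point couplings quoted above
    d = eps / (4 * (2 - R))
    if loops >= 2:
        d += eps**2 / (4 * (2 - R)**2)
    return d

for m in (4, 6, 10, 40):
    eps = 3 / (m + 1)
    print(m, round(delta_star(eps, loops=1), 4), round(delta_star(eps, loops=2), 4))
# at m=4 (tricritical Ising): Delta_* ~ 0.15 at one loop, ~ 0.24 with the two-loop term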
§ CORRELATION FUNCTIONS
In this section, we will discuss measurement-averaged moments of various correlation functions in the ground state of the tricritical O'Brien-Fendley chain.
Following the logic of the preceding section, we will first express these measurement-averaged moments at the Ising tricritical point as correlation functions of corresponding fields in the replica Landau-Ginzburg field theory describing the Ising tricritical point as in
Eqs. <ref>,
<ref>,
<ref>.
Subsequently, we will consider the generalization of these correlation functions to the sequence of field theories with parameter m≥ 4 (even) in Eqs. <ref>, <ref>, <ref>,
which possess a small expansion parameter ϵ when m is large. We will identify the generalization of the fields of the Ising tricritical point to general values of (even) m, and calculate their correlation functions in the replica field theory at the RG fixed point controlled by ϵ (discussed in the preceding section). This provides, analogous to the ordinary Wilson-Fisher ϵ = (4-d)-expansion, a controlled expansion of critical exponents and other universal properties in an expansion in ϵ. Just as in the case of Wilson-Fisher expansion the case of d=2 and d=3 dimensions is of particular interest, of particular interest to us is the case of m=4.
§.§ Measurement-averaged moments of the spin-spin correlation function ⟨σ̂^z_iσ̂^z_j⟩^N
If we consider two lattice spin operators σ̂^z
at the Ising tricritical point
at
sites i and j (also see footnote <cit.>),
we can use Eqs. <ref>,
<ref>,
<ref>,
<ref>
to
express the measurement-averaged
N^th moment of their
ground state
correlation
function in replica field theory language:
At long wavelengths and to leading order in scaling dimension, the lattice operator σ̂^z is represented <cit.> at the Ising tricritical point of the O'Brien-Fendley chain
by the spin field σ(x)=ϕ
appearing in the action Eq. <ref>.
The N-th moment average ⟨σ̂^z_iσ̂^z_j⟩^N
is then given in continuum language by the following correlation function in the replica field theory for the Ising tricritical point,
discussed in
Eqs. <ref>,
<ref>,
<ref>,
<ref>
⟨σ̂^z_iσ̂^z_j⟩^N∼⟨𝔖^{α_i}(x, 0) 𝔖^{α_i}(y, 0)⟩,
where
we have defined 𝔖^{α_i}(x, τ) as
𝔖^{α_i}(x, τ)
:=
[ ∏_i=1^Nσ^(α_i)(x,τ) ],
and
1 ≤α_i≤ R are pairwise distinct
replica indices in the R-replica theory.
As R→ 1,
the physics at long distances is determined by the
new, measurement-dominated fixed point discussed in the previous section, and the
correlation function in Eq. <ref> will asymptotically exhibit power law behavior,
⟨𝔖^{α_i}(x, 0) 𝔖^{α_i}(y, 0)⟩∝1/|x-y|^2X^(σ),R=1_N,
as |x-y|→∞ .
Here
the power law exponent
X^(σ),R=1_N
characterizes the scaling behavior of the measurement averaged N^th moment of the ⟨σ̂^z_iσ̂^z_j⟩ correlation function at the Ising tricritical point. We can evaluate the power law exponent
X^(σ),R_N
in an expansion in ϵ=3/(m+1)
at the new fixed point Δ_*
discussed in the previous Section <ref>,
by computing the above correlation function
with the
generalized replica action in
Eq. <ref> – <ref>.
To this end, we use the spin field of the generalization of the Ising tricritical point to the tricritical q-state Potts model for
√(q) = 2 cos(π/m) with m≥ 4 even, which has scaling dimension <cit.>
X_σ=(m^2-4)/(8m(m+1)),
(m≥ 4, even),
and which we denote by the same symbol σ(x)
as that used above in the tricritical Ising case. The tricritical Potts spin field is known [
As already mentioned, for even m≥ 4
the tricritical q-state Potts energy operator
ℰ
corresponds to the Kac-table primary field φ_1,2,
while the leading tricritical Potts spin field σ
corresponds to the Kac-Table primary field φ_m/2, m/2 <cit.>. The OPE of these two fields reads φ_1,2×φ_m/2, m/2=
φ_m/2, m/2+1 + φ_m/2, m/2-1=
φ_m/2, m/2 + φ_m/2, m/2+2, where in the last equality we have used the symmetry of the Kac Table, h_r,s=
h_m-r, m+1-s, and φ_m/2, m/2+2 denotes the subleading tricritical Potts spin field σ'.]
to have a natural OPE with the tricritical Potts energy operator,
σ×ℰ = σ + σ',
where the subleading tricritical Potts spin field σ' has scaling dimension
X_σ'=(9m^2-4)/(8m(m+1)).
We note
in passing that the
scaling dimensions of the
tricritical q-state Potts spin and subleading spin fields also
match those of the following fields
in the Landau-Ginzburg formulation [The only difference for even m ≥ 6 is that in the Potts formulation there are two different spin and subleading spin fields, degenerate in scaling dimension. <cit.>
This doubling of spin fields in the Potts formulation will be of no relevance for us since we will only be interested in the two-point function of the tricritical q-state Potts spin fields.],
σ(x)=:ϕ^m/2-1:(x)
σ'(x)= :ϕ^3(m/2-1):(x) .
At m=4, this gives back the spin field and the subleading spin field of the tricritical Ising CFT,
σ(x)=ϕ
and σ'(x)= :ϕ^3:.
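It may be useful to check that these general-m expressions reduce to the familiar tricritical Ising values at m=4; the following lines of simple arithmetic (for illustration only) evaluate Eqs. <ref> and <ref>:

from fractions import Fraction as F

def X_sigma(m):         # leading tricritical Potts spin field
    return F(m**2 - 4, 8 * m * (m + 1))

def X_sigma_prime(m):   # subleading spin field
    return F(9 * m**2 - 4, 8 * m * (m + 1))

print(X_sigma(4), X_sigma_prime(4))   # 3/40 and 7/8, the dimensions of sigma = phi and
                                      # sigma' = :phi^3: at the tricritical Ising point (m = 4)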
Let us
denote, for general even values of m,
the coefficient of the field σ in the OPE of ℰ and σ
in the tricritical q-state Potts model
by c_σℰσ.
Then,
the OPE of 𝔖^{α_i}
(recalling its definition in Eq. <ref>)
with Φ(x) is given by
Φ(x) ·𝔖^{α_i}(y,0)
∼N(N-1)c^2_σℰσ/|x-y|^2X_ℰ 𝔖^{α_i}(x,0)+ …
where the ellipsis indicates
fields
which, at ϵ=0 (m=∞),
have scaling dimensions different from those of
𝔖^{α_i}(x,0).
Upon
using
the
OPE in
Eq. <ref>, the 1-loop RG equation for 𝔖^{α_i}(x,0)
is given by
d g_{α_i}/d ℓ =(1-N X_σ) g_{α_i} + 2N(N-1)c^2_σℰσΔ g_{α_i}+…,
where the scaling dimension X_σ of the tricritical Potts spin field σ(x) was
recalled in Eq. <ref>,
and
we have defined g_{α_i} as the coupling constant for the term
∫ dx 𝔖^{α_i}(x,0)
when added to the
action.
The
ellipsis in the above equation indicates
not only the higher order terms but also quadratic terms involving couplings other than g_{α_i} and Δ.
This yields the decay exponent for the
correlation function of the replicated product of tricritical Potts spins (i.e. 𝔖^{α_i}(x,0)) at the new fixed point Δ_* of the field theory
Eq. <ref> – <ref>
to first order in ϵ,
⟨𝔖^{α_i}(x,0) 𝔖^{α_i}(y,0)⟩∝1/|x-y|^2X^(σ),R_N,
X^(σ),R_N= NX_σ - N(N-1)c^2_σℰσ ϵ/(2(2-R)) + 𝒪(ϵ^2).
The OPE coefficient c_σℰσ can also be expanded in powers of
ϵ about ϵ=0 (corresponding to m=∞).
The m=∞ conformal minimal model corresponds to the
4-state
tricritical Potts model,
and we can replace the
OPE coefficient
c_σℰσ in the above equation by
its value
in the
4-state
tricritical Potts model <cit.>; any
ϵ-dependence of the OPE coefficient c_σℰσ will only yield contributions of order
𝒪(ϵ^2)
to Eq. <ref>.
The OPE coefficient
in the tricritical 4-state Potts model is
equal <cit.>
to c_σℰσ=
1/√(2), yielding
X^(σ),R_N= (N/8)[1-(1/3)(1+6(N-1)/(2-R))ϵ +𝒪(ϵ^2)].
In the limit R→ 1 the above
expression
provides, as m → 4, an expansion in ϵ=3/(m+1)
(in the sense of the ϵ-expansion) of
the scaling
dimension
of the measurement averaged N^th moment of
the
⟨σ̂^z_iσ̂^z_j⟩ correlation
function at the Ising tricritical point,
X^(σ),R=1_N= (N/8)[1-(1/3)(1+6(N-1))ϵ +𝒪(ϵ^2)].
Note that this expression shows that for the first moment, N=1, the first order correction in ϵ
to the exponent X_σ
(observed without measurements) vanishes, consistent with the
expected absence of corrections to X_σ arising from Born-rule measurements in the tricritical Ising case, m=4, to any order in ϵ due to the result in Eq. <ref>.
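To make the hierarchy explicit, one may evaluate the one-loop expression Eq. <ref> numerically at m=4, i.e. ϵ=3/5 and R=1; since ϵ is not small there, the numbers produced by the following sketch are indicative of the trend only (indeed, the leading-order values for N≥2 even turn negative, signaling that higher orders in ϵ matter at ϵ=3/5).

def X_sigma_N(N, eps=3/5, R=1.0):
    # one-loop expression quoted above; controlled only for small eps
    return (N / 8) * (1 - (1 + 6 * (N - 1) / (2 - R)) * eps / 3)

for N in (1, 2, 3, 4):
    print(N, round(X_sigma_N(N), 4))
# N=1 reproduces X_sigma to O(eps) (no correction from Born-rule measurements); the strongly
# nonlinear N-dependence signals multifractality.  For N >= 2 the one-loop values at eps=3/5
# turn negative, a reminder that higher orders in eps matter at this value of eps.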
§.§ Measurement averaged moments of the energy-energy correlation function ⟨Ê_iÊ_j⟩^N
Let us now consider, again at the Ising tricritical point, two energy operators Ê (given in Eq. <ref>) on links i+1/2 and j+1/2 of the lattice
(also see footnote <cit.>).
We can again use
Eqs. <ref>,
<ref>,
<ref>,
<ref>
to
express
the measurement averaged N^th moment of the correlation function ⟨Ê_i+1/2Ê_j
+1/2⟩
in field theory language.
In continuum language, as discussed in Section <ref>, the lattice operator Ê is given by the energy scaling field ℰ
at the Ising tricritical point.
Thus,
from Eq. <ref>, the measurement averaged N^th moment
of ⟨Ê_iÊ_j
⟩ (as before, we have dropped the lattice offset +1/2 on the Ê operators for notational simplicity)
can be written as
⟨Ê_iÊ_j⟩^N∼⟨𝔈^{α_i}(x, 0) 𝔈^{α_i}(y, 0)⟩
where we have defined,
𝔈^{α_i}(x, τ)
:=
[ ∏_i=1^Nℰ^(α_i)(x,τ) ]
and α_i are pairwise distinct replica indices in the R-replica field theory. Again, as R→ 1,
the physics at long distances is determined by the
new, measurement-dominated fixed point discussed in the previous section, and the
correlation function in Eq. <ref>
will asymptotically exhibit power law behavior,
⟨𝔈^{α_i}(x, 0) 𝔈^{α_i}(y, 0)⟩∝1/|x-y|^2X^( E),R=1_N,
as |x-y|→∞ .
Here
the power law exponent
X^( E),R=1_N
characterizes the scaling behavior of the measurement averaged N^th moment of the
⟨Ê_i
Ê_j
⟩
correlation function at the Ising tricritical point.
Analogous
to the discussion of the moments of the
spin operator
in the preceding subsection, we can evaluate
the power law exponent
X^( E),R_N
in an expansion in ϵ=3/(m+1)
at the new fixed point Δ_*
discussed in the previous Section <ref>,
by computing the above correlation function
with the
generalized replica action in
Eq. <ref> – <ref>.
Unlike the product
𝔖^{α_i}
of replicated spin fields in Eq. <ref>,
the product 𝔈^{α_i}
of replicated energy fields
does not turn out to be
<cit.>
a scaling operator at the new RG fixed point (even to 1-loop order).
The corresponding scaling operators at the new fixed point
are instead
given by a linear
superposition of
𝔈^{β_i} with different possible sets of replica indices {β_1, ⋯, β_N}.
These scaling operators at the new fixed point transform in irreducible representations
of the permutation group S_R of the R replicas
(as introduced in Ref. <cit.>).
Thus, the correlation function
⟨𝔈^{α_i}(x, 0) 𝔈^{α_i}(y, 0)⟩
will be expressed as a sum of power laws
which, at large distances,
turn out to be dominated <cit.>
by
the leading (smallest)
scaling dimension in the sum.
Details are provided in App. <ref>. In a theory with R replicas, we
obtain
⟨𝔈^{α_i}(x, 0) 𝔈^{α_i}(y, 0)⟩∝1/|x-y|^2X^(ℰ),R_N, |x-y|→∞
where for the case of Born rule measurements (R→ 1),
X^(ℰ),R=1_N=1=1/2-ϵ/2+𝒪(ϵ^3)
X^(ℰ),R=1_N>1=N/2[1+ϵ-(3N-5)ϵ^2+𝒪(ϵ^3)].
Analogous to the case of the moments of
the spin operator, as m→ 4, the above expression for X^(ℰ),R=1_N provides an expansion
in ϵ=3/(m+1) (in the sense of the ϵ-expansion) of
the scaling dimension of the measurement averaged N^th moment of the ⟨Ê_i
Ê_j
⟩
correlation function at the Ising tricritical point.
We note that for the first moment, N=1,
the above scaling dimension
matches with the scaling dimension of the energy operator
X_ℰ (from
Eq. <ref>)
at the unperturbed fixed point up to second order in ϵ=3/(m+1).
This is again consistent with Eq. <ref> which implies that, in the tricritical Ising case, m=4, there should be no corrections
to X_ℰ
at any order in ϵ arising from measurements following the Born-rule.
We close this section by noting that it turns out
<cit.>
that the Nth moments of the
correlation function of the subtracted energy operator describing the deviation from its expectation value in a fixed quantum trajectory,
δÊ_i
:= Ê_i -
⟨Ê_i⟩,
⟨δÊ_i δÊ_j⟩=
⟨Ê_i Ê_j⟩
-
⟨Ê_i⟩ ⟨Ê_j⟩,
decay with a single power law, and not with a sum of different power laws as the Nth moments listed in Eq. <ref>. That is, at the Ising tricritical point, the moments of the resulting connected correlation function of energy operators decay with a single power law,
⟨δÊ_i δÊ_j⟩^N = [⟨Ê_iÊ_j⟩ - ⟨Ê_i⟩⟨Ê_j⟩]^N ∝ 1/|x-y|^2X̃^( E),R=1_N,
N ≥ 1,
where the expression for X̃^( E),R=1_N is that on the right hand side of
Eq. <ref>, however now valid for all positive integers N, including N=1.
In the language of the replica field theory, this is written in the form
⟨𝔈_NNR(x, 0) 𝔈_NNR(y, 0)⟩∝1/|x-y|^2X̃^(ℰ),R=1_N,
as |x-y|→∞
where the field 𝔈_NMR(x, 0), transforming in a specific irreducible representation of the permutation group S_R of the replicas, is defined in Eq. <ref> of
App. <ref>.
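As in the spin case, the expressions in Eq. <ref> can be evaluated numerically at m=4 (ϵ=3/5, R→1); the sketch below is again only indicative, since the ϵ expansion is controlled only for small ϵ.

def X_E_N(N, eps=3/5):
    # two-loop expressions quoted above, Born-rule limit R -> 1; controlled only for small eps
    if N == 1:
        return 1/2 - eps/2
    return (N / 2) * (1 + eps - (3 * N - 5) * eps**2)

for N in (1, 2, 3):
    print(N, round(X_E_N(N), 4))
# N=1 gives 1/5 = X_E at m=4, consistent with the absence of corrections for Born-rule measurements;
# the N >= 2 values illustrate the independent hierarchy of exponents, though at eps=3/5 the
# truncated series is indicative only.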
§.§ Multifractality of Scaling Dimensions
We know
from the POVM condition, Eq. <ref>,
that the measurement-averaged
first moment of
correlation functions
such as
⟨σ̂^z_iσ̂^z_j⟩ and ⟨Ê_iÊ_j⟩
exhibits
the same power law behavior as
in the unmeasured ground state of the tricritical Ising Hamiltonian.
(This has also been verified within the ϵ expansion to the order we have evaluated it - see
Eqs. <ref>,
<ref> above.)
From Eqs. <ref> and <ref>,
however,
one sees
that for
each of the two operators σ̂^z_i and δÊ_i
we have obtained an infinite number of independent scaling dimensions which are associated
with the measurement-averaged higher
integer moments of their correlation functions.
This is a signature of multifractality.
In a fixed quantum trajectory both correlation functions
⟨σ̂^z_iσ̂^z_j⟩ and ⟨δÊ_iδÊ_j⟩ are bounded from above and are non-negative
for sufficiently weak measurement strength [
Since the operators σ̂^z_i and Ê_i have eigenvalues ± 1,
in any quantum state
the correlation functions
⟨σ̂^z_iσ̂^z_j⟩ and ⟨δÊ_iδÊ_j⟩ are
both bounded
from above and below.
[E.g., the correlation function ⟨σ̂^z_iσ̂^z_j⟩
lies in the interval
[-1,1]
(consider (σ̂^z_i-σ̂^z_j)^2), and analogously ⟨δÊ_iδÊ_j⟩=⟨Ê_iÊ_j⟩-⟨Ê_i⟩⟨Ê_j⟩
lies in the interval [-2,2]).]
If the given correlation function (either ⟨σ̂^z_iσ̂^z_j⟩ or ⟨δÊ_iδÊ_j⟩) in the
critical ground state |0⟩ is oscillating between positive and negative values
with a period consisting of a certain number of lattice sites
as we vary j, we can choose to look at a subset of sites j for which the correlation function is strictly positive.
Now, if we consider performing weak measurements (small λ) on the ground state |0⟩, a particular quantum trajectory will be given by |ψ_m⃗⟩=
K̂_m⃗|0⟩/
√(p_0(m⃗)
) (see Eq. <ref> and Eq. <ref>). The correlation function
⟨σ̂^z_iσ̂^z_j⟩ (or ⟨δÊ_iδÊ_j⟩=⟨Ê_iÊ_j⟩-⟨Ê_i⟩⟨Ê_j⟩) in this quantum trajectory and for a given value of |j-i| will be an analytic function of the measurement strength λ.
Since λ=0 corresponds to performing no measurements at all and given that the correlation function ⟨σ̂^z_iσ̂^z_j⟩ (or ⟨δÊ_iδÊ_j⟩) was positive-valued in the ground state |0⟩, for sufficiently small values of λ the correlation function will also be positive in a given quantum trajectory obtained upon measurements.
For sufficiently weak measurement strength λ, we can thus
restrict ourselves to correlation functions ⟨σ̂^z_iσ̂^z_j⟩ and ⟨δÊ_iδÊ_j⟩ which are bounded from above and are non-negative.].
This is in line with
the field theory calculation
which represents a controlled perturbative RG calculation in the
(weak) strength of measurements.
Then, the Nth moments
of these correlation functions
for integer values of N are known to determine the entire probability distribution
of these correlation functions
(and are analytic functions of N).
(See e.g. Refs. LUDWIG1990infinitehierarchy,fellerintroduction,Witten2019,Boas1954EntireFunction,Titchmarsh1939.)
Thus, the exponents X^(σ),R=1_N
and X̃^( E),R=1_N, while initially defined for integer values of N, are in fact defined for
real values of N (by analytic continuation).
Hence, to
each of the two operators
σ̂^z_i and δÊ_i is associated a continuous spectrum of scaling dimensions, obtained by continuously varying N.
Moreover, physically, while correlation functions represent random
observables which are not self-averaging (as also reflected by the non-linear dependence of
Eqs. <ref> and <ref>
on the moment order N), their logarithm is self-averaging, and a cumulant expansion in the logarithm of the correlation function corresponds to a Taylor expansion of X^(σ),R=1_N
and X̃^( E),R=1_N in N about N=0 (see, e.g. Refs. DERRIDA-PhysRepts1984,LUDWIG1990infinitehierarchy,ZabaloGullansWilsonVasseurLudwigGopalakrishnanHusePixley,LiVasseurFisherLudwig,JianShapourianBauerLudwig).
This provides the typical scaling exponents,
X^(σ),R=1_typ=
lim_N→ 0X^(σ),R=1_N/N,
X̃^( E),R=1_typ=
lim_N→ 0X̃^( E),R=1_N/N
where
log⟨σ̂^z_iσ̂^z_j⟩=
-2X^(σ),R=1_typlog|x-y|,
log⟨δÊ_iδÊ_j⟩=
-2X̃^( E),R=1_typlog|x-y|.
Specifically,
Eq. <ref>,
X^(σ),R=1_typ=
1/8[1+5/3ϵ+𝒪(ϵ^2)],
and
Eq. <ref>,
X̃^(ℰ),R=1_typ
=1/2[1+ϵ+5ϵ^2+𝒪(ϵ^3)],
provide, as m→ 4, an expansion in ϵ=3/(m+1) of the scaling exponents of,
respectively, the typical spin-spin and the typical (connected) energy-energy correlation function with Born-rule measurements on the ground state of the tricritical Ising Hamiltonian.
§.§ Logarithmic Correlation Functions
Until now, we have used the replica field theory formalism with
an arbitrary
number R of replicas and its R→1 limit to evaluate various quantities averaged with Born-rule probabilities (see Eq. <ref>).
In taking the R → 1 limit, the scaling dimensions of two operators which are distinct when R ≠ 1
may become equal at R=1.
Such a collision of scaling dimensions while taking replica limits can give rise to “logarithmic correlation functions” at the measurement-dominated fixed point. The corresponding
logarithmic factors multiplying the power law decay of certain correlation functions at criticality are a hallmark of so-called logarithmic CFTs, which are a class of non-unitary CFTs.
(See e.g. Refs.
Gurarie1993,GurarieLudwig2005,VasseurJacobsenSaleur,CardyLogarithm1999,DavisCardy2000,CardyLogarithm2013.)
We demonstrate in the present subsection that the indeterministic (random) nature of measurement outcomes performed on a critical ground state generates critical states that carry these hallmarks of logarithmic CFTs.
In particular, we highlight correlation functions which contain multiplicative logarithms of distance on top of a power law decay. As discussed in Sect. <ref>, the scaling operators at the new fixed point Δ_*(ϵ) (Eq. <ref>) transform in irreducible representations of the symmetric group S_R, and the correlation functions of such operators exhibit a pure power law decay at the new fixed point.
Let us consider two operators 𝒪 and 𝒪̃ both transforming in irreducible representations of the symmetric group S_R s.t. the two correlation functions ⟨𝒪𝒪⟩ and ⟨𝒪̃𝒪̃⟩ are pure power law decaying at the new fixed point Δ_*(ϵ),
i.e.
⟨𝒪(x,0) 𝒪(y,0)⟩=A(R)/|x-y|^2X(R),
⟨𝒪̃(x,0) 𝒪̃(y,0)⟩=Ã(R)/|x-y|^2X̃(R).
Let us suppose that the correlators have colliding scaling dimensions in the replica limit R→ 1,
lim_R→ 1 (X̃(R)-X(R))=0 (but X̃(R)≠ X(R) if R≠1 ),
and that the
amplitudes
A(R) and Ã(R) of the correlators can be normalized s.t.
lim_R→ 1 (Ã(R)+A(R))=𝒦,
where 𝒦 is a constant.
Then as R→ 1, we can write,
Ã(R) =-A(R)+
𝒦
X̃(R) =X(R)+a(R-1)
where
a≠ 0
is also a
constant independent of R.
Then in the limit R→ 1,
⟨𝒪̃(x,0) 𝒪̃(y,0)⟩+ ⟨𝒪(x,0) 𝒪(y,0)⟩=
=Ã(R)/|x-y|^2X̃(R)+A(R)/|x-y|^2X(R)
=(-A(R)+𝒦)/|x-y|^(2X(R)+2a(R-1))+A(R)/|x-y|^2X(R)
=1/|x-y|^2X(R)[
𝒦
+2aA(R)(R-1)log |x-y|
+𝒪((R-1))]
Now the last term in the above equation clearly vanishes as R→1 and therefore we are left with,
lim_R→ 1 [⟨𝒪̃(x,0) 𝒪̃(y,0)⟩+ ⟨𝒪(x,0) 𝒪(y,0)⟩]=
=lim_R→ 1[𝒦+2aA(R)(R-1)log |x-y|]/|x-y|^2X(R=1).
Therefore, in addition to Eqs. <ref> and <ref>, if we have
lim_R→ 1A(R)(R-1)=finite
or equivalently,
A(R)=𝒪(1/(R-1))
we see that the following correlation function will be logarithmic at the new fixed point,
lim_R→ 1 [⟨𝒪̃(x,0) 𝒪̃(y,0)⟩+ ⟨𝒪(x,0) 𝒪(y,0)⟩].
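The mechanism above can be made concrete with a small numerical sketch. In the snippet below (purely illustrative; the parameter values ζ, 𝒦, a and X are arbitrary choices, not taken from the theory) the two pure power laws of Eqs. <ref> and <ref> are summed for R close to 1 and compared with the limiting form (𝒦+2aζ log|x-y|)/|x-y|^2X(R=1) obtained above.

```python
# Illustrative numerical sketch of the mechanism above; the values of zeta, K, a
# and X are arbitrary choices, not taken from the theory.
import numpy as np

zeta, K, a, X = 1.0, 0.7, 0.5, 1.1      # A(R) = zeta/(R-1), Xtilde(R) = X + a(R-1)

def summed(r, R):
    A = zeta / (R - 1.0)
    Atil = -A + K
    Xtil = X + a * (R - 1.0)
    return Atil / r ** (2 * Xtil) + A / r ** (2 * X)

r = np.array([2.0, 10.0, 100.0, 1000.0])
limit = (K + 2 * a * zeta * np.log(r)) / r ** (2 * X)   # predicted R -> 1 form
for R in (1.1, 1.01, 1.001):
    print(R, np.max(np.abs(summed(r, R) / limit - 1.0)))
# The deviation tends to zero as R -> 1: the sum of the two pure power laws
# approaches (K + 2 a zeta log r)/r^(2X), i.e. a logarithmic correlation function.
```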
In App. <ref>, we identify two such operators 𝒪 and 𝒪̃ which transform irreducibly under the symmetric group S_R,
𝒪=1/R-1𝔈_20R=1/R-1∑_a,b=1
a≠ b^Rℰ^aℰ^b
and
𝒪̃ =1/(R-1)(R-2)𝔈_22R
=1/(R-1)(R-2)∑_a,b=1
a≠ b; a,b≠ 1,2^R(ℰ^a-ℰ^1)(ℰ^b-ℰ^2).
Here the field 𝔈_NMR transforms in a specific irreducible representation of the symmetric group S_R and is defined in Eq. <ref> of
App. <ref>.
The normalization factors in front of 𝔈_20R and 𝔈_22R are chosen such that they satisfy the criterion in Eqs. <ref> and <ref>.
(See App. <ref>.)
In App. <ref>, we show that the scaling dimensions of above two operators (which have unequal scaling dimensions at a generic R≠ 1) become equal to each other at R=1.
Then, following our analysis above, the correlator (see App. <ref> for its detailed form)
lim_R→ 1 [⟨𝒪̃(y,0) 𝒪̃(x,0) ⟩+ ⟨𝒪(y,0) 𝒪(x,0)⟩]=
4⟨ℰ^1(y,0)ℰ^1(x,0)ℰ^2(y,0)ℰ^3(x,0)⟩
-3⟨ℰ^1(y,0)ℰ^2(y,0)ℰ^3(x,0)ℰ^4(x,0)⟩
is logarithmic at all fixed points Δ_*(ϵ) parameterized by even m. In particular, as m→ 4, we obtain
the result that the following correlator averaged over
measurements
with Born-rule probability should be logarithmic at the Ising tricritical
point,
4⟨Ê_iÊ_j⟩⟨Ê_i⟩⟨Ê_j⟩-3⟨Ê_i⟩^2⟨Ê_j⟩^2∝(log|j-i|+𝒪(1))/|j-i|^2X^(ℰ),R=1_N=2,
where X^(ℰ),R=1_N=2 is given by Eq. <ref>
and 𝒪(1) denotes a constant.
Finally, we note that in logarithmic CFTs the dilation operator of the scale transformations is not diagonalizable, and the logarithmic correlation functions are associated to Jordan cells of the dilation operator <cit.>.
In particular, the Jordan cell or the `logarithmic pair (C,D)' associated to the logarithmic correlation function in Eq. <ref> is formed by the following scaling operators in the R→1 replica limit,
D= 𝒪+𝒪̃
→ ℰ^1ℰ^2+∑_α≠ 1^Rℰ^αℰ^1+∑_α≠ 2^Rℰ^αℰ^2-∑_α,β=1
α≠β^Rℰ^αℰ^β,
and
C=(X(R)-X̃(R))𝒪→-a×∑_α,β=1
α≠β^Rℰ^αℰ^β
where the universal constant a
is defined in Eq. <ref>.
The correlation functions of the operators C and D are given by,
⟨ D(x,0)D(y,0)⟩ =Ã(R)/|x-y|^2X̃(R)+A(R)/|y-x|^2X(R),
⟨ C(x,0)D(y,0)⟩ =A(R)(X(R)-X̃(R))/|x-y|^2X(R),
⟨ C(x,0)C(y,0)⟩ =A(R)(X(R)-X̃(R))^2/|x-y|^2X(R),
and thus in the limit R→1 (see Eq. <ref>),
⟨ D(x,0)D(y,0)⟩ ⟶2ζ×(log|x-y|+𝒪(1))/|x-y|^2X^(ℰ),R=1_N=2,
⟨ C(x,0)D(y,0)⟩ ⟶-ζ/|x-y|^2X^(ℰ),R=1_N=2,
⟨ C(x,0)C(y,0)⟩ ⟶ 0
where ζ:=lim_R→1a(R-1)A(R).
§ ENTANGLEMENT ENTROPIES
At long wavelengths, the n^th Rényi entanglement entropy
of the ground state of a translationally invariant,
i.e. non-random
(1+1)d CFT
can be expressed as the logarithm of
the correlation function of
two n-twist fields
by considering n copies of the corresponding 2D CFT <cit.>. (See also Ref. HolzheyLarsenWilczek1994.)
Following our calculations of measurement averaged moments of correlation functions and, in particular, the measurement averaged logarithm of correlation functions
in Sect. <ref>,
we
will now also
calculate the average of the logarithm of the twist field correlation functions
(which are
the Rényi entropies) using the controlled
perturbative RG expansion.
This will involve calculating the correlation function of multiple copies of twist fields
in the generalized replica action given in Eq. <ref> – <ref>.
Since the twist fields are geometrical in nature, their generalization to higher m theories appearing in Eq. <ref> is natural.
This will allow us to evaluate the
universal coefficient of the logarithm of subsystem size of the measurement averaged Rényi entanglement entropies at the tricritical Ising point in an expansion in ϵ=3/(m+1).
To calculate the n^th Rényi entropy, we will have to consider n copies of a state, corresponding to a given set of measurement outcomes, and `glue' the copies to form an n-sheeted Riemann surface (see App. <ref>).
Moreover, to be able to
perform the average over measurement outcomes of the logarithm of the n-twist field correlation function,
we will have to introduce another replica index k. Overall we need to introduce R=nk+1 replica copies, where the additional copy comes due to the Born rule probability factor p_0(m⃗)
(analogous to Eq. <ref>),
and the limit k → 0 or R → 1 will correspond to the Born-rule measurement averaged Rényi entropy(s) <cit.>.
The details of these calculations are given in
App. <ref> and we
obtain the following expression
for the measurement averaged
n^th Rényi entropy S_n,A
S_n,A=1/(1-n) d/dk|_k=0⟨∏_j=1^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0))⟩_Δ_* .
Here, Τ^(j)_n(u,0) denotes the twist field, where
the superscript j
specifies
which copy of
the Riemann surface (out of k copies) the twist field corresponds to, and the subscript n signifies that we are dealing with twist fields for an
n-sheeted
Riemann surface
(compare Fig. <ref>).
The subscript
Δ_* indicates that the correlation function is evaluated at the
measurement-dominated
fixed point.
We now evaluate the scaling dimension of the replicated twist field,
occurring in the correlation function in Eq. <ref>,
in the replica field theory Eq. <ref>, <ref>, <ref>
to 1-loop order in the ϵ expansion using the OPE.
The coefficient
of the twist field
in the OPE of the twist field
with the perturbation Φ(x)
is calculated in App. <ref> and is equal to kI_n, where I_n is
defined as
I_n=n/(2π)∫_0^∞ds (1-s^(n-1))/((1-s)(1+s^n))+
𝒪 (ϵ ).
Then the 1-loop RG equation for the scaling dimension of the twist fields [ ∏_j^kΤ^(j)_n ]
can be obtained as usual, e.g., from the RG equation for a coupling constant g_(n,k) for the term ∫ dx [∏_j^kΤ^(j)_n] when added to the action. This yields
dg_n,k/dl=(1-kd_n)g_n,k+2kΔ I_n g_n,k+…
where d_n:=(c/12)(n-1/n) is the scaling dimension of the twist field 𝒯_n
in the unperturbed CFT
<cit.>, and the
ellipsis represents terms that will end up contributing only to corrections of second order in ϵ.
Then [We note that the above result for the scaling dimension of the k^th moment
of the n-twist field at the new Δ_*-fixed point is linear in k to 1-loop order.
However, in analogy with observations made in
Ref. LiVasseurFisherLudwig in a related context,
we expect non-linearities in k to appear in higher orders in ϵ.
We plan to address the calculation of these non-linearities in future work.]
⟨∏_j=1^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0))⟩_Δ_*∝(|u-v|/a)^-2d^*_n,k
where, d^*_n,k=kd_n-2kΔ_* I_n+𝒪((Δ_*)^2).
Here,
we have inserted
a
short distance cutoff a
to make the final result dimensionless.
The
measurement-averaged
n^th Rényi entanglement entropy for an interval
of length l=|u-v| is then given, using Eq. <ref>, by
S_n,A =(c^(eff)_n/3) ln(l/a)+𝒪(1) where,
c^(eff)_n =
c(m) (1+1/n)/2-3 I_n ϵ/(n-1)+𝒪(ϵ^2),
where c(m) is the central charge of the unperturbed theory,
Eq. <ref>,
while I_n is the integral in Eq. <ref>.
To obtain the
measurement-averaged
von Neumann entanglement entropy, we
take the limit n→ 1 in the above equation. From Eq. <ref>,
one obtains
dI_n/dn|_n=1=1/(2π)∫_0^∞ds ln(s)/(s^2-1)=π/8.
Thus, using Eqs. <ref>, <ref> and <ref>, the
measurement-averaged
von Neumann entropy is
S_1,A=lim_n→1S_n,A
=(c^(eff)_n=1)/3 · ln(l/a)+𝒪(1) ,
where c^(eff)_n=1 at the
measurement-dominated fixed point Δ_*
is given by
c^(eff)_n=1=
c(m)
-3π/8ϵ+𝒪(ϵ^2).
This provides, as m → 4, an expansion in ϵ=3/(m+1) of the “effective central charge”
at the measurement-dominated fixed point of the tricritical Ising ground state.
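The integral I_n and the resulting universal coefficients can be checked numerically. The short sketch below (an illustrative check, not part of the paper's own analysis) computes I_n by quadrature, confirms I_2=1/2 and dI_n/dn|_n=1=π/8, and lists the two pieces of c^(eff)_n=c(m)(1+1/n)/2-3I_nϵ/(n-1) for a few n; the central charge is taken to be the standard minimal-model value c(m)=1-6/[m(m+1)], and we stress that at m=4 the expansion parameter ϵ=3/5 is not small, so inserting it into the truncated series would only be indicative.

```python
# Illustrative numerical check of I_n and of the leading-order coefficients
# c_n^(eff); assumes the standard minimal-model central charge c(m) = 1 - 6/(m(m+1)).
import numpy as np
from scipy.integrate import quad

def f(s, n):
    if abs(1.0 - s) < 1e-9:
        return (n - 1.0) / 2.0          # removable singularity of the integrand at s = 1
    return (1.0 - s ** (n - 1)) / ((1.0 - s) * (1.0 + s ** n))

def I(n):
    # I_n = (n/2pi) Int_0^infty ds (1-s^(n-1))/((1-s)(1+s^n)); the tail s in (1,infty)
    # is mapped to u = 1/s in (0,1).
    part1 = quad(f, 0.0, 1.0, args=(n,))[0]
    part2 = quad(lambda u: f(1.0 / u, n) / u ** 2, 0.0, 1.0)[0]
    return n * (part1 + part2) / (2.0 * np.pi)

print("I_2 =", I(2), " (exact: 1/2)")
print("I_3 =", I(3), " (exact: 2/sqrt(3) =", 2 / np.sqrt(3), ")")
print("dI/dn at n=1 =", (I(1.001) - I(0.999)) / 0.002, " (exact: pi/8 =", np.pi / 8, ")")

# Leading-order structure c_n^(eff) = c(m)(1+1/n)/2 - [3 I_n/(n-1)] * eps, at m = 4.
m = 4
c = 1.0 - 6.0 / (m * (m + 1))           # c(4) = 7/10 (tricritical Ising)
print("n -> 1 :", c, " - ", 3 * np.pi / 8, " * eps")
for n in (2, 3, 4):
    print("n =", n, ":", c * (1 + 1.0 / n) / 2, " - ", 3 * I(n) / (n - 1), " * eps")
```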
Note that, unlike
in translationally invariant (i.e., unmeasured and thus non-random) CFTs
<cit.>,
here the measurement averaged
n^ th
Rényi entanglement entropies S_n,A are not simply
a 1/2(1+1/n)
multiple
of the von Neumann entropy.
Rather, the universal coefficients of the logarithm
of subsystem size
are all independent
of each other
for different measurement averaged Rényi entanglement entropies.
This feature is similar to having a hierarchy of
independent (“multifractal”) scaling dimensions
for measurement-averaged moments of correlation functions of operators,
as discussed in Section <ref>.
Note that the n^th Rényi entropy being
1/2(1+1/n) times the von Neumann entanglement entropy plays a central role in the calculation of the entanglement spectrum shown in Ref. CalabreseLefevre2008 for the usual (unmeasured) 1d critical ground states.
Thus, we expect
the entanglement spectrum of the present system under measurements
to
exhibit qualitatively different universal features when compared to the (unmeasured) 1d critical ground states.
If we have our system at finite temperature, the unmeasured state is given by Gibbs
state ρ=e^-β H/Z, in contrast to the (pure) critical ground state.
Following
steps that parallel
the pure state
calculation above (using twist fields), we find
that the measurement averaged n^th Rényi entanglement entropy can be written as
S_n,A=1/(1-n) d/dk|_k=0[𝒵_𝒜(β)/𝒵_∅(β)]
where now,
𝒵_𝒜(β)= ∑_𝐦⃗Tr((K_𝐦⃗e^-β H K_𝐦⃗^†)^⊗ nk+1𝒮^k_n,A)
𝒵_∅(β)= ∑_𝐦⃗Tr((K_𝐦⃗e^-β HK_𝐦⃗^†)^⊗ nk+1).
(See Eq. <ref> for the definition of 𝒮^k_n,A). Then following the discussion in App. <ref>,
one
verifies
that the average entanglement entropy is still given by Eq. <ref>, but
now the twist field correlation function
in this equation is calculated on a cylinder
of
finite circumference β instead of a plane.
Since we are interested in evaluating this correlation function
of twist-fields
at the
new fixed point Δ_*, which is conformally invariant,
we can use the
conformal transformation
w→ z=(β/2π) log w
to map the twist field correlation on the plane (given in Eq. <ref>) to the twist field correlation function on the cylinder.
Thus, the measurement averaged n^th Renyi entanglement entropy for a region of length l≡|u-v| at finite (inverse) temperature β is given by
S_n,A= (c^(eff)_n/3) ln[(β/(π a)) sinh(π l/β)]+𝒪(1),
where the universal coefficient c^(eff)_n is given
by Eqs. <ref>, <ref>.
At inverse temperature β≪ l,
this reduces to
S_n,A/l ∼ c^(eff)_n (π/3)(1/β).
If we take the subsystem size l to approach
the length L of the total system,
the universal coefficients c_n^eff
will also appear in the measurement averaged (not entanglement) Rényi entropies calculated for the full mixed state of the system at finite temperatures [Since the subsystem size approaches the system size L, the twist fields will be sitting at the ends of the one dimensional quantum system and such correlator will no longer be a pure power law, and will depend on the full operator content of the theory. However, the leading order piece proportional to the system size will not depend on the boundary condition and will be the same as in Eq. <ref>.].
At finite temperatures satisfying the condition β<<L, the measurement averaged Rényi entropy of the mixed state of the system is
then
given by
R_n/L∼ c^(eff)_n (π/3)(1/β)+𝒪(1/β^2),
which is extensive in
the total system size L.
We also note that, since the Rényi entropy of the full system is a self-averaging quantity,
it is thus represented by its average in Eq. <ref>.
This expression should be contrasted with
the Rényi-index n dependence of
the
extensive Rényi entropies
for
the unmeasured, i.e. non-random
1d quantum critical
system
at thermal equilibrium,
which is given by
<cit.>,
<cit.>
R_n/L=
c(m) · [(1/2)(1 + 1/n)] (π/3)(1/β)
+𝒪(1/β^2),
where c(m), Eq. <ref>,
is the central charge of the corresponding
unmeasured 2D CFT.
We note that due to the n-dependence of the universal coefficient c_n^eff
(from Eq. <ref>, <ref>)
in Eq. <ref>,
the leading order finite temperature behavior of the measurement averaged n^th Rényi entropy of the full mixed state of the system does not satisfy the simple relation in Eq. <ref>, which is valid for translationally invariant (unmeasured, non-random) CFTs.
§ EFFECTIVE “GROUND-STATE DEGENERACY” g_eff
At measurement-induced phase transitions of deep random quantum circuits with measurements in the bulk of the space-time of the circuit, there exists a universal quantity known as the “effective central charge”, which is defined in terms of the replica limit R→ 1 of the derivative with respect to R of the universal finite-size correction of the free energy of the circuit on a cylinder or strip of finite circumference or width. In a sense, it replaces the notion of the central charge, which is zero at R=1 in the non-unitary CFT describing the transition. The “effective central charge” has been shown in Refs. ZabaloGullansWilsonVasseurLudwigGopalakrishnanHusePixley,KumarKemalChakrabortyLudwigGopalakrishnanPixleyVasseur
to represent the universal finite-size scaling behavior of the Shannon-entropy of the measurement record of the circuit, the latter providing an
expression for
the logarithm of the partition function in the language of the measurements performed on the circuit.
The “effective central charge” is not equal to the universal coefficient of the logarithm of subsystem size of the entanglement entropy at measurement-induced transitions in these deep random quantum circuits.
In CFTs with boundary (or defect, before folding
<cit.>, <cit.>) there is a quantity referred to as the “ground-state degeneracy g” or “zero-temperature entropy” S:= ln g, which is a universal constant associated with any specific conformally invariant boundary RG fixed point. It plays a role for boundary (defect) CFTs analogous to the role played by the central charge in a bulk CFT. In particular, in unitary CFTs it decreases upon boundary RG flows, a property often referred to as the “g-theorem”
<cit.>,<cit.>.
The defect (boundary, after folding) piece
ln Z_d
(subscript d standing for “defect”)
of the logarithm of the partition function of a CFT on a
cylinder
of large length β and
circumference
L≪β
has [With periodic boundary conditions in
the spatial direction of size L,
we have to consider the CFT with a defect on a torus with radii β and L, where β denotes the inverse temperature (see Fig. <ref>).
After folding the torus at τ=β/2 and at τ=0 (the location of the defect), we obtain a finite
`double-sheeted' cylinder of circumference L and length β/2.
The τ=0 boundary of this `double-sheeted' cylinder is associated with the defect, while the boundary at τ=β/2 is `trivial' and it moves off to infinity in the limit β→∞ of interest,
as we are interested in the ground state of the system.
Thus, the defect free-energy can be thought of as being associated
with the boundary free-energy of this semi-infinite
(`double-sheeted') cylinder]
a non-universal contribution f_d per unit
length
L
of the defect, plus a universal constant,
length-L
independent contribution S=ln g,
ln Z_d= f_d·L
+ S,
S=ln g.
In the
defect (boundary, after folding) CFT of interest in the present paper the universal quantity S(R)= ln g(R) depends on the number R of replicas and must vanish in the R→ 1 limit due to the POVM condition Eq. <ref>.
In general we obtain for the type of measurement problems on a critical ground state discussed in the present paper, analogous to the logic used in Ref. ZabaloGullansWilsonVasseurLudwigGopalakrishnanHusePixley
to obtain the Shannon entropy of a deep circuit with bulk measurements,
an expression for the partition function Z_R of our system from Eq. <ref> and Eq. <ref> upon setting all operators to the identity,
Z_{R=1+r}=
∑_m⃗ p(m⃗) [p(m⃗)]^r ,
or:
ln Z_{R=1+r}=
ln{∑_m⃗ [p(m⃗)
+ r p(m⃗) ln (p(m⃗)) + O(r^2)]},
and:
d/dr|_r=0 ln Z_{R=1+r}=
∑_m⃗ p(m⃗) ln (p(m⃗))
= - S_Shannon({p(m⃗)})
= f_d,eff·L + S_eff,
where
S_Shannon({p(m⃗)})
=
-
∑_m⃗ p(m⃗) ln (p(m⃗))
is the Shannon entropy of the measurement record, while
f_d,eff := ( d/dr|_r=0 f_d(R=1+r) ) =
non-universal,
and
S_eff:= d/dr|_r=0 S(R)=
d/dR|_R=1 ln g(R)=
( d/dR|_R=1 g(R) )/g(R=1) := ln g_eff
= universal.
The “effective boundary entropy” S_eff, the logarithm of the “effective boundary degeneracy” g_eff, therefore characterizes the universal constant, i.e. system-size-L independent part of the Shannon entropy of the measurement record on the critical ground state.
The Shannon entropy thus expresses the Born-rule averaged defect free energy in terms of a quantity directly related to the measurements.
The “effective boundary entropy” S_eff
plays a role in our problem of measurements on the quantum critical ground states
analogous to the role played by the “effective central charge" in the deep circuits with bulk measurements where, as already mentioned, the latter
describes
the universal finite-size scaling information contained in the measurement record in the space-time bulk of the circuit.
The value of g_0 at our ultraviolet
defect (boundary) fixed point Δ=0 is
S_0=ln g_0=0
since there the space-time has no defect at all and thus there is no
defect (boundary)
contribution to the free energy of the space-time.
We have calculated the universal boundary entropy S(R)=ln g(R) at our new fixed point Δ_*,
Eq. <ref>,
using our ϵ-expansion.
Following Refs. AffleckLudwig1991 and JengLudwig, we obtain
g(R) = g_0 + δ g(R), where g_0=1,
ln g(R) = ln [1 + δ g(R)]
= δ g(R) + O (δ g(R) )^2,
δ g(R)=
-
(π^2/24) R(R-1)(R-2)^2 ϵ^3 + O(ϵ^4).
We note that ln g(R), the universal constant contribution from the boundary to the logarithm of the partition function, vanishes as R→ 1 to the order in ϵ we are considering, consistent with the requirement from the POVM condition enforcing a partition function equal to unity in this limit.
The “effective ground state degeneracy”
g_eff
of the
defect (boundary)
fixed point Δ_* is thus found to lowest non-vanishing order in the ϵ expansion to be
g_eff = 1 + δ g_eff,
δ g_eff :=
(d
dR)|_R=1δ g(R)
= -π^2/24ϵ^3 + O(ϵ^4).
Thus,
the universal constant contribution to the Shannon entropy of the measurement record on the tricritical Ising ground state is, owing to Eq. <ref>, given by the following expansion
[ S_Shannon({p(m⃗)}) ]_ universal part
=
-S_eff= - ln g_eff = (π^2/24)ϵ^3 + O(ϵ^4),
as m→4.
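As a quick consistency check (illustrative only), the replica derivative leading from Eq. <ref> to Eq. <ref> can be carried out symbolically:

```python
# Illustrative symbolic check of the replica derivative above.
import sympy as sp

R, eps = sp.symbols('R epsilon')
delta_g = -sp.pi ** 2 / 24 * R * (R - 1) * (R - 2) ** 2 * eps ** 3

print(delta_g.subs(R, 1))                  # 0: no boundary contribution at R = 1 (POVM)
print(sp.diff(delta_g, R).subs(R, 1))      # -pi**2*epsilon**3/24 = delta_g_eff
```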
We close this section by noting that the `boundary entropy' <cit.> has recently also been discussed in the different context of decoherence in Refs.
ZouSangHsieh,AshidaFurukawaOshikawa.
§ MEASUREMENTS OF σ̂^z IN THE QUANTUM ISING MODEL
(a): So far in this paper we have addressed the problem of
measurements with the energy operator
Ê_j+1/2,
Eq. <ref>, performed on the ground state of the tricritical quantum Ising model (using the formulation by O'Brien and Fendley). We developed an ϵ=3/(m+1) expansion for the resulting rich and novel critical properties, where m ≥ 4 is an even integer characterizing a subset of minimal model CFTs which can be represented by tricritical q-state Potts models (√(q)=2 cos(π/m), Eq. <ref>). The tricritical Ising model itself, the aim of our study, corresponds to the smallest even value m=4.
(b): In the present section we address the problem of
measurements with the Pauli spin operator σ̂^z_i, performed on the ground state of the critical (not tricritical) quantum Ising model. We will again develop an ϵ=3/(m+1) expansion for the resulting rich and novel critical properties of this system. Here, however, m≥ 3 is an odd integer that characterizes another (complementary) subset of minimal model CFTs, which can be represented as a subset of Ising multicritical points (namely those with m= odd). The Ising model itself, the aim of our study, corresponds to the smallest odd value m=3.
There is a common principle unifying problems (a) and (b), which allows us to apply the logic we have already developed for (a) also to (b), with some modifications:
As described above, the continuum field E describing the measurement operator Ê_j+1/2
Eq. <ref>, <ref>
in the
O'Brien-Fendley model
is the energy operator
of the tricritical Ising point.
This, as already mentioned above,
corresponds to the so-called Kac-Table primary field φ_1,2 of the minimal model CFT
[see, e.g., Refs. BelavinPolyakovZamolodchikov1984, di1997conformal]
with m=4 describing the tricritical Ising model. Moreover, for arbitrary even values of m ≥ 4, the same
Kac-Table primary field φ_1,2 of the minimal model CFT corresponds to the energy operator ℰ in the corresponding
tricritical q-state Potts model that we use to define our ϵ expansion, i.e. ℰ=φ_1,2 for m≥ 4 even. At the same time, this operator is represented in the Landau-Ginzburg description with action Eq.
<ref>
by
tricritical Ising:
ℰ = φ_1,2 = :ϕ^m-2:,
when m≥ 4, m= even,
(see Eq. <ref>).
On the other hand, for the m=3 conformal minimal model, describing the critical Ising model, the continuum field representing the spin operator σ̂^z_i is also represented by the Kac-Table φ_1,2 operator at that value m=3.
Moreover, for arbitrary odd values of m ≥ 3, the same
Kac-Table primary field φ_1,2 of the minimal model CFT is
represented in the Landau-Ginzburg description with action Eq. <ref> again by φ_1,2 = :ϕ^m-2:, with the crucial difference that since now m= odd, this field now changes sign under the Ising Z_2 symmetry ϕ→ -ϕ.
We will use this field
as the generalization of the spin operator of the Ising model (m=3 minimal model CFT) to the minimal model CFT at odd m ≥ 3, and will denote it by the symbol
critical Ising:
𝒮 := φ_1,2 = :ϕ^m-2:,
when m≥ 3, m= odd.
The operator 𝒮 plays, for the ϵ=3/(m+1) expansion with odd m≥ 3 in the Ising case, exactly the same role that the operator ℰ played for the ϵ=3/(m+1) expansion with even m ≥ 4 in the tricritical Ising case that we have already described in this paper.
The
discussions and calculations presented so far for the tricritical Ising case can literally be taken over to the Ising case by simply replacing ℰ by 𝒮, and by replacing “m≥ 4, m= even” by “m≥ 3, m= odd”.
However, before proceeding with this,
we
need to understand a subtlety of the Ising case, m=3. This will be done below after briefly reviewing again our ϵ expansion approach.
Let us recapitulate that in Sect. <ref> we
started with a quantum critical ground state in the universality class of the
Ising tricritical
point (m=4^th minimal model) and considered the problem of performing measurements
of the lattice energy operator Ê_j+1/2 in Eq. <ref> (which
is odd
under Kramers-Wannier duality)
on it.
In the process of analyzing this
problem within an ϵ expansion,
we introduced
the
generalized replica action in Eq. <ref>, with action S_* corresponding to the m^th minimal model CFT (Eq. <ref>) and a field ℰ in this CFT.
We motivated the choice of the field ℰ in these higher minimal model CFTs by restricting
to only the even m minimal model CFTs
and using the tricritical q-state Potts formulation of the even m minimal model
CFTs,
where the field ℰ in Eq. <ref> corresponds to the
energy operator in the tricritical q-state Potts model, which for q=2 is the same as the tricritical Ising point.
This provided the basis for the ϵ
expansion, where the control parameter ϵ=
3/(m+1) is small when the even integer m≥ 4 is large.
Until now, for us the
generalized replica field theory
in
Eqs. <ref> – <ref>
with even
m≥ 4 minimal model
CFTs
only served as a tool to
perturbatively study the replica field theory for measurements at the Ising tricritical point (i.e. the replica field theory at m=4) in
the
small parameter ϵ=3/(m+1) via the ϵ expansion we developed in Sect. <ref>
above.
However, as purely a defect field theory problem with replica action in Eqs. <ref> – <ref>, it is clear that the perturbative RG analysis
presented
in Sect. <ref>
applies to any m^th minimal model with m≥ 4,
including also
the minimal model
CFTs
with odd m.
This is because the field ℰ, Eq. <ref>,
being the
energy operator
of the tricritical q-state Potts models, when m ≥ 4 is even, was only used to motivate the choice of field ℰ in the
minimal models with larger parameter m>4, and the field theory problem is perfectly well defined with just Eqs. <ref> – <ref>
also for odd values of m, as far as the RG analysis is concerned [
Unlike the even m minimal model CFTs, there does not exist a tricritical q-state Potts model with the same central charge as the odd m minimal model CFTs.
However, this is immaterial to the RG analysis of the replica action in Eq. <ref> for the multicritical point given by an odd m minimal model CFT in Eq. <ref> and with the symbol ℰ replaced by 𝒮=:ϕ^m-2:.
We note that for the odd m minimal model CFTs the field 𝒮=:ϕ^m-2: is one of the spin fields of the multicritical point and not an energy field, and this distinction is also inconsequential to the RG analysis of the replica action presented in Sect. <ref>. As already mentioned above, for both odd m and even m minimal model CFTs, the field :ϕ^m-2: is the so-called Kac-Table operator φ_1,2. See also footnote <cit.>
].
Excluding
Sect. <ref> on the (replicated) tricritical Potts spin correlation functions, the results from perturbative RG analysis for the entanglement entropies in Sect. <ref> and the (replicated) ℰ=:ϕ^m-2: field correlation functions in Sect. <ref> also readily extend to all
m≥ 4 (both even and odd)
minimal model CFTs
in Eq. <ref>. However, the physical meaning of this replica field theory in
Eq. <ref>, and the corresponding ϵ=3/(m+1) expansion, is completely different in the case of m ≥ 3 with m= odd, as compared to the case of m≥ 4 with m= even, discussed in Sect. <ref>: The former case
will serve to describe
the ϵ=3/(m+1) expansion for the problem of σ̂^z_i measurements on the ground state of the critical quantum Ising Hamiltonian. In this case, it is useful to rename the operator
ℰ, Eq. <ref>, as 𝒮, Eq. <ref>, which is now odd under the Ising Z_2 symmetry. (In the Ising case,
m=3, 𝒮 is the continuum field representing the
Pauli σ̂^z_i operator that is measured in this case.)
For general m≥ 3 with m=
odd
the replica field theory action is the same as in Eqs. <ref> – <ref>, but with the field ℰ replaced by 𝒮, i.e.
-𝕊 =
-∑_a=1^R
S_*^(a)
+
Δ∫_-∞^+∞ dx Φ(x)
Φ(x) =∑_a,b=1
a≠ b^R𝒮^(a)(x,0)𝒮^(b)(x,0),
S_* =∫ d τ∫ d x {1/2(∂_xϕ)^2+1/2(∂_τϕ)^2+
g^*_m-1ϕ^2(m-1)},
with the notation 𝒮 as defined in Eq. <ref>.
For the m=3 minimal model,
the lowest value of m= odd, describing the Ising critical point, however, there is an
additional subtlety that arises in the RG analysis of the replica action in Eqs. <ref> – <ref>.
For minimal models m≥ 4, the
terms denoted by the ellipsis
“…" in the OPE in Eq. <ref>
of the perturbation Φ (Eq. <ref>
and
Eq. <ref> for m= even and odd, respectively), with itself contained only irrelevant fields localized
on the τ=0 time-slice. Hence
we obtained
Eq. <ref> at 1-loop order, which described the RG flow of Δ.
Precisely at the Ising critical point m=3 however, for an arbitrary number of replicas R, the OPE in Eq. <ref>
of the perturbation Φ,
Eq. <ref>,
with itself
contains a term
which is
exactly
marginal.
In particular, in the case of m=3, the following term
∑_α𝔢^(α) where 𝔢=:ϕ^2:=:ϕ^2m-4:|_m=3
appears on the RHS of Eq. <ref>. Here,
:ϕ^2m-4: is irrelevant on the τ=0 time-slice for minimal models with m≥ 4
but it
is exactly marginal at m=3, i.e.
at
the Ising critical point.
This term, however,
turns out to come (see Eq. <ref> in App. <ref>)
with a coefficient (R-1) in the OPE in Eq. <ref>, and hence it
vanishes
in the limit
R→1.
Moreover,
we show
in App. <ref>
that in the limit R→1 such a term
cannot
be generated under the
RG in any order of the
coupling constant Δ of the perturbation Φ.
Since the exactly marginal term in Eq. <ref> cannot be produced under the RG in the R→ 1 limit at m=3, the Ising case [Recall from the discussion above that, when m>3, this term is replaced by an irrelevant operator on the time-slice which can be ignored.],
the RG analysis performed in Sect. <ref>, <ref> and <ref> will also
provide an expansion in large odd m (small ϵ= 3/(m+1))
for the generalized replica theory
in Eqs. <ref> – <ref> all the way down to
m=3, i.e. down to the Ising critical point.
Thus in the replica limit R → 1, the 1-loop RG equation derived in Eq. <ref> also applies to an expansion in ϵ and m= odd, down to
the Ising critical point.
To discuss this case in detail, let us consider the 1d quantum Ising model at its
critical
(not tri-critical)
point described by the Hamiltonian in Eq. <ref>, which lies in the universality class of the m=3 minimal model CFT.
Let us consider performing (weak-) measurements with operator σ̂_i^z at all sites i
on the ground state of the critical Ising model (Eq. <ref>).
Since (σ̂_i^z)^2=1 and σ̂_i^z at different sites commute with each other, one
immediately verifies
that the details from sections <ref> and <ref>,
where the
O'Brien-Fendley chain at its Ising tricritical point was discussed,
generalize straightforwardly
to the case of
measurements with σ̂_i^z
on the ground state of the critical quantum Ising model, where the measurement operator
Ê_i
of the former is now replaced by σ̂_i^z.
In particular, the measurement averaged moments
of correlation functions for this measurement protocol are given by
[⟨Ô_1⟩_m⃗ ...
⟨Ô_N⟩_m⃗] ∝lim_R→ 1Tr(𝒪̂_1^(1)𝒪̂_2^(2)…𝒪̂_N^(N)×
(|0⟩⟨0|)^⊗ R exp{4Δ̃∑_i=integer∑_a,b=1
a≠ b^R(σ̂^z_i)^(a)(σ̂^z_i)^(b)}).
Here, the state |0⟩
now denotes
the ground state of the
critical Ising Hamiltonian listed in Eq. <ref>.
In the above equation, in contrast to
the corresponding equation Eq. <ref>
of the tricritical point for the O'Brien-Fendley chain, we have a sum over all sites i and
operators (σ̂^z_i)^(a).
This is because in
the present
section we are performing measurements at all sites i with operator σ̂^z_i,
instead of with operator Ê_i
for even
i.
Just as in the tricritical Ising case, Eqs. <ref>, <ref>,
we have gone over to a formulation using continuous (“softened”) measurement outcomes t_i with a symmetric distribution P(t_i), which we again for now first assume to be a zero-mean Gaussian Eq. <ref> (only second cumulant non-vanishing).
In continuum language, one
now sees that
equation
Eq. <ref> above
reduces to
Eq. <ref>
but where the action S_* in
Eq. <ref>
is now
the
Landau-Ginzburg(-Zamolodchikov) action of the Ising critical point, i.e.
S_*=∫ d τ∫ d x {1/2(∂_xϕ)^2+1/2(∂_τϕ)^2+
g^*_2ϕ^4},
as opposed to that of the Ising tricritical point in Eq. <ref>.
Also, in contrast to Eq. <ref>, the perturbation Φ(x) is now
given by
Φ(x) =∑_a,b=1
a≠ b^R𝔰^(a)(x,0) 𝔰^(b)(x,0),
where the field 𝔰(x,τ) is the continuum field corresponding to the lattice operator σ̂^z_i at the Ising critical point with
the scaling dimension 1/8 <cit.>.
As discussed at the start of this section, the field 𝔰 is given by the Kac-Table field φ_1,2 at the Ising critical point, and thus can be obtained by taking the m→3 limit of the field 𝒮 in Eq. <ref>.
The special symbol 𝔰(x,τ) for the field 𝒮 in the Ising case m=3 (and only for m=3) is used to stress the additional subtlety arising in this case.
In particular, substituting m=3 in the generalized replica field theory in Eqs. <ref>, <ref>,
and <ref>, we precisely recover the replica field theory for measurements performed with the operator σ̂^z_i at the Ising critical point.
We note that the role played by the
average (“weak")
Kramers-Wannier symmetry mentioned in the last paragraph of Sect. <ref> is now played by the
average (“weak")
Ising Z_2 symmetry. Thus, the perturbation
in Eq. <ref> represents the most RG relevant perturbation invariant under
the (“average”) Ising Z_2 and replica permutation symmetries. Less relevant interaction terms which can be thought of as being associated with higher cumulants of the distribution P(t_i) are discussed in App. <ref> and are found to be irrelevant at the new fixed point Δ_*. This means that our result will also be valid for
weak
measurements where the distribution P(t_i) is a (normalized) sum of delta functions.
Since the replica field theory for measurements performed with the operator σ̂^z_i at the Ising critical point corresponds to the limit m→ 3 of the generalized
replica theory in
Eq. <ref> with odd m≥ 3,
we see,
following the discussion on the analogy between replica field theories in Eqs. <ref> – <ref> for even m and replica field theories in Eqs. <ref> – <ref> for odd m, that the expansion in ϵ=3/(m+1)
Δ_*=
ϵ/4+ϵ^2/4+ O(ϵ^3)
(obtained by taking R→1 in Eq. <ref>) provides,
as m → 3, the location (in coupling constant space) of the
measurement-dominated fixed point which
governs the IR physics
of measurements with the spin-operator σ̂_i^z performed at all sites on the ground state of the critical quantum Ising model [In App. <ref>, we will show that the higher (>2) cumulants of P(t_i) (analogous to the case of tricritical Ising point) are inconsequential to the IR physics of measurement averaged quantities at Ising critical point.
In particular, the higher cumulants generate terms where an even number (≥ 2) of pairwise unequal replica copies of the spin field 𝔰(x,τ) interact with each other on the τ=0 time slice.
Out of these the 4-replica and 6-replica terms are relevant, the 8-replica term is marginal, and all the other higher replica terms are irrelevant at the m=3 minimal model CFT, i.e. the Ising critical point.
Moreover, the aforementioned relevant and marginal terms at the Ising critical point (m=3)
are irrelevant at fixed points described by the large-m minimal model CFTs.
Then following the standard reasoning used in the case of the ϕ^6 interaction at the Wilson-Fisher fixed point in d=4-ϵ dimensions, the relevant and marginal terms at the Ising critical point are expected to be irrelevant at the
new
fixed point Δ_* even at m=3. This is discussed in more detail in App. <ref>.
Also see App. <ref> for a general argument for the irrelevance of higher cumulants (2k≥ 4) based on avoided level crossings.].
By reasoning completely parallel to the tricritical case discussed above,
the measurement-averaged n^th Rényi entanglement entropy S_n,A for an interval of length l of the ground state of the critical Ising chain
subjected to σ̂^z_i measurements at all sites is given by the ϵ-expansion from Eq. <ref>,
S_n,A=(c^(eff)_n/3) ln(l/a)+𝒪(1) where,
c^(eff)_n=
c(m)(1+1/n)/2-3 I_n ϵ/(n-1)+𝒪(ϵ^2)
I_n=n/(2π)∫_0^∞ds (1-s^(n-1))/((1-s)(1+s^n))+
𝒪 (ϵ ),
but now
in the limit m→ 3
with ϵ=3/(m+1).
Finally, since the field 𝔰(x,τ) is
the m→3 limit of the
field 𝒮
defined
in Eq. <ref>
for general odd-m minimal model CFTs (which in turn forms the analogue of the field ℰ Eq. <ref> defined for even-m minimal model CFTs),
the measurement averaged N^th moment of the correlation function of the σ̂_i^z operator at the Ising critical point will be given by
the m→ 3
limit of Eq. <ref>,
X^(σ̂_ Is),R=1_N=1=1/2-ϵ/2+𝒪(ϵ^3)
X^(σ̂_ Is),R=1_N>1=N/2[1+ϵ-(3N-5)ϵ^2+𝒪(ϵ^3)].
This shows that at the Ising critical point, the scaling dimensions of the measurement averaged moments of the σ̂_i^z correlation function
exhibit
multifractal scaling
(see Sect. <ref>).
In particular,
at the Ising critical point with measurements, the typical
connected
correlation function of the σ̂_i^z operator will be given by the following power law exponent
X̃^(σ̂_Is), R=1_typ=1/2[1+ϵ+5ϵ^2+𝒪(ϵ^3)]
as m→3, where the definition of
X̃^(σ̂_Is), R=1_typ
is completely analogous to the definition of
X̃^(ℰ), R=1_typ in the
tricritical Ising case. [I.e., X̃^(σ̂_Is), R=1_typ
is obtained from the moments of the subtracted Pauli spin operator describing the deviation from its expectation value in a fixed quantum trajectory,
δσ̂^z_i
:= σ̂^z_i -
⟨σ̂^z_i⟩, and
⟨δσ̂^z_iδσ̂^z_j⟩=
⟨σ̂^z_iσ̂^z_j⟩
-
⟨σ̂^z_i⟩⟨σ̂^z_j⟩.
]
§ CONCLUSIONS AND DISCUSSION
We have demonstrated that performing weak measurements on relatively simple quantum critical ground states can give rise to critical states with highly complex and novel scaling behavior described by novel universality classes.
We started our study with the critical ground state in the universality class of the
Ising tricritical
point in the lattice formulation by O'Brien and Fendley, and subjected it to weak measurements with a lattice
operator which corresponds to the continuum energy operator at the Ising tricritical point.
The described weak measurements turn out to be a relevant perturbation at the
Ising tricritical point and the critical properties of the states obtained upon measurements are no longer dictated by the
Ising tricritical
point itself.
We showed that the critical behavior of the tricritical Ising ground state subjected to the described weak measurements is governed by a new, measurement-dominated fixed point, which occurs at a finite strength of measurements.
We presented a controlled perturbative RG analysis, i.e. an ϵ expansion, to study the universal critical properties of this measurement-dominated fixed point and
we calculated a variety of universal quantities (described below) in this ϵ expansion.
We found the first manifestation of the novel scaling properties of the measurement-dominated fixed point in the scaling properties of the
measurement-averaged
moments of correlation functions.
In particular, we showed that the measurement averaged Nth moments of both the spin and the energy correlation function at the tricritical Ising point decay with independent power-law exponents for each N.
Thus, there exists an infinite number of independent scaling exponents associated with each correlation function.
Moreover, noninteger moments N of the correlation functions
also exhibit
scaling behavior, resulting in a continuous spectrum of scaling exponents for each operator, spin and energy.
Each continuous spectrum of scaling exponents is related to
a
universal scaling form of the probability distribution of the given correlation function in states obtained upon measurements, and we determined the typical scaling behavior of the spin and the connected energy correlation function.
We also
demonstrated the presence of
logarithmic CFT features
at
the measurement-dominated fixed point,
in particular
the presence of logarithmic correlation functions.
We showed that, unlike in usual (unitary) CFTs
where all correlation functions are power law decaying,
measurement-averaged correlation functions may possess
a multiplicative logarithm of distance on top of a power law decay. Such logarithmic correlation functions are associated with the non-diagonalizability of the dilation operator, and we also identified the `logarithmic pair' of scaling operators that span the 2× 2 Jordan cell of the dilation operator corresponding to the obtained logarithmic correlation function.

Another novel feature of the finite measurement strength fixed point was found in the universal coefficients 1/3c_n^(eff) of the logarithm of subsystem size in the measurement averaged nth Rényi entanglement entropies.
We found that similar to the infinite hierarchy of scaling exponents in the case of moments of the correlation functions, the universal coefficients c_n^(eff) associated with the nth measurement averaged Rényi entanglement entropies are also independent of each other for different n.
This is
in contrast
to the unmeasured 1d quantum critical ground states
(and all unitary CFTs)
where the universal coefficients of the logarithm of subsystem size for all nth Rényi entropies are
all related
solely to a single number,
the central charge of the corresponding 2D CFT.
We showed that c_n^(eff) also appears in the coefficient of the leading order finite temperature correction to the measurement averaged extensive nth Rényi entropy of the full (thermal) mixed
Gibbs state
of the system.

The problem of performing weak measurements on a 1d quantum critical ground state can be formulated as a field theory problem with a one-dimensional defect at the zero-time slice of the corresponding (replicated) (1+1)d CFT.
We showed, generally, that for a given 1d quantum critical ground state, the universal “Affleck-Ludwig" effective boundary entropy associated with this defect (boundary, after folding) appears as a constant, system size independent piece in the Shannon entropy of the measurement record on the ground state.
In the case of the tricritical Ising ground state subjected to weak measurements with the energy operator, we calculated this universal contribution to the Shannon entropy to leading order in
the ϵ expansion.
We note that the role of the effective boundary entropy in the case of a 1d critical ground state subjected to measurements is analogous to that of the `effective central charge'
at the measurement-induced transition of a deep quantum circuit,
where the latter characterizes the finite size
dependence of
the Shannon entropy of the measurement record on the bulk of the deep quantum circuit at the measurement-induced transition.
Finally, we also studied the ground state of the quantum critical Ising model subjected to weak measurements with the spin operator σ̂_i^z.
By appropriately generalizing the controlled perturbative RG analysis, i.e. the ϵ expansion, developed in the case of the tricritical Ising point, we demonstrated that the critical behavior of the critical Ising ground state subjected to weak measurements with the σ̂_i^z operator is also governed by another measurement-dominated fixed point, which occurs
at a finite strength of measurements.
At the Ising critical point, we determined the power-law exponents of the measurement averaged moments of the σ̂_i^z correlation function to two-loop order in the ϵ expansion and
found these exponents to be independent of each other.
We also obtained the power law exponent of the typical connected correlation function of the σ̂_i^z at the Ising critical point in the ϵ expansion.
Lastly, we also calculated, to leading order in the ϵ expansion, the universal coefficients 1/3c_n^(eff) of the logarithm of subsystem size in the measurement averaged nth Rényi entanglement entropies at the Ising critical point.
Again, analogous to the case of the tricritical Ising point, the universal coefficients c_n^(eff) at the Ising critical point are also found to be independent of each other for different values of n.
One of us (AWWL) thanks Sam Garratt for an inspiring discussion on Ref. GarrattWeinsteinAltman2022 in Fall 2022, and especially thanks Romain Vasseur and Chao-Ming Jian for collaboration on several previous works in the related area of measurement-induced phase transitions.
§ HIGHER CUMULANTS AND A COMMENT ON `NON-LOCAL' FIELDS
§.§ Higher Cumulants: Ising tricritical point
We begin by discussing the Ising tricritical point.
In Eq. <ref>, we averaged over measurement outcomes by assuming that only the second cumulant of the distribution P(t_i) is non-zero.
In this Appendix, we will provide justification for why the higher even cumulants of P(t_i) [The distribution P(t_i) is taken to be an even function of t_i to satisfy Eq. <ref>.] cannot change the
critical behavior of the system at long distances.
A
non-vanishing (2n)^th cumulant (Δ̃_2n) of P(t_i) will give rise to the following term
Δ̃_2n∑_i=even4(∑_a=1^RÊ_i^(a))^2n
in the
exponential
on the RHS of Eq. <ref>.
The above expression can be
simplified, using ( Ê_i^(a))^2=1, which shows that it corresponds to
a superposition of terms of the following form (summed over all even i)
∑_a_j_1,a_j_2,…,a_j_2k^all indices are
pairwise distinctÊ_i^(a_j_1)Ê_i^(a_j_2)⋯Ê_i^(a_j_2k),
where k is an integer less than n.
In continuum language, we can replace each Ê^(a_j_l)_i in the above expression with the corresponding continuum field ℰ^(a_j_l)(x,0) in the replica copy “a_j_l".
Going over to
continuum language,
the above expression
thus reads
∑_a_j_1,a_j_2,…,a_j_2k^all indices are
pairwise distinct(ℰ^(a_j_1))(ℰ^(a_j_2))⋯(ℰ^(a_j_2k)).
In principle, in each of the
parentheses
in the above expression we can have a contribution from the
subleading energy field ℰ”
which (just like the leading energy field ℰ) is also odd under K-W duality
and can appear in the continuum representation of the lattice operator
[compare Eqs. <ref>,<ref>].
However, ℰ” (scaling dimension =3) is highly irrelevant as a field with support on the 1-dimensional time slice τ=0, and we can drop it.
Coming back to Eq. <ref>, since the scaling dimension of the field ℰ
at the Ising tricritical point is 1/5, for all k>2 the term appearing in Eq. <ref> is irrelevant as a field with support on the 1-dimensional time slice τ=0, and again we can ignore it.
The only relevant term coming from the higher cumulants
k≥ 2
(apart from the k=1 term already appearing in Eq. <ref>) is
the k=2 term,
∑_a,b,c,d^all indices are
pairwise distinctℰ^(a)ℰ^(b)ℰ^(c)ℰ^(d),
which arises
from all cumulants higher than or equal to the fourth.
This term has scaling dimension 4/5 < 1
(while being less relevant than
Φ(x) in Eq. <ref>).
The discussion above of scaling dimensions of the operators in
Eq. <ref>
arising from higher cumulants
was referring to the Ising tricritical point where the measurement strength is Δ=0. However, since we are in fact
interested in the measurement-dominated fixed point at which Δ_*≠0, which we control within the ϵ=3/(m+1)-expansion, we are really concerned with the relevance/irrelevance of these operators at the
Δ_* ≠0 fixed point. Now, all “higher-cumulant” (k≥ 2) operators
in
Eq. <ref>
are highly irrelevant at the
Δ_* ≠0 fixed point when ϵ=3/(m+1) is small, i.e. when m is large:
At ϵ=0 (where 1/m=0)
they have scaling dimensions =2k × (1/2)>1, i.e. are irrelevant (by integers)
on the one-dimensional time-slice when k≥ 2, and for small ϵ those scaling dimensions only change by small amounts (of order ϵ, ϵ^2, ... etc.) when going to the finite-Δ_* fixed point of order ϵ.
More specifically, one can show explicitly <cit.>
that
the operators in
Eq. <ref>
become even more irrelevant at the Δ_* ≠0 fixed point within the
1-loop
epsilon expansion, as compared to their dimensions
=2k × (1/2) at the Δ=0 fixed point.
(I.e. the order ϵ
shifts
of
their scaling dimensions away from their already highly irrelevant ϵ=0 values are all positive.)
This is a familiar feature of the epsilon expansion, well known
already from that of ϕ^4 Landau-Ginzburg theory in d=4-ϵ dimensions
where, although the ϕ^6 perturbation is relevant at the Gaussian fixed point when d<3 (while being irrelevant when d>3), it is
irrelevant at the Wilson-Fisher fixed point of physical interest for all dimensions d ≥ 2.
Analogously, in the epsilon expansion from
Sect. <ref> of interest in this paper, while
the operator in Eq. <ref> associated with the fourth cumulant k=2 is, at the unperturbed fixed point Δ=0 (analogous to the Gaussian fixed
point in ϕ^4 Landau-Ginzburg theory),
relevant when m=4 and irrelevant for all even values m >4,
it is analogously expected to be irrelevant at the Δ_*≠ 0 fixed point of interest,
Eq. <ref>, for all even values
m ≥ 4, i.e. including m → 4. (A general argument for the irrelevance of all higher cumulants (k≥ 2) based on avoided level crossings is presented in
App. <ref>.)
Hence, we do not expect the higher
cumulants
of the distribution P(t_i) to change the long distance behavior of the system. This implies in particular that in the case of weak measurements with discrete measurement outcomes
(Eqs. <ref>, <ref>, <ref>), where P(t_i) is a (normalized) sum of delta functions [compare the discussion below Eq. <ref>] and thus contains even cumulants higher than the second, the same critical behavior results as in the case of a zero-mean Gaussian distribution P(t_i).
§.§ Higher Cumulants: Ising Critical Point
Following the above discussion for the tricritical Ising case, we will now
provide a justification for why the higher even cumulants of P(t_i) are not expected to change the critical long-wavelength properties
in the case of measurements with the σ̂^z_i
operator on the ground state of the critical quantum Ising model.
Analogous to Eq. <ref>, the higher cumulants for the σ̂^z_i measurements will give rise to terms of the form
∑_a_j_1,a_j_2,…,a_j_2k^all indices are
pairwise distinct(σ̂^z_i)^(a_j_1)(σ̂^z_i)^(a_j_2)⋯(σ̂^z_i)^(a_j_2k),
which will appear in the
exponential
in Eq. <ref>.
In continuum language, we can replace each (σ̂^z_i)^(a_j_l) operator
in the above equation by the continuum field 𝔰^(a_j_l)(x,τ). Since
the scaling dimension of 𝔰^(a_j_l)(x,τ) is 1/8, all the terms with k>4 appearing in the above equation are irrelevant at the
(unmeasured)
Ising critical point.
Thus, at the Ising critical point, the
4-replica and 6-replica terms (corresponding to k=2 and k=3 in
Eq. <ref>) are relevant,
while the 8-replica term (corresponding to k=4) is marginal
(while these terms are all less relevant than the perturbation Φ(x) in
Eq. <ref>
corresponding to k=1).
In analogy with the tricritical Ising case in the preceding subsection, the discussion above of the scaling dimensions of the operators in Eq. <ref> arising from higher cumulants
was referring to the Ising critical point where the measurement strength is Δ=0. However, again, as we are interested in the measurement-dominated fixed point at which Δ_* ≠ 0, which we control within the ϵ = 3/(m+1)-expansion [where now m= odd], we are really interested in the relevance/irrelevance
of these operators at this
new
fixed point. Again, all “higher-cumulant” operators (k≥ 2) in Eq. <ref> are highly irrelevant at the
Δ_* ≠0 fixed point when ϵ = 3/(m+1) is small [For odd m>3 minimal models, we denote the generalization of the field 𝔰 by the symbol 𝒮 defined in Eq. <ref>], i.e. when m= odd is large: At ϵ=0 (1/m=0) they have again scaling dimensions =2k × (1/2)>1, i.e. are again irrelevant (by integers) on the one-dimensional
time-slice when k ≥ 2. And again, for small ϵ those scaling dimensions
only change by small amounts when going to the finite-Δ_* fixed point of order ϵ. Again, specifically, within the
1-loop
epsilon expansion <cit.> these operators become more irrelevant as compared to their already irrelevant scaling dimensions
=2 k × (1/2) at Δ=0. Again, in analogy with the discussion of the ϕ^6 term in the d=4-ϵ expansion of ϕ^4 Landau-Ginzburg theory, the operators in Eq. <ref> with k=2, 3, 4 are expected to be irrelevant at the Δ_* ≠0 fixed point of interest for all odd values of m ≥ 3, i.e. including m=3.
(For a general argument for the irrelevance of all higher cumulants (k≥ 2) based on avoided level crossings we refer again to
App. <ref>.)
Hence, again,
we do not expect the higher
cumulants
of the distribution P(t_i) to change the long distance behavior of the system.
This implies again in particular that in the case of weak measurements
with discrete measurement outcomes
(Eqs. <ref>, <ref>, <ref>),
where P(t_i) is a (normalized) sum of delta functions (compare discussion below Eq. <ref>) and thus contains even cumulants higher than the second, the same critical behavior results as in the case of a zero-mean Gaussian distribution P(t_i).
§.§ Locality of Observables
We will close this section by discussing the significance
of `locality' of
fields
𝒪_i
associated with lattice operators 𝒪̂_i
which appear
in Eq. <ref>.
Note that in deriving
Eq. <ref>,
we assumed that the operators 𝒪̂_i in their continuum representation correspond to `local' fields 𝒪_i, i.e. 𝒪_i can be expressed in terms of local combinations
involving
the Landau-Ginzburg field ϕ(x,τ) and its normal ordered higher powers.
Important differences in the gluing of field configurations {ϕ^(a)(x,0^-)}_a=1^R and {ϕ^(a)(x,0^+)}_a=1^R (appearing in Fig. <ref> and Eq. <ref>) could occur if the fields 𝒪_i are non-local.
An example of this is seen in the case of measurements performed on the Tomonaga-Luttinger liquids (TLLs), studied in Ref. <cit.>, when calculating the correlation functions of the phase e^iθ(x). The phase θ(x) is termed a `non-local' field in the bosonic theory of the field ϕ(x) [∂_xϕ(x) is proportional to the density of the TLLs] as they satisfy the following equal time commutator
[ϕ(x),θ(x')]=iπ H(x-x')=
iπ, x≥ x'
0, x<x' ,
where H denotes the Heaviside step function.
In the calculation of phase correlation functions, as noted in Ref. <cit.>, the two field configurations {ϕ^(a)(x,0^-)}_a=1^R and {ϕ^(a)(x,0^+)}_a=1^R differ from each other on an interval of values of position x, and are not `identified/glued' on this interval.
In this work, we have only considered correlation functions (and their moments) of lattice operators 𝒪̂_i which correspond to `local' fields in continuum, i.e. they can be expressed as local combinations of Landau-Ginzburg field ϕ and normal ordered higher powers of ϕ.
§ IRREDUCIBLE REPRESENTATIONS OF THE SYMMETRY GROUP
In contrast to the unperturbed critical theory (Δ=0), the operators ∏_i=1^Nℰ^(α_i)(x,0)
(1≤α_i≤ R
in a theory
with R replicas)
are no longer scaling operators at the new fixed point (Δ=Δ_*).
Rather, as discussed in Ref. <cit.> and <cit.>, the scaling operators at the new fixed point are formed out of linear superposition of operators ∏_i=1^Nℰ^(α_i)(x,0) for different choices of replica indices {α_i}, and they transform in irreducible representations of the symmetric group S_R. Following Ref. <cit.>, the
corresponding
scaling operators at the new fixed point are given by
𝔈_NMR=
∑_α_i≠α_j
1≤α_i≤
R-M(ℰ^(α_1)-ℰ^(R))…(ℰ^(α_M)-ℰ^(R-M+1))ℰ^(α_M+1)…ℰ^(α_N)
(0≤ M≤ N).
The scaling dimensions of the above operators are
calculated to two-loop order in Ref. <cit.> in a dimensional regularization [by ϵ=3/(m+1)] RG scheme, with minimal subtraction of poles in ϵ. From their analysis, the scaling dimension of the operator in Eq. <ref> is given by
X^(ℰ),R_NM=NX_ℰ-γ(Δ_*) … (X_ℰ=1/2-3/(2(m+1)))
Δ_*=ϵ/[4(2-R)]+ϵ^2/[4(2-R)^2]+ O(ϵ^3) … (ϵ=3/(m+1))
γ(Δ)=2b̃_NMRΔ-8(N(R-N)+(N-1)b̃_NMR)Δ^2
+ O(Δ^3)
b̃_NMR=2((N -M)R-N^2 +M(M -1))
In table <ref>,
we list the scaling dimensions X^(ℰ),R_NM for R=0,1 and
N=1,2,3.
Out of the N scaling dimensions, corresponding to different values of M in Eq. <ref>, the smallest one dictates the
power law behavior
of ⟨ℰ(x,0)ℰ(y,0)⟩^N when |x-y|→∞.
Setting R=1 and minimizing the scaling dimension X^(ℰ),R_NM in Eq. <ref> over possible values of M, we obtain Eqs. <ref> and <ref>.
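For readers who wish to reproduce such numbers, the short sketch below simply evaluates the two-loop expressions quoted above; the choice m=4 is only an example, and no attempt is made here to assess the accuracy of the ϵ expansion at these values of ϵ.

```python
def X_NM(N, M, R, m):
    """Two-loop scaling dimension from the expressions above (epsilon = 3/(m+1))."""
    eps = 3.0/(m + 1.0)
    X_E = 0.5 - 3.0/(2.0*(m + 1.0))
    Delta_star = eps/(4.0*(2.0 - R)) + eps**2/(4.0*(2.0 - R)**2)
    b = 2.0*((N - M)*R - N**2 + M*(M - 1))
    gamma = 2.0*b*Delta_star - 8.0*(N*(R - N) + (N - 1)*b)*Delta_star**2
    return N*X_E - gamma

m = 4                                            # tricritical Ising, as an example
for R in (0, 1):
    for N in (1, 2, 3):
        dims = {M: round(X_NM(N, M, R, m), 4) for M in range(N + 1)}
        print(f"R={R} N={N}  min over M: {min(dims.values())}   all: {dims}")
```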
As an aside, we note that Ref. <cit.> was interested in the limit R→0 (which corresponds to quenched disorder), and in this limit the smallest scaling dimension for a fixed N in Eq. <ref> is given by,
X^(ℰ),R=0_N = (N/2)(1 - (ϵ^2/4)(3N-4) + 𝒪(ϵ^3)).
§.§ Colliding Scaling Dimensions in the Replica Limit R→1
As discussed in Sect. <ref>, scaling dimensions of operators with unequal scaling dimensions at generic replica number R≠1 can become equal to each other at R=1.
To see this collision of scaling dimensions in replica limit R→1, we consider two operators 𝔈_20R and 𝔈_22R from Eq. <ref>.
We note that the correlation function of the 𝔈_20R operator is given by,
⟨𝔈_20R(r,0)𝔈_20R(0,0)⟩ = 2R(R-1) × (⟨ℰ^1(r)ℰ^1(0)ℰ^2(r)ℰ^2(0)⟩ + (R-2)(R-3)/2 ⟨ℰ^1(r)ℰ^2(r)ℰ^3(0)ℰ^4(0)⟩ + 2(R-2) ⟨ℰ^1(r)ℰ^1(0)ℰ^2(r)ℰ^3(0)⟩),
while for 𝔈_22R operator,
⟨𝔈_22R(r,0)𝔈_22R(0,0)⟩ = (R-3)(R-2)^2(R-1) × (⟨ℰ^1(r)ℰ^1(0)ℰ^2(r)ℰ^2(0)⟩ + ⟨ℰ^1(r)ℰ^2(r)ℰ^3(0)ℰ^4(0)⟩ - 2⟨ℰ^1(r)ℰ^1(0)ℰ^2(r)ℰ^3(0)⟩).
Ignoring the overall R dependent constants, clearly, the expressions in parentheses in Eqs. <ref> and <ref> are identical to each other in R→1 limit.
Thus, the two operators 𝔈_20R and 𝔈_22R have colliding scaling dimensions in the replica limit R→1,
i.e. the scaling dimensions of the two operators are equal to each other in the limit R→1 at the new fixed point Δ_*(ϵ) (ϵ=3/(m+1)) for all even values of m.
(This can also be verified using the ϵ-expansion
for the scaling dimensions of the two operators using Eq. <ref>.)
Moreover, with the given normalization for operator 𝒪 (Eq. <ref>) and operator 𝒪̃ (Eq. <ref>), it can be easily verified that the criterion in Eq. <ref> is satisfied by the amplitudes of correlators ⟨𝒪(r) 𝒪(0)⟩ and ⟨𝒪̃(r) 𝒪̃(0)⟩.
Finally, since the correlation functions in the parentheses of Eqs. <ref> and <ref> are physical correlators, we expect to get a finite answer for them in the R→1
limit, and thus operators 𝒪 and 𝒪̃ also satisfy the criterion in Eq. <ref>.
As discussed in Sect. <ref>, such colliding of scaling dimensions give rise to logarithmic correlation functions at the new fixed point.
§ DETAILS OF ENTANGLEMENT ENTROPY CALCULATION
Given a set of measurement outcomes
m⃗=
{m_j}, the state obtained after measurements
is
|Ψ_{m_j}⟩=K̂_𝐦⃗|0⟩/√(⟨0|(K̂_𝐦⃗)^†K̂_𝐦⃗|0⟩).
The n^th Rényi entanglement entropy of a spatial region A=[u,v] in this state is given by
S_n,A(|Ψ_{m_j}⟩)=1/1-nln{Tr_A [ρ_A (|Ψ_{m_j}⟩)]
^n},
where the reduced density matrix is
ρ_A(|Ψ_{m_j}⟩)= Tr_A̅(|Ψ_{m_j}⟩⟨Ψ_{m_j}|)
= Tr_A̅(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)/Tr(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)
and A̅ is complement of spatial region A. Then
S_n,A(|Ψ_{m_j}⟩)= 1/1-n{ln(Tr_A(Tr_A̅(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†))^n)
-ln(Tr(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†))^n}
= 1/1-n{lnTr(𝒮_n,A(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)^⊗ n)
-lnTr((K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)^⊗ n)}
= 1/1-nlim_k→ 01/k×{Tr(𝒮^k_n,A(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)^⊗ nk)-Tr((K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)^⊗ nk)}.
Here the permutation operator 𝒮_n,A
is defined <cit.> as
𝒮_n,A = ∏_x χ_g_x,   with g_x = (1,2,…,n) for x ∈ A and g_x = e (identity) for x ∈ A̅,
where g_x labels the permutation on site x, and χ_g_x = ∑_[i] |i_g_x(1) i_g_x(2) … i_g_x(n)⟩⟨i_1 i_2 … i_n| is its representation on the replicated on-site Hilbert space.
Since the operator (K_𝐦⃗)^⊗ nk=K_𝐦⃗⊗ K_𝐦⃗⋯⊗ K_𝐦⃗
commutes with the permutation operator 𝒮_n,A,
using cyclicity of trace we can write Eq. <ref> as
S_n,A(|Ψ_{m_j}⟩)=1/1-nlim_k→ 01/k×{Tr(𝒮^k_n,A(|0⟩⟨0|)^⊗ nk(K̂_m⃗^†K̂_m⃗)^⊗ nk)-Tr((|0⟩⟨0|)^⊗ nk(K̂_m⃗^†K̂_m⃗)^⊗ nk)}
Then the average of the n^th Rényi entropy over the measurement outcomes with Born rule is given by,
S_n,A= ∑_𝐦⃗p_0(𝐦⃗)S_n,A(|Ψ_{m_j}⟩)
= lim_k→ 01/(1-n)k∑_𝐦⃗p_0(𝐦⃗){Tr(𝒮^k_n,A(|0⟩⟨0|)^⊗ nk(K̂_m⃗^†K̂_m⃗)^⊗ nk)-Tr((|0⟩⟨0|)^⊗ nk(K̂_m⃗^†K̂_m⃗)^⊗ nk)}.
Since p_0(𝐦⃗)=Tr(K_𝐦⃗|0⟩⟨0|(K_𝐦⃗)^†)=Tr(|0⟩⟨0|K_𝐦⃗^†K_𝐦⃗)
we obtain
S_n,A = 1/(1-n) lim_{k→0} (1/k) [𝒵_A - 𝒵_∅],
where
𝒵_𝒜=∑_𝐦⃗Tr(𝒮^k_n,A(|0⟩⟨0|)^⊗ nk+1(K̂_m⃗^†K̂_m⃗)^⊗ nk+1) and 𝒵_∅=∑_𝐦⃗Tr((|0⟩⟨0|)^⊗ nk+1(K̂_m⃗^†K̂_m⃗)^⊗ nk+1).
Owing to the POVM condition
Eq. <ref>,
lim_k → 0𝒵_∅=1,
we can write the measurement averaged n^th Rényi entropy as
S_n,A=1/1-nlim_k→ 01/k(𝒵_A/𝒵_∅-1)=1/1-n(d/dk|_k=0𝒵_A/𝒵_∅).
Using Eq. <ref>,
<ref>,
<ref> and following the arguments in
the derivation of Eq. <ref>, 𝒵_A can be written as
𝒵_𝒜=∑_𝐦⃗Tr(𝒮^k_n,A(|0⟩⟨0|)^⊗ nk+1(K̂_m⃗^†K̂_m⃗)^⊗ nk+1)∝
∝∫∏_a=1^nk+1Dϕ^(a) e^-∑_a=1^nk+1S_*^(a)+Δ∫ dx Φ(x) Tr(𝒮^k_n,A|{ϕ^(a)(x,0^+)}⟩⟨{ϕ^(a)(x,0^-)}|)
where,
Φ(x) = ∑_{a,b=1, a≠b}^{R} ℰ^(a)(x,0) ℰ^(b)(x,0)   and   |{ϕ^(a)(x,0^±)}⟩ = ⊗_{a=1}^{R} |ϕ^(a)(x,0^±)⟩.
Upon making use of the definition Eq. <ref>,
the factor Tr (𝒮^k_n,A|{ϕ^(a)(x,0^+)}⟩⟨{ϕ^(a)(x,0^-)}|) in Eq. <ref> does the job of gluing the nk+1 replicas into k n-sheeted Riemann surfaces, as illustrated in Fig. <ref>:
Each of these k n-sheeted Riemann surfaces contain n replicas which are glued in the spatial region A along the τ=0 equal-time slice, and there is one additional replica representing a plane that remains unglued.
Thus we conclude that the
ratio 𝒵_A/𝒵_∅ of partition functions equals the correlation function of two
twist fields describing these k n-sheeted Riemann surfaces. Thus we can express the measurement-averaged Rényi entropies from
Eq. <ref> in terms of the twist fields as
S_n,A=1/1-nd/dk|_k=0⟨∏_j=1^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0))⟩_Δ_*,
where
the superscript j on the twist fields
denotes
the
Riemann surface (out of k) to which the twist field corresponds, and the subscript n
indicates
that we are dealing with twist fields for a n-sheeted Riemann surface.
(Τ^(j)_n)^-1 denotes the twist field conjugate (“inverse”) to
Τ^(j)_n.
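The permutation-operator construction used above can be checked explicitly in a minimal setting. The sketch below assumes the smallest nontrivial case — a single site in A and a single site in A̅ (two qubits), n=2 replicas and no measurement (K̂ set to the identity) — and verifies numerically that Tr[𝒮_{2,A}(ρ⊗ρ)] equals the purity Tr(ρ_A²) entering the second Rényi entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
dA = dB = 2                                     # one 'site' in A and one in its complement

# Random pure state on H_A (x) H_Abar and its density matrix
psi = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Purity of the reduced density matrix, computed directly
rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)
purity_direct = np.trace(rho_A @ rho_A).real

# Permutation (swap) operator acting only on the A factors of the two replicas
dim = dA*dB
S = np.zeros((dim*dim, dim*dim))
for iA in range(dA):
    for jB in range(dB):
        for kA in range(dA):
            for lB in range(dB):
                col = (iA*dB + jB)*dim + (kA*dB + lB)   # |i_A j_B> (x) |k_A l_B>
                row = (kA*dB + jB)*dim + (iA*dB + lB)   # A factors exchanged
                S[row, col] = 1.0

purity_swap = np.trace(S @ np.kron(rho, rho)).real
print(purity_direct, purity_swap)               # the two values agree
```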
§ OPE COEFFICIENT OF TWO TWIST FIELDS INTO Φ
In order to compute the scaling dimension of the twist field at the new fixed point to 1-loop order in the small parameter ϵ= 3/(m+1),
we will need the OPE coefficient
with which the perturbation Φ(x)
(from Eqs. <ref>,
<ref>) appears
in the OPE of the twist fields.
This is equivalent to finding the following three point
correlation function
in the unperturbed replica theory (with action in Eq. <ref>)
⟨∏_j^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0)) Φ(x)⟩ = ∑_{a,b=1, a≠b}^{nk+1} ⟨∏_j^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0)) ℰ^(a)(x,0) ℰ^(b)(x,0)⟩.
We know from Ref. <cit.> that
⟨ℰ^(a)(x,0)ℰ^(b)(x,0)⟩_(ℛ_n)^k+1 = ⟨∏_j=1^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0)) ℰ^(a)(x,0) ℰ^(b)(x,0)⟩ / ⟨∏_j^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0))⟩.
The
LHS
of the above equation calculates the correlator for the two
specified
fields
ℰ^(a) and ℰ^(b)
in the geometry shown in Fig. <ref>, which involves k
copies of a n-sheeted Riemann
surface
and
one plane, denoted by the subscript (ℛ_n)^k+1 on the correlator.
On the LHS of the above equation, the index a is to be thought of as equal to a combined index (i,α).
The index i here indicates either a Riemann surface out of the k
copies of the n-sheeted Riemann
surface (when i∈{1, ..., k})
or it indicates the plane
(when i=k+1). When the index i corresponds to a Riemann surface, the index α denotes the Riemann sheet of that n-sheeted Riemann surface on which the field is located [When the index i corresponds to the plane, there is no ambiguity of the Riemann sheet
to
which the field belongs and α can be taken to be zero.].
On the other hand, the correlators on the
RHS
of the above equation are evaluated in the nk+1 independent replicas of the m^th minimal model and the
labels a and b indicate
the replica copy of the theory. Now to evaluate
the correlator ⟨ℰ^(a)(x,0)ℰ^(b)(x,0) ⟩_(ℛ_n)^k+1, we can use a conformal transformation to map each
of the k copies of the n-sheeted Riemann surface
to
a plane.
In particular, we note that since ℰ is a
(Virasoro) primary field, its
expectation value on the plane vanishes [
All primary fields of a CFT are by convention subtracted so that their expectation values in the infinite plane vanish identically.].
So ⟨ℰ^(a)(x,0)ℰ^(b)(x,0) ⟩_(ℛ_n)^k+1 is zero unless both ℰ^(a)(x,0) and ℰ^(b)(x,0) lie on the same n-sheeted Riemann
surface, i.e. a=(j,α) and b=(j,β) for the same Riemann surface j [Here, the symbol β should not be confused with the inverse-temperature.]
.
The correlator ⟨ℰ^(α)(x,0)ℰ^(β)(x,0) ⟩_ℛ_n
for a single n-sheeted Riemann surface ℛ_n
can be calculated using the following conformal transformation
z=f(w)=(w-u/w-v)^1/n.
In particular, if w corresponds to a point (x,0)
on
the α^th sheet in the n-sheeted Riemann surface,
z = f(w) = ((x-u)/(x-v))^{1/n} e^{2πiα/n},   with α∈{1,2,…,n}.
Since ℰ is a
(Virasoro) primary
field,
⟨ℰ^(α)(x,0)ℰ^(β)(x,0) ⟩_ℛ_n=
|dw_1/dz_1|^-X_ℰ|dw_2/dz_2|^-X_ℰ⟨ℰ^(α)(z_1)ℰ^(β)(z_2) ⟩_plane,
where w_1
denotes position
(x,0) in the α^th sheet and w_2
denotes position
(x,0) in the β^th sheet. Also,
dw_1/dz_1 = n (x-v)(x-u)/(u-v) ((x-v)/(x-u))^{1/n} e^{-2πiα/n},
dw_2/dz_2 = n (x-v)(x-u)/(u-v) ((x-v)/(x-u))^{1/n} e^{-2πiβ/n},
and |z_1-z_2| = 2 |((x-u)/(x-v))^{1/n} sin(π(α-β)/n)|.
Then using Eq. <ref>
as well as ⟨ℰ^(α)(z_1)ℰ^(β)(z_2) ⟩_plane= 1/ ( |z_1-z_2|^2X_ℰ), we obtain
⟨ℰ^(α)(x,0) ℰ^(β)(x,0)⟩_ℛ_n = |(u-v) / (2n(x-v)(x-u) sin(π(α-β)/n))|^{2X_ℰ} = ⟨ℰ^(a)(x,0)ℰ^(b)(x,0)⟩_(ℛ_n)^{k+1},
where in the last equality we recall that a=(j,α) and b=(j,β).
The last equality in the above equation follows
because, as already mentioned above, we are interested in the case when both the fields ℰ^(a) and ℰ^(b) lie on the same Riemann surface ℛ_n, since otherwise the correlator is zero.
Finally, from Eq. <ref>, <ref> and
⟨𝒯_n(u,0)(𝒯_n)^-1(v,0)⟩=
1/ |u-v|^2d_n,
we obtain
for the desired three point function
⟨∏_j^k(Τ^(j)_n(u,0)(Τ^(j)_n)^-1(v,0)) Φ(x)⟩ = C_n,k / (|u-v|^{2kd_n-2X_ℰ} |x-u|^{2X_ℰ} |x-v|^{2X_ℰ}),
with C_n,k = k ∑_{α,β=1, α≠β}^{n} 1/|2n sin(π(α-β)/n)|^{2X_ℰ}.
Thus,
the required OPE coefficient is
C_n,k = k ∑_{α,β=1, α≠β}^{n} 1/|2n sin(π(α-β)/n)|^{2X_ℰ} = (k/2) ∑_{α=1}^{n-1} 1/(sin(πα/n))^{2X_ℰ}.
Moreover, as
2X_ℰ=
1-ϵ,
with ϵ = 3/(m+1), the above OPE coefficient can be expanded in powers of ϵ as
C_n,k = (k/2) ∑_{α=1}^{n-1} 1/sin(πα/n) + 𝒪(ϵ).
To obtain the von Neumann entanglement entropy, we also want to be able to
analytically continue
the n-dependence in the above expression to n → 1.
Thus, we want an expression for the above OPE coefficient
which is an analytic function of n at n=0, and
which is valid for all real numbers n and which reduces to Eq. <ref> when n is a natural number (≥ 2).
Following Ref. <cit.>, we can write
1/sin(π x)
in the form
(see also Ref. <cit.>)
1/sin(π x) = (1/π) ∫_0^∞ dt t^{x-1}/(1+t),   x∈(0,1),   implying
1/sin(πα/n) = (1/π) ∫_0^∞ dt t^{(α-n)/n}/(1+t) = (n/π) ∫_0^∞ ds s^{α-1}/(1+s^n).
From Eq. <ref>, the OPE coefficient then can be written as C_n,k=kI_n, where I_n is defined as
I_n := (n/2π) ∫_0^∞ ds (1-s^{n-1})/((1-s)(1+s^n)) + 𝒪(ϵ).
We make use of Eq. <ref> in Sect.
<ref>
below Eq. <ref>.
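The agreement between the sine sum and the integral representation, as well as the behaviour of the continuation at non-integer n, can be checked numerically; the sketch below works at leading order in ϵ only (i.e. with 2X_ℰ set to 1), which is the order at which the expression is used in the text.

```python
import numpy as np
from scipy.integrate import quad

def C_sum(n):
    """Leading-order coefficient per Riemann surface: (1/2) sum_{a=1}^{n-1} 1/sin(pi a/n)."""
    return 0.5 * sum(1.0/np.sin(np.pi*a/n) for a in range(1, n))

def integrand(s, n):
    if s == 1.0:
        return (n - 1.0)/2.0                     # removable singularity at s = 1
    return (1.0 - s**(n - 1.0)) / ((1.0 - s)*(1.0 + s**n))

def I_n(n):
    """Analytic continuation: (n/2pi) * integral_0^inf ds (1-s^(n-1))/((1-s)(1+s^n))."""
    a, _ = quad(integrand, 0.0, 1.0, args=(n,))
    b, _ = quad(integrand, 1.0, np.inf, args=(n,))
    return n/(2.0*np.pi) * (a + b)

for n in (2, 3, 4):
    print(n, C_sum(n), I_n(n))                   # the two columns agree
print("continuation at n = 1:", I_n(1.0))        # the integral extends smoothly to non-integer n
```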
§ HIGHER LOOP ORDERS IN THE RG AND THE ISING CRITICAL POINT
We noted in Section <ref> that when m=3, i.e. in the case of
the
Ising critical point, there is an additional subtlety associated with the RG analysis of the
replica action in Eq. <ref> – <ref>.
This is due to the
term in Eq. <ref>, which appears in
the
OPE
in Eq. <ref>
for a generic number R of replicas,
and although it is irrelevant for minimal models m≥4, it becomes marginal at m=3.
We noted in Section <ref>
that, as shown in Eq. <ref> below,
the coefficient of
this
marginal term in Eq. <ref> comes with a factor of (R-1) in the OPE in Eq. <ref>.
Thus, in the
R→1 limit, this term
is not generated by the RG to second (1-loop) order in the coupling constant Δ of the perturbation Φ.
In this appendix, we will show that the term in Eq. <ref> cannot be generated
by the RG
to any (higher-loop) order in
the coupling Δ in the replica limit R → 1, relevant for Born-rule measurements.
We will provide two different arguments, (i) and (ii).
(i): In the first argument we use the fact that
all terms that could possibly
be generated under the RG at arbitrary order in the coupling
Δ
can be obtained by analyzing the multiple OPE
Φ(x)×Φ(x)×⋯×Φ(x)   (n factors of Φ(x)),
where Φ from Eq. <ref> is the perturbation given by,
Φ = ∑_{a,b=1, a≠b}^{R} 𝔰^(a) 𝔰^(b).
We use the well-known fact (see e.g. Refs. JLCardy_1986RGOPE, LUDWIG198797, cardy_1996, LUDWIGWIESE) which states that the only operators that can be generated under the RG to any order are the operators that appear in the multiple OPE of the perturbation in Eq. <ref>, and
in multiple OPEs of the operators that appear
in Eq. <ref>. – A brief review of this can be
found, if desired, in App. <ref>. –
By analyzing these OPEs, we will show below that the marginal operator, Eq. <ref>, does not appear in any of these multiple OPEs in the limit R → 1, and thus cannot be generated in this limit in any order.
We begin by discussing
the possible operators
that can occur in a multiple OPE of the operator Φ in Eq. <ref>.
The CFT describing the Ising critical point has three primary fields: I, 𝔰 and 𝔢
(identity, spin, and energy). The OPEs between these primary fields are
given [
In the above OPEs, we have
not written
the explicit coefficients
which accompany
the fields in the OPE and which depend on position of the fields.
(For the Ising critical point, this is the same as the fusion rules of the theory.)]
by,
𝔰×𝔰=I+𝔢,
𝔰×𝔢=𝔰,
𝔢×𝔢=I,
I×𝔰=𝔰, I×𝔢=𝔢, I× I=I
For the purpose of the following discussion, we only care
about whether or not
a field appears in the OPE of
two given
fields, and its exact coefficient is immaterial.
Let us now first consider the OPE in Eq. <ref> for n=2. Using the above OPE relations in each replica copy, we obtain
Φ×Φ = 4(R-2)Φ + 4a_1(R-1) ∑_{a=1}^{R} 𝔢^(a) + a_2 ∑_{a≠b, a≠c, b≠c} 𝔢^(a)𝔰^(b)𝔰^(c) + a_3 ∑_{a,b,c,d pairwise distinct} 𝔰^(a)𝔰^(b)𝔰^(c)𝔰^(d),
where a_1, a_2 and a_3 are R independent numerical constants.
We see that the marginal term ∑_a=1^R𝔢^(a) comes with a prefactor (R-1), which vanishes in the
R→1 limit.
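The combinatorial origin of this (R-1) prefactor can be checked by brute force. The sketch below enumerates the replica-index assignments in Φ×Φ that can fuse down to a single 𝔢^(x); only the relative counting is verified, with the common OPE structure constant (a_1 above) factored out.

```python
from itertools import product

def eps_coefficients(R):
    """Coefficient (up to the common OPE constant a_1) with which a single eps^(x)
    is produced in Phi x Phi, Phi = sum_{a != b} s^(a) s^(b): a single eps can only
    arise when the two index pairs coincide, {a,b} = {c,d}; then s x s -> 1 + eps in
    both replicas and eps is kept in exactly one of them."""
    coeff = {x: 0 for x in range(R)}
    pairs = [(a, b) for a, b in product(range(R), repeat=2) if a != b]
    for (a, b), (c, d) in product(pairs, repeat=2):
        if {a, b} == {c, d}:
            coeff[a] += 1                        # eps in replica a, identity in replica b
            coeff[b] += 1                        # eps in replica b, identity in replica a
    return coeff

for R in (1, 2, 3, 4, 5):
    print(R, set(eps_coefficients(R).values()), "expected 4*(R-1) =", 4*(R - 1))
```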
For n=3, the OPE in Eq. <ref> can be obtained by contracting
the RHSs of Eqs. <ref> and <ref>.
In particular, Φ on
the
RHS of Eq. <ref> can be contracted with Φ in Eq. <ref>, and the
term ∑_{a=1}^{R} 𝔢^(a) will again be produced with a (R-1) prefactor. Moreover, the term ∑_{a≠b, a≠c, b≠c} 𝔢^(a)𝔰^(b)𝔰^(c) in Eq. <ref> can contract with Φ = ∑_{a≠b} 𝔰^(a)𝔰^(b) to give
(∑_{a≠b, a≠c, b≠c} 𝔢^(a)𝔰^(b)𝔰^(c)) × Φ = \binom{R-1}{2} ∑_a 𝔢^(a) + other terms,
where
again
the marginal term ∑_a𝔢^(a) comes with a factor which vanishes in the limit
R→1 of interest. Finally, we note that when contracted with Φ
the last term in Eq. <ref> cannot produce the marginal term [Note that the term 𝔰^(a_1)𝔰^(a_2)𝔰^(a_3)𝔰^(a_4) when contracted with 𝔰^(b_1)𝔰^(b_2) gives terms of the following form: 𝔰^(c_1)𝔰^(c_2), 𝔰^(c_1)𝔰^(c_2)𝔢^(c_3) and 𝔰^(c_1)𝔰^(c_2)𝔢^(c_3)𝔢^(c_4). Thus, when contracted with Φ, the last term on the RHS of the OPE in Eq. <ref> cannot produce the marginal operator ∑_a 𝔢^(a)].
Thus, one can conclude that as R→1, the marginal term ∑_a𝔢^(a) also does not occur in the OPE in Eq. <ref> for n=3. We now present an induction argument for the absence of the operator ∑_a𝔢^(a) in the OPE of Eq. <ref> for any number n of operators Φ, and for the set of operators appearing in this OPE.
To this end, let
us assume that the
operator
∑_a𝔢^(a)
does not appear
in the R→1 limit in the OPE
of
n operators Φ.
Let us also assume that the
most general replica term that can occur in the OPE of
n
operators
Φ is
∑_{a_1,…,a_2k, b_1,…,b_l pairwise distinct} 𝔰^(a_1)𝔰^(a_2)⋯𝔰^(a_2k) 𝔢^(b_1)𝔢^(b_2)⋯𝔢^(b_l),
where 2k+l≤ 2n.
For
n=2 this corresponds to the terms appearing in
Eq. <ref>, and this is our first step in the induction.
Now to
obtain
the OPE of
(n+1) operators Φ in Eq. <ref>, we have to contract all the terms
that appear
in
Eq. <ref> above
with Φ=∑_a≠ b𝔰^(a)𝔰^(b).
Since only the terms in the same replica copy can be contracted with each other,
out of all the sub-terms shown in Eq. <ref> which appear in OPEs of
n operators Φ,
only the following terms can be contracted with another Φ to get the marginal ∑_a𝔢^(a) term:
∑_a_i≠ a_j𝔰^(a_i)𝔰^(a_j)=Φ,
∑_{a_i≠a_j, a_i≠b_k, a_j≠b_k} 𝔰^(a_i)𝔰^(a_j)𝔢^(b_k).
From Eq. <ref> and Eq. <ref> we see that both of these
sub-terms
when contracted with Φ produce the marginal ∑_a𝔢^(a), and
that the corresponding coefficient vanishes in the
R→1 limit in both
cases.
Moreover, all terms that can appear in the OPE of Eq. <ref> with the perturbation Φ are again of the form of Eq. <ref> with n
replaced by n+1. This completes the induction argument.
Thus, in summary, we have proven so far that
(a) the only operators that can appear in the multiple OPE of
n
operators Φ are the operators of the form appearing in Eq. <ref>, and
(b) of those the marginal operator, having k=0 and l=1, appears with a combinatorical coefficient that vanishes in the limit R→ 1.
Finally, since the operators appearing in Eq. <ref> can all be generated in the OPE in Eq. <ref>, and thus could be generated by the RG
(with combinatorical coefficients that we have not determined), we would
have finished demonstrating that the marginal operator cannot be generated in the RG to any order, if we could show that the marginal operator cannot appear in the limit R→ 1 in the OPE of an arbitrary number of operators of
the type listed in Eq. <ref>. We will
now show that this is indeed the case.
First, we observe that it is sufficient to show that this is the case for only two
such operators, because by definition the operators appearing in Eq. <ref> form a closed set of
operators under the OPE [I.e., they form a closed Operator Algebra.].
Namely, when we consider an arbitrary number of successive OPEs of operators of the form of
Eq. <ref>, the marginal operator would not be generated in this multiple OPE
if it was not generated in any of the individual successive OPEs in this limit (which involves only two operators).
On the other hand, we can show as follows that in
the OPE of two operators from Eq. <ref>
the marginal operator can only appear with a combinatorical coefficient that vanishes in the limit R → 1:
Note that only the terms in the same replica copy can contract under
the
OPE. Moreover, the marginal operator 𝔢 is only produced either when two 𝔰 fields are fused (see Eq. <ref>) or when the 𝔢 field fuses with the identity (see Eq. <ref>). Therefore, the general term which appears in Eq. <ref> can produce the marginal operator ∑_a 𝔢^(a)
only (i) when it contracts with
itself (i.e., both operators have the same values of k and l), i.e. with
∑_{a_1,…,a_2k, b_1,…,b_l pairwise distinct} 𝔰^(a_1)𝔰^(a_2)⋯𝔰^(a_2k) 𝔢^(b_1)𝔢^(b_2)⋯𝔢^(b_l),
or (ii) when it contracts with another operator of the form in
Eq. <ref> with the same value of k but with l replaced by l+1, namely with
∑_{a_1,…,a_2k, b_1,…,b_{l+1} pairwise distinct} 𝔰^(a_1)𝔰^(a_2)⋯𝔰^(a_2k) 𝔢^(b_1)𝔢^(b_2)⋯𝔢^(b_l)𝔢^(b_{l+1}).
Let us first consider the OPE of the term in Eq. <ref> itself, i.e. with the term in Eq. <ref>.
Since we are interested in the coefficient of the marginal operator ∑_a𝔢^(a), we can consider two
identical 𝔰^(a)
fields, one in each of the two identical operators from
Eq. <ref>
we are considering the OPE of,
and contract
these
two fields to produce the field 𝔢^(a), while the rest of the fields
in these two operators,
which includes both 𝔰^(a_i) and 𝔢^(b_i), should contract to produce the identity.
Since a_i,b_i≠ a, the number of choices for the replica indices of the remaining 𝔰^(a_i) and 𝔢^(b_i) fields is given by the binomial coefficient \binom{R-1}{2k+l-1}.
Thus, the marginal operator ∑_a𝔢^(a) appears with a prefactor \binom{R-1}{2k+l-1} in the OPE of the general term in Eq. <ref> with itself, and this prefactor thus vanishes in the limit R → 1.
When k=1 and l=0, this statement is the same as ∑_a𝔢^(a) appearing with a prefactor of (R-1) as shown in Eq. <ref>.
Analogously, one
sees
that in the OPE of Eq. <ref> with Eq. <ref>
the marginal operator ∑_a𝔢^(a) appears with a prefactor \binom{R-1}{2k+l}, thus also vanishing in the limit R → 1.
When k=1 and l=0 in Eqs. <ref> and <ref>, this statement implies that the marginal operator appears with a prefactor of \binom{R-1}{2} in the OPE of Eq. <ref> and Eq. <ref>, which was verified in Eq. <ref>.
Thus, we see that whenever the marginal operator is produced under the OPE of two (same or different) general operators of the type shown in Eq. <ref>, it always comes with a prefactor which vanishes in R→ 1 limit. This concludes our proof.
Thus, to summarize our argument (i), we conclude that
in the limit R→1, the
operator
∑_a𝔢^(a) cannot be generated under the RG in any order in perturbation theory with Δ.
One can check that
this result also holds
if we include higher replica terms
arising from higher cumulants discussed in App. <ref>, and the
proof of this statement
proceeds analogous to the above discussion.
Finally, we note that
besides
Φ,
the exactly marginal operator and the higher replica terms discussed in App. <ref>,
all other
terms
that could be generated under the RG are of the form
of those in Eq. <ref>, involving a mixture of 𝔰 and 𝔢 (the term in Eq. <ref> being the simplest example), and
are all irrelevant under the RG, as terms with support on the τ=0 time-slice.
(ii): We will now give another argument
which is perhaps more physical, for why
the operator
∑_a𝔢^(a) cannot be generated under
the RG in the limit R → 1 of relevance to
Born-rule measurements.
The operator
∑_a𝔢^(a), if
it were to be generated
at any order in RG, can be handled non-perturbatively by using the exact solution due to Bariev <cit.>, and McCoy and Perk <cit.>.
Using the exact
solution one sees that when the Ising critical point CFT is perturbed with
the exactly marginal operator 𝔢(x,τ) supported on the one-dimensional time-slice
and
in the absence of any other perturbation,
the power law exponent of
the 𝔰(x,τ) two-point correlation function along the time-slice (defect line) should change continuously with the
coupling
strength of the
marginal operator supported on the defect line.
In our replica field theory, in addition to
a possible
perturbation ∑_a𝔢^a
generated under the RG, we will also have the defect perturbation Φ itself, from
Eq. <ref>.
Ignoring
higher cumulants, which
cannot change the low energy details (see App. <ref>),
the same
replica field theory
would also arise
when we consider performing measurements with σ̂_i^z on the
state
|ψ⟩=exp{κ∑_iσ̂^z_iσ̂^z_i+1}|0⟩ (κ≠0)
where |0⟩ is the ground state at the Ising critical point and the operator σ̂^z_iσ̂^z_i+1
represents
the continuum field 𝔢 at the Ising critical point.
If we insert in Eq. <ref> in place of the
state |0⟩ the state
|ψ⟩ from Eq. <ref>, we
see that upon going
over to the continuum
formulation, we will
obtain
the discussed replica theory with both the marginal ∑_a𝔢^a term and the Φ(x) interaction
added along the one-dimensional time-slice.
Thus, if the marginal ∑_a𝔢^a term
were to be generated under the RG in the replica field theory for σ̂_i^z measurements
performed on the
critical ground state |0⟩, we would get the same replica theory as that for σ̂_i^z measurements
performed on the state |ψ⟩.
This is a contradiction because Eq. <ref> tells us that the measurement averaged correlation function of
the σ̂_i^z operator, which
represents
the field 𝔰(x,τ), should be the same as that in the unmeasured state, be it |0⟩ or |ψ⟩.
Since the σ̂_i^z correlation function has different power law behavior in states |0⟩ and |ψ⟩, they cannot be described by the same replica field theory at any energy scale.
Thus, in the replica field theory for the Ising critical ground state |0⟩ under Born-rule σ̂_i^z measurements, the marginal ∑_a𝔢^a term cannot be generated at any order in
the RG.
§ BRIEF REVIEW - RG EQUATIONS FROM THE OPERATOR PRODUCT EXPANSION (OPE)
In general one is interested in computing expectation values of O, representing an operator or a product of operators in the perturbed theory such as in Eq. <ref>,
⟨O⟩_Δ_0 = (Z_*/Z_Δ_0) ⟨O e^{+Δ_0 ∫ dx Φ(x)}⟩_*,
where expectation values ⟨…⟩_* are taken in the unperturbed (i.e. critical) CFT (in the present case the Ising CFT, compare e.g.
Eq. <ref>).
Here Z_*= Z_Δ_0=0 is the partition function of the unperturbed CFT, and Z_Δ_0 is the fully interacting partition function
obtained from Eq. <ref> by letting O→ 1.
The RG equations for all operators generated in perturbation theory to any order in Δ_0 is obtained by expanding the exponential on the right hand side of Eq. <ref>,
⟨… 1⟩_* + ⟨… (Δ_0 ∫_{x_1} Φ(x_1))⟩_* + ⟨… ((Δ_0^2/2!) ∫_{x_1}∫_{x_2} Φ(x_1)Φ(x_2))⟩_* + ⟨… ((Δ_0^3/3!) ∫_{x_1}∫_{x_2}∫_{x_3} Φ(x_1)Φ(x_2)Φ(x_3))⟩_* + …
[We note that in the above formula (and also in the subsequent formulae in this appendix) the symbol ∫_x is a short hand for ∫d^2x/a^1-X_𝒜, i.e. in addition to the integral over coordinate x of the integrand field Φ_𝒜(x), the measure of the integral is normalized with a factor of short distance cutoff a raised to an appropriate power involving scaling dimension X_𝒜 of the field Φ_𝒜 so that the corresponding coupling constant (like Δ_0 when Φ_𝒜=Φ) is dimensionless.].
The following discussion is independent of the operator(s) O, indicated by the ellipses, present in the expectation
value [There is an analogous procedure to handle the RG equation of operators present in the expectation value, but this is not elaborated on here.].
In a general
term in Eq. <ref>
we use the OPE which
expands the product of
n
operators Φ into a complete set of operators Φ^ A located at, say, the position
x_n of the last operator,
Φ(x_1) … Φ(x_{n-1}) Φ(x_n) = ∑_A C_A[(x_1-x_n), (x_2-x_n), ..., (x_{n-1}-x_n)] Φ^A(x_n).
Most of the possible operators Φ^ A that appear
are irrelevant, and we will mostly be interested in relevant or marginal ones.
The integrals
I_n-1 over the n-1
relative coordinates appearing in the OPE coefficient C_ A are performed against a suitable “cutoff function” which restricts the absolute values of all relative coordinates within the range between a short-distance cutoff a and a long-distance cutoff L. (There are many options for the “cutoff function”, and our discussion and result will not depend on this choice.)
Inserting these integrals into Eq. <ref>
the latter reads
⟨… 1⟩_* + ⟨… (Δ_0 ∫_x Φ(x))⟩_* + ⟨… ((Δ_0^2/2!) ∑_A I^A_1 ∫_x Φ^A(x))⟩_* + ⟨… ((Δ_0^3/3!) ∑_A I_2^A ∫_x Φ^A(x))⟩_* + …
= ⟨… (1 + Δ ∫_x Φ(x) + ∑_{Φ^A≠Φ} λ_A ∫_x Φ^A(x) + ...)⟩_*
= ⟨… e^{Δ ∫_x Φ(x) + ∑_{Φ^A≠Φ} λ_A ∫_x Φ^A(x)}⟩_*
where
we have re-exponentiated in the last line of Eq. <ref> (using standard logic) and we defined
Δ(Δ_0, a L) = Δ_0 [1 + (Δ_0/2!) I_1(a L) + (Δ_0^2/3!) I_2(a L) + …],
and
λ^A(Δ_0, a L) = [(Δ_0^2/2!) I^A_1(a L) + (Δ_0^3/3!) I^A_2(a L) + …],
for Φ^A≠Φ.
Here we used the abbreviation
I_k(a L) :=
I^ A_k(a L), when Φ^ A=Φ,
k=1, 2, ...
The dependence of the RG equations on Δ, at any order, is then obtained
for both of the couplings Δ and λ^ A in the standard manner:
The RG equation for Δ reads
dΔ(ℓ)/dℓ = y_Δ·Δ(ℓ) + (a ∂/∂a)|_{Δ_0} Δ(Δ_0, a L) = y_Δ·Δ(ℓ) + b_2 Δ^2(ℓ) + b_3 Δ^3(ℓ) + …,
where ℓ=ln(L/a) and Δ(ℓ=0)=Δ_0 is kept fixed,
while Eq. <ref> is used to re-express the right hand side order-by-order in terms of
Δ(Δ_0, a L)=
Δ(ℓ). Here, y_Δ is the RG eigenvalue of the coupling Δ in the unperturbed CFT, i.e.
y_Δ=
1- X_Δ= 3/(m+1); compare Eq. <ref>,
but now with m=odd. The terms of up to order Δ^2(ℓ) (1-loop order) are those listed in Eq. <ref>.
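Purely as an illustration of the flow described by this equation — with an assumed (not computed) negative b_2 and the coefficients b_3 and higher dropped — the sketch below integrates the truncated one-loop equation and shows Δ(ℓ) leaving the unstable Δ=0 fixed point and saturating at Δ_* = -y_Δ/b_2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated one-loop flow d(Delta)/d(ell) = y*Delta + b2*Delta**2 with illustrative
# numbers: y = 3/(m+1) for m = 3 and an assumed negative b2 (the actual b2 of the
# replica theory is not computed here).
y, b2 = 3.0/4.0, -3.0
beta = lambda ell, Delta: y*Delta + b2*Delta**2
Delta_star = -y/b2                               # IR fixed point of the truncated flow

sol = solve_ivp(beta, (0.0, 30.0), [1e-3], dense_output=True)
ell = np.linspace(0.0, 30.0, 7)
print("Delta* =", Delta_star)
print(np.c_[ell, sol.sol(ell)[0]])               # Delta leaves 0 and saturates at Delta*
```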
The RG equations for λ^ A read
dλ^A(ℓ)/dℓ = y_A·λ^A(ℓ) + (a ∂/∂a)|_{Δ_0} λ^A(Δ_0, a L) = y_A·λ^A(ℓ) + b^A_2 Δ^2(ℓ) + b^A_3 Δ^3(ℓ) + …,
for Φ^ A≠Φ,
where λ^ A(ℓ=0)=0, and
again Eq. <ref> is used to re-express the right hand side order-by-order in terms of
Δ(Δ_0, a L)=
Δ(ℓ).
Here,
y_ A are the RG eigenvalues of the couplings λ^ A in the unperturbed CFT.
At this stage of the discussion only powers of Δ(ℓ) appear on the right hand side of both RG equations Eqs. <ref>, <ref>, while we see from the latter equation that in general non-vanishing couplings λ^ A of operators Φ^ A≠Φ
are generated. These will then appear in the argument of the exponential of the last line of
Eq. <ref>. The key point then is the following:
Upon the RG coarse-graining process
these thereby generated couplings will generate additional terms in the RG equations. These
additional terms
can easily be understood and incorporated into our existing discussion due to the fact that the set of all thereby generated operators Φ^ A with couplings λ^ A form the set of operators
listed in Eq. <ref> which is closed under the Operator Product
Expansion (OPE). This means that all the terms generated by the RG from the couplings
λ^ A of operators Φ^ A≠Φ can be understood by simply generalizing Eq. <ref> to multiple OPEs of the operators appearing in Eq. <ref>, namely to
Φ^A_1(x_1) … Φ^A_{n-1}(x_{n-1}) Φ^A_n(x_n) = ∑_A C_A^{A_1, ..., A_{n-1}, A_n}[(x_1-x_n), (x_2-x_n), ..., (x_{n-1}-x_n)] Φ^A(x_n).
Employing the same logic that led to the RG equations Eqs. <ref>, <ref> now leads to the same RG equations but with arbitrary powers of the coupling constants λ^ A appearing on the right hand side of these equations.
The key result of this analysis is that the only couplings λ^ A that can be generated under the RG are those of operators Φ^ A that can appear on the right hand side of the multiple OPE in Eq. <ref>. Such contributions would be represented by a term of order λ^ A_1…λ^ A_n-1 λ^ A_n
on the right hand side of the RG equation for λ^ A.
However we show in argument (i)
of App. <ref>
that all the OPE coefficients in
Eq. <ref>,
involving on the left hand side operators
appearing in Eq. <ref>,
vanish in the limit R → 1, when Φ^ A and λ^ A appearing on the right hand side corresponds to the exactly marginal operator
in
Eq. <ref>
and its corresponding coupling constant. This then implies that the exactly marginal operator cannot be generated under the RG to any order in perturbation theory in the coupling Δ in the limit R → 1.
§ IRRELEVANCE OF HIGHER CUMULANTS FROM AVOIDED LEVEL CROSSINGS
The “higher-moment operators” in Eq. <ref>
and
in Eq. <ref> whose scaling dimensions
at the Δ_* ≠0 fixed point are of interest
in App. <ref> and App. <ref>, respectively, are conformal boundary operators: In the standard manner, these operators with support on the
one-dimensional τ=0 time-slice in space-time can be viewed upon folding the space-time
along
this time-slice
<cit.>,
<cit.>
as operators with support on the boundary, the real axis, of the tensor product of two identical copies of the (non-random but replicated) bulk CFT, located in the upper half complex plane. The fixed point Δ_* ≠0
describes a scale- and conformally invariant boundary condition B_Δ_* on the two copies of the bulk CFT in the upper half plane, which determines all the universal properties of interest to us in this paper.
After conformal mapping from the upper half complex plane to
the interior of an infinitely long strip of finite width L with identical boundary conditions B_Δ_* on both sides, the spectrum of the Hamiltonian Ĥ_Δ_*, generating translations along the strip, is universally related by finite-size scaling
<cit.>,<cit.>
and the operator-state correspondence to the scaling dimensions of the set of all operators with support on the boundary B_Δ_*, of interest to us here.
Here we will discuss,
for each value of m or equivalently of ϵ=3/(m+1),
the evolution (“spectral flow”) of a particular set of energy levels of the Hamiltonian Ĥ_Δ(ℓ) where the coupling constant
Δ=Δ(ℓ)
(here ℓ=ln(L/a)), flows under the RG from the RG-unstable zero coupling fixed point
Δ=0 in the ultraviolet to the RG-stable finite-coupling
fixed point
Δ_* = Δ_*(ϵ) in the infrared.
For each value of m
(or equivalently
ϵ) this
describes an RG flow between two conformally invariant boundary conditions.
In the intermediate regime of length scales ℓ away from the two fixed points, the corresponding boundary condition B_Δ(ℓ) will not be scale- nor conformally invariant, and the entire spectrum of the corresponding
Hamiltonian Ĥ_Δ(ℓ) will undergo an evolution, i.e. a spectral flow with the length scale ℓ. Because the operator coupling to Δ is invariant under the group S_R of permutations of the R replicas, we can classify all eigenstates of Ĥ_Δ(ℓ) according to irreducible representations of the permutation group S_R.
For each value of m we consider the spectrum of this Hamiltonian in each symmetry sector separately. Since the operators in Eq. <ref>
and
in Eq. <ref> of interest to us are singlets under permutations, we restrict attention to the S_R-singlet sector of the spectrum of Ĥ_Δ(ℓ). We know that this Hamiltonian is, due to the large conformal symmetry, integrable at the ultraviolet (Δ=0) as well as the infrared fixed point (Δ_*≠0). However, at all intermediate scales ℓ away from the two fixed points this Hamiltonian is not expected to be integrable since the operator Φ coupling to Δ
(and thus setting the boundary condition B_Δ(ℓ) which determines the spectrum of Ĥ_Δ(ℓ)) is not expected to conserve a macroscropic number of the conformal conservation laws present at the two fixed points.
Given that the Hamiltonian Ĥ_Δ(ℓ) is not integrable at intermediate scales, the evolution of its spectrum as a function of scale ℓ=ln(L/a) in the S_R-singlet sector is expected to exhibit avoided level crossings.
Now for each value of m (or ϵ),
as discussed in App. <ref> and App. <ref>,
the scaling dimensions of the operators in Eq. <ref>
and
in Eq. <ref> at the ultraviolet fixed point Δ=0
are equal to 2k × X_ϵ = 2k × X_φ_1,2 when m is even, and 2k × X_ϵ = 2k × X_φ_1,2 when m is odd, respectively, and thus are strictly ordered in both cases. Here X_ϵ = X_φ_1,2 is given by Eq. <ref> for both cases, m even and m odd (see footnote <cit.>). Note that in the limit m → ∞ (ϵ → 0), these dimensions become 2k × (1/2) = k, since X_ϵ = X_φ_1,2 → 1/2.
In either case, all these operators with k>1 thus
have, for any value of m (even or odd), scaling dimensions larger than the respective perturbation Φ (which corresponds to k=1)
at the ultraviolet fixed point Δ=0.
For each case, i.e. for any even and for any odd value of m corresponding to
Eq. <ref>
and
Eq. <ref> respectively, we expect,
given the avoided level
crossings,
as we increase the scale ℓ=ln(L/a) to run the RG via finite-size scaling
from
the ultraviolet to the infrared fixed point
Δ_*(ϵ),
that the relative ordering of these scaling dimensions for different values of k is preserved. In particular, we expect the scaling dimensions of all these operators with k>1 to remain larger than the scaling dimension of the operator which has the smallest scaling dimension at the ultraviolet fixed point Δ=0, which is the one with k=1, corresponding to the perturbation Φ. But since we know that the perturbation Φ must be irrelevant (i.e. must have scaling dimension >1) at the infrared fixed point (as given by the slope of the corresponding RG beta function for Δ), we conclude that
the scaling dimensions of all the operators with k>1 will also need to
be
certainly larger than unity, due to avoided level crossings.
This provides an argument supporting the irrelevance at the infrared fixed point of
the operators in Eq. <ref>
and
in Eq. <ref>
arising from all higher cumulants, both in the tricritical Ising (App. <ref>)
as well as in the Ising (App. <ref>) case. We close by noting that avoided level crossings of scaling dimensions of bulk
operators in RG flows between two
(bulk) 2D RG fixed points, arising from perturbations breaking the integrability of the ultraviolet CFT, have been observed
explicitly via the Truncated Conformal Space approach
<cit.>,
e.g. see Ref. <cit.> Sect. IV.C, Figs. 7, 8.
Flows of boundary scaling dimensions in RG flows between two different boundary fixed points of the same bulk CFT have also been studied using the Truncated Conformal Space approach, see e.g.
Ref. <cit.>;
the latter particular investigation is of less direct relevance for us since in this study only an integrable boundary perturbation is discussed, but it demonstrates the ability to study RG flows between boundary fixed points effectively within the Truncated Conformal Space approach. Finally, all spectra numerically obtained from
K. G. Wilson's numerical renormalization group approach to the Kondo- and other quantum impurity problems
(see e.g. Ref.
Wilson1975,PangCox1991)
precisely observe <cit.>
related spectra of boundary RG flows between different fixed point boundary conditions on a fixed bulk CFT, exhibiting avoided crossings in a given symmetry sector. ] |
http://arxiv.org/abs/2409.03322v1 | 20240905075206 | A multi-kiloGauss magnetic field driving the magnetospheric accretion process in EX Lupi | [
"Kim Pouilly",
"Marc Audard",
"Alexis Lavail",
"Ágnes Kóspál"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Department of Astronomy, University of Geneva, Chemin Pegasi 51, CH-1290 Versoix, Switzerland
[email protected]
Konkoly Observatory, HUN-REN Research Centre for Astronomy and Earth Sciences, MTA Centre of Excellence, Konkoly-Thege Miklós út 15-17, 1121 Budapest, Hungary
Institute of Physics and Astronomy, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
Institut de Recherche en Astrophysique et Planétologie, Université de Toulouse, CNRS, IRAP/UMR 5277, 14 avenue Edouard Belin, 31400 Toulouse, France
EX Lupi is the prototype of EX Lup-type stars, meaning classical T Tauri stars (cTTSs) showing luminosity bursts and outbursts of 1 to 5 magnitudes lasting for a few months to a few years. These events are ascribed to an episodic accretion that can occur repeatedly but whose physical mechanism is still debated.
In this work, we aim to investigate the magnetically-driven accretion of EX Lup in quiescence, including for the first time a study of the small and large-scale magnetic field. This allows us to provide a complete characterisation of the magnetospheric accretion process of the system.
We use spectropolarimetric time series acquired in 2016 and 2019 with the Echelle SpectroPolarimetric Device for the Observation of Stars and in 2019 with the SpectroPolarimètre InfraRouge at the Canada-France-Hawaii telescope, during a quiescence phase of EX Lup. We were thus able to perform a variability analysis of the radial velocity, the emission lines, and the surface-averaged longitudinal magnetic field across different epochs and wavelength domains. We also provide a small-scale magnetic field analysis using the Zeeman intensification of photospheric lines and a large-scale magnetic topology reconstruction using Zeeman-Doppler Imaging.
Our study reveals a typical magnetospheric accretion ongoing on EX Lup, with a main accretion funnel flow connecting the inner disc to the star in a stable fashion and producing an accretion shock on the stellar surface close to the pole of the magnetic dipole component. We also measure one of the strongest fields ever observed on cTTSs. Such a strong field indicates that the disc is truncated by the magnetic field close but beyond the corotation radius, where the angular velocity of the disc equals the angular velocity of the star. Such a configuration is suitable for a magnetically-induced disc instability yielding episodic accretion onto the star.
A multi-kiloGauss magnetic field driving the magnetospheric accretion process in EX Lupi
K. Pouilly1
M. Audard1
Á. Kóspál23
A. Lavail4
Received 16 July 2024; accepted 04 September 2024
==========================================================================================
§ INTRODUCTION
EX Lup-type objects (EXors) are classical T Tauri stars (cTTSs), meaning low-mass pre-main sequence stars surrounded by an accretion disc, that show burst and outburst events ascribed to episodic accretion.
During these phases, their optical brightness can increase by 1 to 5 magnitudes; such events typically last for a few months to a few years and can occur repeatedly <cit.>.
These events are therefore more moderate, both in duration and in luminosity increase, than those of FU Orionis-type stars (FUors).
While the magnetospheric accretion of cTTSs, where the strong stellar magnetic field truncates the disc, forcing the accreted material to follow the magnetic field lines <cit.>, seems to be ongoing on EXors as well, the origin of this episodic accretion is still highly debated.
The different hypotheses can be gathered in three groups <cit.>: (i) the magnetospheric accretion itself, being inherently episodic when a strong magnetic field truncates the disc close to the corotation radius <cit.>. (ii) The disc, showing viscous-thermal <cit.>, or gravitational and magneto-rotational <cit.> instabilities, or accretion clumps in a gravitationally unstable environment <cit.>. (iii) The presence of a companion, perturbing the accretion trough tidal effect <cit.>, or thermal instabilities <cit.>.
This work aims to investigate the magnetospheric accretion process of the prototypical EXors, EX Lup, using high-resolution spectropolarimetric time series, as it was done for cTTSs not displaying episodic accretion <cit.>.
This object is a young M0.5-type star <cit.>, known to have both moderate and short-timescale variability and rare extreme episodic outbursts.
It is located at 154.7±0.4 pc <cit.> and has a rotation period of 7.417 days, from its radial velocity modulation that was first ascribed to a low-mass companion <cit.>, before being ascribed to stellar activity <cit.>.
The system has a moderate-to-low inclination of its rotation axis (between 20^∘ and 45^∘) according to modelling of the spectral energy distribution <cit.> and emission lines analysis <cit.>, and a projected rotational velocity vsin i=4.4±2.0 <cit.>.
This object is one of the most studied EXors using spectroscopy, both in quiescence <cit.> and in outburst <cit.>, but the present study is the first one including spectropolarimetry, giving access to information on the magnetic field together with the accretion diagnostics.
Its spectrum contains the typical accretion-related emission lines observed in cTTSs, in addition to numerous neutral metallic emission lines that are superimposed on the photospheric absorption in quiescence and overwhelm any absorption feature in outburst <cit.>.
The present dataset was acquired during quiescence, meaning that we will characterise the "stable" accretion of the system, even if <cit.> have shown that the accretion pattern seems to remain the same in quiescence and in outburst, with only the amount of accreted material being affected.
This accretion pattern is consistent with the typical cTTS magnetospheric accretion, through accretion funnel flows connecting the disc to the stellar surface, except that "clumps" of material are also accreted through these funnel flows, detected thanks to the day-to-day variation of the emission lines' broad component (BC) <cit.>.
This article is organised as follows: we describe the observations in Sect. <ref>, the analysis and results are presented in Sect. <ref> and discussed in Sect. <ref>.
We conclude this work in Sect. <ref>.
§ OBSERVATIONS
The spectropolarimetric time series used in this work were acquired at the Canada-France-Hawaii Telescope at two different epochs (2016 and 2019, proposals 16AF03 and 19AF50, respectively).
The second data set is composed of two subsets using two different instruments: the Echelle SpectroPolarimetric Device for the Observation of Stars <cit.> in the optical and the SpectroPolarimètre InfraRouge <cit.> in the near-infrared, both used in polarimetric mode, while the first one only used ESPaDOnS.
This means that each observation is composed of four sub-exposures taken in different polarimeter configurations, which are then combined to obtain the intensity (Stokes I), the circularly polarised (Stokes V), and the null polarisation spectra.
A complete journal of observations is provided in Table <ref>.
§.§ ESPaDOnS
The ESPaDOnS observations, which cover the 370 to 1050 nm wavelength range and reach a resolving power of 68 000, consist of 11 nights between 2016 June 09 and 2016 June 24, with an approximately nightly cadence, and 6 nights between 2019 May 31 and 2019 June 12, the latter 5 following a 1-day sampling.
The signal-to-noise ratio (S/N) of the 2016 (2019) observations ranges between 69 and 142 (111 and 140) for the Stokes I at 731 nm.
Each observation was reduced using the package <cit.>.
§.§ SPIRou
The SPIRou observations are covering the 960 to 2350 nm wavelength range with R∼75 000.
They were acquired during 8 consecutive nights, between 2019 June 14 and 2019 June 21, and the S/N of the unpolarised spectra in the H-band range between 107 and 175.
The observations were reduced using the pipeline <cit.>.
§ RESULTS
In this section, we present the results obtained from the analysis of the observations described in Sect. <ref>. They consist of the analysis of the radial velocity, emission lines, and stellar magnetic field.
§.§ Radial Velocity
To determine the radial velocity of EX Lup, we cross-correlated each spectrum with a synthetic spectrum computed using the code <cit.>, with atmospheric models <cit.> and <cit.> line lists adapted to EX Lup stellar parameters for ESPaDOnS and SPIRou wavelengths.
We computed the cross-correlation function (CCF) over 27 (14) wavelength windows of about 10 nm ranging from 441 to 890 nm (1150 to 2290 nm) for ESPaDOnS (SPIRou) observations.
Then, for each observation, we performed a sigma-clipping across the velocities derived from all the CCFs and used the mean and standard deviation of the remaining values as the measurement of the radial velocity and its uncertainty.
The results are plotted in Fig. <ref>.
A quick sinusoidal fit of the values for each data set allowed us to roughly measure the periodicity and mean value of the radial velocity, and yielded P=7.55±0.24 d and ⟨v_r⟩=-0.48±0.07 km s^-1 (ESPaDOnS 2016), P=7.63±0.18 d and ⟨v_r⟩=-0.59±0.14 km s^-1 (ESPaDOnS 2019), and P=8.36±0.97 d and ⟨v_r⟩=-0.58±0.14 km s^-1 (SPIRou).
These measurements are consistent within the uncertainties with previous results obtained by <cit.>, P=7.417±0.001 d and ⟨v_r⟩=-0.52±0.07 km s^-1; we will thus adopt the latter values for our analysis.
Finally, we folded all the measurements in phase using P=7.417 d and an arbitrary T0, and we fitted this curve with a sine function to estimate the T0 required to set ϕ = 0.5 at the mean velocity between the maximum and minimum of the modulation <cit.>.
The resulting T0 is HJD 2 457 544.40981.
We will thus use the following ephemeris for the rest of this work:
HJD (d) = 2 457 544.40981 + 7.417 E,
where E is the rotation cycle.
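For reference, the phase folding and sinusoidal fit described above can be reproduced with a few lines of Python; the arrays below are placeholders standing in for the measurements of Table <ref>, and the sine model is the simple parametrisation assumed here.

```python
import numpy as np
from scipy.optimize import curve_fit

P_ROT, T0 = 7.417, 2457544.40981        # adopted rotation period (d) and ephemeris zero point (HJD)

def fold(hjd):
    """Rotational phase from the ephemeris HJD = T0 + P_ROT * E."""
    return ((hjd - T0) / P_ROT) % 1.0

def rv_model(phase, v_mean, amp, phi0):
    return v_mean + amp * np.sin(2.0 * np.pi * (phase - phi0))

# Placeholder arrays standing in for the measured radial velocities (km/s)
hjd = np.array([2457548.9, 2457550.9, 2457552.9, 2457554.9, 2457556.9, 2457558.9])
rv = np.array([-1.2, -0.1, 0.4, 0.1, -0.6, -1.1])
rv_err = np.full_like(rv, 0.1)

popt, _ = curve_fit(rv_model, fold(hjd), rv, sigma=rv_err, p0=[-0.5, 0.7, 0.0])
print("mean RV, semi-amplitude, phase offset:", popt)
```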
All our radial velocity measurements are in phase (Fig. <ref>), but the 2019 measurements show a larger amplitude of their modulation.
This indicates an evolving feature modulating the radial velocity but located at the same longitude in 2016 and 2019.
The values are summarised in Table <ref>.
§.§ Emission line variability
In this section, we present the analysis of emission lines tracing either the accretion funnel flow, here the Balmer lines <cit.>, or the accretion shock such as the CaII infrared triplet (IRT), the HeI D3 <cit.>, or the HeI at 1083 nm lines.
For each line, we analysed the profile variability, their periodicity (except for the 2019 ESPaDOnS data set which does not cover a sufficient time span), and the correlations of these variabilities.
Most of the analyses of this section were performed using [<https://github.com/pouillyk/PySTELLA>], a Python tool for SpecTral Emission Lines (variabiLity) Analysis.
§.§.§ Balmer lines
The Balmer lines are partly formed in the accretion funnel flow and are thus tracing the magnetospheric accretion process.
In this work, we focussed on Hα, Hβ, and Hγ, which we corrected for the photospheric contribution using the moderately active M dwarf HD 42581 as a template <cit.>, broadened to the vsin i of EX Lup.
The 2016 and 2019 profiles are shown in Fig. <ref> and Fig. <ref>, respectively.
On both data sets, the profiles are composed of a broad and a narrow component, both highly variable.
Furthermore we can notice a flux depletion, going below the continuum, around +200 , and extending up to +300 .
This behaviour is characteristic of the so-called inverse P Cygni (IPC) profiles, the red-shifted absorption forming due to infalling material.
The 2D-periodograms, consisting of a Lomb-Scargle periodogram computed in each velocity channel, of the 2016 lines are presented in Fig. <ref>, and show a periodic signal along the whole Hβ and Hγ lines consistent with the rotation period of the star, with a false alarm probability <cit.> of 0.04 and 0.01, respectively.
The Hα line is showing this signal in a less continuous way with a much higher FAP (0.25).
The symmetric signal around 0.9 d^-1 is the 1-day alias, a spectral leakage of the Fourier transform that reconstructs, together with the real period, the observation sampling of approximately one observation per day.
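A 2D-periodogram of this kind can be computed channel by channel with astropy; the sketch below uses placeholder arrays for the dates and residual profiles and only illustrates the procedure, including the FAP estimate of a peak.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder arrays: hjd are the observation dates, profiles the residual line
# profiles interpolated on a common velocity grid (n_obs x n_velocity_channels)
n_obs, n_vel = 11, 200
hjd = np.sort(np.random.uniform(0.0, 16.0, n_obs)) + 2457548.0
profiles = np.random.normal(1.0, 0.05, (n_obs, n_vel))

freq = np.linspace(0.05, 0.95, 500)          # d^-1, brackets 1/7.417 = 0.135 d^-1
power2d = np.empty((freq.size, n_vel))
for j in range(n_vel):
    power2d[:, j] = LombScargle(hjd, profiles[:, j]).power(freq)

# False-alarm probability of the highest peak in a given channel (e.g. the line centre)
ls = LombScargle(hjd, profiles[:, n_vel // 2])
print(power2d.shape, ls.false_alarm_probability(ls.power(freq).max()))
```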
To separate the various parts of the line profile undergoing different variability patterns, we computed the auto-correlation matrices of each line.
This tool consists of computing a linear correlation coefficient (here a Pearson coefficient) between the velocity channels of the line.
A correlated region (close to 1) indicates a variability dominated by a given physical process.
An anti-correlated region (close to -1) indicates that the two regions are affected in opposite ways by the same physical process or, at least, by linked physical processes.
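The analyses themselves were performed with PySTELLA (see above); in practice the matrix reduces to a single Pearson-correlation call on the time series of each velocity channel, as sketched below with a placeholder profile array.

```python
import numpy as np

# profiles: residual line profiles on a common velocity grid (n_obs x n_velocity_channels);
# placeholder array here
profiles = np.random.normal(1.0, 0.05, (11, 200))

# Pearson correlation coefficient between every pair of velocity channels
corr_matrix = np.corrcoef(profiles, rowvar=False)      # (n_vel, n_vel)

# A correlation matrix between two different lines is obtained in the same way, e.g.
# np.corrcoef(np.hstack([line1, line2]), rowvar=False), read off in the off-diagonal block.
print(corr_matrix.shape)
```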
The auto-correlation matrices of 2019 Hα, Hβ, and Hγ lines are shown in Fig <ref>.
Here again, Hα is showing a different behaviour than Hβ and Hγ.
Hα shows three main correlated regions between -100 and +100 , corresponding to the core of the line, around +150 , corresponding to a slight emission excess in the IPC profile (occurring around HJD 2 458 634.95 and 2 458 641.95), and around +250 , corresponding to the IPC profile itself.
The two latter regions are anti-correlated between them, but the ∼+150 region is also slightly anti-correlated with the line center.
This means that this region might be a broadening of the BC invoked by <cit.>, causing a global decrease of the line (and so a anti-correlation with the whole line profile).
Hβ and Hγ are showing the same correlation around the line centre and +350 , without significant anti-correlation around +150 , and a moderate anti-correlation of the profile with a region around +350 which seems to be an artefact as it is located in the continuum.
§.§.§ HeI D3 587.6 nm
The HeI D3 lines of 2016 and 2019 data sets are presented in Fig. <ref> and are composed of a narrow component (NC) only, extending from approximately -35 to +50 , without significant BC.
The NC is formed in the post-shock region of the accretion spot <cit.>, and can thus trace the accretion close to the stellar surface.
One can note the strong variability of this NC with a maximum reached around ϕ=0.6 [HJD 2 457 548.91, 2 458 645.88, and 2 458 646.87].
The auto-correlation matrices shown in Fig. <ref> confirm that this region is formed by one physical process, and the periodicity of the 2016 NC's variation, consistent with the stellar rotation period (see Fig. <ref>, FAP=0.06), indicates that this component is tracing an accretion shock at the stellar surface.
We thus performed a fit of the HeI D3 NC's radial velocity following the method described in <cit.> to recover the emitting region's location.
The results are shown in Fig. <ref> and summarised below:
* V_ flow = 7.091_-1.03^+2.00 ,
* V_ rot = 3.85 ± 5.0 ,
* dϕ = 0.10_-0.11^+0.15,
* θ = 12.05_-6.7^+48.00 ^∘,
* α = 55.40_-16.36^+20.00 ^∘,
where V_ flow is the velocity of the material in the post-shock region, V_ rot the equatorial velocity, 0.5+dϕ the phase where the emitting region is facing the observer, θ the colatitude of the spot, and α = 90^∘-i, i being the inclination of the rotation axis.
This means that the emitting region is located at ϕ = 0.6, 70^∘ latitude.
These results are consistent within uncertainties with the measurements by <cit.>, who give for the HeI emitting region[Values estimated from their Fig. 7.] a latitude of 60±25^∘, longitude 40±5^∘, meaning ϕ = 0.1±0.01.
However, the authors used the first date of their observations as T0; translated to our ephemeris (Eq. <ref>), this yields ϕ = 0.7.
§.§.§ CaII infrared triplet
Like the HeI D3 NC, the CaII IRT NC is formed in the post-shock region; we thus studied these lines as well.
The three components of this triplet show identical shapes and variability; we thus focussed on the one located at 854.209 nm.
The 2016 and 2019 residual line profiles are shown in Fig. <ref> and Fig. <ref>, respectively.
The two sets of lines show an IPC profile around 200 , which is, for the 2016 line, periodic with the stellar rotation period (see the 2D-periodogram in Fig. <ref>, FAP=0.05).
The 2016's auto-correlation matrix (Fig. <ref>) exhibits several correlated regions: from -130 to -60, -50 to -10, +50 to +100, +110 to +160, and +170 to +200 .
However, these regions are less numerous on the 2019 matrix (Fig. <ref>), with only three correlated regions from -40 to +40, +80 to +130, and +150 to +200 , but we retrieve the correlated region around the IPC profile in both matrices, which is anti-correlated with the NC in 2019.
This is consistent with the smaller (larger) variability of the NC (BC) observed in 2016 compared to 2019, showing the different origins of the two components and a small change in the accretion pattern between the two epochs.
§.§.§ HeI 1083 nm
The only accretion tracer in emission in EX Lup's SPIRou observations is the HeI line at 1083 nm.
The profiles, the 2D-periodogram and the auto-correlation matrix are shown in Fig. <ref>.
The profiles seem composed of two peaks, blue- and red-shifted around -50 and +100, and two absorptions, blue- and red-shifted at higher velocities (-150 and +200).
The red-shifted peak and the two absorptions display significant variability and seem modulated on the stellar rotation period with FAPs reaching 0.001, 0.01, and 0.02 for the red-shifted peak, the blue- and the red-shifted absorption, respectively.
The auto-correlation matrix revealed a more complex decomposition.
Indeed, if the four substructures seen in the profiles are represented, it seems that the two absorptions can both be separated in two regions, from -250 to -160 and -160 to -110 for the blue-shifted absorption, and from +130 to +170 and +210 to +270 for the red-shifted absorption.
Furthermore, the main peak at ∼+100 is anti-correlated with the most blue- and red-shifted regions only.
This can be interpreted as follows: the blueshifted absorption, probably a P-Cygni profile traditionally ascribed to a wind, is also associated with a redshifted emission excess, producing the two substructures seen in the redshifted absorption.
The IPC profile, as the opposite physical phenomenon, is also associated with a blueshifted emission excess, responsible for the two substructures in the blueshifted absorption.
§.§.§ Correlation matrices ESPaDOnS
As the various lines studied trace different regions of the accretion process, we can compute correlation matrices between two different lines to analyse the link between the different regions identified from the auto-correlation matrices.
The correlation matrices of the 2016 and 2019 lines are presented in
Appendix <ref>.
In 2016, the Hα line centre is correlated with the HeI D3 and the CaII IRT NCs, and slightly anti-correlated with the region of CaII IRT IPC profile.
The latter is also strongly anti-correlated with the HeI D3 NC.
The region of the Hα's IPC profile is also anti-correlated with the HeI D3 NC, and correlated with the region of the CaII IRT IPC profile.
In 2019 matrices, we observe the same behaviour between the NCs and IPC regions of the various lines, but both the correlation and anti-correlation coefficients are stronger.
§.§ Magnetic field
In this section, we present the magnetic analysis of EX Lup.
This was done at two scales: the large scale using the Zeeman-Doppler Imaging technique <cit.>, and the small scale using the Zeeman intensification of photospheric lines.
§.§.§ Large-scale
In this section, we used the Least-Squares Deconvolution method <cit.> to study the large-scale magnetic field.
This method allows us to increase the S/N of the Stokes I (unpolarised) and Stokes V (circularly polarised) profiles, by using as many photospheric lines as possible.
To compute the LSD profiles we used the [<https://github.com/folsomcp/LSDpy>] Python implementation.
We normalised our LSD weights using an intrinsic line depth, a mean Landé factor and a central wavelength of 0.2, 1.2, and 500 nm (respectively) for ESPaDOnS, and 0.1, 1.2, and 1700 nm for SPIRou observations.
The photospheric lines were selected from a mask produced using the same line list and atmospheric models as in Sect. <ref>, and removing the emission lines, the telluric and the heavily blended lines regions using the [<https://github.com/folsomcp/specpolFlow>] Python package.
About 12 000 lines were used for ESPaDOnS, and 1600 for SPIRou observations.
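The actual profiles were computed with LSDpy; the following stripped-down sketch only illustrates the weighted least-squares problem that LSD solves (nearest-bin line weighting, no regularisation beyond a tiny ridge), with the observed spectrum and line mask arrays left as assumptions.

```python
import numpy as np

C_KMS = 2.998e5

def lsd_profile(wl, spec, sig, line_wl, line_weight, v_grid):
    """Schematic LSD: model the line-depth spectrum as M @ Z (the mean profile Z
    copied onto every mask line with weight w) and solve the weighted least-squares
    problem Z = (M^T S2 M)^-1 M^T S2 y.  wl in nm, v_grid in km/s."""
    y = 1.0 - spec                               # line depths below the continuum
    dv = v_grid[1] - v_grid[0]
    M = np.zeros((wl.size, v_grid.size))
    for lw, w in zip(line_wl, line_weight):
        v = C_KMS * (wl - lw) / lw               # pixel velocities relative to this line
        j = np.rint((v - v_grid[0]) / dv).astype(int)
        ok = (j >= 0) & (j < v_grid.size)
        M[ok, j[ok]] += w
    S2 = np.diag(1.0 / sig**2)
    A = M.T @ S2 @ M + 1e-10 * np.eye(v_grid.size)   # tiny ridge keeps the sketch stable
    return np.linalg.solve(A, M.T @ S2 @ y)

# The Stokes V profile is obtained in the same way, with y taken from the Stokes V
# spectrum and mask weights proportional to the product of depth, Lande factor and wavelength.
```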
The LSD profiles of ESPaDOnS 2016, 2019, and SPIRou observations are presented in Figs. <ref>, <ref>, and <ref>, respectively, and the S/N of the profiles are available in Table <ref>.
Please note that the observation at HJD 2 457 551.87135 was removed from this analysis because of its low S/N.
A clear Stokes V signature is detected for all ESPaDOnS observations and most of the SPIRou observations.
Furthermore, one can note that the SPIRou Stokes V signatures are much weaker than the ESPaDOnS ones, and that the signal at ϕ∼0.6–0.7, which is maximal on ESPaDOnS, almost vanishes on SPIRou.
Directly from the LSD profiles, one can estimate the surface averaged longitudinal magnetic field <cit.>:
B_ℓ = -2.14 × 10^11 ∫ v V(v) dv / [λ g c ∫ (1-I(v)) dv],
where B_ℓ is in Gauss, v the velocity relative to the line centre, and λ and g the central wavelength and the mean Landé factor used for the LSD computation.
The integration was performed on a ±25 (±35 ) velocity range around the stellar rest frame to minimise the uncertainties without losing any magnetic information on ESPaDOnS (SPIRou) observations.
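For clarity, the first-moment estimate above can be written compactly as in the following sketch (Python with numpy; the velocity grid, profiles, and rest velocity are placeholders, and the normalisation values are those quoted for the ESPaDOnS LSD profiles):

import numpy as np

def longitudinal_field(v, stokes_i, stokes_v, lambda0_nm=500.0, g_eff=1.2,
                       v_rest=0.0, v_range=25.0):
    # Surface-averaged longitudinal field (in G) from the LSD Stokes I and V
    # profiles, integrated over +/- v_range (km/s) around the stellar rest velocity.
    mask = np.abs(v - v_rest) <= v_range
    vv, ii, sv = v[mask] - v_rest, stokes_i[mask], stokes_v[mask]
    c = 299792.458                                # speed of light in km/s
    numerator = np.trapz(vv * sv, vv)
    denominator = np.trapz(1.0 - ii, vv)
    return -2.14e11 * numerator / (lambda0_nm * g_eff * c * denominator)

# SPIRou values would use lambda0_nm=1700.0 and v_range=35.0 instead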
The B_ℓ curves are shown in Fig. <ref>.
The three curves are modulated with the stellar rotation period and show a maximum around ϕ=0.6.
In the optical frame, the modulation's amplitude is slightly larger in 2019 than in 2016, which is reminiscent of the radial velocity behaviour (see Sect. <ref>).
Finally, as expected from the Stokes V signatures, the SPIRou measurements are far weaker, and mostly negative.
The same analysis can be performed on the NC of the HeI D3 line (over a ±50 velocity range), which is formed close to the accretion shock and thus gives access to the magnetic field strength at the foot of the accretion funnel flow.
The B_ℓ obtained are shown in Fig. <ref> and range between -1.4 and -3.7 kG in 2016 and between -0.5 and -4.5 kG in 2019.
As for the radial velocity (see Sect. <ref>), the amplitude of the modulation is larger in 2019, but both curves are in phase, with a minimum reached at ϕ=0.6, which is also consistent with the emitting region's position obtained from the radial velocity modulation of the HeI D3 NC (see Sect. <ref>).
These measurements are also opposed in phase and sign with the LSD B_ℓ. This is a common behaviour on cTTSs <cit.> reflecting that LSD and HeI D3 NC are two diagnostics probing different regions of different polarities.
Finally, we performed a complete ZDI analysis of the three data sets using [<https://github.com/folsomcp/ZDIpy>]<cit.> with a recent implementation of Unno-Rachkovsky's solutions to polarised radiative transfer equations in a Milne-Eddington atmosphere <cit.>, presented in <cit.>, in order to use a more general description than the weak-field approximation used initially.
The magnetic topology is reconstructed in two steps: (i) the building of a Doppler image (DI), starting from a uniformly bright stellar disk and iteratively adding dark and bright features to fit the observed LSD Stokes I profiles, and (ii) fitting the LSD Stokes V profiles to derive the magnetic topology by adjusting its spherical harmonic components <cit.>, here with a maximum degree of harmonic expansion ℓ_ max=15.
We used as input parameter P_ rot=7.417 d, vsin i=4.4 .
Then we ran a grid of ZDI Stokes V reconstruction over the inclination range derived in the literature (20-45^∘) and used the minimal χ^2 obtained to set our input inclination value (i=30^∘).
The resulting maps are presented in Fig. <ref> and <ref> and the corresponding LSD profile fits are shown in Appendix <ref>.
The ESPaDOnS 2016's DI map shows a main dark spot around ϕ=0.55 and extending between 70 and 20^∘ latitude, which is consistent with HeI D3 emitting region, and a bright structure from ϕ=0.95 to 0.20 around the equator which is certainly a plage as even the accretion shock is dark at the photospheric level.
The magnetic topology, mostly toroidal (61%), is dominated by the dipolar (60%) and the quadrupolar (17%) components, with a mean magnetic field strength of 1.00 kG (B_ max = 3.12 kG), and a magnetic dipolar positive pole of 0.728 kG located at about 30^∘ latitude and 166^∘ longitude (ϕ=0.46).
The brightness map obtained from SPIRou revealed a main dark feature extended from ϕ = 0.0 to 0.25, and a bright plage around ϕ = 0.6, which is perfectly consistent with the 2016 map.
However, the magnetic topology seems less complex than in 2016, almost fully poloidal (99%) and more dominated by the dipole component (77%).
As expected, the recovered field strength is much smaller (⟨B⟩ = 0.131 kG, B_ max = 0.389 kG).
The dipole negative pole is located at about 23^∘ latitude, 329^∘ longitude (ϕ=0.91), with a -0.231 kG-strength.
Given the poor rotational phase coverage of the ESPaDOnS 2019 data set (ϕ∈[0.03,0.64]), we needed to guide the reconstruction instead of starting from a uniform map, to avoid an overly strong extrapolation over the missing phases.
We do not expect a similar brightness contrast between ESPaDOnS and SPIRou, but given the very similar spectroscopic behaviour of the 2016 and 2019 ESPaDOnS data sets (see Sect. <ref> and <ref>), we could expect similar features. We thus used the ESPaDOnS 2016 brightness map as input to guide the reconstruction.
To check this assumption, we reproduced the Stokes I profiles resulting from the 2016 reconstruction on the 2019 phases, yielding a consistent behaviour (reduced χ^2=1.1).
The obtained final brightness reconstruction shows a main dark feature, slightly shifted in phase compared to 2016 (ϕ≈0.5) and less extended in latitude.
A polar bright feature is also located around ϕ=1, which is reminiscent of the SPIRou's maps.
Concerning the magnetic reconstruction, we expect a similar topology of the magnetic field between SPIRou and ESPaDOnS 2019, with different magnetic strengths as pointed out by the B_ℓ analysis.
We thus used the SPIRou magnetic maps as input to guide the reconstruction.
The resulting topology is less poloidal-dominated than SPIRou (91%), and the dipolar component occupies a smaller fraction of this poloidal field (65%) with a similar contribution of the quadrupolar and octupolar components (about 15%).
Surprisingly, the mean magnetic field strength is similar to that of 2016 (1.08 kG), but the maximum field strength is much higher (4.79 kG), as is the strength of the dipolar pole, B_ dip=1.87 kG, which is located at the same position (ϕ =0.43 and 30^∘ latitude).
§.§.§ Small-scale
Even if the ZDI analysis gives access to the magnetic topology, it neglects the small-scale magnetic field, which contains a major part of cool stars’ magnetic energy.
This is analysed by studying the change it induces in the shape of magnetically sensitive lines <cit.>, a technique called Zeeman intensification.
To perform this analysis, we used the algorithm of <cit.>, performing a Markov Chain Monte Carlo (MCMC) sampling, using the library <cit.> on a grid of synthetic spectra produced by the code, a polarised radiative transfer code described by <cit.>.
This grid is computed from the line lists used in Sect. <ref>, and atmospheric models <cit.>.
We parametrized a uniform radial magnetic field as a sum of magnetic field strengths ranging from 0 to 6 kG, in steps of 2 kG, weighted by filling factors representing the fraction of the stellar surface covered by each field strength.
For ESPaDOnS observations, as in previous studies <cit.>, we used the 963.5–981.2 nm region.
This region contains a group of TiI lines with different magnetic sensitivity (g_ eff summarised in Table <ref>), allowing us to disentangle the effect of the magnetic field on the equivalent widths from the effect of any other parameters, such as the TiI abundance.
However, this region also contains many telluric lines which are superimposed on the TiI lines from the stellar spectra and which need to be removed from the observed spectrum to perform the magnetic analysis.
To do so, we used the molecfit package <cit.>, developed to model and remove telluric lines from spectra obtained with instruments at the European Southern Observatory, and which can be used on spectra from any instrument.
Finally, as EX Lup is an accreting star, showing a signature of an accretion shock, the veiling might disturb the inference results, in particular the abundance, the vsin i and the radial tangential macroturbulent velocity, v_ mac.
We thus estimated the veiling using the magnetic null line of the TiI multiplet at 974.36 nm by performing a χ^2 minimisation with synthetic spectra, leaving the abundance, the vsin i, and the v_ mac as free parameters, and adding a fractional veiling defined as:
I_veil = (I + r)/(1 + r),
where I_ veil is the veiled spectrum, I the spectrum without veiling, and r the fractional veiling.
We stress that deriving a precise value of the veiling is outside the scope of the present study; our aim was only to obtain an estimate for the EX Lup mean spectrum at a wavelength close to the TiI lines, in order to minimise its effect on the other inferred parameters.
For the 2016's ESPaDOnS mean spectrum, the minimal χ^2 is reached for r=0.5, vsin i=4.9 , v_ mac=1.9 , and a TiI abundance of -7.2.
For the mean spectrum of the 2019's ESPaDOnS observations, we obtained r=0.48, vsin i=5.0 , v_ mac=2.5, and a TiI abundance of -7.2.
We will thus assume the values obtained for r and v_ mac, and use the others as initial guesses for the MCMC sampling.
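A minimal sketch of this veiling estimate is given below (Python with numpy; the synthetic spectra and the full χ^2 minimisation over vsin i, v_ mac and the abundance are not reproduced, and the toy line profile is purely illustrative):

import numpy as np

def apply_veiling(model_flux, r):
    # Fractional veiling of a continuum-normalised spectrum: I_veil = (I + r)/(1 + r).
    return (model_flux + r) / (1.0 + r)

def chi2(obs_flux, obs_err, model_flux, r):
    # Chi-square of the veiled synthetic spectrum against the observation.
    veiled = apply_veiling(model_flux, r)
    return np.sum(((obs_flux - veiled) / obs_err) ** 2)

# toy example: recover the veiling of a synthetic observation by a grid search
rng = np.random.default_rng(1)
model = 1.0 - 0.4 * np.exp(-np.linspace(-3.0, 3.0, 101) ** 2)   # toy line profile
obs = apply_veiling(model, 0.5) + rng.normal(0.0, 0.01, model.size)
err = np.full(model.size, 0.01)
r_grid = np.linspace(0.0, 1.0, 101)
best_r = r_grid[np.argmin([chi2(obs, err, model, r) for r in r_grid])]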
We assumed a multicomponent model given by:
S=∑f_i S_i,
where f_i are the filling factors, meaning the fraction of the stellar surface covered by a field strength B_i, and S_i the synthetic spectra of the corresponding magnetic field strength.
The averaged magnetic field is thus given by:
⟨ B ⟩ = ∑f_i B_i.
To set the number of filling factors to use, we iteratively added filling factors, with a 2-kG step in the corresponding magnetic field strength, and used the Bayesian information criterion <cit.> to include only the filling factors that significantly improve the fit.
The suitable solutions are components of 0, 2, 4 kG for ESPaDOnS 2016 and SPIRou observations, and 0, 2, 4, 6, 8 kG for ESPaDOnS 2019.
The free parameters of the analysis are thus the following: f_i, vsin i, v_r, and the TiI abundance, for which uniform priors were adopted.
Finally, we used an effective sample size of 1000 <cit.>.
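The multicomponent model and the model-selection step can be summarised by the following sketch (Python with numpy; the synthetic spectra grids and the MCMC sampler are not reproduced, and the BIC expression assumes Gaussian uncertainties):

import numpy as np

def composite_spectrum(spectra_by_field, filling_factors):
    # Weighted sum S = sum_i f_i S_i of synthetic spectra computed for the
    # discrete field strengths (0, 2, 4, ... kG).
    return sum(f * np.asarray(s) for f, s in zip(filling_factors, spectra_by_field))

def mean_field(field_strengths_kg, filling_factors):
    # <B> = sum_i f_i B_i, in kG.
    return float(np.dot(filling_factors, field_strengths_kg))

def bic(chi2_value, n_params, n_points):
    # Bayesian information criterion used to decide whether an additional
    # filling factor significantly improves the fit.
    return chi2_value + n_params * np.log(n_points)

# e.g. components of 0, 2 and 4 kG with filling factors summing to one
b_grid = [0.0, 2.0, 4.0]
f = [0.35, 0.40, 0.25]
print(mean_field(b_grid, f))      # 0.35*0 + 0.40*2 + 0.25*4 = 1.8 kG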
The resulting line fit and magnetic field strength posterior distributions are presented in Fig. <ref>.
The 2016's magnetic field strength (3.08 ± 0.04 kG) is consistent with 2019 within uncertainties (3.16 ± 0.05 kG).
The inferences of all parameters are summarised in Table <ref>.
For SPIRou observations, we used here again a set of TiI lines around 2200 nm (see Table <ref>).
Unfortunately, this wavelength region does not contain any magnetically null line deep enough at our S/N to perform the veiling study we have done on ESPaDOnS observations.
Only a visual inspection of the line at 974.4 nm is possible, indicating that at this wavelength the parameters obtained for ESPaDOnS are consistent with the SPIRou observations.
We thus used the veiling values obtained for the 2019 ESPaDOnS data set, fixed the vsin i at the literature value, and let the inference compensate for any resulting error through the non-magnetic parameters.
The inferred parameters are summarised in Table <ref>, and the line fit and inferred magnetic field strength are shown in Fig. <ref>.
The magnetic field strength recovered (2.00±0.03 kG) is significantly lower than the values found for ESPaDOnS observations.
As highlighted by <cit.>, the small-scale field might be overestimated when using optical wavelength.
In our case, a second explanation might come from the high v_ mac obtained, probably needed to compensate for an underestimated veiling, which lowers the effect of the magnetic field.
§ DISCUSSION
This work aimed at characterising the accretion process of the prototypical EXor, EX Lup, in quiescence, together with its magnetic field at small and large scales.
This study confirms that the typical magnetospheric accretion process of cTTSs is ongoing on this system, which seems stable between the two epochs studied (2016 and 2019), with a main accretion funnel flow connecting the disc to the stellar surface.
This produces the IPC profile observed in the H lines studied and their modulation with the stellar rotation period.
The accretion shock at the stellar surface produces the emission of the NC of the HeI D3 and CaII IRT lines, which are modulated with the stellar period and anti-correlated with the IPC profiles as expected in the magnetospheric accretion scheme.
This is consistent with the maximum IPC profile occurring at the same phase as the HeI D3 emitting region (ϕ≈ 0.6), indicating a funnel flow aligned with the accretion shock and inherently meaning that the truncation radius is located at the stellar corotation radius.
As expected, this phase is also associated with an extremum of the B_ℓ, for both LSD and HeI D3 measurements, and with the ESPaDOnS ZDI brightness and magnetic topology reconstructions, showing the connection between accretion and magnetic field.
To investigate the truncation radius r_ mag, we used the expression given by <cit.>:
r_ mag/R_⋆ = 2 m_ s^{2/7} B_⋆^{4/7}Ṁ_ acc^{-2/7} M_⋆^{-1/7} R_⋆^{5/7},
where the Mach number m_s≈1, B_⋆ is the equatorial magnetic field strength <cit.> in units of 140 G, Ṁ_ acc is the mass accretion rate in units of 10^-8 M_⊙ yr^-1, M_⋆ the stellar mass in units of 0.8 M_⊙, and R_⋆ the stellar radius in units of 2 R_⊙.
As the system is in quiescence, we used the mass accretion rate measured before the 2022 outburst, given by <cit.> as 1.8×10^-9 M_⊙ yr^-1.
The stellar mass and radius are given by <cit.> (0.6 M_⊙ and 1.6 R_⊙), and the magnetic obliquity is obtained from the ZDI analysis (30^∘ for ESPaDOnS, 23^∘ for SPIRou, see Sect. <ref>).
The dipole strengths needed to obtain r_ mag = r_ corot = 8.5±0.5 R_⋆ are thus
2.10±0.46 kG and 1.98±0.43 kG for ESPaDOnS and SPIRou, respectively.
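These numbers follow from a direct evaluation of the scaling relation above; the sketch below (Python) restates that evaluation with the quiescent parameters quoted in the text. The conversion from the polar dipole strength to an equatorial field (a factor of two for an aligned dipole) and the neglect of the obliquity correction are simplifications of this sketch, so the value it returns is only indicative.

def truncation_radius(b_eq_gauss, m_acc, m_star, r_star, mach=1.0):
    # Truncation radius in stellar radii from the scaling relation above;
    # the normalisations (140 G, 1e-8 Msun/yr, 0.8 Msun, 2 Rsun) follow the text.
    return (2.0 * mach ** (2.0 / 7.0)
            * (b_eq_gauss / 140.0) ** (4.0 / 7.0)
            * (m_acc / 1.0e-8) ** (-2.0 / 7.0)
            * (m_star / 0.8) ** (-1.0 / 7.0)
            * (r_star / 2.0) ** (5.0 / 7.0))

# quiescent EX Lup parameters quoted in the text; the factor of two converting
# the polar dipole strength (2.10 kG) to an equatorial field is an assumption
r_mag = truncation_radius(b_eq_gauss=2100.0 / 2.0, m_acc=1.8e-9,
                          m_star=0.6, r_star=1.6)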
Only an estimate of the truncation radius can be obtained because we do not have a precise value of the dipole strength.
Indeed, the B_ℓ measurements in the HeI D3 line and Zeeman intensification values also contain the higher-order components of the magnetic field, and the ZDI spherical harmonic decomposition from LSD might include flux cancellation lowering the large-scale strength obtained.
Even if the ZDI results on the 2019 datasets indicate a topology largely dominated by the dipole component, using these values should yield an overestimation of r_ mag.
The results using our magnetic field measurements are summarised in Table <ref>.
The values obtained from the B_ℓ and the small-scale field are not consistent with the corotation radius, as expected, except for the SPIRou measurement which can be explained by the smaller recovered field in the infrared domain.
However, the optical ZDI values yield truncation radii consistent with the corotation radius, due to the dipole-dominant magnetic topology of the system allowing us to recover a good estimate of the dipole pole strength using ZDI.
Here again, the SPIRou value yields a lower truncation radius due to the lower magnetic field strength obtained.
The difference in the results of the various magnetic analyses between the optical and infrared frames in 2019 needs to be discussed.
From the B_ℓ computed from the LSD profiles, the maximum values obtained are 911±26 and 64±20 G, for ESPaDOnS and SPIRou observations, respectively.
These maxima both occurred around ϕ=0.6, where the Stokes V signature is maximal with ESPaDOnS but almost vanishes with SPIRou; it is thus not surprising to find a large discrepancy here.
Concerning the minimum B_ℓ values, they reach -155±11 and -178±13 G, for ESPaDOnS and SPIRou observations, respectively, both around ϕ=0.1 and are thus consistent.
This behaviour is also visible on the ZDI reconstruction, where the strong positive radial magnetic field region at phase 0.6 on ESPaDOnS map completely vanishes on SPIRou map.
The two qualitative explanations we can provide are the following: (i) this maximum value is located in an optically dark region of the photosphere, which is consistent with simultaneous optical photometry (Kóspál et al., in prep.), and thus less contrasted in the SPIRou domain.
The magnetic contribution of this region to the Stokes V signal might be thus lowered in the infrared domain if the smaller-scale negative field is obscured in the optical domain.
And (ii) the two wavelength domains trace different heights in the photosphere, so such a difference might be an indication of a vertical structure of the magnetic field.
Concerning the two ZDI reconstructions obtained from the two ESPaDOnS epochs, they point to similar brightness and radial magnetic field maps, even if the overall topology has evolved from a toroidal- to a poloidal-dominated state, as seen for other objects <cit.>.
The strong positive radial field spot, associated with the dark spot of the stellar brightness and with the dipole pole, is also consistent with the modulation of HeI D3 NC radial velocity and B_ℓ.
However, the latter is pointing to an accretion shock associated with a region with a strong negative field.
Given the large amplitude of the HeI D3 NC radial velocity variation compared to EX Lup vsin i, the emitting region has probably a very small extent, meaning that its magnetic field information is lost at the photospheric level, explaining this disparity <cit.>.
Despite this stable pattern between 2016 and 2019, we observed some disparities in some parameters.
Even if the radial velocity and the B_ℓ (from LSD and HeI D3) modulations are well in phase, their amplitudes are slightly larger in 2019.
While the stronger magnetic field obtained in the optical frame, at small and large scales, explains the larger amplitude of the B_ℓ modulation, the accompanying effect on the radial velocity points to a modulation by the hotspot.
This is not a surprising behaviour, but the consistency between the ESPaDOnS and SPIRou radial velocity measurements is.
Indeed, the stellar activity effect on the photospheric lines, producing the apparent radial velocity modulation, is a wavelength-dependent phenomenon; an amplitude consistency between the radial velocity measurements in the optical and infrared frames is thus not expected.
To investigate in our data sets the relation between the stellar activity and the radial velocity modulation, we compared the latter to an activity indicator, the bisector inverse slope <cit.>.
The BIS was computed from the LSD profiles presented in Fig. <ref>, <ref>, and <ref>, and is defined as the difference between the mean velocity of the bisector at the top and at the bottom of the line.
On ESPaDOnS profiles, the first 15%, which contain the continuum and the wings, were ignored, as well as the last 15%, where the noise, or an activity signature splitting the profile into two parts, can affect the computation.
The top and bottom regions used to compute the BIS are the 25% top and bottom parts of the remaining profile.
For SPIRou profiles, the same conditions were used, except that we had to ignore the first 25% of the profiles as they were affected by stronger wings.
In the case where the radial velocity modulation is only induced by the stellar activity, the line deformation, indicated by the BIS, is completely responsible for this modulation.
This means that a strong linear correlation should appear between the BIS and the radial velocity with, in a perfect situation, a -1-slope and a BIS = 0 at the mean velocity.
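A sketch of the BIS computation is given below (Python with numpy); the depth ranges are simplified with respect to the percentages quoted above, and a toy symmetric profile, for which the BIS is close to zero, is used as illustration.

import numpy as np

def bisector_inverse_slope(v, profile, wing_cut=0.15, core_cut=0.15, frac=0.25):
    # BIS of an absorption profile: mean bisector velocity near the top of the
    # line minus the mean near the bottom, after discarding the shallowest
    # (continuum/wings) and deepest (core) depth ranges.
    depth = 1.0 - profile
    core = int(np.argmax(depth))
    levels = np.linspace(depth.max() * wing_cut, depth.max() * (1.0 - core_cut), 50)
    bisector = []
    for lev in levels:
        v_blue = np.interp(lev, depth[:core + 1], v[:core + 1])
        v_red = np.interp(lev, depth[core:][::-1], v[core:][::-1])
        bisector.append(0.5 * (v_blue + v_red))
    bisector = np.asarray(bisector)
    n = int(frac * levels.size)
    return bisector[:n].mean() - bisector[-n:].mean()   # top minus bottom

# toy symmetric profile: the BIS is ~0; activity distorts the line and shifts it
v = np.linspace(-30.0, 30.0, 201)
profile = 1.0 - 0.4 * np.exp(-0.5 * (v / 8.0) ** 2)
bis = bisector_inverse_slope(v, profile)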
The BIS versus radial velocity plots are presented in Appendix <ref>.
For each data set, the Pearson correlation coefficient indicates a strong anti-correlation (2016: r=-0.81, p-value=0.005; ESPaDOnS 2019: r=-0.95, p-value=0.004; SPIRou: r=-0.72, p-value=0.05).
However, the optical measurements show much shallower slopes (-0.51±0.13 and -0.43±0.07 in 2016 and 2019, respectively), and an intercept far from the expected mean velocity (-0.34±0.10 and -0.08±0.06).
Only the infrared measurements are consistent (slope: -0.68±0.27, intercept: -1.37±0.25), but only thanks to the larger uncertainties, probably due to the lower correlation.
Furthermore, the much lower BIS obtained (exclusively negative) in the infrared compared to the optical frame, indicates chromatic effects that are not observed in the radial velocity modulation.
Stellar activity is thus probably dominating the radial velocity modulation, but another effect, such as a Doppler shift induced by a companion, cannot be completely excluded and needs further investigation, which is unfortunately beyond the scope of the present work.
Finally, we would like to address the strong magnetic field recovered that drives the accretion process on EX Lup.
With the B_ℓ reaching 4.2 kG in the accretion shock, a dipole strength of 1.9 kG within a large-scale field of 1 kG reaching 4.8 kG locally, and a small-scale field exceeding 3 kG in the optical domain, EX Lup has one of the strongest magnetic fields among cTTSs, and the strongest among those with a dipole-dominated topology.
Such a configuration can set the suitable conditions invoked by <cit.> for their hypothesis of episodic accretion caused by the magnetospheric accretion process itself.
Indeed, with such a dipolar strength, the magnetospheric radius is outside but close to the corotation radius (see Table <ref>).
In such a situation, instability arises due to the fact that angular momentum is transferred from the star to the disc (the so-called propeller regime), without being enough to drive an outflow.
This magnetic interaction only prevents accretion, piling up the gas in the inner disc and increasing its pressure, thus forcing the inner edge of the disc to move inward and cross the corotation radius, allowing accretion to occur.
Once the gas reservoir has been accreted, the inner edge of the disc moves outward and another cycle starts <cit.>.
This phenomenon is different from the magnetospheric inflation reported for other cTTSs <cit.>, where a significant difference between the truncation and corotation radii induces a torsion of the magnetic field lines producing a toroidal field that inflates the whole magnetosphere.
This inflation goes up to an opening of the magnetic field lines that produces a magnetospheric ejection <cit.> before reconnecting to the disc.
Such a phenomenon is recurrent and produces an accretion variability as well.
Still, the time scale of such a cycle is of the order of the stellar rotation period, far shorter than the gas pile-up invoked by <cit.>, and no signature of magnetospheric inflation or ejection was detected in EX Lup.
However, we would like to stress that no accumulation of matter at the magnetospheric radius was detected either; EX Lup only seems to satisfy the initial conditions of the latter authors' theory.
In addition, the BIS suggests another source of radial velocity variation, such as a companion.
Tidal interactions <cit.> or thermal instabilities in the disc <cit.> induced by a companion are also existing hypotheses for EXor behaviour.
Moreover, recent works by <cit.> favour the latter instability as the origin of the episodic accretion of FUor objects.
Finally, the disc itself is not investigated in this work; instabilities in the disc thus remain hypotheses that cannot be excluded <cit.>.
§ CONCLUSIONS
EX Lup is the prototypical EXor-type object whose recurrent bursts and outbursts were previously studied in detail using spectroscopy and photometry.
However, until this work, no information about its magnetic field was derived, despite its key role in the accretion process of cTTS and its possible origin for episodic accretion.
Here we provide the first spectropolarimetric time-series study of EX Lup, covering two epochs (2016 and 2019) and two wavelength domains (optical and infrared), making it the first EXor whose magnetic field has been studied.
We confirmed an ongoing magnetospheric accretion process as seen on many cTTSs.
It is represented by an accretion funnel flow and an accretion shock corotating with the stellar surface and driven by a kG dipolar magnetic field.
The funnel flow seems aligned with the accretion shock, itself located near the magnetic dipole pole, which is consistent with the stable pattern observed between the epochs studied.
The magnetic field of EX Lup shows some disparities between the wavelength domains, being much weaker in the infrared.
This can be understood as a wavelength dependency of the parameters studied, but can also point to a vertical structure of the magnetic field, as different wavelengths are tracing different heights in the photosphere.
An expected small-to-large-scale effect is also observed, through the different field strengths recovered using ZDI and Zeeman intensification, but also through the opposite polarity of the field associated with the accretion shock compared with the one recovered using LSD.
This indicates a very small accretion shock, as expected from the low mass accretion rate in quiescence and the large radial velocity variation of the emitting region.
Finally, the multi-kG field recovered for EX Lup is pointing to a magnetospheric radius being close but outside the corotation radius.
This configuration is suitable for disc instabilities induced by the magnetic field that yield accretion cycles.
These cycles might explain the accretion bursts observed on EX Lup, suggesting an inherently episodic magnetospheric accretion process.
However, a definite identification of the origin of EXor behaviour is beyond the scope of this paper, and other theories implying the disc itself or a companion cannot be excluded.
We would like to warmly thank Oleg Kochukhov for useful discussions about the magnetic field of EX Lup, as well as Colin P. Folsom for his help in using the new version of his package.
The package is available at <https://github.com/folsomcp/specpolFlow>.
The package is available at <https://github.com/pouillyk/PySTELLA>
This research was funded in whole or in part by the Swiss National Science Foundation (SNSF), grant number 217195 (SIMBA). For the purpose of Open Access, a CC BY public copyright licence is applied to any Author Accepted Manuscript (AAM) version arising from this submission.
Based on observations obtained at the Canada–France– Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The observations at the Canada–France–Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site.
This work was also supported by the NKFIH excellence grant TKP2021-NKTA-64.
§ CROSS-CORRELATION MATRICES OF OPTICAL EMISSION LINES
Here we present the cross-correlation matrices of the ESPaDOnS emission lines studied in this work.
These matrices are discussed in Sect. <ref>.
§ STOKES I AND V FIT FROM ZDI RECONSTRUCTION
In this section, we show the fit of the LSD profiles by the ZDI reconstruction for the three data sets studied.
The Stokes I profiles are shown in Fig. <ref> and the Stokes V profiles in Fig. <ref>.
§ BIS VS. RADIAL VELOCITY
This appendix presents the analysis of the BIS and radial velocity correlation.
These results are discussed in Sect. <ref>.
|
http://arxiv.org/abs/2409.03095v1 | 20240904214849 | On Advanced Monte Carlo Methods for Linear Algebra on Advanced Accelerator Architectures | [ "Anton Lebedev", "Vassil Alexandrov" ] | math.NA | [ "math.NA", "cs.DC", "cs.NA", "G.1.3; G.3" ] |
On Advanced Monte Carlo Methods for Linear Algebra on Advanced Accelerator Architectures
1st Anton Lebedev
Institute for Theoretical Physics,
University of Tübingen,
Germany
Email: [email protected]
2nd Vassil Alexandrov
ICREA, Catalan Institution for
Research and Advanced Studies
Barcelona Supercomputing Centre, Spain
Email: [email protected]
September 9, 2024
============================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this paper we present computational experiments with the Markov Chain Monte Carlo Matrix Inversion ((MC)^2MI)
on several accelerator architectures and investigate their impact on performance and scalability of the method.
The method is used as a preconditioner, and the corresponding systems of linear equations are then solved with iterative methods such as the generalized minimal residual method (GMRES) or the stabilised bi-conjugate gradient method (BiCGstab).
Numerical experiments are carried out to highlight the benefits and deficiencies of both approaches and to assess their overall
usefulness in light of scalability of the method.
Monte Carlo Matrix Inversion, Scalable Hybrid Algorithms for Linear Algebra, accelerators
§ INTRODUCTION
Solving systems of linear algebraic equations (SLAE) in the form of Bx = b or inverting a matrix B is of unquestionable importance in many scientific fields. Iterative solvers are used widely to compute the solutions of these systems and such approaches are often the method of choice due to their predictability and reliability when considering accuracy and speed. They, however, may become prohibitive for large-scale problems as they can be very time consuming to compute. In the serial case and for dense matrices, the complexity of these methods is O(kn^2) for iterative methods and O(n^3) for direct methods if common elimination or annihilation schemes (e.g. Gaussian elimination, Gauss-Jordan methods) are employed <cit.>. Therefore, these algorithms often rely on preconditioners to speed up the computations and/or to ensure faster convergence.
The complexity of Monte Carlo (MC) methods is linear in matrix size <cit.>, <cit.>, and they can quickly yield a rough estimate of the solution
by sampling a random variable whose mathematical expectation is the desired solution. For some problems an estimate is sufficient or even favourable, due to the accuracy of the underlying data. Therefore, it should be pointed out, that Monte Carlo methods may be efficiently used as preconditioners.
Depending on the method used to compute the preconditioner, the savings and end-results vary. A very sparse preconditioner may be computed quickly, but it is unlikely to
greatly reduce the run time to solution. On the other hand, computing a rather dense preconditioner is computationally expensive and might be time or cost prohibitive. Therefore, finding a good preconditioner that is computationally efficient, while still providing substantial improvement to the iterative solution process, is a worthwhile research topic.
A variety of parallel Monte Carlo methods have been developed within the past 20 years. A comprehensive compendium of the Monte Carlo functions and strategies of parallelization can be found in <cit.>.
In this work we present an enhanced version of a SPAI (SParse Approximate Inverse) preconditioner that is based on parallel Monte Carlo methods presented in <cit.> and <cit.>. This new optimized version is compared against the previous one, taken as a baseline, as well as against
MSPAI, which is the main accepted deterministic algorithm for SPAI preconditioning. Our results show that the Monte Carlo-based algorithm can be used instead of MSPAI to reduce the computation time and resource usage while producing results with similar or better quality.
Also a scalability analysis is carried out, showing that the random patterns in the memory access have a strong influence on the performance of the algorithm. Further research, to solve this issues, is proposed within the context of quasi-Monte Carlo Methods.
The next section gives an overview of related work. Monte Carlo methods, and the specific matrix inversion algorithm that is discussed as a SPAI preconditioner, are presented in section <ref>. Section <ref> presents the parallel approach of the Monte Carlo and hybrid algorithms. Section <ref> shows the approach and methodology applied in the enhancement of the parallel implementations.
Sections <ref> and <ref> present corresponding results and analysis of the implementations. The conclusion <ref> summarises the results and outlines the future work.
§ RELATED WORK
Research efforts in the past have been directed towards optimizing the approach of
sparse approximate inverse preconditioners. Improvements to the computation of the Frobenius norm have been
proposed for example by concentrating on sparse pattern selection
strategies <cit.>, or building a symmetric preconditioner by
averaging off-diagonal entries <cit.>. Further, it has been
shown that the sparse approximate inverse preconditioning approach is also a viable
course of action on large-scale dense linear systems <cit.>. This is of
special interest to us, as the Monte Carlo method we are proposing in this paper is part
of a bigger family. It includes serial and parallel Monte Carlo algorithms for the
inversion of sparse, as well as dense matrices, and the solution of systems of linear
algebraic equations. The proposed Monte Carlo algorithm has been developed and enhanced
in the last decades, and several key advances in serial and parallel Monte Carlo
methods for solving such problems have been made <cit.>.
There has been an increased research interest in parallel Monte Carlo
methods for linear algebra in the past few years, and recent example is the Monte Carlo
Synthetic Acceleration (MCSA) developed through MCREX project at ORNL<cit.>.
Future work that deals with a parallel implementation of the presented algorithm is
being considered further in this section and in section <ref>.
In the past there have been differing approaches and advances towards a parallelisation
of the SPAI preconditioner. In recent years the
class of Frobenius norm minimizations that has been used in the original SPAI
implementation <cit.> was modified and is provided in a parallel SPAI
software package. One implementation of it, by the original authors of SPAI, is the
Modified SParse Approximate Inverse (MSPAI <cit.>).
This version provides a class of modified preconditioners such as MILU (modified ILU),
interface probing techniques and probing constraints to the original SPAI, apart from a
more efficient, parallel Frobenius norm minimization. Further, this package also
provides two novel optimization techniques. One option is using a dictionary in order to
avoid redundant calculations, and to serve as a lookup table. The second option is
an option to switch to a less computationally intensive, sparse QR
decomposition whenever possible. This optimized code runs in parallel, together with a
dynamic load balancing.
§.§ Using SParse Approximate Inverse as Preconditioner (SPAI)
The SPAI algorithm <cit.> is used to compute a sparse approximate inverse
matrix M for a given sparse input matrix B. This is done by minimizing ‖ BM-I‖_F.
The algorithm explicitly computes the approximate inverse, which is
intended to be applied as a preconditioner of an iterative method. The SPAI application
provides the option to fix the sparsity pattern of the approximate inverse a priori or
capture it automatically. Since the introduction of the original SPAI in 1996, several advances, building upon the initial implementation, have been made. Two newer implementations are provided by the original authors, the aforementioned MSPAI, and the highly scalable Factorized SParse Approximate Inverse (FSPAI <cit.>). The intended use of both differs depending on the problem at hand. Whereas MSPAI is used as a preconditioner for large sparse and ill-conditioned systems of linear equations, FSPAI is applicable only to symmetric positive definite systems of this kind. FSPAI is based around an inherently parallel implementation, generating the approximate inverse of the Cholesky factorization for the input matrix. MSPAI on the other hand is using an extension of the well-known Frobenius norm minimization that has been introduced in the original SPAI.
The algorithm attempts to solve a system of linear equations of the form Bx=b. Its input is a sparse, square coefficient matrix B. The right hand side vector b can either be provided by the user, or is arbitrarily defined by the software implementation. In the case of the SPAI application suite, if no right hand side vector is handed to the algorithm, it constructs one by multiplying matrix B with a vector consisting of all ones. In a general case, an input matrix B is passed to SPAI as a file. The program then computes a preconditioner using the Frobenius norm, afterwards it uses this intermediate result as an input to an appropriate solver.
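To illustrate the underlying idea (and only the idea, not the SPAI/MSPAI implementation itself), the Frobenius-norm minimisation decouples into one small least-squares problem per column of M once a sparsity pattern is prescribed. The sketch below (Python with numpy; the test matrix and the diagonal pattern are arbitrary choices) shows this column-wise formulation.

import numpy as np

def spai_fixed_pattern(B, pattern):
    # Column-wise Frobenius-norm minimisation: for each column k of M, solve the
    # small least-squares problem min || B[:, cols] m_k - e_k || restricted to the
    # rows of B touched by the allowed entries `cols` of that column.
    n = B.shape[0]
    M = np.zeros((n, n))
    for k in range(n):
        cols = np.asarray(pattern[k])
        rows = np.nonzero(np.abs(B[:, cols]).sum(axis=1))[0]
        e_k = (rows == k).astype(float)
        sol, *_ = np.linalg.lstsq(B[np.ix_(rows, cols)], e_k, rcond=None)
        M[cols, k] = sol
    return M

# e.g. a diagonal sparsity pattern for a small, diagonally dominant test matrix
rng = np.random.default_rng(0)
B = 4.0 * np.eye(6) + 0.5 * rng.random((6, 6))
M = spai_fixed_pattern(B, {k: [k] for k in range(6)})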
§ MONTE CARLO APPROACH
Monte Carlo methods are probabilistic methods that use random numbers to either
simulate a stochastic behaviour or to estimate the solution of a problem. They are good
candidates for parallelisation due to the fact that, in principle, many independent samples are
used to estimate the solution. These samples can be calculated in parallel, thereby
speeding up the solution finding process. The so designed and developed parallel Monte Carlo
methods possess the following main generic properties <cit.>: efficient distribution of the compute data, minimum communication during the computation
and increased precision being achieved by adding extra refinement computations. Consideration of all these properties naturally leads to scalable algorithms.
It has to be noted that the quality of the solutions obtained using a Monte Carlo method is dependent upon the availability
of independent (pseudo) random numbers, which is a concern in parallel environments.
§.§ Algorithm
The following procedure has been presented in <cit.> and allows to extend
the Monte Carlo algorithm for processing diagonally dominant matrices, that is used as the foundation for this work (c.f. <cit.>), to the case of general matrices <cit.> <cit.>.
Let us recall for simplicity the key details from <cit.>. We assume the general case where ‖ B‖ > 1, with ‖·‖ being an arbitrary matrix norm, and consider the splitting
B = B̂ - C,
where the off-diagonal elements of B̂ are the same as those of B, and the diagonal elements of B̂ are defined as b̂_ii=b_ii+α_i‖ B‖, choosing in most cases α_i> 1 for i=1,2,...,n. For the simplicity of the algorithm it is often easier to fix a single α.
In the general case, ‖ B‖ > 1, make the initial split B = B̂ - C. From this compute A = B_1^-1B_2, where B_1 = diag(B̂) and B_2 = B_1 - B̂, so that A satisfies ‖ A‖ < 1.
Then the inverse of B̂ is generated by
[B̂^-1]_rr^'≈1/N∑_s=1^N[∑_( j| s_j=r^') W_j],
where ( j | s_j=r^') means that only
W_j = a_r s_1 a_s_1 s_2… a_s_j-1 s_j/p_r s_1 p_s_1 s_2… p_s_j-1 s_j,
for which s_j = r^' are included in the sum (<ref>).
Calculating ‖ B‖ can be an expensive operation, so any a-priori information
allowing for a reasonable estimate here is useful but not strictly necessary.
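A compact sketch of the splitting and of the Markov-chain estimator for a single row is given below (Python with numpy, for illustration only; the production code is written in C with MPI, works on sparse CSR data and terminates each walk once |W_j| falls below a prescribed threshold). The use of the infinity norm for ‖ B‖ and a fixed chain length are assumptions of the sketch.

import numpy as np

def preprocess(B, alpha=5.0):
    # Diagonal augmentation b_hat_ii = b_ii + alpha*||B|| (off-diagonals unchanged),
    # iteration matrix A = I - diag(B_hat)^-1 B_hat with ||A|| < 1, and the
    # transition probabilities P used by the Markov chains.
    n = B.shape[0]
    B_hat = B + alpha * np.linalg.norm(B, ord=np.inf) * np.eye(n)
    d = np.diag(B_hat).copy()
    A = np.eye(n) - B_hat / d[:, None]
    row_sums = np.abs(A).sum(axis=1)
    row_sums[row_sums == 0.0] = 1.0            # rows without off-diagonal entries
    P = np.abs(A) / row_sums[:, None]
    return B_hat, d, A, P

def mc_row(A, P, r, n_chains=1000, chain_len=20, rng=None):
    # Monte Carlo estimate of row r of (I - A)^-1 = sum_k A^k: each chain scores
    # its running weight W_j = prod(a/p) at every state it visits.
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    est = np.zeros(n)
    for _ in range(n_chains):
        state, w = r, 1.0
        est[state] += w                        # j = 0 (identity) contribution
        for _ in range(chain_len):
            p = P[state]
            if p.sum() == 0.0:
                break                          # absorbing state, stop the walk
            nxt = rng.choice(n, p=p)
            w *= A[state, nxt] / P[state, nxt]
            state = nxt
            est[state] += w
    return est / n_chains

# dividing entry r' of the estimate by d[r'] (column scaling by diag(B_hat)^-1)
# yields row r of B_hat^-1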
From this it is then necessary to work back and recover B^-1 from B̂^-1. To
do this an iterative process (k = n-1, n-2,… , 0) is used on B̂^-1:
B^-1_k = B^-1_k+1 + B^-1_k+1S_k+1B^-1_k+1/(1 - tr(B^-1_k+1S_k+1)),
where B^-1_n = B̂^-1 and S_i is all zero except for the {ii}^th
component, which is from the matrix S=B̂ - B. Then B^-1=B^-1_0.
The make-up of matrix S means that while (<ref>) looks complicated, it is, in fact, simply an update of the matrix by a scaled outer product of the (k+1)^th column with the (k+1)^th row.
There are obvious simplifications possible to ensure that
many multiplications by zero are not performed. This method of splitting and recovery
leads to Algorithm 1 <cit.>, which details a MC algorithm for inverting
general matrices and is given below for completeness. Further details on the recovery of the original inverse can be found in <cit.>.
Algorithm 1: Monte Carlo Algorithm for Inverting General Matrices
* Read in matrix B
* Input matrix B, parameters ε and δ
* Remove a set percentage of the smallest (in magnitude) entries of the matrix.
* Calculate intermediate matrices (B̂, B_1)
* Split B = B̂ - (B̂ - B), where B̂ is a diagonally
dominant matrix
* Apply the algorithm for inverting diagonally dominant
matrices from <cit.> with B=B̂ to obtain
B̂^-1
* Recovery of B^-1 from B̂^-1
* Compute S = B̂ - B
* Let S_i for i = 1, 2, …, n, where each S_i has just one of the non-zero elements of the matrix S
* Set B_n^-1 = B̂^-1
* Apply B_i-1^-1 = B_i^-1 + B_i^-1S_iB_i^-1/(1 - tr(B_i^-1S_i)) for i = n, n-1, …, 1
* Then B^-1 = B_0^-1
Note that the second step is optional and is relevant only when a reduction in the amount of data being communicated is desired. Its
influence has been investigated and the results are presented in sec. <ref>.
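The recovery step of Algorithm 1 can be sketched as follows (Python with numpy). Since each S_k contains a single diagonal entry, every iteration is the rank-one, Sherman-Morrison-type update described above; the small consistency check at the end only illustrates that undoing the diagonal augmentation recovers the inverse of the original matrix, and the test matrix is arbitrary.

import numpy as np

def recover_inverse(B_hat_inv, S_diag):
    # S = B_hat - B is diagonal, so every step removes one diagonal perturbation
    # through the rank-one update of the recovery equation above.
    inv = np.array(B_hat_inv, dtype=float, copy=True)
    for k, s in enumerate(S_diag):
        if s == 0.0:
            continue
        denom = 1.0 - s * inv[k, k]            # 1 - tr(B^-1_k S_k)
        inv += np.outer(inv[:, k], inv[k, :]) * (s / denom)
    return inv

# small consistency check (illustrative): augment the diagonal, invert, undo it
rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5)) + 10.0 * np.eye(5)
aug = 2.0 * np.linalg.norm(B, ord=np.inf)
B_hat = B + aug * np.eye(5)
approx = recover_inverse(np.linalg.inv(B_hat), np.full(5, aug))
assert np.allclose(approx, np.linalg.inv(B))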
The above algorithm was modified to develop an MPI version of the algorithm. Several enhancements of the algorithm, as well as modifications concerning GPU implementation, are listed in the next section and were able to substantially improve its performance in generating rough inverses of the input matrices. The result can then be used directly as a preconditioner for solving a system of linear algebraic equations or further improved. We propose the use of an iterative refinement process, a parallel filter, or a combination of the two to further enhance the quality of the preconditioner. The decision whether those additional steps are taken is based upon the required accuracy and can be freely selected, depending on user requirements.
§ PARALLELIZATION DETAILS AND ISSUES
The previous algorithm can be split into the following 5 phases (Notice that phases 1 and 5 are only necessary when the initial matrix is not a diagonally dominant matrix (ddm)):
1) Initial matrix is transformed into a ddm, 2) Transformation of ddm for suitable Neumann series expansion, 3) Monte Carlo method is applied to calculate sparse approximation of the inverse matrix, 4) Given 2, calculate the inverse of the ddm from 3, 5) Recovery process is applied to calculate the inverse of the original matrix due to the transformation in 1.
It must be noted that the last phase requires in general 𝒪(n^3) operations and hence is generally neglected. Prior numerical experiments
have demonstrated that it is not compulsory to obtain an effective preconditioner.
This algorithm was originally designed for a HPC cluster composed of single-core compute nodes. It is written in C and uses the MPI library. It also makes use of the BeBOP sparse matrix converter <cit.> to translate the input matrix format into a CSR format.
§.§ MPI implementation
Matrices A, B_1 and P (the transition probability matrix) are calculated during the phases mentioned above. Note that A = (I - C),C=B_1^-1B̂ and B_1 = diag(B̂).
Then a procedure is called by all the processes in which the partitioning of the matrix A is carried out. The distribution of the work is even when the number of rows is divisible by the number of processes. Otherwise, the remaining rows are distributed among the worker MPI processes (without including the master process).
After that, matrices A, B_1 and P are broadcast using MPI_Bcast(). Then the Monte Carlo process (phase 3) is started in parallel by all MPI processes.
During the Monte Carlo phase, each MPI process will calculate a piece of the inverse matrix of C (C^-1), using matrix A; remember that C = (I - A).
Column-scaling by B_1^-1 will then be applied to each row, to get the respective part of B̂^-1 (phase 4).
After finishing the Monte Carlo process and phase 4, each process will send its part of the matrix (B̂^-1) to the master process by calling MPI_Send(). The master process will perform a corresponding MPI_Recv() and will merge the received parts with its own.
Because of a concatenation issue due to the CSR format, the send-receive process has to be ordered: the data from process 1 must be received first, then that from process 2, and so on.
Finally, the last phase (5) is executed by the master process on the matrix (B̂^-1) to calculate B^-1. This step is optional and must be
enabled explicitly.
This process is difficult to parallelize due to its iterative nature. On the other hand, an approach in which each iteration is executed in parallel would imply a large increase in communication, given that a synchronization would be required at each iteration.
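The partitioning and the ordered gather described above can be summarised by the following sketch; Python with mpi4py is used here purely for illustration (the actual code is C/MPI and stores the rows in CSR format), and the matrix size and the placeholder blocks are arbitrary.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1000                                        # matrix dimension (illustrative)
base, rem = divmod(n, size)
# every rank receives `base` rows; leftover rows go to the non-master ranks
counts = [base + (1 if 0 < r <= rem else 0) for r in range(size)]
starts = np.cumsum([0] + counts[:-1])

# the master broadcasts the (preprocessed) matrix data to all workers
A = comm.bcast(np.random.rand(n, n) if rank == 0 else None, root=0)

rows = slice(int(starts[rank]), int(starts[rank]) + counts[rank])
my_block = np.zeros_like(A[rows])               # stands in for the MC-computed rows

# ordered gather: the master receives the blocks in rank order and merges them
if rank == 0:
    blocks = [my_block] + [comm.recv(source=r) for r in range(1, size)]
    inverse = np.vstack(blocks)
else:
    comm.send(my_block, dest=0)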
§.§ GPU implementation
Regarding the GPU implementation it must be noted that, due to the irregular data access and comparatively short computation kernels,
the method appears to be ill-suited for a GPU. Nevertheless a GPU can be used to accelerate the computation of the preconditioner using (MC)^2MI
if care is taken to keep the GPU sufficiently busy.
If the requirement for a sparse inverse is abandoned the algorithm, with or without recovery, yields itself well to an implementation for GPUs.
This restricts the dimensionality of the matrices to which the algorithm is applicable to those that fit entirely into the main memory of the accelerator device.
If a preconditioner is to be computed using (MC)^2MI on one or more GPUs for large, sparse, matrices a non-negligible amount
of overhead is introduced in order to ensure that only the most relevant entries of the inverse are retained for each row.
In a first implementation for NVIDIA GPUs the entire sparse inverse
was stored on the device, along with the necessary (preprocessed) matrix. Since the number of different entries visited by a chain
is not known a-priori the entire set of Markov Chains is simulated at once and used to fill a contiguous array corresponding to one
row of the approximate inverse. Afterwards only a prescribed number of entries largest in magnitude are retained for the sparse approximate inverse.
The rationale behind this is that if the inverse is itself considered as a Markov Chain only the
entries largest in magnitude will contribute significantly to its inverse (the original matrix).
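The retention step can be sketched as follows (Python with numpy for readability; on the device it is realised by compacting and sorting the row): only the prescribed number of entries largest in magnitude of the dense row estimate are kept.

import numpy as np

def retain_largest(dense_row, k):
    # Keep only the k entries largest in magnitude of a dense row estimate and
    # return them as (column indices, values) of the sparse approximate inverse.
    k = min(k, int(np.count_nonzero(dense_row)))
    if k == 0:
        return np.array([], dtype=int), np.array([])
    idx = np.argpartition(np.abs(dense_row), -k)[-k:]   # k largest, unordered
    idx = np.sort(idx)                                  # restore column order
    return idx, dense_row[idx]

# illustrative use: keep the 4 dominant entries of one row
row = np.array([0.01, -2.0, 0.0, 0.5, 1.2, -0.03, 0.7])
cols, vals = retain_largest(row, 4)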
As was the case with the previous implementation an extension to multiple GPUs is comparatively simple and has therefore been implemented.
§ ALGORITHMIC MODIFICATIONS
The original code, provided by Diego Dávila, was corrected to adhere to the MPI 3.0 standard and therefore be portable. This was crucial for performance analysis
on the testing system c.f. sec. <ref>. Furthermore a parallelizable pseudo-random number generator (PRNG) was used to replace the original generator, which was not suited for parallel environments.
§.§ Matrix Reduction
The computation of an approximate inverse using Markov Chain Monte Carlo (MCMC) requires
the knowledge of the whole state space - hence of the entire matrix A. The distribution of A
among the parallel workers becomes increasingly expensive with growing matrix size.
An obvious way to accelerate the method is to reduce the amount of data being transferred, i.e., to reduce the number of non-zero entries
of the matrix. Since the magnitude of an entry of A signifies its importance in the MCMC simulation, we decided to
drop a set percentage of the smallest entries of the matrix. This modifies the linear system and hence the correctness
of the approach had to be verified.
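A sketch of this reduction is given below (Python with scipy.sparse; whether the percentage refers to the number of entries or to the range of their values is an implementation detail, and here the given fraction of the non-zero entries smallest in magnitude is dropped).

import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

def drop_smallest(matrix, fraction):
    # Zero out the given fraction of the smallest-magnitude non-zero entries and
    # remove them from the CSR structure, reducing the data to be broadcast.
    m = csr_matrix(matrix, copy=True)
    k = int(fraction * m.nnz)
    if k > 0:
        threshold = np.partition(np.abs(m.data), k - 1)[k - 1]
        m.data[np.abs(m.data) <= threshold] = 0.0
        m.eliminate_zeros()
    return m

# e.g. drop 5% of the smallest entries of a random sparse test matrix
B = sparse_random(100, 100, density=0.05, format="csr", random_state=0)
B_reduced = drop_smallest(B, 0.05)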
§.§ Implementation Specifics - MPI
As a first step the pseudo-random number generator used in the original version of the program
was replaced by TRNG[https://www.numbercrunch.de/trng]. This was necessary since the original code used the standard C
PRNG, which does not possess a sufficiently long period to guarantee statistical independence of the Markov chains for large matrices. Additionally it is not designed for parallel environments. Both flaws are rectified by using TRNG.
The amount of communications has already been reduced to an almost-minimum in the previous implementation of (MC)^2MI.
In one iteration of the improvement of the code the broadcast of the transition probabilities (necessary for the MCMC simulation) was
eliminated. Instead these probabilities were computed by every worker from its knowledge of A.
Furthermore a minor non-conformity to the MPI standard was eliminated, which made the code reliant upon a specific
implementation of MPI, thus preventing the use of the preferred compiler and optimized MPI implementation on MareNostrum 4.
§.§ Implementation Specifics - GPU
Compared to the host machine the GPU has a very limited amount of memory and requires a more elaborate
approach to memory handling. Due to memory constraints storage of a dense block of an inverse on the device is not feasible,
and neither is on-the-fly transfer of computed entries to the host - due to latency constraints.
We have opted to allocate and fill a block of the sparse inverse on the device and transfer it to
the host matrix at the end of the computation.
This differs from the MPI implementation in so far as the computation of each row requires additional memory management overhead
but the final reduction of the separate blocks of the inverse is cheaper since the necessary storage and data layout is known beforehand. The downside being that
for some matrices entries of the inverse may be lost for some rows, whilst others contain unused entries (=0). This deficiency will be addressed in future
versions of the GPU implementation.
A further difference from the MPI implementation is the usage of α·‖ B‖·sgn(B_i,i) as entries of the matrix B_2, as opposed
to α·‖ B‖. This ensures that even if the signs of the diagonal elements are non-uniform the augmentation will yield a diagonally-dominant matrix.
This approach also reduces the perturbation of the original matrix caused by the augmentation procedure.
Usage of multiple GPUs was implemented by letting each device be controlled by a dedicated OpenMP process.
§ NUMERICAL EXPERIMENTS
§.§ Execution Environment
The set of matrices chosen for the assessment of the proposed modifications is listed in tbl. <ref>.
The set contains symmetric and non-symmetric matrices of varying sizes and filling fractions. Matrices
and have been provided by our collaborators and are representative
of systems occurring in climate simulations. The matrix has been taken
from the University of Florida sparse matrix collection and is a discretized Laplacian using cubic finite elements on a fine mesh.
Almost all numerical experiments were carried out on the MareNostrum 4 (MN4) cluster at the Barcelona Supercomputing Centre in Spain.
The machine consists of 3456 nodes with 2 Intel Xeon Platinum [email protected] GHz per node.
The nodes are connected via Intel Omni-Path HFI Silicon 100 (100 GBit/s) adapters.
The evaluation of the preconditioners was performed using 3 nodes of MareNostrum 4. The number was chosen arbitrary but kept constant,
thereby ensuring that the execution times of the preconditioned iterative solvers would be comparable for preconditioners computed
using CPUs and GPUs.
Earlier experiments evaluating the performance of Tesla K80 GPUs were performed on a GPU workstation and on
the institutional cluster set up by AL at the Institute for Theoretical Physics.
Said cluster consists of 12 Nodes connected by a common 10GBit ethernet network and each containing two Intel Xeon E5-2640v4 CPUs.
On MN4, both (MC)^2MI and MSPAI were compiled using the Intel compiler (v 17.0.4) and MPI implementation (build 20170405).
Execution was carried out in exclusive mode with CPU clock speeds fixed to the second-highest speed-step using batch script options to SLURM.
The computed preconditioners were validated using the GMRES implementation provided by Trilinos(v. 12.10.1).
For most experiments a precision of ϵ,δ = 2^-4 was chosen for MSPAI and MCMCMI. Additionally, for MCMCMI
the scaling of the diagonal was performed using α = 5.
To ensure that the GPUs are well-utilized a precision of ϵ,δ∈{ 0.01,0.005} has been chosen
in the numerical experiments comparing GPUs and CPUs. This choice provides a first limit on the range of parameters for which the use
of (MC)^2MI on accelerators could be considered.
§.§ Fitness of purpose
All of the numerical experiments in this section have been carried out with a fixed execution configuration of 48 processes
spread evenly over two nodes of MN4.
The total execution time (preconditioner computation and GMRES execution) is provided in fig. <ref>
and <ref> for two different matrices.
Henceforth refers to the method without preconditioner and the preconditioner (computed using MSPAI or MCMCMI) is designated P.
MSPAI is more effective in the case of the larger of the two matrices but
only in the case when >7 % of the value range of the elements of the matrix has been dropped. If fewer elements are removed (c.f. fig.
<ref>) (MC)^2MI will require less computation time.
In fig. <ref> one can see, that the idea of removing a set amount of small elements may well accelerate the
computation of the preconditioner. The outcome depends on the matrix and there will, in general, be an optimal amount of negligible
entries for each matrix. In the case of that amount is between 2% and 6%. If more entries
are removed the amount of information contained in the matrix becomes insufficient to create a good preconditioner.
As can be seen in fig. <ref> the reduction of the amount of information required to be broadcast, coupled with a
moderate precision requirements for the approximate inverse will result in a shorter overall execution time when a preconditioner is computed using (MC)^2MI.
This demonstrates that the method may be used in cases where the preconditioner has to be recomputed every time prior to its usage (i.e.,
in iterative methods where the matrix changes in every step).
Finally we attempted to use the Monte Carlo method to compute a preconditioner for the matrix of the sparse matrix collection,
whose condition number surpasses 5· 10^16 and which has a non-trivial nullity.
Accordingly the iterative solver used to test the preconditioner for this case (BiCGstab) fails to converge
if no preconditioner is used, reaching the defined upper bound of 30000 iterations for a desired precision of ‖ r‖_2/‖ b‖_2≤ 0.45.
Using the preconditioner computed with (MC)^2MI for ϵ=0.01 enables BiCGstab to converge,
reducing the number of steps required to achieve the desired bound to 3852
and the total execution time (preconditioner + BiCGstab) from 9.7[sec] to 1.3[sec] - in this case using 96 instead of 48 processes.
§.§ Scaling to Moderate Number of Cores/Processors
In fig. <ref> the execution time of the preconditioner computation using 512 processes
is shown for all matrices of tbl. <ref>. It is obvious that (MC)^2MI is
superior to MSPAI in every case, with the largest savings being achieved for rather dense or very large matrices.
Note that this is purely a comparison of the time required to compute a preconditioner using the appropriate method.
§.§ (MC)^2MI on Accelerators
In a final set of numerical experiments we investigated the feasibility of using accelerators, specifically NVIDIA GPUs
to speed up the computation of the preconditioners using (MC)^2MI. To this end the algorithm delineated in sec.
<ref> was implemented in CUDA and evaluated for matrices of tbl. <ref>.
Here we have to note that unlike for the pure MPI implementation a sufficiently small ϵ,δ (i.e., a high precision)
is necessary to fully utilise the GPU; as such, a precision of ϵ = 0.01 was chosen for the experiments.
The latter were performed on Tesla K80 as well as Volta V100 devices using a variable number of GPUs.
Fig. <ref> shows the typical behaviour of the Markov Chain Monte-Carlo method when implemented on GPUs
using as an example. The execution time of the pure MPI implementation on two nodes of MareNostrum 4 serves as
a reference. It is immediately obvious that a GPU is significantly faster by up to a factor of ∼ 6.5.
The speed-up decreases when using 3 or more GPUs, which is to be attributed to the overhead introduced by the memory management.
Profiling results indicate that in this case the time required to sort the entries of the inverse row matches the time required to compute them
using (MC)^2MI. An additional factor limiting the performance, which has not yet been eliminated, is the necessity to compact
the pre-processed matrix on the host before the MC iteration may be performed. In the present case this reduces the achievable speed-up by a factor of ∼ 2.
Note that for this comparison the α parameter was chosen to be 4.0 instead of 5.0. This change results in a 20% longer execution time.
The effect of an increased amount of work can be seen in fig. <ref>, where the speed-up achieved
in comparison to an older CPU architecture is shown. The comparison is provided because the given CPU and GPU resources are an easily accessible resource maintained by AL at the Institute for Theoretical Physics in Tübingen, and because of their availability to common users (in comparison to multiple V100s).
The speed-up provided by the older Teslas is limited for the given case due to the sparsity of the matrix. Further numerical
experiments indicate that utilisation of the GPU may be improved by increasing the desired precision.
Finally fig. <ref> shows the speed-up achieved by two generations of NVIDIA GPUs for the nonsym_r3_a11 matrix
compared to the small institutional cluster in Tübingen. As has been demonstrated in fig. <ref>, the amount of work provided by this matrix is insufficient to mask the overhead of data management and the CPU portion of the preprocessing stage. Both are currently being addressed in development.
The striking feature is that the newer architecture appears to perform worse than the older one. We believe this to be an artefact due to
an insufficient optimization of the GPU code for the NVIDIA Volta architecture, since it has been originally developed and optimized for the Kepler
architecture.
§ CONCLUSIONS AND FUTURE WORK
In summary, we have shown that the computation of a preconditioner using the (MC)^2MI method can be accelerated
by balancing the precision with which the preconditioner is calculated as well as by dropping entries of the original matrix depending on that precision. The quality of the resulting preconditioner
does not deteriorate as fast as is the case if the same approach is applied to MSPAI.
The approach shows that in most cases the number of iterations required by GMRES or BiCGstab to solve the resulting system of linear algebraic equations can be substantially reduced. If only a rough estimate of the inverse is required, the combination of (MC)^2MI and an iterative method appropriate for the matrix type
can result in a lower total execution time when compared to a non-preconditioned method.
The numerical experiments indicate that for ϵ,δ < 0.01 (at high precisions) the usage of GPUs should be considered.
It has been demonstrated that, despite being apparently ill-suited to a GPU, the (MC)^2MI method may still be used successfully with it.
Future work will focus on merging the CPU and GPU implementations using the tasking constructs of OpenMP 4.5. This approach
promises to reduce the overhead of memory management on the GPU whilst simultaneously utilising the host to its full extent.
Preliminary profiling suggests a potential increase in performance by a factor of ≳ 2.
Furthermore, an integrated application test for the Markov Chain Monte Carlo preconditioners is planned in order to observe the performance
on a wider set of matrices than the set used so far, as well as an investigation of the potential for an MPI+CUDA parallelisation of the method.
On the host side the pure MPI implementation will be rewritten to utilise hybrid parallelism using MPI+OpenMP and to implement better load balancing.
§ ACKNOWLEDGMENTS
Anton Lebedev wishes to thank the Severo Ochoa program, Spain, for providing a mobility grant enabling him to work on this project at the Barcelona Supercomputing Centre.
§ ARTIFACT DESCRIPTION APPENDIX: ON ADVANCED MONTE CARLO METHODS FOR LINEAR ALGEBRA ON ADVANCED ACCELERATOR ARCHITECTURES
§.§ Abstract
We present observations on the performance of the implementation of the Markov Chain Matrix Inversion method for different versions of the x86
CPU architecture (Broadwell,Skylake) and NVIDIA GPUs of the Kepler and Volta architectures. The performance and correctness of the method
as a means of obtaining preconditioners for iterative systems is evaluated using Trilinos and compared to MSPAI.
The CPU (MPI) and GPU implementations of the Markov Chain method are compared to each other to determine the feasibility and limitations of
a GPU implementation of the method.
§.§ Description
§.§.§ Check-list (artifact meta information)
* Algorithm: Markov Chain Monte Carlo Matrix Inversion, preallocated row storage on CPU, stream compaction and sorting on GPU
* Compilation: MareNostrum 4: INTEL toolchain v 2017.4, with optimization flags.
ITP Tübingen: GCC v. 6.3.1 (20170216) with optimization flags.
* Run-time environment: MareNostrum 4: SLES 12-SP2, Kernel: 4.4.120-92.70.
ITP Tübingen: CentOS 7, Kernel: 3.10.0-514.26.2 (no KPTI mitigation).
CTE Power RHEL 7.4, Kernel: 4.11.0-44
* Hardware:
MareNostrum 4 Nodes with 2 Xeon Platinum 8160 CPUs each. 96GB RAM per node. Connected via 100GBit Intel Omni-Path HFI Silicon 100 in a fat tree network topology.
CTE Power (V100 machine) Nodes with 2 x IBM Power9 8335-GTG 3.00GHz each. 512GB RAM and 4 V100 GPUs with 16GB HBM2 VRAM.
ITP Tübingen Nodes with 2 Intel Xeon E5-2640v4 CPUs each. 128GB RAM per node. Connected via 10GBit Ethernet, star network topology. Network parameters not optimized.
* Execution: Via SLURM scheduler.
* Output: Execution times (in milliseconds) are printed to standard output and processed from there.
* Experiment workflow: Automated filling of SLURM script templates and automated enqueueing of the jobs by a generator script written in Python.
An index of numerical experiments is stored in the top-level directory where the generator script was called. This index is used to
collect and pre-process the results using an evaluation script written in Python.
Graphical analysis of the data is performed using a Jupyter notebook.
* Experiment customization: Execution configuration of the job scripts customized to stay within storage quota.
K80 experiments driven by a separate script.
* Publicly available?: Currently not publicly available. Access to the authors' repository can be granted upon request.
§.§.§ How software can be obtained
The GPU implementation can be obtained through the authors' private Bitbucket repository upon request. The CPU implementation will be
publicly available from said repository by the end of November.
§.§.§ Hardware dependencies
The optimal block and grid size of the GPU implementation are dependent on the used GPU and have hence to be adapted accordingly.
A rough search for minimal execution time using the sym_r6_a11 matrix suggested a block size of 96 threads (3 warps) and a grid size of 170 for the Volta GPUs.
§.§.§ Software dependencies
*CPU
The CPU implementation uses version 4.15 of Tina's Random Number Generator library. The library implements parallel pseudorandom number generators
and is therefore key to the correctness of the presented method. It is available from www.numbercrunch.de/trng.
It also relies on the BeBOP sparse matrix library to handle CRS matrices.
*GPU
The GPU version has been implemented in C++ and CUDA. Of the CUDA libraries it utilises cuRAND in the core routines and
cuBLAS in some auxiliary routines and for testing purposes.
The V100 compilation was performed using CUDA Toolkit v9.1, the K80 compilation was performed using CUDA Toolkit v8.0.
Parsing of execution parameters is done using the BOOST program options library (tested with BOOST 1.{56,64,66}), and
the Eigen linear algebra template library (http://eigen.tuxfamily.org/) is used to handle sparse matrices, with a minor
correction in the unsupported routine.
*Testing
Correctness checks of the preconditioners are carried out using a parallel implementation of CG/CGS/BiCG(stab)/GMRES.
The code performing these checks has been written in C++ and uses Trilinos (v 12.10.1 on MN4, 12.13 on the ITP cluster).
On MareNostrum 4 both the CPU implementation and the preconditioner testing code rely on the MPI implementation provided by INTEL.
On the ITP cluster the MPI library is MPICH 3.2.1.
§.§ Installation
§.§.§ MareNostrum 4
The MPI implementation is compiled using a simple Makefile and utilising the INTEL compiler
to compile all but the TRNG files, which are compiled using . Compiler options are
and linked to BeBOP libraries via
and statically linked to the TRNG library
§.§ CTE-POWER
The GPU code is compiled using v9.1.85 with the following compiler flags
using a simple makefile which constitutes just a collection of source files to be compiled and linked.
§.§.§ ITP Tübingen
The process is the same as for the other two, except for the optimization flags:
§.§ Experiment workflow
The numerical experiments carried out on the MareNostrum 4 and CTE-Power clusters at the Barcelona Supercomputing Centre were
executed in two stages:
* Stage: Generate a set of preconditioners
The numerical experiments were executed using the SLURM scheduler. A generator script was written in Python. Said script accepts
a set of template files for the preconditioner computation and testing parameters as well as job scripts for generation and testing of preconditioners.
The execution parameters are collected in a separate parameter file and indexed by matrix in dictionaries.
The user may provide a desired number of repetitions the experiments will be run (10 as a default) -each repetition will
generate a preconditioner which will be stored with a file name containing the repetition number.
The generator script generates a directory structure and an index file for the desired numerical experiments. All of the jobs
to generate preconditioners are launched using a simple launcher script and the generated index file.
* Stage: Test the preconditioners.
The tests of the generated preconditioners must be enqueued manually by the user since no guarantee can be made that the storage quota will not be reached during the
generation phase. The option for SLURM has been used to ensure that the tests of generated preconditioners
are started only after all repetitions of the generation script have been run.
The testing stage produces, for each parameter set (experiment) and each repetition a unique text file containing the results of the
execution of the chosen iterative method.
Execution on the K80s differs in so far as the second stage is omitted and the first one is executed sequentially by a dedicated Python script
into which all the required parameters are hard-coded.
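For illustration, a condensed sketch of the generation stage is given below; the template fields, directory layout and parameter values are hypothetical, and the actual generator script contains additional handling of job-script templates and quota limits.

# Sketch of the experiment generator: fill SLURM job templates from a parameter
# dictionary indexed by matrix, create one directory per experiment/repetition,
# write an index file and enqueue the jobs. All names are placeholders.
import itertools, json, os, subprocess

template = open("generate_job.slurm.in").read()          # template with {matrix}, {eps}, {rep} fields
params = {"nonsym_r3_a11": {"eps": [0.5, 0.1, 0.01]}}    # execution parameters per matrix
repetitions = 10

index = []
for matrix, p in params.items():
    for eps, rep in itertools.product(p["eps"], range(repetitions)):
        workdir = os.path.join("experiments", matrix, f"eps_{eps}", f"rep_{rep}")
        os.makedirs(workdir, exist_ok=True)
        with open(os.path.join(workdir, "job.slurm"), "w") as f:
            f.write(template.format(matrix=matrix, eps=eps, rep=rep))
        index.append(workdir)

with open("experiment_index.json", "w") as f:            # index used by the launcher and evaluation scripts
    json.dump(index, f)

for workdir in index:                                    # stage 1: enqueue the generation jobs
    subprocess.run(["sbatch", os.path.join(workdir, "job.slurm")], check=True)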
§.§ Evaluation and expected result
Evaluation of the numerical experiments is carried out by first consolidating the results into a single Pandas data frame.
This is done automatically by a preprocessing script which utilises the index of experiments generated in the first stage of the experiments.
The collected data is stored in CSV format.
It is then imported into a Jupyter notebook and further evaluation and visualization are performed in accordance with the requirements documented therein.
Raw results include plain-text output files from the SLURM scheduler and the code used to test the preconditioners.
Intermediate results are consolidated into CSV files and final results consist of a collection of plots showing the speed-up and execution time of
different parameter configurations for different matrices. The images are stored in EPS format.
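A correspondingly condensed sketch of the consolidation and plotting step is given below; the file layout and column names are assumptions and do not reproduce the actual pre-processing script.

# Sketch of the evaluation stage: consolidate the per-experiment result files
# into a single Pandas data frame, store it as CSV and plot mean speed-ups.
# The file layout and column names are assumptions.
import json, os
import pandas as pd
import matplotlib.pyplot as plt

with open("experiment_index.json") as f:
    index = json.load(f)

rows = []
for workdir in index:
    with open(os.path.join(workdir, "result.txt")) as f:   # plain-text output of the test code
        time_ms, iterations = (float(v) for v in f.read().split())
    _, matrix, eps, rep = workdir.split(os.sep)
    rows.append({"matrix": matrix, "eps": float(eps.removeprefix("eps_")),
                 "rep": rep, "time_ms": time_ms, "iterations": iterations})

df = pd.DataFrame(rows)
df.to_csv("results.csv", index=False)

summary = df.groupby(["matrix", "eps"], as_index=False)["time_ms"].mean()
for matrix, grp in summary.groupby("matrix"):               # speed-up relative to the slowest configuration
    plt.plot(grp["eps"], grp["time_ms"].max() / grp["time_ms"], marker="o", label=matrix)
plt.xlabel("epsilon"); plt.ylabel("speed-up"); plt.legend()
plt.savefig("speedup.eps")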
§.§ Notes
The MSPAI preconditioner may be obtained at https://www5.in.tum.de/wiki/index.php/MSPAI and is compiled with the provided Makefiles, which require
the ATLAS library.
|
http://arxiv.org/abs/2409.03189v1 | 20240905022817 | A note on the differential spectrum of the Ness-Helleseth function | [
"Ketong Ren",
"Maosheng Xiong",
"Haode Yan"
] | cs.CR | [
"cs.CR",
"cs.IT",
"math.IT"
] |
A note on the differential spectrum of the Ness-Helleseth function
Ketong Ren1, Maosheng Xiong2 and Haode Yan 1
1School of Mathematics, Southwest Jiaotong University, Chengdu, China.
E-mail: mailto: [email protected]@my.swjtu.edu.cn, mailto: [email protected]@swjtu.edu.cn
2Department of Mathematics, The Hong Kong University of Science and Technology, Hong Kong, China.
E-mail: [email protected]
Corresponding author: Haode Yan
==================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Let n⩾3 be an odd integer and u an element in the finite field _3^n. The Ness-Helleseth function is the binomial f_u(x)=ux^d_1+x^d_2 over _3^n, where d_1=3^n-1/2-1 and d_2=3^n-2. In 2007, Ness and Helleseth showed that f_u is an APN function when χ(u+1)=χ(u-1)=χ(u), is differentially 3-uniform when χ(u+1)=χ(u-1)≠χ(u), and has differential uniformity at most 4 if χ(u+1)≠χ(u-1) and u∉_3. Here χ(·) denotes the quadratic character on _3^n. Recently, Xia et al. determined the differential uniformity of f_u for all u and computed the differential spectrum of f_u for u satisfying χ(u+1)=χ(u-1) or u∈_3. The remaining problem is the differential spectrum of f_u with χ(u+1)≠χ(u-1) and u∉_3. In this paper, we fill in the gap. By studying differential equations arising from the Ness-Helleseth function f_u more carefully, we express the differential spectrum of f_u for such u in terms of two quadratic character sums. This complements the previous work of Xia et al.
Keywords: cryptographic function; differential uniformity; differential spectrum; character sum
Mathematics Subject Classification: 11T06, 94A60.
§ INTRODUCTION
Substitution boxes (S-boxes for short) are crucial in symmetric block ciphers. Cryptographic functions used in S-boxes can be considered as functions defined over finite fields. Let _q be the finite field with q elements, where q is a prime power (i.e. q=p^n and n is a positive integer). We denote by _q^*:=_q ∖{0} the multiplicative cyclic subgroup of _q. Any function F: _q→_q can be uniquely represented as a univariate polynomial of degree less than q. For a cryptographic function F, the main tools to study F regarding the differential attack <cit.> are the difference distribution table (DDT for short) and the differential uniformity introduced by Nyberg <cit.> in 1994. The DDT entry at point (a,b) for any a,b∈_q, denoted by δ_F(a,b), is defined as
δ_F(a,b)=|{x∈_q|𝔻_aF(x)=b}|,
where 𝔻_a F(x)=F(x+a)-F(x) is the derivative function of F at the element a. The differential uniformity of F, denoted by Δ_F, is defined as
Δ_F=max{δ_F(a,b)|a∈_q^*, b∈_q}.
Generally speaking, the smaller the value of Δ_F, the stronger the resistance of F used in S-boxes against the differential attack. A cryptographic function F is called differentially k-uniform if Δ_F=k. Particularly when Δ_F=1, F is called a planar function <cit.> or a perfect nonlinear (abbreviated as PN) function <cit.>. When Δ_F=2, F is called an almost perfect nonlinear (abbreviated as APN) function <cit.>, which is of the lowest possible differential uniformity over _2^n as in such finite fields, no PN functions exist. It has been of great research interest to find new functions with low differential uniformity. Readers may refer to <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and references therein for some of the new development.
To investigate further differential properties of nonlinear functions, the concept of differential spectrum was devised as a refinement of differential uniformity <cit.>.
Let F be a function from _p^n to _p^n with differential uniformity k, and
ω_i = |{(a,b) ∈_p^n^* ×_p^n | δ_F(a,b) = i}|, 0 ⩽ i ⩽ k.
The differential spectrum of F is defined as the ordered sequence
𝕊 =[ω_0,ω_1,...,ω_k].
According to the Definition <ref>, we have the following two identities
∑_i=0^kω_i=(p^n-1)p^n,∑_i=0^kiω_i=(p^n-1)p^n.
The differential spectrum of a cryptographic function, compared with the differential uniformity, provides much more detailed information. In particular, the value distribution of the DDT is given directly by the differential spectrum. Differential spectrum has many applications such as in sequences <cit.>, <cit.>, coding theory <cit.>, <cit.>, combinatorial design <cit.> etc. However, to determine the differential spectrum of a cryptographic function is usually a difficult problem. There are two variables a and b to consider in each ω_i. When F is a power function, i.e., F(x)=x^d for some positive integer d, since δ_F(a,b)=δ_F(1,b/a^d), the problem of the value distribution of {δ_F(a,b)|b∈_q} is the same as that of {δ_F(1,b)|b∈_q}, so in this case two variables a and b degenerate into one variable b and the problem becomes much easier. Power functions with known differential spectra are summarized in Table <ref>.
For a polynomial that is not a power function, the investigation of its differential spectrum is much more difficult. There are very few such functions whose differential spectra were known <cit.>, <cit.>. The main focus of this paper is the Ness-Helleseth function. Let n be a positive odd integer, d_1=3^n-1/2-1, d_2=3^n-2 and u∈_3^n. The Ness-Helleseth function, denoted as f_u(x), is a binomial over _3^n defined as
f_u(x)=ux^d_1+x^d_2.
To describe the differential properties of the Ness-Helleseth function f_u(x) which obviously depend on u, we define certain sets of u as in <cit.>
{[ 𝒰_0={u∈_3^n | χ(u+1)≠χ(u-1)},; 𝒰_1={u∈_3^n | χ(u+1)=χ(u-1)},; 𝒰_10={u∈_3^n | χ(u+1)=χ(u-1)≠χ(u)},; 𝒰_11={u∈_3^n | χ(u+1)=χ(u-1)=χ(u)}. ].
Here χ denotes the quadratic character on _3^n^*. It is easy to see that 𝒰_0∩𝒰_1=∅, 𝒰_10∩𝒰_11=∅ and 𝒰_10∪𝒰_11=𝒰_1.
In 2007, Ness and Helleseth showed that (see <cit.>)
1). f_u is an APN function when u∈𝒰_11;
2). f_u is differentially 3-uniform when u∈𝒰_10;
3). f_u has differential uniformity at most 4 if u∈𝒰_0∖_3.
Moreover, Ness and Helleseth observed by numerical computation that in 1), the constraint imposed on u, namely u∈𝒰_11, appears to be necessary for f_u to be an APN function.
In a recent paper <cit.>, Xia et al. conducted a further investigation into the differential properties of the Ness-Helleseth function f_u. They determined the differential uniformity of f_u for all u ∈_3^n (see <cit.>), hence confirming, in particular, that f_u is indeed APN if and only if u∈𝒰_11. Moreover, for the cases of 1) and 2), they also computed the differential spectrum of f_u explicitly in terms of a quadratic character sum T(u) (see <cit.>). However, for u∈𝒰_0∖_3, while it was shown that f_u has differential uniformity 4, the differential spectrum of f_u remains open. The purpose of this paper is to fill in this gap, that is, in this paper, we will compute the differential spectrum of f_u explicitly for any u∈𝒰_0∖_3 and similar to <cit.>, the result will be expressed in terms of quadratic character sums depending on u.
Let us make a comparison of the methods used in this paper and in <cit.>. We first remark that for u∈𝒰_0∖_3, determining the differential uniformity of f_u is already a quite difficult problem: as was shown in <cit.>, the final result involved 32 different quadratic character sums, about one-half of which cannot be evaluated easily (see <cit.>). Instead, the authors applied Weil's bound to many of these character sums over finite fields to conclude that the differential uniformity of f_u is 4. Our paper is based on <cit.> and can be considered a refinement; since we are dealing with the differential spectrum, which is a much more difficult problem, it is conceivable that the techniques involved in this paper would be even more complicated. This is indeed the case, as will be seen in the proofs later on. In particular, we have found many relations among these 32 character sums, some of which are quite technical and surprising, which help us in computing the differential spectrum.
This paper is organized as follows. Section <ref> presents certain quadratic character sums that are essential for the computation of the differential spectrum. In Section <ref>, the necessary and sufficient conditions of the differential equation to have i (i=0,1,2,3,4) solutions are given. The differential spectrum of f_u is investigated in Section <ref>. Section <ref> concludes this paper.
§ ON QUADRATIC CHARACTER SUMS
In this section, we will introduce some results on the quadratic character sum over finite fields. Let χ(·) be the quadratic character of _p^n (p is an odd prime), which is defined as
χ(x)=
{[ 1, if x is a square in _p^n^*,; -1, if x is a nonsquare in _p^n^*,; 0, if x=0. ].
Let _p^n[x] be the polynomial ring over _p^n. We consider the character sum of the form
∑_a∈_p^nχ(f(a))
with f∈_p^n[x]. The case of deg(f)=1 is trivial, and for deg(f)=2, the following explicit formula was established in <cit.>.
<cit.>
Let f(x)=a_2x^2+a_1x+a_0∈_q[x] with q odd and a_2≠ 0. Put d=a_1^2-4a_0a_2 and let χ(·) be the quadratic character of _q. Then
∑_a∈_qχ(f(a))=
{[ -χ(a_2), if d≠ 0,; (q-1)χ(a_2), if d=0.; ].
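The formula above is easy to check numerically; the following sketch, for a prime field _q with χ evaluated by Euler's criterion and arbitrary test coefficients, is an illustration only and not part of the proof.

# Numerical check of the quadratic character sum formula over a prime field F_q:
# sum_{a in F_q} chi(a2*a^2 + a1*a + a0) equals (q-1)*chi(a2) if the
# discriminant d = a1^2 - 4*a0*a2 vanishes, and -chi(a2) otherwise.
q = 31                                   # arbitrary odd prime used for the check

def chi(x):                              # quadratic character via Euler's criterion
    x %= q
    if x == 0:
        return 0
    return 1 if pow(x, (q - 1) // 2, q) == 1 else -1

def char_sum(a2, a1, a0):
    return sum(chi(a2 * a * a + a1 * a + a0) for a in range(q))

for a2, a1, a0 in [(3, 5, 7), (2, 4, 2), (1, 0, 0)]:   # arbitrary test coefficients, a2 != 0
    d = (a1 * a1 - 4 * a0 * a2) % q
    expected = (q - 1) * chi(a2) if d == 0 else -chi(a2)
    assert char_sum(a2, a1, a0) == expected
print("formula verified for the test cases over F_%d" % q)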
The character sum plays an important role in determining the differential spectrum of the Ness-Helleseth function. Let p=3 and n be an odd integer. For any fixed u ∈𝒰_0∖_3, define g_i∈_3^n[x] (i∈{1,2,3,4,5}) as follows:
{[ g_1(x)=-(u+1)x,; g_2(x)=x(x-1-u),; g_3(x)=x(x-1+u),; g_4(x)=x^2-x+u^2=(x+1+√(1-u^2))(x+1-√(1-u^2)),; g_5(x)=-φ(u)(x+u^2/φ(u))=-φ(u)(x+1-√(1-u^2)), where φ(u)=1+√(1-u^2).; ].
Herein and hereafter, for a square x ∈_3^n^*, we denote by √(x) the square root of x in _3^n such that χ(√(x))=1. Since n is odd, χ(-1)=-1, this √(x) is uniquely determined by x. For u ∈𝒰_0∖_3, the element 1-u^2 is always a square in _3^n, so √(1-u^2) is well defined, and we have χ((u+1)φ(u))=χ(-(u+1+√(1-u^2))^2)=-1. Additionally, let A={0,1± u,-1±√(1-u^2)}. It is easy to note that the set A contains all the zeros of g_i(x), i=1,2,3,4,5. The values of χ(g_i(x)) on A are displayed in the Table <ref>.
For any u ∈𝒰_0\_3, the following character sums were meticulously computed in <cit.>:
<cit.>
Let u ∈𝒰_0\_3, we have
* ∑_z∈_3^nχ(g_1(z)g_2(z))=-1,
* ∑_z∈_3^nχ(g_1(z)g_3(z))=-1,
* ∑_z∈_3^nχ(g_1(z)g_5(z))=1,
* ∑_z∈_3^nχ(g_2(z)g_3(z))=-2.
* ∑_z∈_3^nχ(g_1(z)g_2(z)g_5(z))=2,
* ∑_z∈_3^nχ(g_1(z)g_3(z)g_5(z))=2.
In what follows, we give a series of lemmas on quadratic character sums involving g_i(x). The first three lemmas can be proved directly.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_4(z)g_5(z))=-χ(φ(u)).
We have,
∑_z∈_3^nχ(g_4(z)g_5(z)) =-χ(φ(u))∑_z∈_3^nχ((z+1-√(1-u^2))^2(z+1+√(1-u^2)))
=-χ(φ(u))∑_z∈_3^n,z≠√(1-u^2)-1χ(z+1+√(1-u^2))
=-χ(φ(u))χ(√(1-u^2))
=-χ(φ(u)).
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_1(z)g_4(z)g_5(z))=1+χ(φ(u)).
We have,
∑_z∈_3^nχ(g_1(z)g_4(z)g_5(z)) =χ(u+1)χ(φ(u))∑_z∈_3^nχ( z(z+1+√(1-u^2))(z+1-√(1-u^2))^2)
=-∑_z∈_3^n,z≠√(1-u^2)-1χ( z(z+1+√(1-u^2)))
=-(-1-χ((√(1-u^2)-1)(-√(1-u^2))))
=1-χ(√(1-u^2)-1)
=1+χ(φ(u)).
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_1(z)g_3(z)g_4(z)g_5(z))=2-χ((√(1-u^2)+1+u)),
and
∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z)g_5(z))=2-χ((√(1-u^2)+1-u)).
We only prove the first identity, as the proof of the second is very similar.
∑_z∈_3^nχ(g_1(z)g_3(z)g_4(z)g_5(z))= χ((u+1)φ(u))∑_z∈_3^nχ((z-1+u)(z+1+√(1-u^2))z^2(z+1-√(1-u^2))^2)
= -∑_z∈_3^n^*,z≠√(1-u^2)-1χ((z-1+u)(z+1+√(1-u^2)))
= -∑_z∈_3^nχ((z-1+u)(z+1+√(1-u^2)))
+χ((u-1)φ(u))+χ((√(1-u^2)+1+u)(-√(1-u^2)))
= 1-χ((u+1)φ(u))-χ((√(1-u^2)+1+u))
= 2-χ((√(1-u^2)+1+u)).
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z))=-2.
We have,
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z)) =∑_z∈_3^nχ(z^2(z-1+u)(z-1-u)(z^2-z+u^2))
=∑_z∈_3^n^*χ((z-1+u)(z-1-u)(z^2-z+u^2))
=∑_z∈_3^n^*, z≠ 1± uχ(z^2-z+u^2/z^2+z+1-u^2)
=∑_z∈_3^n, z≠ 1 ± uχ(z^2-z+u^2/z^2+z+1-u^2)-χ(u^2/1-u^2).
Let t=z^2-z+u^2/z^2+z+1-u^2. Then (t-1)z^2+(t+1)z+t(1-u^2)-u^2=0. We know that t=1 if and only if z=1+u^2. When t≠ 1, the discriminant of the quadratic equation on z is Δ_t=(t+1)^2-(t-1)(t(1-u^2)-u^2)=u^2t^2+1-u^2. The number of z with a fixed t≠ 1 is 1+χ(Δ_t). Hence, we have
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z)) =∑_z∈_3^n, z≠ 1 ± uχ(z^2-z+u^2/z^2+z+1-u^2)-χ(u^2/1-u^2)
=∑_t∈_3^n, t≠ 1χ(t)(1+χ(Δ_t))-χ(u^2/1-u^2)
=∑_t∈_3^nχ(t)(1+χ(Δ_t))-χ(1)-χ(u^2/1-u^2)
=∑_t∈_3^nχ(t)+∑_t∈_3^nχ(t(u^2t^2+1-u^2))-χ(1)-χ(u^2/1-u^2).
Note that ∑_t∈_3^nχ(t(u^2t^2+1-u^2))=∑_t∈_3^nχ(-t(u^2t^2+1-u^2))=-∑_t∈_3^nχ(t(u^2t^2+1-u^2)), then ∑_t∈_3^nχ(t(u^2t^2+1-u^2))=0. This with χ(u^2/1-u^2)=1 leads to ∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z))=-2.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_1(z)g_4(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z))=0.
First,
∑_z∈_3^nχ(g_1(z)g_4(z))=-χ(u+1)∑_z∈_3^nχ(z(z^2-z+u^2)),
and
∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)) =-χ(u+1)∑_z∈_3^nχ(z^3(z-(u+1))(z+(u-1)))
=-χ(u+1)∑_z∈_3^n^*χ(z(z^2+z+1-u^2)).
Note that
∑_z∈_3^n^*χ(z(z^2+z+1-u^2))=-∑_z∈_3^n^*χ(z(z^2-z+1-u^2))=-∑_z∈_3^n^*χ(z^2-z+1-u^2/z).
Let z^2-z+1-u^2/z=t. Then t satisfies the quadratic equation
z^2-(t+1)z+1-u^2=0.
Clearly, z=0 is not the solution of this quadratic equation for any t∈_3^n since u∉_3. For each t, the number of solutions of z is 1+χ(Δ_t), where Δ_t=(t+1)^2-(1-u^2)=t^2-t+u^2. Hence
∑_z∈_3^n^*χ(z^2-z+1-u^2/z)=∑_t∈_3^nχ(t)(1+χ(Δ_t))=∑_t∈_3^nχ(tΔ_t)=∑_t∈_3^nχ(t(t^2-t+u^2)).
The desired result follows.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_2(z)g_4(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z))=-2,
and
∑_z∈_3^nχ(g_3(z)g_4(z))+∑_z∈_3^nχ(g_1(z)g_3(z)g_4(z))=-2.
We only prove the first identity. The proof of the second one is similar, so we omit it.
Note that
∑_z∈_3^nχ(g_2(z)g_4(z))=∑_z∈_3^nχ(z(z-1-u)(z^2-z+u^2)),
and
∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z))=-χ(u+1)∑_z∈_3^nχ(z^2(z-1-u)(z^2-z+u^2)).
We have
∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z)) =-χ(u+1)∑_z∈_3^n^*χ((z-1-u)(z^2-z+u^2))
=-χ(u+1)∑_z∈_3^n, z -1-uχ(z((z+1+u)^2-(z+1+u)+u^2))
=-χ(u+1)∑_z∈_3^n, z -1-uχ(z(z^2-(u-1)z-(u^2-u)))
=-χ(u+1)(∑_z∈_3^nχ(z(z^2-(u-1)z-(u^2-u))-χ(-(u+1)u^2)))
=-1-χ(u+1)∑_z∈_3^nχ(z(z^2-(u-1)z-(u^2-u))),
and
∑_z∈_3^nχ(g_2(z)g_4(z))=∑_z∈_3^nχ(z(z-1-u)(z^2-z+u^2))=∑_z∈_3^n^*, z u+1χ(z^2-z+u^2/z(z-1-u)).
Let z^2-z+u^2/z(z-1-u)=t. Then t satisfies
(t-1)z^2+(1-(u+1)t)z-u^2=0.
We know that t=1 if and only if z=-u. When t≠ 1, the discriminant of the quadratic equation on z is Δ_t=(1-(u+1)t)^2+(t-1)u^2=(u+1)^2t^2+(u-1)^2t+1-u^2. The number of z with a fixed t≠ 1 is 1+χ(Δ_t). Hence, we have
∑_z∈_3^nχ(g_2(z)g_4(z)) =∑_z∈_3^n^*, z u+1χ(z^2-z+u^2/z(z-1-u))
=1+∑_t∈_3^n, t 1χ(t)(1+χ(Δ_t))
=1+∑_t∈_3^nχ(t)(1+χ(Δ_t))-(1+χ(u^2))
=∑_t∈_3^nχ(t)(1+χ(Δ_t))-1.
Note that ∑_t∈_3^nχ(tΔ_t)=∑_t∈_3^nχ(t((u+1)^2t^2+(u-1)^2t+1-u^2))=∑_t∈_3^n^*χ((u+1)^2t^2+(u-1)^2t+1-u^2/t). Let v=(u+1)^2t^2+(u-1)^2t+1-u^2/t. Then
(u+1)^2t^2+((u-1)^2-v)t+(1-u^2)=0,
which is a quadratic equation on t. Δ_v=((u-1)^2-v)^2-(u+1)^2(1-u^2)=v^2+(u-1)^2v-(u-1)u^3.
Then
∑_t∈_3^nχ(tΔ_t) =∑_v∈_3^nχ(v(1+χ(Δ_v)))
=∑_v∈_3^nχ(v(v^2+(u-1)^2v-(u-1)u^3))
=∑_w∈_3^nχ((u-1)^2w((u-1)^4w^2+(u-1)^4w-(u-1)u^3))
=∑_w∈_3^nχ(w(w^2+w-u^3/(u-1)^3))
=∑_w∈_3^nχ(w^1/3((w^1/3)^2+w^1/3-u^3/(u-1)^3))
=∑_w∈_3^nχ(w(w^2+w-u/u-1))
=∑_w∈_3^nχ(w/u-1((w/u-1)^2+w/u-1-u/u-1))
=χ(u-1)∑_w∈_3^nχ(w(w^2+(u-1)w-(u^2-u)))
=-χ(u-1)∑_w∈_3^nχ(w(w^2-(u-1)w-(u^2-u))).
We conclude that
∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z))=-1-χ(u+1)∑_z∈_3^nχ(z(z^2-(u-1)z-(u^2-u))),
and
∑_z∈_3^nχ(g_2(z)g_4(z))=-1-χ(u-1)∑_w∈_3^nχ(w(w^2-(u-1)w-(u^2-u))).
Then we have
∑_z∈_3^nχ(g_2(z)g_4(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_4(z))=-2.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_2(z)g_3(z)g_5(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_5(z))=2.
Note that
∑_z∈_3^nχ(g_2(z)g_3(z)g_5(z))
=-χ(φ(u))∑_z∈_3^nχ(z^2(z-1-u)(z-1+u)(z+1-√(1-u^2)))
=-χ(φ(u))∑_z∈_3^n^*χ((z-1-u)(z-1+u)(z+1-√(1-u^2)))
=-χ(φ(u))(-χ((-1-u)(-1+u)(1-√(1-u^2)))+∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1-√(1-u^2))))
=1-χ(φ(u))∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1-√(1-u^2))),
and
∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1-√(1-u^2)))
= ∑_z∈_3^nχ((z+√(1-u^2)+1-u)(z+√(1-u^2)+1+u)z)
= ∑_z∈_3^nχ(z(z^2-(√(1-u^2)+1)z+(u^2-1-√(1-u^2))))
= χ(√(1-u^2)+1)∑_z∈_3^nχ(z(z^2-z+-u^2+1-√(1-u^2)/u^2)).
Then
∑_z∈_3^nχ(g_2(z)g_3(z)g_5(z))=1-∑_z∈_3^nχ(z(z^2-z+-u^2+1-√(1-u^2)/u^2)).
Moreover,
∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_5(z))
= χ(u+1)χ(φ(u))∑_z∈_3^nχ(z^3(z-1-u)(z-1+u)(z+1-√(1-u^2)))
= -∑_z∈_3^nχ(z(z-1-u)(z-1+u)(z+1-√(1-u^2)))
= -∑_t∈_3^n^*χ(1/t(1/t-1-u)(1/t-1+u)(1/t+1-√(1-u^2)))
= -∑_t∈_3^n^*χ((1-(1+u)t)(1-(1-u)t)(1+(1-√(1-u^2))t))
= 1-∑_t∈_3^nχ((1-(1+u)t)(1-(1-u)t)(1+(1-√(1-u^2))t))
= 1-χ((1+u)(1-u)(1-√(1-u^2)))∑_t∈_3^nχ((1/1+u-t)(1/1-u-t)(1/1-√(1-u^2)+t))
= 1-χ(1-√(1-u^2))∑_t∈_3^nχ((t-1/1+u)(t-1/1-u)(t+1/1-√(1-u^2)))
= 1-χ(1-√(1-u^2))∑_t∈_3^nχ((t-1/1-√(1-u^2)-1/1+u)(t-1/1-√(1-u^2)-1/1-u)t)
= 1-χ(1-√(1-u^2))∑_t∈_3^nχ(t(t^2+1+(1-u^2)√(1-u^2)/u^2(1-u^2)t-u^4+u^2+1+√(1-u^2)/u^4(1-u^2)))
= 1-χ(1-√(1-u^2))χ(1+(1-u^2)√(1-u^2)/u^2(1-u^2))
·∑_t∈_3^nχ(t(t^2+t-u^4+u^2+1+√(1-u^2)/u^4(1-u^2)/(1+(1-u^2)√(1-u^2)/u^2(1-u^2))^2))
= 1-∑_t∈_3^nχ(t(t^2+t-u^4+u^2+1+√(1-u^2)/u^4(1-u^2)/(1+(1-u^2)√(1-u^2)/u^2(1-u^2))^2))
= 1-∑_t∈_3^nχ(t(t^2+t+-(u^2-1)^3-(√(1-u^2))^3/u^6))
= 1-∑_t∈_3^nχ(t(t^2+t+-u^2+1-√(1-u^2)/u^2))
= 1+∑_t∈_3^nχ(t(t^2-t+-u^2+1-√(1-u^2)/u^2)).
The desired result follows.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z)g_5(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z)g_5(z))=2.
We have
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z)g_5(z))
= -χ(φ(u))∑_z∈_3^nχ(z^2(z-1-u)(z-1+u)(z+1+√(1-u^2))(z+1-√(1-u^2))^2)
= -χ(φ(u))∑_z∈_3^n, z 0, z√(1-u^2)-1χ((z-1-u)(z-1+u)(z+1+√(1-u^2)))
= -χ(φ(u))(-χ((-1-u)(-1+u)(1+√(1-u^2)))-χ((√(1-u^2)+1-u)(√(1-u^2)+1+u)(-√(1-u^2)))
+∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1+√(1-u^2))))
= 2-χ(φ(u))∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1+√(1-u^2))).
Note that
∑_z∈_3^nχ((z-1-u)(z-1+u)(z+1+√(1-u^2)))
= ∑_z∈_3^nχ(z(z+1-√(1-u^2)-u)(z+1-√(1-u^2)+u))
= ∑_z∈_3^nχ(z(z^2+(√(1-u^2)-1)z+(u^2-1+√(1-u^2))))
= χ(√(1-u^2)-1)∑_z∈_3^nχ(z(z^2+z+1-u^2+√(1-u^2)/u^2)).
Then
∑_z∈_3^nχ(g_2(z)g_3(z)g_4(z)g_5(z))=2+∑_z∈_3^nχ(z(z^2+z+1-u^2+√(1-u^2)/u^2)).
Moreover,
∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z)g_5(z))
=χ(φ(u))χ(u+1)∑_z∈_3^nχ(z^3(z-1-u)(z-1+u)(z+1+√(1-u^2))(z+1-√(1-u^2))^2)
=-∑_z∈_3^n,z√(1-u^2)-1χ(z(z-1-u)(z-1+u)(z+1+√(1-u^2)))
=-(-χ((√(1-u^2)-1)(√(1-u^2)+1-u)(√(1-u^2)+1+u)(-√(1-u^2)))
+∑_z∈_3^nχ(z(z-1-u)(z-1+u)(z+1+√(1-u^2))))
=-1-∑_z∈_3^nχ(z(z-1-u)(z-1+u)(z+1+√(1-u^2))).
Note that
∑_z∈_3^nχ(z(z-1-u)(z-1+u)(z+1+√(1-u^2)))
= ∑_t∈_3^n^*χ(1/t(1/t-1-u)(1/t-1+u)(1/t+1+√(1-u^2)))
= ∑_t∈_3^n^*χ((1-(1+u)t)(1-(1-u)t)(1+(1+√(1-u^2))t))
= -1+∑_t∈_3^nχ((1-(1+u)t)(1-(1-u)t)(1+(1+√(1-u^2))t))
= -1+χ(1+u)χ(1-u)χ(1+√(1-u^2))∑_t∈_3^nχ((t-1/1+u)(t-1/1-u)(t+1/1+√(1-u^2)))
= -1+χ(1+√(1-u^2))∑_t∈_3^nχ(t(t-1/1+√(1-u^2)-1/1+u)(t-1/1+√(1-u^2)-1/1-u))
= -1+χ(1+√(1-u^2))∑_t∈_3^nχ(t(t^2+1-(1-u^2)√(1-u^2)/u^2(1-u^2)t+-u^4-u^2-1+√(1-u^2)/u^4(1-u^2)))
= -1+χ(1+√(1-u^2))χ(1-(1-u^2)√(1-u^2)/u^2(1-u^2))
·∑_t∈_3^nχ(t(t^2+t+-u^4-u^2-1+√(1-u^2)/u^4(1-u^2)/(1-(1-u^2)√(1-u^2)/u^2(1-u^2))^2))
= -1+∑_t∈_3^nχ(t(t^2+t+(√(1-u^2)/1-√(1-u^2))^3))
= -1+∑_t∈_3^nχ(t(t^2+t+√(1-u^2)/1-√(1-u^2)))
= -1+∑_t∈_3^nχ(t(t^2+t+1-u^2+√(1-u^2)/u^2)).
Then
∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z)g_5(z))=-∑_t∈_3^nχ(t(t^2+t+1-u^2+√(1-u^2)/u^2)).
The desired result follows.
When u ∈𝒰_0\_3, we have
∑_z∈_3^nχ(g_2(z)g_5(z))+∑_z∈_3^nχ(g_3(z)g_4(z)g_5(z))=χ(√(1-u^2)+1+u),
and
∑_z∈_3^nχ(g_3(z)g_5(z))+∑_z∈_3^nχ(g_2(z)g_4(z)g_5(z))=χ(√(1-u^2)+1-u).
We only prove the first identity, as the proof of the second is very similar.
∑_z∈_3^nχ(g_2(z)g_5(z))=-χ(φ(u))∑_z∈_3^nχ(z(z-1-u)(z+1-√(1-u^2))).
Note that
∑_z∈_3^nχ(z(z-1-u)(z+1-√(1-u^2)))
= ∑_z∈_3^nχ(z(z^2-(u+√(1-u^2))z-(u+1)(1-√(1-u^2))))
= ∑_z∈_3^n^*χ(z^2-(u+√(1-u^2))z-(u+1)(1-√(1-u^2))/z).
Let t=z^2-(u+√(1-u^2))z-(u+1)(1-√(1-u^2))/z. Then
z^2-(t+u+√(1-u^2))z-(u+1)(1-√(1-u^2))=0,
and Δ_t=(t+u+√(1-u^2))^2+(u+1)(1-√(1-u^2))=t^2-(u+√(1-u^2))t+(u-1)(1+√(1-u^2)).
We have
∑_z∈_3^nχ(z(z-1-u)(z+1-√(1-u^2)))
= ∑_t∈_3^nχ(t)(1+χ(Δ_t))
= ∑_t∈_3^nχ(t(t^2-(u+√(1-u^2))t+(u-1)(1+√(1-u^2))))
= -∑_t∈_3^nχ(t(t^2+(u+√(1-u^2))t+(u-1)(1+√(1-u^2)))),
then
∑_z∈_3^nχ(g_2(z)g_5(z))=χ(φ(u))∑_t∈_3^nχ(t(t^2+(u+√(1-u^2))t+(u-1)(1+√(1-u^2)))).
On the other hand,
∑_z∈_3^nχ(g_3(z)g_4(z)g_5(z))
=-χ(φ(u))∑_z∈_3^nχ(z(z-1+u)(z+1+√(1-u^2))(z+1-√(1-u^2))^2)
=-χ(φ(u))∑_z∈_3^n,z≠√(1-u^2)-1χ(z(z-1+u)(z+1+√(1-u^2)))
=-χ(φ(u))(-χ((√(1-u^2)-1)(√(1-u^2)+1+u)(-√(1-u^2)))
+∑_z∈_3^nχ(z(z-1+u)(z+1+√(1-u^2))))
=χ(√(1-u^2)+1+u)-χ(φ(u))∑_z∈_3^nχ(z(z^2+(u+√(1-u^2))z+(u-1)(1+√(1-u^2)))).
Hence, the first identity ensues.
§ ON THE NUMBER OF SOLUTIONS OF THE DIFFERENTIAL EQUATION OF F_U
Let n be a positive odd integer, u∈_3^n. Recall that the Ness-Helleseth function is defined as
f_u(x)=ux^d_1+x^d_2,
where d_1=3^n-1/2-1 and d_2=3^n-2. To determine the differential spectrum of f_u(x), attention should be given to the differential equation
𝔻_af_u(x)=u(x+a)^3^n-1/2-1+(x+a)^3^n-2-ux^3^n-1/2-1-x^3^n-2=b,
where (a,b)∈_3^n^*×_3^n. This equation was studied in <cit.>. For the sake of completeness, we give some details here.
We denote by N(a,b), N_1(a,b) and N_2(a,b) the numbers of solutions of (<ref>) in the sets _3^n, {0,-a} and _3^n\{0,-a} respectively. Then
N(a,b)=N_1(a,b)+N_2(a,b).
The following lemma is given in <cit.>.
<cit.>
The value of N_1(a,b) is determined as follows:
N_1(a,b)=
{[ 2, if b=a^-1 and u=0,; 1, if b=a^-1(1± uχ(a)) and u 0,; 0, otherwise. ].
When b = 0, the value of N_2(a,b) is given as
N_2(a,0)=
{[ 3^n-3/4, if u∈{± 1},; 0, if u ∈𝒰_0\{±1}.; ].
What needs to be calculated is N_2(a,b) for b∈_3^n^*. When x∉{0, -a}, the differential equation is equivalent to
u(x+a)^3^n-1/2x+x-ux^3^n-1/2(x+a)-(x+a)=bx(x+a),
which can be simplified as
bx^2+(ba-u(τ_a-τ_0))x+a(uτ_0+1)=0,
where τ_a=χ(x+a) and τ_0=χ(x).
The discussion of the solutions of the quadratic equation above when b≠0 has been clarified by Helleseth in <cit.> and results are listed in Table <ref>, in which x_1 and x_2 denote the two solutions of the quadratic equations in each case.
Drawing upon the information in Table <ref>, the subsequent pivotal results have been unveiled. Note that the term a desired solution refers to a solution of a certain quadratic equation in any case in Table <ref> that indeed satisfies the corresponding condition on (τ_a,τ_0). In the rest of this paper, we always assume that u ∈𝒰_0 \_3={u∈_3^n∖_3 | χ(u+1)≠χ(u-1)}, then χ(1-u^2)=1 and (a,b) ∈_3^n^* ×_3^n^*.
For the sake of brevity and clarity, for such fixed u and (a,b), we denote by N_I (respectively N_II, N_III and N_IV) the number of desired solutions in Case I (respectively, Case II, Case III and Case IV). Consequently, N_2(a,b)=N_I+N_II+N_III+N_IV. We discuss the values of N_I, N_II, N_III and N_IV as follows. It was proved in <cit.> that N_I≤ 1 and N_IV≤ 1. Moreover, the following proposition was proposed.
(<cit.>) We have,
* N_I=1 if and only if
χ(1-u+1/ab)=1 and χ(u+1/ab)=-1.
* N_IV=1 if and only if
χ(1+u-1/ab)=1 and χ(u+1/ab)=-1.
Since χ(u+1/ab)≠ 0, the following corollary can be deduced immediately.
We have,
* N_I=0 if one of the subsequent three disjoint conditions is met:
* χ(1-u+1/ab)=0.
* χ(1-u+1/ab)=-1.
* χ(1-u+1/ab)=1 and χ(u+1/ab)=1.
* N_IV=0 if one of the subsequent three disjoint conditions is met:
* χ(1+u-1/ab)=0.
* χ(1+u-1/ab)=-1.
* χ(1+u-1/ab)=1 and χ(u+1/ab)=1.
As has been demonstrated in <cit.>, if x is a solution of the quadratic equation in Case II, then -(x+a) is a solution of the quadratic equation in Case III, and vice versa. Besides, x and -(x+a) cannot be desired solutions simultaneously. Therefore, it can be concluded that N_II+N_III≤ 2. More specifically, the following proposition showed the sufficient and necessary condition of N_II+N_III=2.
(<cit.>)We have,
N_II+N_III=2 if and only if
{[ χ(u^2+a^2b^2-ab)=1,; χ(-u^2-ab-ab√(1-u^2))=1.; ].
Next, we specifically consider the case when N_II+N_III=1. We have the following proposition.
We have,
N_II+N_III=1 if and only if
{[ χ(u^2+a^2b^2-ab)=0,; χ(a^2b^2-u^2)=1.; ].
The sufficiency is obvious. We only prove the necessity. When χ(u^2+a^2b^2-ab)=1, the quadratic equation in Case II has two solutions, namely x_1 and x_2. Then the solutions of the quadratic equation in Case III are -x_1-a and -x_2-a. Note that x_1 (x_2, respectively) is a desired solution if and only if -x_2-a (-x_1-a, respectively) is a desired solution. Then N_II+N_III≠ 1 when χ(u^2+a^2b^2-ab)=1. Moreover, when χ(u^2+a^2b^2-ab)=-1, N_II+N_III=0. We conclude that if N_II+N_III=1, then χ(u^2+a^2b^2-ab)=0.
When χ(u^2+a^2b^2-ab)=0, let x_0 be the unique solution of the quadratic equation in Case II, then
x_0=u+ab/b. Moreover, the unique solution of the quadratic equation in Case III is x'_0=-u-ab/b. If x_0 is a desired solution, then χ(x_0)=χ(u+ab/b)=-1 and χ(x_0+a)=χ(u-ab/b)=1. If x'_0 is a desired solution, then χ(x'_0)=χ(-u-ab/b)=1 and χ(x'_0+a)=χ(-u+ab/b)=-1. Obviously, x_0 and x'_0 cannot be desired solutions simultaneously. If N_II+N_III=1, then χ(u+ab/b)χ(u-ab/b)=-1, i.e., χ(u^2-a^2b^2)=-1. The proof is completed.
Note that χ(-u^2-ab-ab√(1-u^2))=0 implies that χ(u^2+a^2b^2-ab)=0. Moreover, χ(u^2+a^2b^2-ab)=0 and χ(a^2b^2-u^2)=0 cannot hold simultaneously for u ∈𝒰_0 \_3. Then we have the following corollary.
N_II+N_III=0 if one of the subsequent three disjoint conditions is met:
* χ(u^2+a^2b^2-ab)=-1.
* χ(u^2+a^2b^2-ab)=0,χ(a^2b^2-u^2)=-1.
* χ(u^2+a^2b^2-ab)=1 and χ(-u^2-ab-ab√(1-u^2))=-1.
When χ(u^2+a^2b^2-ab)=0, then ab=-1±√(1-u^2), which implies that χ(u+1/ab)=-χ((u+1)φ(u))=1. Then we conclude that N_I=N_IV=0 when N_II+N_III=1. Moreover, by propositions and corollaries demonstrated previously in this section, the discussion on the value of N_2(a,b)=N_I+N_II+N_III+N_IV is finished. Recall that N(a,b)=N_1(a,b)+N_2(a,b). By Lemma <ref>, For u∈𝒰_0 \_3, N_1(a,b)=1 or 0. When N_1(a,b)=1, then ab=1± u, the conditions in Proposition <ref> cannot hold, hence N_I=N_IV=0. Moreover, for u∈𝒰_0 \_3, if ab=1± u, then u^2+a^2b^2-ab≠ 0. We conclude that N_II+N_III≠1 when N_1(a,b)=1. We summarize the above discussion in the following Table <ref>.
By Table <ref>, we obtain the following sufficient and necessary conditions about the numbers of solutions of the differential equation (<ref>). We mention that the sufficient and necessary condition for (<ref>) to have 4 solutions was given in <cit.>.
<cit.>
When u ∈𝒰_0 \_3, the differential equation 𝔻_af_u(x)=b has four solutions if and only if (a,b) satisfies the following conditions
{[ χ(u+1/ab)=-1,; χ(1-u+1/ab)=1,; χ(1-u-1/ab)=1,; χ(u^2+a^2b^2-ab)=1,; χ(-u^2-ab-ab√(1-u^2))=1.; ].
When u ∈𝒰_0 \_3, the differential equation 𝔻_af_u(x)=b of the function f_u(x) has three solutions if and only if (a,b) satisfies one of the following conditions
* ab=1± u,χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=1.
* χ(1-u+1/ab)=-1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=1.
When u ∈𝒰_0 \_3, the differential equation 𝔻_af_u(x)=b of the function f_u(x) has two solutions if and only if (a,b) satisfies one of the following conditions
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(1-u+1/ab)=-1,χ(1+u-1/ab)=-1,χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=1.
* χ(1-u+1/ab)=-1,χ(1+u-1/ab)=1,χ(a(u+1)/b)=1,χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=1.
* χ(1-u+1/ab)=1,χ(a(u+1)/b)=1,χ(1+u-1/ab)=-1,χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=1.
* χ(1-u+1/ab)=1,χ(a(u+1)/b)=1,χ(1+u-1/ab)=1,χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=1.
When u ∈𝒰_0 \_3, the differential equation 𝔻_af_u(x)=b of the function f_u(x) has one solution if and only if (a,b) satisfies one of the following conditions
* ab=1± u,χ(u^2+a^2b^2-ab)=-1.
* ab=1± u,χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(u^2+a^2b^2-ab)=0,χ(a^2b^2-u^2)=1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=-1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(1-u+1/ab)=-1, χ(a(u+1)/b)=-1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=-1.
When u ∈𝒰_0 \_3, the differential equation 𝔻_af_u(x)=b of the function f_u(x) has no solution if and only if (a,b) satisfies one of the following conditions
* b=0.
* χ(u^2+a^2b^2-ab)=0,χ(a^2b^2-u^2)=-1.
* χ(1-u+1/ab)=-1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=-1, χ(1+u-1/ab)=1,χ(a(u+1)/b)=1,χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=-1.
* χ(1-u+1/ab)=-1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(1-u+1/ab)=-1, χ(1+u-1/ab)=1,χ(a(u+1)/b)=1,χ(u^2+a^2b^2-ab)=1, χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=1, χ(1+u-1/ab)=-1, χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=-1.
* χ(1-u+1/ab)=1, χ(a(u+1)/b)=1, χ(1+u-1/ab)=1, χ(u^2+a^2b^2-ab)=1,χ(-u^2-ab-ab√(1-u^2))=-1.
§ THE DIFFERENTIAL SPECTRUM OF F_U WHEN Χ(U+1)≠Χ(U-1)
Recall that ω_i=|{(a,b)∈_p^n^*×_p^n|δ_F(a,b)=i}|,0⩽ i⩽Δ_F, where δ_F(a,b) denotes the number of solutions to the differential equation 𝔻_aF=b. We are ready to investigate the differential spectrum of f_u.
As a prerequisite, we define two quadratic character sums, namely Γ_3 and Γ_4, as enumerated below.
Γ_3 =∑_z∈_3^nχ(g_1(z)g_4(z))=-χ(u+1)∑_z∈_3^nχ(z^3-z^2+u^2z).
Γ_4 =∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z))=-χ(u+1)∑_z∈_3^nχ(z^5-(u^2+1)z^2+(u^2-u^4)z).
These two character sums will be used in the differential spectrum of f_u. The main result of this paper is given as follows.
Let n≥3 be an odd integer and f_u(x)=ux^d_1+x^d_2 be the Ness-Helleseth function over _3^n with d_1=3^n-1/2-1 and d_2=3^n-2. Then, when u ∈𝒰_0∖_3, the differential spectrum of f_u is given by
[
ω_0 =(3^n-1)(-1+ε+1/32(5·3^n+1-17-Γ_4)),
ω_1 =(3^n-1)(3-ε+1/16(3^n+1+3+2Γ_3+Γ_4)),
ω_2 =(3^n-1)(-ε+1/4(3^n-7-Γ_3)),
ω_3 =(3^n-1)(ε+1/16(3^n+1+2Γ_3-Γ_4)),
ω_4 =(3^n-1)/32(3^n+1+Γ_4)
],
where
ε =
{[ 1, χ(u)=χ(u+1),χ((u+1)√(1-u^2)+(u-1)^2)=-1,or,; χ(u)=χ(u-1),χ((1-u)√(1-u^2)+(u+1)^2)=-1;; 0, otherwise.; ].
The proof of Theorem <ref> will be divided into five parts, where in each part ω_i (for i∈{0,1,2,3,4}) will be calculated.
* Proof of ω_4. The sufficient and necessary condition for (<ref>) to have 4 solutions was shown in Proposition <ref>. Let ab=z. For each z∈_3^n^*, there are 3^n-1 pairs of (a,b) such that ab=z. Further we have,
ω_4=(3^n-1)n_4,
where n_4 denotes the number of z satisfying the following system.
{[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ].
where g_i(i=1,2,3,4,5) are defined previously. Then by character sum n_4 can be expressed as
n_4=1/32∑_z∈_3^n\ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
By Table <ref>,
∑_z∈ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z)))=0.
By the lemmas in Section II, it follows that
n_4 =1/32∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z)))
=1/32(3^n+1+∑_z∈_3^n(g_1(z)g_2(z)g_3(z)g_4(z))-χ(u+1)χ(φ(u)-1)-χ(φ(u))χ(φ(u)-1))
=1/32(3^n+1+Γ_4).
The last identity holds since χ(u+1)χ(φ(u))=-1 and χ(u+1)+χ(φ(u))=0. Then the value of ω_4 follows.
* Proof of ω_3.
The sufficient and necessary condition for (<ref>) to have 3 solutions was shown in Proposition <ref>. Let ab=z. For each z∈_3^n^*, there are 3^n-1 pairs of (a,b) such that ab=z. Further we have
ω_3=(3^n-1)(n_3,1+n_3,2+n_3,3),
where the definitions of n_3,1 , n_3,2 and n_3,3 will be detailed below.
Let n_3,1 denote the number of z satisfying
{[ z=1± u,; χ(g_4(z))=1,; χ(g_5(z))=1.; ].
Then we get
n_3,1=
{[ 1, χ(u)=χ(u+1),χ((u+1)√(1-u^2)+(u-1)^2)=-1,or,; χ(u)=χ(u-1),χ((1-u)√(1-u^2)+(u+1)^2)=-1;; 0, otherwise.; ].
Let n_3,2, n_3,3 denote the number of z satisfying the following two equation systems respectively:
{[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ]. {[ χ(g_1(z))=1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ].
where g_i(i=1,2,3,4,5) are defined previously. Then by character sum, n_3,1 can be expressed as
32n_3,2=∑_z∈_3^n\ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
By Table <ref>,
∑_z∈ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z)))=0.
It follows that
32n_3,2=∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
Similarly, it can be concluded that
32n_3,3=∑_z∈_3^n(1+χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
By utilizing the lemmas presented in Section II, the following sum can be derived
n_3,1+n_3,2+n_3,3
= ε+1/16[3^n+1+2∑_z∈_3^nχ(g_1(z)g_4(z))-∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z))
-χ(φ(u))χ(φ(u)-1)-χ(u+1)χ(φ(u)-1)]
= ε+1/16[3^n+1+2Γ_3-Γ_4-χ(φ(u))χ(φ(u)-1)-χ(u+1)χ(φ(u)-1)]
= ε+1/16(3^n+1+2Γ_3-Γ_4),
where ε was defined in (<ref>).
* Proof of ω_2.
The sufficient and necessary condition for (<ref>) to have 2 solutions was shown in Proposition <ref>. Let ab=z. For each z∈_3^n^*, there are 3^n-1 pairs of (a,b) such that ab=z. Further we have
ω_2=(3^n-1)(n_2,1+n_2,2+n_2,3+n_2,4+n_2,5+n_2,6),
where n_2,1,n_2,2,n_2,3,n_2,4,n_2,5,n_2,6 denote the number of z satisfying the following six equation systems respectively:
{[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=-1,; ]. {[ χ(g_2(z))=-1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ]. {[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ].
{[ χ(g_1(z))=-1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=1,; ].
where g_i(i=1,2,3,4,5) are defined previously.
Then by character sum, n_21 can be expressed as
16n_2,1=∑_z∈_3^n\ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z))).
By Table <ref>,
∑_z∈ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z)))=0.
It follows that
16n_2,1=∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z))).
Similarly, it can be concluded that
16n_2,2 =∑_z∈_3^n\ {1± u,-1±√(1-u^2)}(1-χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
32n_2,3 =∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z)))-4.
32n_2,4 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
32n_2,5 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
32n_2,6 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1+χ(g_5(z))).
By utilizing the lemmas presented in Section II, the following sum can be derived
n_2,1+n_2,2+n_2,3+n_2,4+n_2,5+n_2,6
= 1/8[2·3^n-14-2∑_z∈_3^nχ(g_1(z)g_4(z))-χ(φ(u))χ(φ(u)-1)+χ(u+1)χ(φ(u)-1)
-2(1+χ(u-u^2))(1-χ((u+1)(√(1-u^2))+(u-1)^2))-2χ(u^2-1-√(1-u^2))
-2(1-χ(u^2+u))(1-χ((1-u)(√(1-u^2))+(u+1)^2))]
= 1/8[2·3^n-14-2Γ_3-χ(φ(u))χ(φ(u)-1)+χ(u+1)χ(φ(u)-1)
-2(1+χ(u-u^2))(1-χ((u+1)(√(1-u^2))+(u-1)^2))-2χ(u^2-1-√(1-u^2))
-2(1-χ(u^2+u))(1-χ((1-u)(√(1-u^2))+(u+1)^2))]
= 1/4(3^n-7-Γ_3+χ(u+1)-(1-χ(u^2+u))(1-χ((1-u)(√(1-u^2))+(u+1)^2))
-(1+χ(u-u^2))(1-χ((u+1)(√(1-u^2))+(u-1)^2))-χ(u^2-1-√(1-u^2)))
= 1/4(3^n-7-4ε-Γ_3+χ(u+1)-χ(u^2-1-√(1-u^2)))
= -ε+1/4(3^n-7-Γ_3),
where ε has been defined in (<ref>). The last identity holds since
χ(u^2-1-√(1-u^2))=χ(√(1-u^2)(-√(1-u^2)-1))=-χ(φ(u)) and χ(u+1)χ(φ(u))=-1.
* Proof of ω_1.
The sufficient and necessary condition for (<ref>) to have 1 solution was shown in Proposition <ref>. Let ab=z. For each z∈_3^n^*, there are 3^n-1 pairs of (a,b) such that ab=z. Further we have
ω_1=(3^n-1)(n_1,1+n_1,2+n_1,3+n_1,4+n_1,5+n_1,6+n_1,7),
where the definitions of n_1,1, n_1,2, n_1,3, n_1,4, n_1,5, n_1,6 and n_1,7 will be detailed below.
Let n_1,1, n_1,2, n_1,3 denote the number of z satisfying the following two equation systems respectively:
{[ z=1± u,; χ(g_4(z))=-1,; ]. {[ z=1± u,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ]. {[ χ(g_4(z))=0,; χ(g_5(z))=1.; ].
Then we get
n_1,1 =
{[ 1, χ(u)=χ(u-1) or χ(u)=χ(u+1);; 0, otherwise.; ].
n_1,2 =
{[ 1, χ(u)=χ(u+1),χ((u+1)√(1-u^2)+(u-1)^2)=1,or; χ(u)=χ(u-1),χ((1-u)√(1-u^2)+(u+1)^2)=1;; 0, otherwise.; ].
n_1,3 =
{[ 1, χ(u^2-1+√(1-u^2))=1 or χ(u^2-1-√(1-u^2))=1;; 0, otherwise.; ].
Note that either of χ(u)=χ(u-1) or χ(u)=χ(u+1) must hold since χ(u-1)≠χ(u+1). Similarly, χ(u^2-1+√(1-u^2))≠χ(u^2-1-√(1-u^2)) since (u^2-1+√(1-u^2))(u^2-1-√(1-u^2))=u^2(u^2-1), which is a nonsquare. It follows that either of χ(u^2-1+√(1-u^2))=1 or χ(u^2-1-√(1-u^2))=1 must hold since χ(u^2-1+√(1-u^2))≠χ(u^2-1-√(1-u^2)) and neither of them could be 0. Then we can conclude that n_1,1=1 and n_1,3=1.
Let n_1,4, n_1,5, n_1,6, n_1,7 denote the number of z satisfying the following four equation systems respectively:
{[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=-1,; ]. {[ χ(g_1(z))=1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=-1,; ]. {[ χ(g_1(z))=1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ]. {[ χ(g_1(z))=1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ].
where g_i(i=1,2,3,4,5) are defined previously.
Then by character sum, n_1,3 can be expressed as
16n_1,4=∑_z∈_3^n\ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1-χ(g_4(z))).
By Table <ref>,
∑_z∈ A(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1-χ(g_4(z)))=0.
It follows that
16n_1,4=∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1-χ(g_4(z))).
Similarly, it can be concluded that
16n_1,5 =∑_z∈_3^n(1+χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z))).
32n_1,6 =∑_z∈_3^n(1+χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z)))-4.
32n_1,7 =∑_z∈_3^n(1+χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z)))-4.
By utilizing the lemmas presented in Section II, the following sum can be derived
n_1,1+n_1,2+n_1,3+n_1,4+n_1,5+n_1,6+n_1,7
= 2+(1-ε)+1/16[3·3^n+3+2∑_z∈_3^nχ(g_1(z)g_4(z))+∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z))
+χ(φ(u))χ(φ(u)-1)+χ(u+1)χ(φ(u)-1)]
= 3-ε+1/16[3·3^n+3+2Γ_3+Γ_4+χ(φ(u))χ(φ(u)-1)+χ(u+1)χ(φ(u)-1)]
= 3-ε+1/16(3^n+1+3+2Γ_3+Γ_4),
where ε has been defined in (<ref>).
* Proof of ω_0.
The sufficient and necessary condition for (<ref>) to have no solution was shown in Proposition <ref>. Let ab=z. For each z∈_3^n^*, there are 3^n-1 pairs of (a,b) such that ab=z. Further we have
ω_0=(3^n-1)(n_0,1+n_0,2+n_0,3+n_0,4+n_0,5+n_0,6+n_0,7+n_0,8+n_0,9+n_0,10),
where n_0,1=1 for the condition z=0 and the definitions of n_0,2, n_0,3, n_0,4, n_0,5, n_0,6, n_0,7, n_0,8, n_0,9 and n_0,10 will be detailed below.
Let n_0,2 denote the number of z satisfying
{[ χ(g_4(z))=0,; χ(z^2-u^2)=-1,; ].
then
n_0,2=
{[ 1, χ(u^2-1+√(1-u^2))=-1 or χ(u^2-1-√(1-u^2))=-1;; 0, otherwise.; ].
Note that χ(u^2-1+√(1-u^2))≠χ(u^2-1-√(1-u^2)) since (u^2-1+√(1-u^2))(u^2-1-√(1-u^2))=u^2(u^2-1), which is a nonsquare. It follows that either of χ(u^2-1+√(1-u^2))=-1 or χ(u^2-1-√(1-u^2))=-1 must hold. Then we can conclude that n_0,2=1.
Let n_0,3, n_0,4, n_0,5, n_0,6, n_0,7, n_0,8, n_0,9, n_0,10 denote the number of z satisfying the following eight equation systems respectively:
{[ χ(g_2(z))=-1,; χ(g_3(z))=-1,; χ(g_4(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=-1,; ].
{[ χ(g_2(z))=-1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=-1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=-1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ]. {[ χ(g_1(z))=-1,; χ(g_2(z))=1,; χ(g_3(z))=1,; χ(g_4(z))=1,; χ(g_5(z))=-1,; ].
where g_i(i=1,2,3,4,5) are defined previously.
Then by character sum, n_0,3 can be expressed as
8n_0,3=∑_z∈_3^n\ A(1-χ(g_2(z)))(1-χ(g_3))(1-χ(g_4(z))).
Similarly, it can be concluded that
16n_0,4 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z))).
16n_0,5 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1-χ(g_4(z))).
16n_0,6 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1-χ(g_4(z))).
16n_0,7 =∑_z∈_3^n\ A(1-χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z))).
32n_0,8 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1-χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z))).
32n_0,9 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1-χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z))).
32n_0,10 =∑_z∈_3^n\ A(1-χ(g_1(z)))(1+χ(g_2(z)))(1+χ(g_3(z)))(1+χ(g_4(z)))(1-χ(g_5(z))).
By utilizing Table <ref> and the lemmas presented in Section II, the following sum can be derived
n_0,1+n_0,2+n_0,3+n_0,4+n_0,5+n_0,6+n_0,7+n_0,8+n_0,9+n_0,10
= 2+1/32[15·3^n-81-∑_z∈_3^nχ(g_1(z)g_2(z)g_3(z)g_4(z))+5χ(φ(u))χ(φ(u)-1)-3χ(u+1)χ(φ-1)
+8χ(u^2-1-√(1-u^2))-8(1+χ(u-u^2))(1+χ((u+1)(√(1-u^2))+(u-1)^2))
-8(1-χ(u^2+u))(1+χ((1-u)(√(1-u^2))+(u+1)^2))]
= 2+1/32[15·3^n-81-Γ_4+5χ(φ(u))χ(φ(u)-1)-3χ(u+1)χ(φ-1)
+8χ(u^2-1-√(1-u^2))-8(1+χ(u-u^2))(1+χ((u+1)(√(1-u^2))+(u-1)^2))
-8(1-χ(u^2+u))(1+χ((1-u)(√(1-u^2))+(u+1)^2))]
= (2+1/32(15·3^n-81-Γ_4-8χ(u+1)-8(1-χ(u^2+u))(1+χ((1-u)(√(1-u^2))+(u+1)^2))
+8χ(u^2-1-√(1-u^2))-8(1+χ(u-u^2))(1+χ((u+1)(√(1-u^2))+(u-1)^2))))
= (2+1/32(5·3^n+1-81-32(1-ε)-Γ_4-8χ(u+1)+8χ(u^2-1-√(1-u^2))))
= -1+ε+1/32(5·3^n+1-17-Γ_4),
where ε has been defined in (<ref>).
This completes the proof of Theorem <ref>.
Recall that the elements ω_i (i=0,1,2,3,4) satisfy two identities in (<ref>). Namely,
{[ ω_0+ω_1+ω_2+ω_3+ω_4 = (3^n-1)3^n,; ω_1+2ω_2+3ω_3+4ω_4 = (3^n-1)3^n. ].
After the values of ω_4, ω_3 and ω_2 are determined, ω_1 and ω_0 can be deduced by solving the above system.
Let p=3, n=3 and u=w^4, where w is a primitive element in _3^n^*. Then u∈𝒰_0\_3, ε=0, Γ_3=-4 and Γ_4=4.
By Theorem <ref>, the differential spectrum of f_u is
𝕊=[ω_0=286, ω_1=208, ω_2=156, ω_3=26, ω_4=26],
which coincides with the result calculated directly by MAGMA.
Let p=3, n=5 and u=w^210, where w is a primitive element in _3^n^*. Then u∈𝒰_0\_3, ε=1,Γ_3=-4 and Γ_4=12.
By Theorem <ref>, the differential spectrum of f_u is
𝕊=[ω_0=27346, ω_1=11616, ω_2=14278, ω_3=3630, ω_4=1936],
which coincides with the result calculated directly by MAGMA.
Let p=3, n=7 and u=w, where w is a primitive element in _3^n^*. Then u∈𝒰_0\_3, ε=1,Γ_3=-28 and Γ_4=-12.
By Theorem <ref>, the differential spectrum of f_u is
𝕊=[ω_0=2240650, ω_1=891888, ω_2=1204486, ω_3=295110, ω_4=148648],
which coincides with the result calculated directly by MAGMA.
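These examples can also be checked by brute force without MAGMA. The sketch below is an illustration only: it realises GF(27) as polynomials over _3 modulo the irreducible polynomial x^3+2x+1 (a different modulus than a computer algebra system may use, so the integer labels of individual elements need not coincide with w^4 above) and tabulates the differential spectrum of f_u for every u∈𝒰_0∖_3; the spectrum of the first example should appear among the printed rows.

# Brute-force check of the differential spectrum over GF(3^3) = GF(27).
# Field elements are integers 0..26 encoding c0 + c1*x + c2*x^2 with ci in F_3;
# multiplication is reduced modulo the irreducible polynomial x^3 + 2x + 1
# (an assumed modulus, so individual u values need not match MAGMA's labels).
from collections import Counter

q = 27

def coeffs(a):                      # integer -> (c0, c1, c2)
    return (a % 3, (a // 3) % 3, (a // 9) % 3)

def add(a, b):
    return sum(((x + y) % 3) * 3 ** i for i, (x, y) in enumerate(zip(coeffs(a), coeffs(b))))

def neg(a):
    return sum(((-x) % 3) * 3 ** i for i, x in enumerate(coeffs(a)))

def mul(a, b):
    ca, cb = coeffs(a), coeffs(b)
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] = (prod[i + j] + ca[i] * cb[j]) % 3
    for d in (4, 3):                # reduce with x^3 = x + 2
        c, prod[d] = prod[d], 0
        prod[d - 2] = (prod[d - 2] + c) % 3
        prod[d - 3] = (prod[d - 3] + 2 * c) % 3
    return prod[0] + 3 * prod[1] + 9 * prod[2]

def fpow(a, e):                     # 0**e = 0 for e > 0, consistent with x^(3^n-2)
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def chi(a):                         # quadratic character of GF(27)
    return 0 if a == 0 else (1 if fpow(a, (q - 1) // 2) == 1 else -1)

d1, d2 = (q - 1) // 2 - 1, q - 2

def f(u, x):                        # Ness-Helleseth function u*x^d1 + x^d2
    return add(mul(u, fpow(x, d1)), fpow(x, d2))

U0 = [u for u in range(q) if u > 2 and chi(add(u, 1)) != chi(add(u, 2))]  # U_0 \ F_3
for u in U0:
    omega = Counter()
    for a in range(1, q):
        counts = Counter(add(f(u, add(x, a)), neg(f(u, x))) for x in range(q))
        for b in range(q):
            omega[counts.get(b, 0)] += 1
    print("u =", u, " spectrum =", [omega.get(i, 0) for i in range(5)])  # differential uniformity <= 4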
§ CONCLUDING REMARKS
In this paper, we conducted an in-depth investigation of the differential properties of the Ness-Helleseth function. For u∈𝒰_0∖_3, we expressed the differential spectrum in terms of quadratic character sums. This completes the work on the differential properties of the Ness-Helleseth function. Besides, we obtained a series of identities of character sums, which may be useful in other areas. It may be interesting to consider applications of the differential spectrum of the Ness-Helleseth function in areas such as sequence design, coding theory and combinatorial design. Moreover, the study of the Ness-Helleseth function can be extended to p>3 <cit.>, <cit.>, and the investigation of the differential spectrum of such functions will be undertaken in our future work.
|
http://arxiv.org/abs/2409.02223v1 | 20240903185031 | Orbital Architectures of Planet-Hosting Binaries III. Testing Mutual Inclinations of Stellar and Planetary Orbits in Triple-Star Systems | [
"Elise L. Evans",
"Trent J. Dupuy",
"Kendall Sullivan",
"Adam L. Kraus",
"Daniel Huber",
"Michael J. Ireland",
"Megan Ansdell",
"Rajika L. Kuruwita",
"Raquel A. Martinez",
"Mackenna L. Wood"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.SR"
] |
§ ABSTRACT
Transiting planets in multiple-star systems, especially high-order multiples, make up a small fraction of the known planet population but provide unique opportunities to study the environments in which planets would have formed. Planet-hosting binaries have been shown to have an abundance of systems in which the stellar orbit aligns with the orbit of the transiting planet, which could give insights into the planet formation process in such systems. We investigate here if this trend of alignment extends to planet-hosting triple-star systems. We present long-term astrometric monitoring of a novel sample of triple-star systems that host transiting planets. We measured orbit arcs in 21 systems, including 12 newly identified triples, from a homogeneous analysis of our Keck adaptive optics data and, for some systems, astrometry. We examine the orbital alignment within the nine most compact systems (≲500 au), testing if either (or both) of the stellar orbits align with the edge-on orbits of their transiting planets. Our statistical sample of triple systems shows a tendency toward alignment, especially when assessing the alignment probability using stellar orbital inclinations computed from full orbital fits, but is formally consistent with isotropic orbits. Two-population tests where half of the stellar orbits are described by a planet-hosting-binary-like moderately aligned distribution give the best match when the other half (non-planet-hosting) has a Kozai-like misaligned distribution. Overall, our results suggest that our sample of triple-star planet-hosting systems are not fully coplanar systems and have at most one plane of alignment.
astrometry – planetary systems – binaries: visual
§ INTRODUCTION
Over 5000 exoplanets have been discovered so far,[https://exoplanetarchive.ipac.caltech.edu/docs/counts_detail.html] and it is becoming clear that they are widespread with a minimum frequency of around one per star for a wide range of stellar masses <cit.>. Multiple star systems are also common, with over half of solar-type stars having at least one stellar companion and younger stars having an even greater multiplicity fraction (e.g., ). Investigating planet-hosting multiple star systems is therefore important to gain a more complete picture of exoplanets and their characteristics in our galaxy.
Due to the observational difficulties that close binaries present, many planet-searching surveys have focused on stars that are either single or where any stellar companions are very widely separated. Recent transit surveys have provided data that is not as biased against the stellar multiplicity of the targets and thus allows planets in multiple star systems to be studied. However, planet properties estimated from transits in systems with multiple stellar components can be inaccurate due to additional stellar flux diluting the transits, especially if the planet actually orbits the secondary (or tertiary) star. Both of these scenarios result in an underestimation of planet radius (e.g., ). Characterising the components of multiple star systems that host transiting planets is therefore vital in understanding the properties of planets within these systems <cit.>.
Accurate planet properties can give an insight into the formation mechanisms of planets within multiple star systems. Theoretically, stellar companions should have a significant influence on the formation pathways of planets. Additional stars have been shown to produce hostile environments for planet formation by dynamically affecting the protoplanetary disks with processes such as truncation or misalignment (e.g., ). Even if the formation of a planet could be achieved, the interaction of the multiple stars in the system can negatively influence the overall stability of the orbital paths of the planet causing unstable states, collisions or ejections <cit.>. This is especially true for triple-star systems. With two stellar orbital planes to consider, dynamical interactions can become more prevalent causing an increase in scattering and destructive collisions <cit.>.
These dynamical barriers to formation are therefore thought to affect the distribution and characteristics of planet-hosting multiple-star systems. Wide binaries with separations of over 1000 au do not seem to impact the occurrence rate of planets and therefore the planet formation process <cit.>. Observational data indicate, however, that this does not apply to close binaries (a<100 au), which show a lack of close stellar companions to transiting planet hosts (e.g., ). This suppression agrees with theoretical models of close-binary formation, which favour the disk fragmentation model <cit.>, with the addition of stellar companions causing the disruption and truncation of the protoplanetary disk <cit.>. Observationally, these protoplanetary disks are not persistent, with approximately two-thirds of close binaries dispersing their disks within ∼ 1 Myr of formation <cit.>.
Despite the formation hurdles, exoplanets have been found in close binary systems (e.g., ) suggesting that there are some pathways to successful planet formation in these hostile environments. In total, over 200 binary systems with exoplanets have been discovered <cit.>, while currently there are only 30 planet-hosting triple and quadruple systems combined <cit.>. As higher-order multiple systems are uncommon it means that while some individual systems have been well studied, population studies have been impossible thus far.
Recent transit surveys using space-based telescopes, including the Kepler mission <cit.> and the Transiting Exoplanet Survey Satellite (TESS; ), have been the main contributors to the profusion of planet-hosting visual binaries. This has provided the opportunity to begin studying the orbital architectures of such systems. Following the work of <cit.>, the orbits of planets in binaries can be split into two categories: S-type orbits, where the planet orbits just one of the stars in the binary, and P-type orbits, where the planet orbits both stars on a circumbinary path. The transiting planets within this work are exclusively planets with S-type orbits, orbiting one host star.
One property that can be tested observationally is the alignment of the stellar orbital plane and the plane of the planet, as the transiting planets have the distinctive characteristic that their orbits are nearly edge-on and therefore have orbital inclinations of close to 90. <cit.> observed 45 binary systems that host Kepler planets and from the orbital motions of the stellar companions concluded that there was an overabundance of mutually aligned systems, ruling out randomly orientated orbits at 4.7σ. <cit.> performed a similar study using both planet-hosting wide binaries and a field control sample of wide binaries, both from Gaia. Using a control sample allowed them confidence that any features discovered were astrophysical and not a result of selection effects. By deriving limits on the inclinations of both samples they concluded that there was again a surplus of aligned systems in the planet-hosting subset, with a probability of 0.0037 of both samples being drawn from the same underlying distribution. Studies using other sources of transiting planets around visual binaries such as TESS candidates or K2 candidates have found similar results that point to planet-binary orbital alignment <cit.>. This orbit-orbit alignment can also be investigated as a joint distribution with spin-orbit alignment. For example, <cit.> had a sample of 40 planet-hosting binaries and found eight systems that each exhibit evidence of joint spin-orbit and orbit-orbit alignment. One triple star system within their sample, V1298 Tau, hosts a spin-orbit aligned planet as well as exhibiting orbit-orbit alignment between the primary and secondary but not the tertiary. They also found a trend in the stellar binary inclinations that strongly peaked toward alignment rather than an isotropic distribution.
Triple star systems have an additional orbital plane to consider due to the third star in the system, which means that not only the alignment of the planet's orbit can be tested but also the alignment of the stellar companions. Observational evidence implies that there is a tendency for triple star systems to have mutually aligned stellar orbits <cit.>. <cit.> investigated the orbital alignment of 54 hierarchical field triples with visual orbits by calculating the mutual inclination between the orbit of the inner binary and the orbit of the outer companion relative to the barycentre of the binary. They concluded that there was a strong tendency for compact systems (<50 au) to be coplanar, especially for low-mass primaries (M < 1 M_⊙), where the average mutual inclination angle was 18°. However, for systems where the outer companion was separated by more than 1000 au, isotropic orientations were found. <cit.> had also previously studied the stellar alignment of 62 Kepler triple systems containing an eclipsing binary. They found 47% of these systems to be coplanar, resulting in a distribution of mutual inclination angles with a large peak at <10°, although the distribution was bimodal with a secondary peak around 40° which they attributed to Kozai-Lidov cycles.
Individual planet-hosting triples have been observed and studied, but the majority of known systems are oriented such that alignment tests are not possible. For example, the nearest star system to the Sun is a planet-hosting hierarchical triple containing the binary α Cen AB and its outer companion Proxima Centauri <cit.>. This system contains one confirmed planet in the habitable zone of its host star, Proxima b <cit.>, and two candidate planets, Proxima c <cit.> and Proxima d <cit.>. As all three planets were discovered using radial velocities, very little is known about their inclinations. Transiting planets in triple-star systems provide the unique opportunity to study the orbital alignment of both the stellar planes and the planetary planes. The M dwarf triple star system LTT 1445 is an example of such a system. The primary, LTT 1445 A, hosts two transiting planets and one non-transiting planet <cit.> as well as a binary pair, LTT 1445 BC, at a separation of ∼ 7 arcsec <cit.>. <cit.> used a combination of RVs, proper motion anomalies and astrometric measurements of the three stellar components to fit the orbit of both the BC binary around the host and the orbit of C around B. They obtained a mutual inclination between these two orbits of 2.88 ± 0.63° and therefore concluded that LTT 1445 ABC is a coplanar system.
In this work, we present 12 years of Keck AO and non-redundant aperture masking (NRM) astrometric monitoring of a sample of triple systems, both compact systems and those with wider companions identified with Gaia astrometry. The main sample consists of nine compact triple systems including Kepler-13 and Kepler-444 which both contain an unresolved companion. This sample also includes previously identified triple systems KOI-0005, KOI-0652, KOI-2032 and KOI-3497 <cit.>, KOI-2626 <cit.>, as well as two newly identified triple systems KOI-0854 and KOI-3444. We derive individual stellar parameters for the 7 fully resolved triple systems and reassess the false positive probability of both the candidate and confirmed transiting planets hosted by these systems. We measure precise orbit arcs which allowed us to fit full orbits to both the inner binary's orbit and outer stellar companion's orbit relative to the barycentre of the binary for the majority of the triples in the sample. We use two different methods to constrain the alignment of both the stellar orbits to the edge-on planetary orbits, one using the partial orbital arcs and one using full orbital analysis. We find that both methods cannot rule out underlying isotropic orbits at a statistical level. While we find that the alignment in the triple systems are not consistent with the low mutual inclination trends seen in previous binary samples, there is some tentative evidence using both methods of some broad alignment, more than what would be expected for random orbits.
§ OBSERVATIONS
§.§ Sample Selection
As of 2023-12-06, Kepler has identified 2741 confirmed planets and 1984 candidate planets, totalling 4725 planets around 2957 host stars.[https://exoplanetarchive.ipac.caltech.edu]. As part of our ongoing survey of planet-hosting binaries using the Keck-II telescope and its facility adaptive-optics imager NIRC2, we have observed in total 977 KOI systems. These have been prioritised from the complete list of KOI host stars, focusing on systems that are not false positives, with RUWE > 1.2 and distance < 1.2 kpc. The survey is described in further detail in <cit.> and Kraus et al. (in prep.).
Observations were taken using the smallest pixel scale camera using the laser guide star (LGS) AO system <cit.>. We used the broadband K' filter (2.12 μm, FWHM=0.35 μm) for the majority of the imaging and the narrowband K_ cont filter (2.17 μm, FWHM = 0.03 μm) for bright stars that would saturate in K'. For the systems with the tightest separations, we acquired both AO imaging and NRM interferograms using the 9-hole aperture mask installed in one of the filter wheels on NIRC2.
From these observations, we initially removed systems where the closest companion was separated by more than 1000 au. This narrowed down the sample to 580 systems that all contain a candidate close stellar companion. From there the aim was to identify systems with a second stellar companion, either from the observations or from additional methods.
For the visual triples, we identified systems with a second stellar companion in the observations with a magnitude difference of less than 6 mag; this cut ensures that likely background stars are not included in the sample. 15 visual triple systems were identified using this method. While 9 of these triples have been previously published, 6 are newly identified here as candidate triples. Of these 6, 4 have the outer companion previously identified, and for the remaining 2 systems we present both the inner and outer companions as newly identified members of the candidate triples.
We also compared the list of candidate binary systems to the wide-binary catalogue compiled by <cit.>, again applying the 6 mag difference cut for the inner companion. The wide-binary catalogue uses parallax and proper motion measurements from Gaia eDR3 to identify candidate companions that are likely to be physically associated. This method revealed 10 candidate triple systems with a wide outer companion. We also performed our own independent search for wide companions to the binary candidates to ensure we include all possible candidates. We queried Gaia DR3 within 2 arcminutes of each KOI binary candidate to identify wide companions that have a similar parallax and proper motion to the primary. Such similarity is a likely indicator that the wide companion is physically associated. We identified 4 further candidate systems this way. One of these (KOI-1615) appears to have two wide, bound companions in Gaia DR3 which, along with our observations of a close stellar companion to the primary, would make it a quadruple system. We retain the system in our sample for completeness, but the analysis of quadruple and higher-order multiples is beyond the scope of this paper.
Finally, we also searched the literature for any known unresolved companions to KOIs in our AO imaging sample, which resulted in adding KOI-0013 and KOI-3158 to our sample.
KOI-0013 has historically been known as the proper-motion binary BD+46 2629 AB <cit.>, consisting of two A-type stars, Kepler-13 A and Kepler-13 B, with a separation of approximately 1 arcsec <cit.>. A third low-mass stellar component was discovered orbiting Kepler-13 B on an eccentric orbit using radial velocities <cit.>. The planet identified with Kepler transits <cit.> was later shown to be a highly irradiated gas giant orbiting the primary star <cit.>.
KOI-3158 consists of a K0 dwarf (Kepler-444 A) and a tight M-type spectroscopic binary separated by ∼ 0.3 au from each other and about 66 au from the primary <cit.>. Five transiting planetary candidates orbit Kepler-444 A in a compact system, with separations up to 0.08 au and sub-Earth radii of 0.4–0.7 R_⊕, discovered using Kepler light-curve data <cit.>. <cit.> constrained the orbit of Kepler-444 BC relative to A using a combination of adaptive optics (AO) imaging and radial velocities and found an eccentric, edge-on orbit that, from dynamical considerations, has a high probability of being aligned with the planetary orbit. Using additional AO imaging, as well as RV measurements and Gaia astrometry of the primary, <cit.> further constrained the outer orbit and derived a consistent result, with a minimum misalignment of 1.6–4.6°.
In total, we have identified 31 candidate triple and quadruple systems.
§.§ Confirming physical association
In order to quantitatively determine the likelihood of spatially resolved companions being bound together, we calculate the probability that each is co-moving with the primary, using methods established by <cit.> and rooted in the similar concept of open-cluster membership probabilities (e.g., ; ). Our implementation will be further described by Kraus et al. (in prep.). This probability is based on the relative linear motion, relative separation, and difference in magnitude (potentially in multiple filters) of each star.
We create a model for the field star population by querying Gaia for all sources within ρ < 1, including their relative proper motions and parallaxes, and then computing their stellar parameters (M and T_eff). Depending on what Gaia measurements are available, in order of preference, we compute the parameters of these stars from the absolute M_G magnitude (as computed from the parallax and apparent G magnitude), the B_p-R_p colour, the G-R_p colour, or as a last resort by assuming a temperature of T_ eff = 4500 K as is typical for faint Gaia sources that do not have colours. In all cases, we interpolate the mass-T_ eff-color-magnitude relations of <cit.> [Retrieved on 2024 May 17 from <http://www.pas.rochester.edu/ emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt>] to determine the stellar properties of each field star. We then use this set of unrelated field stars to forward-model a population of field interlopers with values of projected separation, relative brightness in each filter where a contrast is available, relative proper motion, and relative parallax. We then also create a corresponding model for the population of all binary companions that is based on the demographics of <cit.>, again computing their projected separations, relative proper motions, relative parallaxes, and contrasts in the filters where observations are available.
Finally, we use KDEs to smooth the empirical field-star population and the synthetic binary population and produce continuous probability density functions, and use the relative densities of the field and binary populations at the phase-space location of each candidate companion to estimate its probability of being drawn from either the binary posterior or the field interloper posterior. The kernel widths were chosen to be much larger than the typical distance between adjacent simulated binaries, but not larger than the typical extent of the population: 0.2 dex in logρ, the candidate's observational uncertainty plus 0.2 magnitudes for each contrast, the quadratic sum of the observational uncertainty and predicted orbital motion for the relative proper motion, and the observational uncertainty for the relative parallax. Of the 31 candidate triples in our sample, 23 of them had probabilities of both components being bound of > 99%. KOI-4759 had a probability of both companions being bound of 78%. This system is retained in the sample as a candidate requiring follow-up observation. Two wide systems identified by <cit.> (KOI-4407 and KOI-5943) had probabilities < 0.001% of the inner component from AO imaging being bound, and one candidate system identified from <cit.> (KOI-2813) had both inner and outer components likely to not be bound with probabilities of < 0.001%. Finally, for three systems (KOI-0387, KOI-2059, KOI-2733) the inner pair was likely to be bound (> 99%), but the outer pair was not (< 0.001%), meaning that they can be retained for future work on binary systems, but we exclude them here.
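For concreteness, a minimal sketch of this relative-density step is shown below (in Python, with made-up phase-space samples and scipy's default KDE bandwidth standing in for the kernel widths quoted above); it illustrates the approach rather than reproducing our production code.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative phase-space samples: columns are (log10 separation [au],
# Delta K [mag], relative proper motion [mas/yr]) for each population.
rng = np.random.default_rng(0)
binary_pop = rng.normal([2.0, 3.0, 5.0], [0.6, 1.5, 3.0], size=(5000, 3))
field_pop = rng.normal([3.5, 6.0, 20.0], [0.8, 2.0, 10.0], size=(2000, 3))

# Smooth each population with a KDE (gaussian_kde expects shape (ndim, nsamples)).
kde_binary = gaussian_kde(binary_pop.T)
kde_field = gaussian_kde(field_pop.T)

# Candidate companion measured at this phase-space location.
candidate = np.array([1.8, 2.5, 4.0])

# The relative densities of the two populations at the candidate's location give
# the probability of it being a bound companion rather than a field interloper
# (equal prior weights assumed here for illustration).
dens_b = kde_binary(candidate.reshape(3, 1))[0]
dens_f = kde_field(candidate.reshape(3, 1))[0]
p_bound = dens_b / (dens_b + dens_f)
print(f"P(bound) = {p_bound:.3f}")
```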
Three of the triple candidates (KOI-4528, KOI-4759, and KOI-5930) only have one epoch of observations, so they cannot yet be proper-motion confirmed, and we note them here as candidate triple systems. This leaves a total of 21 confirmed triple systems in our sample, of which 9 have been previously identified in the literature and 12 are presented here as newly recognised planet-hosting triple star systems. Table <ref> provides a summary of the sample of triples described here including the probability of each companion being a background field object.
Close stellar companions (ρ<1000 au) have been shown to impact the formation and evolution of planets (e.g., ) while wider companions have little effect on planets orbiting the host star (e.g., ). For this reason we choose to focus on the closely separated systems with outer separations of ρ<1000 au. Our sample of triples contains only systems with an outer companion separation of either ρ>1200 au or ρ<600 au with no systems residing between these two limits. This means that the effective cutoff separation for compact systems in our sample is 600 au. There are 9 compact triple systems that meet the criteria of having both their stellar companions within 600 au of the primary. In 7 of the 9 systems, the primary is an individual star orbited by a close binary (A-BC) and the remaining 2 systems are close binaries with a wide tertiary companion (AB-C).
§.§ Astrometry from AO imaging
The AO imaging observations we use here were taken over 55 nights spanning from 2012 Jul 6 UT to 2023 Jun 9 UT. We used the same reduction pipeline as described in <cit.> to produce calibrated images, using techniques such as flat-fielding and dark subtraction, but we performed our own astrometric analysis. This analysis follows the methods described in <cit.>, adapted for use on triple star systems. Briefly, for the majority of our triple systems, we fitted an empirical template PSF that was computed from the image itself using StarFinder <cit.>. This PSF was fitted to each component to derive (x,y) NIRC2 positions for each star, iterating and updating both the PSF and the stellar parameters until a stable solution was reached. For the tightest systems, where the PSFs are not sufficiently separated for Starfinder to compute a PSF, we instead fitted an analytic PSF to each component, similar to our previous work <cit.>. This PSF was the sum of three concentric 2D Gaussians, each with different free parameters for FWHM, ellipticity, orientation, and amplitude, which we determined simultaneously with the binary parameters.
From the pixel coordinates, we computed angular separations and position angles (PAs) to measure the relative astrometry. To do this, the pixel scale as well as the orientation of NIRC2 and the nonlinear distortion must be accounted for. For data taken prior to 2015 Apr 13, when the AO system was realigned, we used the astrometric calibration of 9.952 ± 0.002 mas/pix <cit.>, and we used <cit.> for data collected afterwards. These calibrations provide uncertainty for the linear terms on the pixel scale (fractional error of 4×10^-4) and orientation (0.02). We measured the astrometry for individual images on a given night and then computed the mean to provide the relative astrometry for each epoch. The uncertainty in the results is a quadrature combination of the rms of the astrometry for the individual images and the calibration uncertainty for the separation and position angle. The uncertainty in the magnitude difference (Δ m) is the rms of the individual measurements. Table <ref> reports the complete set of binary parameters measured from both our AO images and NRM data.
There is also an uncertainty on the nonlinear distortion term of the calibration, but we neglect this error in this analysis. Distortion is expected to be correlated at small pixel scales (∼10 pixels) for our tight inner binaries. For the outer companions, the distortion uncertainty would be more significant (up to 1.5 mas), however, these companions have larger errors due to the linear calibration term uncertainties, which dominate for wider binaries.
The NRM data was reduced following the technique of <cit.>. Briefly, the frames are Fourier-transformed and the squared visibility and closure phase is extracted for each baseline before being calibrated against the instrumental squared visibilities and closure phases, estimated from calibrator stars observed in the same night. These closure phases can then be used to fit for a binary solution, by searching a grid to find the minimum χ^2 for the separation and position angle assuming the star is a binary. The uncertainties in the fit are then increased so that the resulting reduced χ^2 is equal to 1.
§.§ Observations with the Hobby-Eberly Telescope
For 6 of the 9 close triples, we obtained moderate-resolution, red-optical spectra using the red arm of the second-generation Low-Resolution Spectrograph (LRS2-R; ) at the Hobby-Eberly Telescope (HET) at McDonald Observatory as part of a program to spectroscopically survey planet-hosting multiple stars. Because the HET is queue scheduled, not all of our targets were observed, but a majority were. LRS2 is a moderate-resolution (R∼1700) continuously-tiled integral field spectrograph (IFS) with two settings, each with two channels. LRS2-R is the redder arm of the instrument and covers the red and far-red channels, which cover 6500 < λ < 8470 Å and 8230 < λ < 10500 Å, respectively. Although our observations included both channels simultaneously, the far-red channel had severe telluric contamination and low S/N, so we restricted our analysis to only the red channel.
Our observing strategy and the instrument details are described in <cit.>. To briefly summarize, the observations were typically taken during grey or bright times, with an upper limit on the seeing of ∼ 2.5 arcsec. These conditions were acceptable because our systems were unresolved and we only required a single composite spectrum. Our exposure times were either 300 s or a time sufficient to achieve an S/N > 100 for the primary star. After data reduction, the source was extracted using an aperture clipped at 2.5 times the seeing, calculated in the wavelength frame with the highest S/N.
§ REVISED STELLAR PARAMETERS
For the full orbital analysis of the triples in our sample as well as to locate the barycentre of the binary component in each system, accurate stellar masses are required. To obtain constraints on the component masses for our 7 fully-resolved triples, we retrieved the individual stellar parameters for each component using the method presented in <cit.> and modified for HET data in <cit.> and <cit.> but adjusted slightly to account for the third star in the system. We summarize the method here for completeness, with an emphasis on changes in methods between <cit.> and this work.
Briefly, we assembled data including spectra from HET/LRS2-R (when available), unresolved r'i'z'JHK_s broadband photometry from the Kepler Input Catalog (KIC; ) and the 2-Micron All-Sky Survey (2MASS; ), and high-resolution adaptive optics imaging from NIRC2 on Keck. The majority of our triples had a contrast in a single photometric band from our AO imaging, but one system (KOI-2626) had optical speckle imaging reported in <cit.> with both components of the triple resolved. When analyzing KOI-2626, we included the speckle measurements in our fit along with the NIR AO contrasts. We fit the data set with a three-component spectral model using the BT-Settl stellar atmosphere models <cit.> with the <cit.> linelist.
To perform the fitting we used a custom-modified Gibbs algorithm, then used <cit.> to assess the statistical spread of the retrieved values. We retrieved a set of 8 stellar parameters: the individual stellar component T_eff and radius values, the best-fit distance to the system, and the extinction. We placed a prior on the stellar radii using radii derived from the MESA Isochrones and Stellar Tracks (MIST) evolutionary models <cit.> at an age of 1 Gyr.
When available, we placed a Gaussian prior on the parallax using the Gaia DR3 <cit.> parallax. We set a Gaussian prior on the extinction using the mean and standard deviation of the predicted extinction at the appropriate distance and location using the 3D Bayesian dust map <cit.> implemented in the dustmaps package[<https://dustmaps.readthedocs.io/en/latest/index.html>]. With the best-fit T_eff, we used a MIST isochrone at an age of 1 Gyr to infer a mass for each star in every system. Table <ref> summarises the retrieved stellar parameters for the components of each system. Table <ref> also summarizes the planet radius correction factor for each star in the system, which is the multiplicative factor by which to correct the reported Kepler planetary radii as defined in <cit.> and <cit.>. Table <ref> summarises the masses derived for each component in the 7 visual triples. The masses for the components in the unresolved triples, KOI-0013 and KOI-3158, have been taken from the literature.
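A minimal sketch of the isochrone step is shown below; it uses a made-up fragment of a 1 Gyr isochrone and simple numpy interpolation in place of the full MIST grids, and the helper name and values are illustrative only.

```python
import numpy as np

# Placeholder fragment of a 1 Gyr, solar-metallicity isochrone: Teff [K] vs mass [Msun].
# (The actual analysis interpolates the full MIST grids.)
iso_teff = np.array([3300.0, 3700.0, 4300.0, 5000.0, 5600.0, 6000.0, 6400.0])
iso_mass = np.array([0.25, 0.45, 0.68, 0.84, 0.97, 1.05, 1.20])

def mass_from_teff(teff, teff_err, n_draws=10_000, rng=np.random.default_rng(7)):
    """Propagate a Gaussian Teff estimate through the isochrone to a mass estimate."""
    draws = rng.normal(teff, teff_err, n_draws)
    masses = np.interp(draws, iso_teff, iso_mass)  # iso_teff must be monotonically increasing
    return masses.mean(), masses.std()

m, m_err = mass_from_teff(5150.0, 80.0)
print(f"M = {m:.3f} +/- {m_err:.3f} Msun")
```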
§ PLANETARY PARAMETERS
§.§ False positive analysis
Multiple star systems cause complications when determining planet parameters from transits due to the extra flux provided by the stellar companions. This means that multiple-star systems are likely to harbour false positives (FPs). Erroneous classifications of FPs are also expected due to centroid offsets as transiting planets around stellar companions may cause such offsets. In our sample, there are 16 candidate or confirmed planets in 9 systems. KOI-0005 is the only system to have an FP (KOI-0005.02), but we retained it in the sample due to the candidate planet KOI-0005.01. None of the planets in the sample have a centroid-offset flag, which if present could indicate that the planet does not orbit the primary. KOI-0013 and KOI-3158, the only two triple systems that have an unresolved stellar companion, have both been definitively shown to have their planets orbiting the primary star <cit.>. They are therefore not included in our false positive re-analysis. For the remaining seven systems, we reassess their false positive probabilities for each stellar component to confirm their status as validated planets and potentially identify which star they orbit.
For the remaining 10 planets around 7 host systems, we use the mass and radius of each stellar component as described above to calculate stellar mean density distributions. Using the measured transit durations, we then also calculated the expected stellar density for each component assuming the planet was around each one. We excluded grazing transits by assuming a uniform impact parameter from 0 to R_p/R_⋆, assumed a Rayleigh distribution for the eccentricities with a mean of 0.05 <cit.>, and accounted for dilution from the additional stellar flux by correcting the factor of (R_p/R_s)^3/2 from the assumed R_p and R_s to the calculated one above <cit.>. Comparing these two stellar density distributions for each component gives an indication of which star the planet could orbit, following the method in <cit.> which calculates a probability from Monte-Carlo simulations. The results of these probabilities are shown in Table <ref>. We found that out of the 10 planets, 4 were consistent with orbiting any star within their triple system with a probability of >0.1% (KOI-2032.01, KOI-2626.01, KOI-3444.03, and KOI-3444.04). KOI-2626.01 has the highest probability of being around the tertiary, and the remaining three all favoured the primary, with the two planets around KOI-3444 having a high probability of being hosted by the primary (>86%). KOI-0005.01, KOI-0652.01, KOI-0854.01, KOI-3444.01 and KOI-3444.02 were consistent with orbiting the primary or secondary but not consistent with orbiting the tertiary. KOI-0854.01 is the only one of these 5 systems to favor the secondary. Finally, KOI-3497.01 was the only planet with a high probability of > 99.99% of orbiting the primary and was inconsistent with orbiting the secondary or tertiary. None of the 10 planets show evidence of being FPs as each one had at least one acceptable match between the stellar density distributions for the primary, secondary or tertiary. The four planets around KOI-3444 as well as KOI-0005.01 are the only candidate planets in our sample. As their host star probabilities were all consistent with their candidate planet status we retain them in our sample as planetary candidates. KOI-0005 has an additional observation from TESS showing a consistent planetary detection for KOI-0005.01.
§.§ Revised Planet Parameters
Table <ref> presents revised planetary radii, equilibrium temperatures, and instellation fluxes for the planets in the sample of triples. Values for whether the primary, secondary, or tertiary star is the host are all presented for completeness. Almost all of the systems undergo significant revisions to their radii, equilibrium temperatures, and instellation fluxes based on their original parameters regardless of which star in the triple is assumed to be the planet host. Based on the measured radii, our planetary sample contains four rocky planets (0.5–1 R_⊕), one super-Earth (1.0–1.75 R_⊕), two sub-Neptunes (1.75–3.5 R_⊕), two sub-Jovians (3.5–6 R_⊕) and one Jovian (6–14.3 R_⊕). With our revised planetary radii, one of the rocky planets (KOI-3497.01) is a sub-Neptune regardless of which star is the host, and the super-Earth (KOI-2626.01) is either a sub-Neptune if hosted by the primary or a sub-Jovian if hosted by the secondary or tertiary star. KOI-0854.01 only had a change of classification from sub-Neptune to sub-Jovian if the tertiary was assumed to be the host star, and KOI-3444.02 was revised from a sub-Jovian to a Jovian if hosted by the secondary star. While the remaining six planets had revisions to their radii, these corrections did not result in a classification change regardless of which star was the host.
§ MEASURING ORBITAL ARCS
For small regions of an orbit, the motion of a stellar companion relative to either the primary star or the barycenter of the inner binary is expected to appear to be linear. The observed astrometric motion of our companions can therefore be approximated as such, given that the separations in Table <ref> imply an average orbital period of 1700 years.
For inner orbits, we measured the astrometry for the fainter star relative to the brighter one. Linear models were fitted to the separations and PAs as a function of time in order to measure the instantaneous orbital motion at the mean epoch. We subtracted the mean epoch from each observational epoch so that the zeroth order coefficient in each fit provides a measurement for the separation and PA at this mean epoch (ρ_0,θ_0). The first-order coefficients would therefore be equivalent to the linear motion per year (ρ̇, θ̇). We convert the angular linear motion from degrees per year to mas per year by using the separation at the mean epoch (ρ_0) so that both first-order coefficients are in units of mas/yr. The values and errors for these coefficients have been calculated using the python package numpy.polyfit. For systems that had more than two epochs, we calculated the χ^2 value for both the separation and PA linear fits and the probability of achieving this χ^2 assuming the orbital motion was linear. Values of p(χ^2) < 0.05 could indicate that a linear model is not a good fit to the data. We iterated this process starting with two epochs and adding an additional one until the probability was less than 0.05 or all epochs had been added. Only two systems showed evidence of non-linear motion: KOI-0005 and KOI-0652. KOI-0005 AB has a time baseline of approximately 18% of the estimated orbital period, so our linear fit analysis uses the first 5 epochs out of a possible 7 (≈16% of its orbit). KOI-0652 BC has a total time baseline of approximately 10% of the estimated orbital period, so our linear fit analysis uses the first 4 epochs out of a possible 6 (≈7% of its orbit).
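A minimal sketch of this per-system linear fit is shown below; the epochs, separations, PAs, and the helper name are illustrative placeholders (the real measurements are in Table <ref>).

```python
import numpy as np
from scipy.stats import chi2

# Illustrative astrometry: decimal-year epochs, separation [mas], PA [deg].
t = np.array([2013.5, 2015.6, 2017.4, 2019.5, 2021.6])
rho = np.array([512.1, 510.3, 508.9, 507.2, 505.6])
rho_err = np.array([0.5, 0.4, 0.5, 0.4, 0.5])
pa = np.array([143.20, 143.31, 143.44, 143.55, 143.68])
pa_err = np.array([0.03, 0.03, 0.03, 0.03, 0.03])

t0 = t.mean()  # subtract the mean epoch so the intercepts are (rho_0, theta_0)

def linear_fit(x, y, yerr):
    """First-order polynomial fit; returns (slope, intercept), covariance, p(chi2)."""
    coeffs, cov = np.polyfit(x, y, 1, w=1.0 / yerr, cov=True)
    model = np.polyval(coeffs, x)
    chisq = np.sum(((y - model) / yerr) ** 2)
    dof = len(x) - 2
    return coeffs, cov, chi2.sf(chisq, dof)

(rho_dot, rho_0), _, p_rho = linear_fit(t - t0, rho, rho_err)
(pa_dot, pa_0), _, p_pa = linear_fit(t - t0, pa, pa_err)

# Convert the angular motion from deg/yr to mas/yr using the separation at the mean epoch.
theta_dot = np.deg2rad(pa_dot) * rho_0

print(f"rho_dot = {rho_dot:.2f} mas/yr, theta_dot = {theta_dot:.2f} mas/yr")
print(f"p(chi2): separation {p_rho:.2f}, PA {p_pa:.2f}")  # p < 0.05 hints at non-linear motion
```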
For outer orbits, the astrometry in Table <ref> is measured for the outer single star relative to the brightest star in the inner binary. In order to fit for astrophysically meaningful linear motion, this astrometry needs to be determined relative to the barycenter on the inner binary. The relative position of the inner binary is typically measured ∼10× more precisely than the outer companion and therefore the errors on the inner astrometry can be approximated as negligible. Assuming this, the position of the outer companion relative to the barycenter of the inner binary can be written as:
Δα^*_3-1 + [M_2 / (M_1 + M_2) ×Δα ^*_1-2] = Δα^*_3-12 + μ_α^*, 3-12× t
Δδ_3-1 + [M_2 / (M_1 + M_2) ×Δδ_1-2] = Δδ_3-12 + μ_δ, 3-12× t ,
where subscripts 1 and 2 represent the brighter and fainter star of the inner binary, subscript 12 indicates their barycentre, and subscript 3 is for the outer companion <cit.>. Δα^* is the relative right ascension equal to Δαcosδ, Δδ is the relative declination, μ is the linear motion at each epoch t, and M are the stellar masses from Table <ref>. The astrometry on the left-hand side can be found in Table <ref>. We performed 10^4 Monte-Carlo trials by randomly selecting the astrometry measurements from a Gaussian distribution with the same mean and standard deviation as the observations and randomly selecting stellar masses from the posterior distribution. The mean and standard deviation of these trials for each observational date were then used as the relative astrometry measurements within the linear motion calculations described above for the inner binaries. Table <ref> shows the results of the linear motion for both the inner binary and the outer companion relative to the barycentre of the inner binary. As the separations for the outer companions are in general much larger than the inner binary separations, the motion recorded over the same time baseline is often much smaller, and as such, none of these outer companions show evidence of non-linear motion.
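A minimal sketch of the barycentric shift and its Monte Carlo error propagation is shown below; the single-epoch astrometry and masses are illustrative values, and Gaussian mass draws stand in for the mass posteriors used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Illustrative single-epoch astrometry (arcsec) with 1-sigma errors.
# Subscripts follow the equations above: 1-2 is the brighter inner star relative
# to the fainter one, 3-1 is the outer companion relative to the brighter inner star.
dra_12, ddec_12, err_12 = -0.120, 0.085, 0.001
dra_31, ddec_31, err_31 = -1.450, 2.310, 0.004

# Component masses of the inner binary (solar masses) with uncertainties.
m1, m1_err = 0.95, 0.04
m2, m2_err = 0.60, 0.03

# Monte Carlo trials: draw astrometry and masses, apply the barycentric shift.
m1_s = rng.normal(m1, m1_err, n_trials)
m2_s = rng.normal(m2, m2_err, n_trials)
frac = m2_s / (m1_s + m2_s)

dra_bary = rng.normal(dra_31, err_31, n_trials) + frac * rng.normal(dra_12, err_12, n_trials)
ddec_bary = rng.normal(ddec_31, err_31, n_trials) + frac * rng.normal(ddec_12, err_12, n_trials)

print(f"Delta alpha* (3-12) = {dra_bary.mean():.4f} +/- {dra_bary.std():.4f} arcsec")
print(f"Delta delta  (3-12) = {ddec_bary.mean():.4f} +/- {ddec_bary.std():.4f} arcsec")
```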
§ TEST OF ORBITAL ALIGNMENT
Building on work done previously investigating orbital arcs of wide binaries by <cit.>, we use the angle γ as a test of orbital alignment. This is the angle between the line joining two stars and the star’s velocity vector. All systems in our sample have transiting planets that therefore have a nearly edge-on orbit (83 < i < 90). If the planet orbits on the same plane as the stellar companion, and hence there is mutual alignment in the system, low values of γ are expected. Figure <ref> is a pictogram depicting γ values close to 90 for a face-on orbit where all the motion is in the θ direction and close to 0 for an edge-on aligned orbit where all the motion is in the ρ direction. However, other factors such as eccentricity and viewing angle can give a small value of γ so the alignment can only be tested by a statistical sample. The angle γ is computed using the equation:
γ≡arctan(|θ̇|, |ρ̇|),
where the absolute value of the orbital motion has been used in order to limit γ to 0–90. We calculated γ for both the fainter companion in the inner binary (depicted in Figure <ref>) and the outer companion relative to the barycenter of the inner binary. We propagated the errors of the linear motion using 10^5 Monte-Carlo trials, randomly selecting values for the linear motion for each trial from Gaussian distributions with the same mean and standard deviations as the linear motion parameters. Table <ref> reports γ for the 9 triples, including 1σ and 2σ confidence intervals from the Monte Carlo trials.
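A minimal sketch of the γ calculation and its Monte Carlo error propagation, with illustrative linear-motion values, is:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000

# Illustrative linear-motion measurements (mas/yr) with 1-sigma uncertainties.
rho_dot, rho_dot_err = -3.2, 0.4       # motion in the separation direction
theta_dot, theta_dot_err = 1.1, 0.5    # motion in the PA direction, already in mas/yr

# gamma = arctan(|theta_dot| / |rho_dot|): near 0 deg for purely radial motion,
# near 90 deg for purely tangential motion.
rho_s = rng.normal(rho_dot, rho_dot_err, n_trials)
theta_s = rng.normal(theta_dot, theta_dot_err, n_trials)
gamma = np.degrees(np.arctan2(np.abs(theta_s), np.abs(rho_s)))

lo, med, hi = np.percentile(gamma, [15.9, 50.0, 84.1])
print(f"gamma = {med:.1f} +{hi - med:.1f} / -{med - lo:.1f} deg")
```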
Figure <ref> shows a histogram of all γ values. Six of the 9 triples have both inner and outer γ measurements, and 3 have outer γ measurements only, totalling 15 angles altogether. There is a lack of systems with the lowest values of γ (0–18) and an apparent peak at 18–36. There is no evidence here for alignment within the systems, although, at angles >20 there is a downward trend. This is not the expected result for the inner binaries, as previous work (e.g., ) would suggest there should be some evidence of alignment in these systems. Small number statistics could, however, be distorting the distribution of γ values measured. Another point to consider is that it is unknown for most systems which star the planet is around and therefore approximately half of the γ angles do not correspond to the orbits of stars that host transiting planets.
§.§ Comparing to simulated orbit arcs
To provide a comparison to our measured γ distribution, we simulated orbital arcs following the technique of <cit.>. Briefly, the orbital parameters describing a complete orbit are chosen randomly from prior distributions. The argument of periastron (ω) and PA of the ascending node (Ω) were drawn from uniform distributions over 0–360°, while the period and semimajor axis were fixed at 1 and the time of periastron set to 0. The inclination is chosen from a distribution equal to arccos(𝒰), where 𝒰 is uniform from 0 to 1, simulating isotropic viewing angles. Regarding eccentricity, three distinct cases are considered: low eccentricity (uniform from 0 to 0.2), field binary (uniform from 0.1 to 0.8; ), and high eccentricity (uniform from 0.6 to 0.8). In order to calculate γ for each synthetic orbit, a random observation time was chosen from a uniform distribution from 0 to 1. The separation and position angle are calculated at this time and at two other times ±0.01% of the period before and after it. The linear motion is then computed from the average difference in position across these times.
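A minimal sketch of this simulation is shown below, adopting the P = a = 1, T_0 = 0 conventions above and showing the field-binary eccentricity case with isotropic viewing angles; the helper functions are illustrative, and the instantaneous motion is obtained by finite-differencing each synthetic orbit over ±0.01% of the period.

```python
import numpy as np

rng = np.random.default_rng(3)
n_orbits = 50_000

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation M = E - e sin E by Newton iteration (vectorised)."""
    E = np.array(M, dtype=float, copy=True)
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def sky_position(t, e, inc, omega, Omega):
    """Projected position (arbitrary units) for P = a = 1 and periastron passage at t = 0."""
    M = 2.0 * np.pi * (t % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    r = 1.0 - e * np.cos(E)
    x = r * (np.cos(Omega) * np.cos(omega + nu) - np.sin(Omega) * np.sin(omega + nu) * np.cos(inc))
    y = r * (np.sin(Omega) * np.cos(omega + nu) + np.cos(Omega) * np.sin(omega + nu) * np.cos(inc))
    return x, y

# Orbital elements: isotropic viewing angles, field-binary-like eccentricities.
e = rng.uniform(0.1, 0.8, n_orbits)
inc = np.arccos(rng.uniform(0.0, 1.0, n_orbits))
omega = rng.uniform(0.0, 2 * np.pi, n_orbits)
Omega = rng.uniform(0.0, 2 * np.pi, n_orbits)
t_obs = rng.uniform(0.0, 1.0, n_orbits)

dt = 1e-4  # +/- 0.01% of the orbital period around the random observation time
pos = [sky_position(t_obs + off, e, inc, omega, Omega) for off in (-dt, 0.0, dt)]
rho = np.array([np.hypot(x, y) for x, y in pos])                          # shape (3, n_orbits)
theta = np.unwrap(np.array([np.arctan2(y, x) for x, y in pos]), axis=0)

rho_dot = (rho[2] - rho[0]) / (2 * dt)                 # motion along the separation direction
tan_dot = rho[1] * (theta[2] - theta[0]) / (2 * dt)    # tangential motion (rho * theta_dot)
gamma_sim = np.degrees(np.arctan2(np.abs(tan_dot), np.abs(rho_dot)))
```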
The results from these simulations, where the orbital plane of the transiting planet is independent of the orbital plane of the stars, are shown in Figure <ref> along with the measured average γ distribution from Figure <ref>. Both the low-eccentricity and field-binary eccentricity distributions have maxima at high values of γ, rising up to 90°. This is unlike the observed γ distribution, which peaks at ≈20°–40°. However, the high-eccentricity, isotropic scenario appears to better recreate the observed distribution. Based on the field-binary eccentricity distribution <cit.>, high-eccentricity systems are expected to be rare. While this eccentricity distribution gave the best fit, it is unlikely that our sample is composed of only highly eccentric systems.
To quantitatively compare our measured γ distribution to the simulated results, we computed the Kolmogorov-Smirnov (K-S) statistic of the corresponding cumulative distribution function (cdf). This test calculates a probability (p-value) of the null hypothesis that two distributions were drawn from the same parent population. We calculated the K-S statistic for each of the 10^5 Monte Carlo trials for the measured γ distributions compared to the average cdf from our simulated orbits. We find no simulations with isotropic inclinations that reject the null hypothesis at p<0.05. The high-eccentricity distribution gave the best K-S statistic (p=0.64) while the low eccentricity scenario gave a p-value of 0.05, and the field binary eccentricity gave a p-value of 0.23.
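A minimal sketch of this comparison, using scipy's one-sample K-S test against the empirical cdf of a simulated population, is shown below; the placeholder arrays stand in for one Monte Carlo draw of the 15 measured angles and for one simulated orbit population.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)

gamma_obs = rng.uniform(10.0, 60.0, 15)       # placeholder for one draw of the measured angles (deg)
gamma_sim = rng.uniform(0.0, 90.0, 100_000)   # placeholder for one simulated population (deg)

# Empirical cdf of the simulated population, evaluated by counting sorted values.
grid = np.sort(gamma_sim)

def sim_cdf(x):
    return np.searchsorted(grid, x, side="right") / grid.size

result = kstest(gamma_obs, sim_cdf)
print(f"K-S p-value = {result.pvalue:.3f}")   # p < 0.05 would disfavour the simulated scenario
```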
Instead of assuming isotropic viewing angles, we can simulate a range of mutual inclinations with respect to the inclination of the planet. Assuming the planet has an inclination ≈90, the mutual inclination between a stellar orbit and the planet's orbit (ϕ) can be written:
cosϕ_⋆ -p = cos(90 - i_⋆)cos(Ω_⋆- Ω_p),
where i_⋆ is the inclination of the stellar orbit and Ω_⋆, Ω_p are the longitude of the ascending node for the stellar orbit and the planet's orbit respectively. A more thorough discussion of this equation is given in Section <ref>. Rearranging this equation gives:
i_⋆ = arcsin ( cosϕ_⋆ -p / cos(Ω_⋆ - Ω_p) )
Ω_p is an unknown quantity and therefore we simulate values for Ω_⋆ - Ω_p directly by assuming a uniform distribution between 0 and 360°, ensuring that the absolute value of the cosine of this angle is larger than |cosϕ_⋆ -p| so that the arcsin can be computed.
For the distribution of ϕ_⋆ -p, we look at two distinct cases. The first is that ϕ_⋆ -p values are chosen randomly from a uniform distribution from 0 < ϕ_⋆ -p < ϕ_max. This is equivalent to star-planet alignment within ϕ_max, where here we have chosen ϕ_max values from 10 to 50 in 10 intervals. The second case is where the star and planet are misaligned by a narrowly specific amount, where |ϕ_⋆ -p - ϕ_0| < 5. We have tested ϕ_0 values from 15 to 45 in 10 intervals. A ϕ_0 value of 5 would be equivalent to alignment with ϕ_max of 10, so this duplicate is not included. Testing each of these mutual inclination scenarios with each of the three eccentricity distributions described above (low, field binary-like and high) results in 27 additional simulations, combined with the isotropic viewing angle simulations giving 30 unique simulations to compare against the observed γ distribution.
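A minimal sketch of how stellar inclinations can be drawn for these two cases is shown below; instead of the draw-and-reject step described above it samples Ω_⋆ - Ω_p directly from the range allowed by the arcsin (which is equivalent), and, because sin i_⋆ cannot distinguish i_⋆ from 180° - i_⋆, it quotes inclinations in 0–90°. The function name and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

def stellar_inclinations(phi_deg, rng):
    """Stellar inclinations (deg) implied by mutual inclinations phi (deg) with an
    edge-on (i_p ~ 90 deg) planet, via sin(i_star) = cos(phi) / cos(dOmega).
    dOmega is drawn uniformly from the allowed range |dOmega| <= phi, which is
    equivalent to drawing uniformly over 0-360 deg and rejecting unphysical values."""
    phi = np.radians(phi_deg)
    d_omega = rng.uniform(-1.0, 1.0, phi.size) * phi
    return np.degrees(np.arcsin(np.cos(phi) / np.cos(d_omega)))

# Case 1: alignment within phi_max = 30 deg (phi uniform from 0 to phi_max).
i_aligned = stellar_inclinations(rng.uniform(0.0, 30.0, n), rng)
# Case 2: narrowly distributed misalignment, |phi - phi_0| < 5 deg with phi_0 = 25 deg.
i_narrow = stellar_inclinations(rng.uniform(20.0, 30.0, n), rng)
```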
Figure <ref> provides an overview of the results of these comparisons. The high eccentricity distribution with the narrowly distributed mutual inclinations from 20 < ϕ_⋆ -p < 30 (ϕ_0 = 25) gave the best K-S statistic (p=0.73). Similarly, going to a field binary-like distribution with the same mutual inclinations also provided an acceptable fit (p=0.63). While the high eccentricity with ϕ_0=35 also gave a relatively good K-S statistic (p=0.51), moving the other way to ϕ_0 = 15 gave a p-value of only 0.08. Even with high eccentricities, small and narrowly distributed mutual inclinations are not a good fit for the observed distribution. However, we rule out at the p<0.05 level scenarios with low eccentricity distributions and mutual inclinations between ϕ_0 = 35–45 (p-values of 0.006 and 0.0003 respectively). We also rule out field binary-like eccentricity distributions and ϕ_0 =45 with a p-value of 0.01.
For the cases instead where the mutual inclinations were sampled from 0 to a maximum angle ϕ_max, the scenarios with a higher value of ϕ_max gave a better fit to the observed distribution. A field binary-like eccentricity distribution with ϕ_max = 50 gave the second-best fit overall with a p-value of 0.64. Acceptable simulations also extended to both the low (p=0.61) and high (p=0.42) eccentricity distributions with ϕ_max = 50. Seven simulations produced poor fits with the observed distributions and can be ruled out at p<0.05. For every eccentricity distribution, the lowest mutual inclinations (ϕ_max = 10–20) all produced low K-S statistics (3×10^-6 < p < 9×10^-3). We also find that ϕ_max=30 is ruled out for high eccentricities. Thus, the γ distribution for our sample is not a good match to scenarios where the mutual inclination distribution spans coplanar to low-ϕ_⋆ -p orbits for all eccentricity distributions.
Overall our simulations resulted in p-values ranging from 3×10^-6 to 0.73, with the best matches being either high eccentricity distributions with narrowly distributed mutual inclinations (any ϕ_0), isotropic mutual inclinations, or low/field binary-like eccentricities with moderately misaligned mutual inclinations (ϕ_0=15–35 or ϕ_ max=30–50). We rule out 10 of our 30 simulations at the p<0.05 level, including 3 narrowly distributed mutual inclination scenarios (ϕ_0 = 35 or ϕ_0 = 45) and 7 low mutual inclination simulations across all eccentricity distributions. Ruling out the low mutual inclination scenarios suggests that the triples within the sample are not fully coplanar systems.
§.§ Comparing to planet-hosting binaries
Previous work by <cit.> performed an analysis on 45 KOI binaries with similar separations to our sample. KOI-0854, KOI-3158, and KOI-3444 were included in the <cit.> binary sample and therefore have been removed from the binaries we compare to given that they are in our sample of triples. Figure <ref> shows the comparison of our distribution of γ to their 42 binaries, similar to Figure <ref> but with a normalised density to show the comparison between the distribution for binaries and triples.
<cit.> found that there was an overabundance of low γ values that, from similar simulations as described above, could only be explained by low mutual inclinations between the planet and stellar orbit. The simulated orbits that they found best matched their data used a field binary-like eccentricity distribution and uniform inclinations between 0 and 30 (p=0.81). In comparison, our sample of triples gives p=0.09 for the same orbit simulation. It appears therefore that the γ distribution for binaries does not match our distribution for the triple systems well. However, a 2-sample K-S test over 10^5 Monte Carlo trials results in a p-value of 0.18 so we cannot rule out that these two data sets come from the same underlying distribution.
§.§ Two-Population Tests
In the previous work of <cit.>, all the γ angles within the binary sample represent stellar orbits containing at least one star known to host a planet. This is not the case for our sample of triple systems. For the 6 visual triples where we have measured orbital motion for both the inner and outer companion, we have calculated 12 values of γ in total. As each of these systems only has one known planet, these values represent an equal mix of orbits that include a planet-hosting star and orbits that only include non-planet-hosting stars. KOI-3444 contributes one γ value for the outer orbit of the primary star relative to the inner binary. All 4 planets around KOI-3444 have been shown in Section <ref> to have a high probability of orbiting the primary so we class this as a γ value associated with a planet-hosting stellar companion. For KOI-0013 and KOI-3158, their planets are known to be hosted by the primary and so the one visual orbit is also associated with the planet-hosting star. In total, 40% (=6/15) of the γ values are associated with orbits from non-planet-hosting stellar pairs, and 60% (=9/15) are from stellar pairs that are planet-hosting.
We consider therefore that a two-population model might be required to explain our observed distribution. As 60% of the distribution is a result of planet-hosting stellar companions, we set 60% of our combined model to have a field binary-like eccentricity distribution and a mutual inclination up to ϕ_max=30 to match the best case produced by previous work done by <cit.>. For the remaining 40% of the model distribution, we test each pairing of eccentricity and mutual inclination distribution as described above, resulting in 30 additional unique tests.
Figure <ref> is a summary of these two-population models, plotted with the original γ distribution for the triple systems. None of the models provided a better match for the observed distribution than the single population model (0.6 < e < 0.8 and ϕ_0 =25). However, overall they do give more acceptable simulations than the single-population models. For example, for isotropic orbits, a high eccentricity distribution was needed for a good match (p>0.3) for the single-population models, whereas all three tested eccentricity distributions for the two-population model resulted in acceptable matches (0.37<p<0.46) for isotropic orbits. In total, for the single-population models 9/30 (=30%) of the tests resulted in a p-value of greater than 0.3, whereas the two-population models resulted in 12 (=40%). The best fitting two-population model (60% field binary-like eccentricities and ϕ_ max = 35) used non-planet-hosting orbits with narrow mutual inclinations of ϕ_0=35–45 (p=0.62–0.64). These results again suggest that the triples within the sample are not completely coplanar, but instead are consistent with having at most one plane of alignment.
Triple star systems are, in general, not expected to all be mutually aligned. One of the major formation pathways to hierarchical triples is thought to be the separate formation of an inner binary and outer companion, with the gravitational interactions between the two systems forming a bound system. The introduction of a third stellar companion to the stable binary can cause Kozai-Lidov cycles where the inclination between the orbit of the inner binary and the orbit of the outer companion relative to the barycentre can vary periodically <cit.>. Short-period triples have been shown to have mutual inclinations that peak at ∼ 40 due to Kozai-Lidov cycles, which would correspond to the simulations where ϕ_0 = 35–45 <cit.>. Unlike previous work on planet-hosting binaries (e.g., ), we exclusively consider Kozai-Lidov cycles as a result of stellar-stellar interactions and not planetary-stellar interactions. The longest Kozai-Lidov timescale in our sample is from the widest system, KOI-0652, which is ∼ 0.4 Myr. As this is less than the approximate age of the stars in our sample (∼ 5 Gyr), it is feasible that Kozai-Lidov cycles could be operating in any of the triple systems. It is therefore interesting that the best match for the two-population model where the planet-hosting orbits are aligned within 30 is the non-planet hosting orbit being misaligned with exactly this range of mutual inclinations. While this is the two-population simulation that gives the best p-value, we note it is not a well-matched distribution to the observed histogram. While we are limited by the sample size, it is clear that a more complex model is needed to fully explain the shape of the observed distribution.
§ FULL ORBITAL ANALYSIS
An additional method can be used to assess the alignment between the planetary and stellar orbital planes, separate from the γ analysis. With accurate distances available for our sample, and our newly derived stellar parameters, we can perform a full Keplerian orbital analysis for these triple systems. From these complete orbits, we can use the inclination constraints to investigate the planetary-stellar alignment. The full orbital analysis also allows the alignment of the two stellar planes to be assessed which was not possible within the γ analysis. As all systems are hierarchical triples, our orbital analysis is separated into the inner binary and the outer companion relative to the barycenter of the inner binary.
For the inner binary, we fit the relative orbit using orvara (v1.1.4; ), a Markov chain Monte Carlo (MCMC) orbit fitter with an efficient eccentric anomaly solver. Although orvara has the capability to fit both RV and Hipparcos–Gaia Catalog of Accelerations (HGCA) astrometry, here we only have the astrometry in Table <ref> available, apart from KOI-3158 which is discussed separately below. The posteriors of the orbital parameters are calculated using the affine-invariant <cit.> MCMC sampler emcee <cit.> with parallel-tempering <cit.>. We fitted eight orbital parameters: eccentricity e, inclination i, argument of periastron ω, position angle of the ascending node Ω, semi-major axis a, mean longitude at the reference epoch of 2010.0 λ_ref, and masses of the components of the binary pair M_1 and M_2. The default priors have been used for all of these parameters, shown in Table <ref>, apart from the component masses, for which we have adopted a Gaussian prior based on the masses in Table <ref>. A Gaussian prior for the parallax has also been imposed; all the parallax measurements are based on Gaia DR3 results apart from KOI-2626, for which, in the absence of Gaia data, we have adopted a distance from <cit.>. The orvara orbital fit results are based on fits with 100 walkers and 5×10^6 steps for the MCMC, 5 temperatures for parallel tempering, 75% burn-in and thinning to retain every 50th step. To ensure convergence, we used the minimum number of steps and burn-in needed for the median and standard deviation to be stable to within at least 20% for all the systems.
Figure <ref> is an example of a sky-projected orbit fit for the inner binary of KOI-0005, showing the measured astrometry and 50 random accepted orbits. For KOI-3444, the tertiary component is resolved in only one epoch out of six. We therefore treat the centre of light for the unresolved component as the centre of mass in the remaining five epochs and fit the orbit using orvara. We also take this approach for the unresolved binary in KOI-0013. The orbit fits for these two systems, and the remaining five inner binaries, can be found in appendix <ref>.
The orbits of the outer companions relative to the barycenter of the inner binary have all been fitted using the python package lofti_gaia <cit.>, based on Orbits-For-The-Impatient (ofti; <cit.>). lofti_gaia was designed to fit the orbital parameters of binaries that are resolved in Gaia using proper motions. This has been adapted instead to use the linear motions calculated in Section <ref> at the mean epoch for each system. We again used the stellar masses in Table <ref> and the parallax to constrain the total mass and distance to the system. ofti uses a rejection sampling method, computing orbits from random values of four orbital parameters (e, Ω, i, and the orbital phase relative to the time of periastron, τ) drawn from the prior distributions shown in Table <ref>. By scaling the semi-major axis and rotating the longitude of the ascending node to match the input parameters, the trial orbit is either rejected or accepted based on how well its linear motion matches the input linear motion.
Figure <ref> is an example of one lofti fit for the outer companion of KOI-0005 showing 50 random accepted orbits after running the fit until 10^6 orbits were accepted. The orbits for the remaining 5 triples can be found in appendix <ref>. Table <ref> shows the orbital parameters calculated as a result of both the orbital analyses of the inner binary and the outer companion.
§.§ Kepler-444 (KOI-3158)
The orbit of the unresolved binary Kepler-444 BC around the planet-hosting primary Kepler-444 A has previously been studied by <cit.>, <cit.> and <cit.>. Here, we provide a new, independent analysis of the orbit using specialized AO imaging astrometry measurements. Our data for this target span the time before and after the most recent Keck II AO system realignment in 2015. When we observed this target, we ensured that the orientation of NIRC2 was fixed (north up) and that the primary was at the same (x,y) pixel location on NIRC2. Given the nonlinear distortion of NIRC2 has never been shown to be variable over time, this observing strategy should enable higher accuracy astrometry than is normally possible. Thus, we used only observations of Kepler-444 after the 2015 NIRC2 realignment and also where the primary is within the box of x=510–520 and y=520–530 on NIRC2 in full frame mode coordinates. Our astrometry errors are thereby only limited by the error on the PA and pixel scale, which is 0.004 mas/pix <cit.>.
Our orbital analysis uses other published data for the system.
<cit.> obtained spectra of KOI-3158A from 2012 July to 2015 July including three epochs of spectra of the companion KOI-3158BC using the HIRES spectrometer. <cit.> also obtained spectra from 2008 November to 2013 July of KOI-3158 A with the High Resolution Spectrograph (HRS) on the Hobby-Eberly Telescope (HET), including one epoch of the companion binary. 167 previously published RVs of the primary were also collated. <cit.> re-analysed the spectra of KOI-3158BC previously published by <cit.>, including their additional spectra. They combined a system velocity (RV_BC = -124.35 ± 0.11 km s^-1) with the known RV of the primary to derive a ΔRV of -3.1 ± 0.2 km s^-1 at the epoch 2456783.1 JD.
In the orvara fit we combine the relative astrometry measurements, KOI-3158 A's multi-epoch RVs, the single-epoch relative RV and Hipparcos-Gaia absolute astrometry. We also impose a Gaussian prior on the mass of the primary of M_A = 0.75 ± 0.03 M_⊙, following the method of <cit.>. Figure <ref> presents a sky-projected orbit fit along with the separation, PA, absolute astrometry from Hipparcos and Gaia, and the multi-epoch RVs as a function of time with these orbit solutions overlaid. The fit is run using 100 walkers, 5 temperatures for parallel tempering, 10^5 steps with thinning to retain every 50th step and 10% burn-in. Significantly fewer steps were needed for the solution to converge in comparison to the orbit fits described above that are based only on our astrometry. The fitted orbital characteristic solutions of this highly eccentric orbit are shown in Table <ref>. Our parameters and uncertainties are comparable to the results of the orbital fit by <cit.>. Despite the uncertainties on the astrometry being on average ∼4× smaller than those of the astrometry used previously, our astrometry covers a smaller time baseline of five years in comparison to the previously used nine years. However, <cit.> showed that the orbital fit was dominated by the Δ RV, and so the new astrometry has made very little impact on the measured orbital parameters.
§.§ Mutual inclination between the stellar orbits
The true mutual inclination of the orbital plane of the inner binary and the outer star can be measured using the equation:
cos i_I-O = cos i_I cos i_O + sin i_I sin i_O cos(Ω_I − Ω_O)
where i_I and i_O are the inclinations of the inner binary and the outer component, Ω_I and Ω_O are the longitude of the ascending nodes for the inner binary and the outer component, and i_I-O is the misalignment between these orbital planes <cit.>, equivalent to ϕ_⋆-⋆.
To measure the inclinations and longitudes of the ascending nodes, complete orbits need to be fitted to the relative astrometry measurements. Visual orbits carry a 180° ambiguity in Ω, as they do not distinguish between the ascending and descending nodes without auxiliary information. The values in Table <ref> from the orbital analysis are therefore quoted in the range 0–180°, apart from KOI-3158, which, thanks to the radial velocities, does not suffer this ambiguity. The ambiguity in Ω is equivalent to a ±180° shift in the last term of Equation <ref> and results in two values for the mutual inclination for each system, ϕ^+_⋆-⋆ and ϕ^-_⋆-⋆.
Mutual inclination values for the triple systems have been calculated using the values of i and Ω from the orbit fits in Table <ref>. We drew 10^5 samples from the posteriors for i and Ω to produce a mutual inclination distribution for each system; the median ± 1σ values are shown in Table <ref>. There is no clear pattern of low mutual inclinations between the stellar orbits, suggesting no preference for alignment. Because the orbital fits for the outer companions are poorly constrained (the astrometric time baselines are short compared to the orbital periods), the mutual inclinations between the stellar orbital planes typically have broad posterior distributions and therefore large uncertainties. This, combined with the ambiguity in the mutual inclination angle and the small number of triples with visual orbits for both the inner and outer companions, means the alignment of the stellar orbits is poorly constrained and we do not attempt a quantitative assessment.
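A compact numerical sketch of this calculation, assuming the posterior samples are available as arrays of degrees, is given below; the function simply evaluates the mutual-inclination relation above for both branches of the node ambiguity, and the mock posterior draws are placeholders rather than values from our fits.

```python
import numpy as np

def mutual_inclination(i_in, Om_in, i_out, Om_out):
    """Mutual inclination between two orbital planes (all angles in degrees).
    Returns both branches of the 180-degree node ambiguity, phi_plus and phi_minus."""
    i1, i2 = np.deg2rad(i_in), np.deg2rad(i_out)
    dOm = np.deg2rad(np.asarray(Om_in) - np.asarray(Om_out))
    base = np.cos(i1) * np.cos(i2)
    cross = np.sin(i1) * np.sin(i2)
    phi_plus = np.degrees(np.arccos(np.clip(base + cross * np.cos(dOm), -1.0, 1.0)))
    phi_minus = np.degrees(np.arccos(np.clip(base + cross * np.cos(dOm + np.pi), -1.0, 1.0)))
    return phi_plus, phi_minus

# example with mock posterior draws (10^5 samples per parameter):
rng = np.random.default_rng(0)
phi_p, phi_m = mutual_inclination(rng.normal(60, 5, 100_000), rng.normal(120, 20, 100_000),
                                  rng.normal(95, 10, 100_000), rng.normal(80, 30, 100_000))
print(np.percentile(phi_p, [16, 50, 84]))   # median and 1-sigma credible interval
```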
<cit.> found a strong preference for alignment in systems with an outer component at a separation of <50 au. Formation theories suggest that this is approximately the scale of the circumstellar disks that would have driven the evolution of these systems. For wider systems with separations larger than 1000 au, they found no tendency for alignment. Only 2 of our triple systems, KOI-0854 and KOI-2626, fall into the regime where the outer component is separated by less than 50 au, so with such a small sample of extremely compact triples it is unsurprising that we do not find any evidence of mutual stellar alignment.
§.§ Mutual inclination between the stellar and planetary orbits
In addition to studying the alignment of the stellar planes, the alignment of the stellar orbits and the planetary orbit can also be investigated. As it is not known which component hosts the planets, the alignment of both the inner binary and the outer companion's orbit against the planet's edge-on orbit can be measured.
Equation <ref> can be rewritten to investigate the planet alignment as follows:
cos i_⋆-p = cos i_⋆ cos i_p + sin i_⋆ sin i_p cos(Ω_⋆ − Ω_p),
where i_⋆-p is the misalignment between the orbital plane of the planet and the stellar orbit (ϕ_⋆-p), i_p is the inclination of the planet, i_⋆ is the inclination of the stellar orbit, Ω_p is the longitude of the ascending node of the planet and Ω_⋆ is the longitude of the ascending node of the stellar orbit. For the triple systems, the stellar orbit can be either that of the inner binary or that of the outer companion relative to the barycenter of the binary.
The transiting planet must have an inclination close to 90°, and so this equation simplifies to Equation <ref>, where cos ϕ_⋆-p ∝ cos(90° − i_⋆).
In this case, the longitude of the ascending node of the planet is unknown, so the true mutual misalignment i_⋆-p cannot be measured directly. Instead, |90° − i_⋆| is used as the minimum misalignment. If the longitudes of the ascending node of the planet and of the stellar orbit were equal, then |90° − i_⋆| would equal the true misalignment. Because of this, the mutual alignment between the planetary and stellar orbits can only be investigated statistically.
High values of |90° − i_⋆|, arising from large relative inclinations, indicate misaligned systems. However, because the longitude of the ascending node is unknown, low values of |90° − i_⋆| do not necessarily correspond to aligned systems. Since the longitude of the ascending node is expected to be distributed randomly, an overabundance of low |90° − i_⋆| values would suggest more alignment in the systems than would be expected for random orbits.
Figure <ref> shows a histogram of the minimum misalignment between the planet's orbit and both corresponding stellar orbits in each system, plotted both as |90° − i_⋆| and as sin(|90° − i_⋆|). If the stellar orbital inclinations were drawn from an isotropic distribution, they would produce a flat distribution in sin(|90° − i_⋆|) space. There is an apparent overdensity of inclinations close to 90°, making values of sin(|90° − i_⋆|) < 0.4 more common than expected from a flat distribution. While individual inclination measurements close to 90° do not directly imply that those systems are aligned, owing to the unknown longitude of the ascending node of the planet, a significant overdensity like this would imply more alignment between the orbit of the planet and the stellar orbits than if the orbits were random. However, a K-S test between the observed distribution of sin(|90° − i_⋆|) and the flat distribution expected for isotropic orbits yielded a p-value of 0.085, and therefore we cannot rule out an underlying isotropic distribution of inclinations.
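The K-S comparison described here is straightforward to reproduce. The sketch below uses a mock isotropic sample in place of the measured inclinations (the only assumption beyond the text) and tests sin(|90° − i_⋆|), i.e. |cos i_⋆|, against the uniform distribution expected for isotropic orientations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for the measured stellar-orbit inclinations (degrees);
# drawn isotropically here purely for illustration
i_star = np.degrees(np.arccos(rng.uniform(-1.0, 1.0, size=15)))

# minimum-misalignment statistic: sin(|90 deg - i|) = |cos i|,
# which is uniform on [0, 1] for isotropically oriented orbits
x = np.sin(np.deg2rad(np.abs(90.0 - i_star)))

# one-sample K-S test against that flat distribution;
# a small p-value would argue against isotropy
stat, p_value = stats.kstest(x, "uniform")
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```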
These results are in broad agreement with those from the γ distribution. Neither method can rule out underlying isotropic orbits at a statistically significant level. However, in the full orbital analysis there is tentative evidence for an overabundance of aligned orbits, seen as a peak at small values of sin(|90° − i_⋆|). From the γ distribution there is also tentative evidence for alignment. For the one-population tests described in Section <ref>, none of the cases where the mutual inclination was less than 50°, 40° or 30° (apart from the high-eccentricity case) could be ruled out. These scenarios are more aligned than would be expected for isotropic orbits, so while we rule out highly aligned scenarios we again see tentative evidence for mild alignment in the planet-hosting triples.
Many previous works have shown alignment between the stellar orbit and the planetary orbit in binaries (e.g., <cit.>), and our results from two different methods provide tentative evidence for similar alignment in triples, with an abundance of systems having inclinations close to 90°. The limiting factor in both methods is the sample size, which may not be large enough to detect this alignment at the 2σ level, and therefore we cannot rule out isotropic orbits. Another point to consider is that it is unknown which star the planet is orbiting, and therefore our distributions of sin(|90° − i_⋆|) include stellar pairs that host a planet as well as stellar pairs with no transiting planets. This combination could mean that the orbits of non-planet-hosting companions are attenuating the peak toward mutual alignment. As discussed in Section <ref>, Kozai-Lidov cycles may cause the mutual inclination of triple systems to vary periodically, and hence cause misalignment between the non-planet-hosting stellar orbit and the edge-on orbit of the planet. With this possible pathway to misalignment of stellar orbits, it is then unsurprising that the sample of stellar orbits as a whole does not yield significant evidence of alignment.
§ SUMMARY
We present results from 12 years of astrometric orbit monitoring of 24 candidate triple star systems that host planets, including 9 compact systems where all three stellar components are within 600 au. Seven of the compact triple systems are fully spatially resolved, and two more, KOI-0013 and KOI-3158, have an unresolved inner companion. The goal of our observations is to determine the stellar orbital parameters and thereby statistically assess the alignment between the edge-on orbits of the transiting planets, the orbital planes of the inner stellar binaries, and the orbital planes of the outer stellar companions in these hierarchical triple systems.
Our full sample includes compact visual triples identified with AO imaging, as well as stellar pairs resolved in AO imaging that have an outer component identified with astrometry. We use Keck LGS AO imaging and non-redundant aperture masking of our sample of triple systems over multiple epochs to measure the separation, position angle, and magnitude of each component relative to the primary star. From this, we derive stellar parameters, including masses, and update the planetary radii from the initial values derived under the assumption that the host star was single. We also rule out three candidate triples as chance associations of a physically bound binary and a background star.
For the 7 fully resolved compact triple systems within the sample, we compare the stellar density distribution calculated from the stellar parameters to the distribution derived from the transit parameters, to constrain which of the three stars in each system could be the host star. We find that only one planet is most likely hosted by the tertiary and one by the secondary. The remaining planets are all most likely hosted by the primary, with one planet consistent with only the primary. All of the planets in the sample are consistent with being hosted by at least one of the stellar components.
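The density comparison can be illustrated with the standard transit relation of Seager & Mallén-Ornelas (2003). The short sketch below evaluates the density implied by the transit and the mean density of each stellar component; the numerical inputs are placeholders rather than values from our tables.

```python
import numpy as np

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
RHO_SUN = 1.41e3         # mean solar density [kg m^-3]

def rho_from_transit(period_days, a_over_rstar):
    """Stellar density implied by the transit, rho = 3*pi/(G*P^2) * (a/R_*)^3,
    valid if the planet transits the star whose radius normalises a."""
    P = period_days * 86400.0
    return 3.0 * np.pi / (G * P**2) * a_over_rstar**3

def rho_from_star(mass_msun, radius_rsun):
    """Mean density of a stellar component from its inferred mass and radius."""
    return RHO_SUN * mass_msun / radius_rsun**3

# placeholder inputs: a 10-day transit with a/R_* = 20, around one of three stars
rho_transit = rho_from_transit(10.0, 20.0)
for label, m, r in [("A", 0.9, 0.95), ("B", 0.6, 0.60), ("C", 0.4, 0.42)]:
    print(label, rho_from_star(m, r) / rho_transit)   # ratios near 1 flag viable hosts
```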
Using high-precision relative astrometry, we measured the linear motion in each of our systems. From this, we computed the angle γ between the vector of orbital motion and the separation vector of the corresponding stellar pair as a test for alignment. As the transiting planets are in edge-on orbits, aligned stellar orbits would show motion predominantly along the separation direction and hence a small value of γ. Our results are based on 15 γ angles from 9 triple systems.
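As a worked example of this statistic, the function below computes γ from a pair's on-sky offset and its relative linear motion. Folding the angle into the range 0–90° (via the absolute value of the dot product) and the placeholder input values are assumptions made for the example.

```python
import numpy as np

def gamma_angle(delta_ra, delta_dec, motion_ra, motion_dec):
    """Angle between a pair's relative linear motion and its separation vector,
    folded into [0, 90] degrees. Small gamma: motion predominantly along the
    separation direction, as expected for an orbit that is close to edge-on."""
    sep = np.array([delta_ra, delta_dec], dtype=float)
    mot = np.array([motion_ra, motion_dec], dtype=float)
    cos_g = abs(sep @ mot) / (np.linalg.norm(sep) * np.linalg.norm(mot))
    return np.degrees(np.arccos(np.clip(cos_g, 0.0, 1.0)))

# example: offset in mas and motion in mas/yr (placeholder numbers)
print(gamma_angle(120.0, -40.0, 1.1, -0.3))
```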
We found that low mutual inclinations (ϕ = 0–20°) cannot explain the observed results for any of the three tested eccentricity distributions, suggesting that there is not a clear trend of both stellar planes being aligned with the plane of the planet. A single underlying distribution of high eccentricities (0.6 < e < 0.8) with a mutual inclination between the planetary and stellar orbits of 20° < ϕ_⋆-p < 30° was the best match to our observations, but our sample is unlikely to contain exclusively high-eccentricity systems. However, a wide range of simulated distributions was consistent with the data, including any eccentricity distribution with ϕ_max = 40°, ϕ_max = 50° or ϕ_0 = 25°. Isotropic orbits with either a high-eccentricity distribution or a field binary-like distribution were also consistent with the observed data.
We tested two-population models assuming that each system has only one planet-hosting stellar pair and that these orbits follow an underlying distribution of mutual alignment up to ϕ = 30° with a field binary-like eccentricity distribution. This is modelled after the best-matching mutual inclination distribution found by <cit.>. The non-planet-hosting orbits in our two-population tests could have any mutual inclination distribution, and the best fit was 40° < ϕ < 50° with a field binary-like eccentricity distribution. These results are consistent with a combination of stellar orbits aligned with the plane of the planet, and orbits of the non-planet-hosting companion (either the outer companion relative to the planet-hosting binary, or the non-planet-hosting binary itself) consistent with either isotropic orbits or orbits driven by Kozai-Lidov cycles. These cycles can only influence orbits that are already misaligned, which is consistent with our finding that there is no tendency for both stellar orbits in triple systems to be aligned with the planetary orbit.
We used an additional independent method to test the alignment of the triple systems. The relative astrometry was used to fit complete sets of orbital parameters for the visual components of the compact triples. We used the resulting orbital angles (i and Ω) to directly calculate the mutual inclination between the two stellar orbital planes and constrain the mutual inclination between the planetary and stellar orbital planes. The results from this method are in broad agreement with the results from the γ analysis. Again, isotropic orbits could not be ruled out at the 2σ level, possibly because the sample size is not sufficiently large. The mutual inclination analysis also provided tentative evidence for an abundance of aligned systems, agreeing with the previous results that there is likely a combination of aligned planet-hosting stellar orbits and misaligned non-planet hosting stellar orbits with respect to the edge-on orbit of the transiting planet.
Our observations of multiple-star systems that host planets are ongoing. We aim to continue to monitor the 9 compact triples presented here to increase our orbital coverage and hence improve the precision of our orbital characteristics. There are also 3 candidate triple systems presented here that currently only have one epoch of observations each. We plan on monitoring these systems further, not only to verify their existence but also to include them in our sample once orbital motion is obtained.
Alignment tests using visual orbits can only be conducted on statistical samples, and hence our small sample size limits the statistical power of these tests, even though ours is the largest sample of planet-hosting triple systems analysed to date. Our orbital studies are also severely hindered by the large distances to most of the planet hosts. At these distances, the spatial resolution of AO imaging restricts us to wide stellar separations, and hence to orbital periods of which only a small fraction can be observed in our lifetime. TESS planet hosts present a solution to both of these problems by providing planets around nearby stars. Multiplicity surveys of these planet hosts will reach closer separations and thus should provide a larger sample of compact planet-hosting triple-star systems that undergo faster orbital motion. Such a sample would allow the alignment of the planetary and stellar orbits in triple systems to be investigated more rigorously.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for comments that improved our manuscript. T. Dupuy acknowledges support from UKRI STFC AGP grant ST/W001209/1. DH acknowledges support from the Alfred P. Sloan Foundation, the National Aeronautics and Space Administration (80NSSC22K0781), and the Australian Research Council (FT200100871).
Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a partnership between the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
This work has made use of data from the European Space Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement.
The authors thank Michael C. Liu and Mark W. Phillips for obtaining some of the Keck data presented here. We would also like to thank Lewis Warrey for their graphic design contributions to Figure <ref>.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
§ DATA AVAILABILITY
All of our NIRC2 data are available on the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
[AitkenAitken1904]Aitken1904
Aitken R. G., 1904, @doi [LicOB] 10.5479/ADS/bib/1904LicOB.3.6A, 3, 6
[AllardAllard2014]Allard2014
Allard F., 2014, in Booth M., Matthews B. C., Graham J. R., eds, Exploring
the Formation and Evolution of Planetary Systems. No. S299 in IAU Symposium.
Cambridge University Press, pp 271–272
[Allard, Homeier, Freytag, Schaffenberger &
RajpurohitAllard et al.2013]Allard2013
Allard F., Homeier D., Freytag B., Schaffenberger W., Rajpurohit A. S.,
2013, @doi [Mem. Soc. Astron. Ital.] 10.48550/arXiv.1302.6559, 24,
128-13
[Anglada-Escudé
et al.,Anglada-Escudé et al.2016]Anglada2016
Anglada-Escudé G., et al., 2016, @doi [] 10.1038/nature19106,
536, 437
[Artymowicz & LubowArtymowicz &
Lubow1994]Artymowicz1994
Artymowicz P., Lubow S. H., 1994, @doi [] 10.1086/173679, 421, 651
[Bailer-Jones, Rybizki, Fouesneau, Demleitner
& AndraeBailer-Jones et al.2021]BailerJones2021
Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Demleitner M., Andrae R.,
2021, @doi [] 10.3847/1538-3881/abd806, 161, 147
[Baraffe, Homeier, Allard & ChabrierBaraffe
et al.2015]Baraffe2015
Baraffe I., Homeier D., Allard F., Chabrier G., 2015, @doi []
10.1051/0004-6361/201425481, 577, A42
[Baranec, Ziegler, Law, Morton, Riddle,
Atkinson, Schonhut & CreppBaranec et al.2016]Baranec2016
Baranec C., Ziegler C., Law N. M., Morton T., Riddle R., Atkinson D.,
Schonhut J., Crepp J., 2016, @doi [] 10.3847/0004-6256/152/1/18,
152, 18
[Barenfeld et al.,Barenfeld
et al.2019]Barenfeld2019
Barenfeld S. A., et al., 2019, @doi [] 10.3847/1538-4357/ab1e50, 878,
45
[Barnes, Linscott & ShporerBarnes
et al.2011]Barnes2011
Barnes J. W., Linscott E., Shporer A., 2011, @doi []
10.1088/0067-0049/197/1/10, 197, 10
[Behmard, Dai & HowardBehmard
et al.2022]Behmard2022
Behmard A., Dai F., Howard A. W., 2022, @doi []
10.3847/1538-3881/ac53a7, 163, 160
[Berger, Huber, Gaidos & van
SadersBerger et al.2018]Berger2018
Berger T. A., Huber D., Gaidos E., van Saders J. L., 2018, @doi
[] 10.3847/1538-4357/aada83, https://ui.adsabs.harvard.edu/abs/2018ApJ...866...99B 866, 99
[Bergfors et al.,Bergfors
et al.2013]Bergfors2013
Bergfors C., et al., 2013, @doi [] 10.1093/mnras/sts019, 428, 182
[Blunt et al.,Blunt et al.2017]Blunt2017
Blunt S., et al., 2017, @doi [] 10.3847/1538-3881/AA6930, 153, 229
[Borkovits, Hajdu, Sztakovics, Rappaport,
Levine, Bíró & KlagyivikBorkovits
et al.2016]Borkovits2016
Borkovits T., Hajdu T., Sztakovics J., Rappaport S., Levine A.,
Bíró I. B., Klagyivik P., 2016, @doi []
10.1093/mnras/stv2530, 455, 4136
[Brandt, Dupuy, Li, Brandt, Zeng, Michalik,
Gagliuffi & Raposo-PulidoBrandt et al.2021]Brandt2021
Brandt T. D., Dupuy T. J., Li Y., Brandt G. M., Zeng Y., Michalik D.,
Gagliuffi D. C. B., Raposo-Pulido V., 2021, @doi []
10.3847/1538-3881/AC042E, 162, 186
[Brown, Latham, Everett & EsquerdoBrown
et al.2011]Brown2011
Brown T. M., Latham D. W., Everett M. E., Esquerdo G. A., 2011, @doi
[] 10.1088/0004-6256/142/4/112, 142, 112
[Buldgen et al.,Buldgen
et al.2019]Buldgen2019
Buldgen G., et al., 2019, @doi [] 10.1051/0004-6361/201936126, 630,
A126
[Cadman, Hall, Fontanive & RiceCadman
et al.2022]Cadman2022
Cadman J., Hall C., Fontanive C., Rice K., 2022, @doi []
10.1093/mnras/stac033, 511, 457
[Caffau, Ludwig, Steffen, Freytag &
BonifacioCaffau et al.2011]Caffau2011
Caffau E., Ludwig H. G., Steffen M., Freytag B., Bonifacio P., 2011,
@doi [] 10.1007/S11207-010-9541-4/METRICS, 268, 255
[Campante et al.,Campante
et al.2015]Campante2015
Campante T. L., et al., 2015, @doi [] 10.1088/0004-637X/799/2/170,
799, 170
[Cassan et al.,Cassan
et al.2012]Cassan2012
Cassan A., et al., 2012, @doi [] 10.1038/nature10684, 481, 167
[Choi, Dotter, Conroy, Cantiello, Paxton &
JohnsonChoi et al.2016]Choi2016
Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D.,
2016, @doi [] 10.3847/0004-637X/823/2/102, 823, 102
[Chonis, Hill, Lee, Tuttle & VattiatChonis
et al.2014]Chonis2014
Chonis T. S., Hill G. J., Lee H., Tuttle S. E., Vattiat B. L., 2014, in
Ramsay S. K., McLean I. S., Takami H., eds, Society of Photo-Optical
Instrumentation Engineers (SPIE) Conference Series Vol. 9147, Ground-based
and Airborne Instrumentation for Astronomy V. pp 68–93 (@eprint
1407.6016), @doi10.1117/12.2056005
[Chonis et al.,Chonis
et al.2016]Chonis2016
Chonis T. S., et al., 2016, in Evans C. J., Simard L., Takami H., eds,
Ground-based and Airborne Instrumentation for Astronomy VI Vol. 9908,
Ground-based and Airborne Instrumentation for Astronomy VI. pp 1345–1372,
@doi10.1117/12.2232209
[Christian et al.,Christian
et al.2022]Christian2022
Christian S., et al., 2022, @doi [] 10.3847/1538-3881/ac517f, 163, 207
[Ciardi, Beichman, Horch & HowellCiardi
et al.2015]Ciardi2015
Ciardi D. R., Beichman C. A., Horch E. P., Howell S. B., 2015, @doi
[] 10.1088/0004-637X/805/1/16, 805, 16
[Clark, van Belle, Ciardi, Lund, Howell,
Everett, Beichman & WintersClark et al.2022]Clark2022
Clark C. A., van Belle G. T., Ciardi D. R., Lund M. B., Howell S. B.,
Everett M. E., Beichman C. A., Winters J. G., 2022, @doi []
10.3847/1538-3881/ac6101, 163, 232
[Cuntz, Luke, Millard, Boyle & PatelCuntz
et al.2022]Cuntz2022
Cuntz M., Luke G. E., Millard M. J., Boyle L., Patel S. D., 2022, @doi
[] 10.3847/1538-4365/ac9302, 263, 33
[Damasso et al.,Damasso
et al.2020]Damasso2020
Damasso M., et al., 2020, @doi [Science Advances]
10.1126/sciadv.aax7467, https://ui.adsabs.harvard.edu/abs/2020SciA....6.7467D 6, eaax7467
[Deacon et al.,Deacon
et al.2016]Deacon2016
Deacon N. R., et al., 2016, @doi [] 10.1093/mnras/stv2132, 455,
4212
[Dieterich, Henry, Golimowski, Krist
& TannerDieterich et al.2012]Dieterich2012
Dieterich S. B., Henry T. J., Golimowski D. A., Krist J. E.,
Tanner A. M., 2012, @doi [] 10.1088/0004-6256/144/2/64, https://ui.adsabs.harvard.edu/abs/2012AJ....144...64D 144, 64
[Diolaiti, Bendinelli, Bonaccini, Close, Currie
& ParmeggianiDiolaiti et al.2000]Diolaiti2000
Diolaiti E., Bendinelli O., Bonaccini D., Close L., Currie D.,
Parmeggiani G., 2000, , 147, 335
[Domingos, Winter & IzidoroDomingos
et al.2015]Domingos2015
Domingos R. C., Winter O. C., Izidoro A., 2015, @doi [Int. J.
Astrobiol.] 10.1017/S1473550414000330, 14, 153
[DotterDotter2016]Dotter2016
Dotter A., 2016, @doi [] 10.3847/0067-0049/222/1/8, 222, 8
[DuchêneDuchêne2010]Duchene2010
Duchêne G., 2010, @doi [] 10.1088/2041-8205/709/2/L114, 709,
L114
[Duchêne & KrausDuchêne &
Kraus2013]Duchene2013
Duchêne G., Kraus A., 2013, @doi []
10.1146/annurev-astro-081710-102602, 51, 269
[Dupuy, Liu & IrelandDupuy
et al.2009]Dupuy2009
Dupuy T. J., Liu M. C., Ireland M. J., 2009, @doi []
10.1088/0004-637X/692/1/729, https://ui.adsabs.harvard.edu/abs/2009ApJ...692..729D 692, 729
[Dupuy, Kratter, Kraus, Isaacson, Mann,
Ireland, Howard & HuberDupuy et al.2016]Dupuy2016
Dupuy T. J., Kratter K. M., Kraus A. L., Isaacson H., Mann A. W., Ireland
M. J., Howard A. W., Huber D., 2016, @doi []
10.3847/0004-637X/817/1/80, 817, 80
[Dupuy et al.,Dupuy et al.2019]Dupuy2019
Dupuy T. J., et al., 2019, @doi [] 10.3847/1538-3881/AB3CD1, 158, 174
[Dupuy, Kraus, Kratter, Rizzuto, Mann, Huber
& IrelandDupuy et al.2022a]Dupuy2022
Dupuy T. J., Kraus A. L., Kratter K. M., Rizzuto A. C., Mann A. W., Huber
D., Ireland M. J., 2022a, @doi [] 10.1093/MNRAS/STAC306, 512,
648
[Dupuy, Liu, Evans, Best, Pearce, Sanghi,
Phillips & Bardalez GagliuffiDupuy et al.2022b]Dupuy2022a
Dupuy T. J., Liu M. C., Evans E. L., Best W. M., Pearce L. A., Sanghi A.,
Phillips M. W., Bardalez Gagliuffi D. C., 2022b, @doi []
10.1093/MNRAS/STAC3557, 519, 1688
[DvorakDvorak1982]Dvorak1982
Dvorak R., 1982, OAWMN, https://ui.adsabs.harvard.edu/abs/1982OAWMN.191..423D 191, 423
[El-Badry, Rix & HeintzEl-Badry
et al.2021]El-Badry2021
El-Badry K., Rix H. W., Heintz T. M., 2021, @doi []
10.1093/MNRAS/STAB323, 506, 2269
[Fabrycky & TremaineFabrycky &
Tremaine2007]Fabrycky2007
Fabrycky D., Tremaine S., 2007, @doi [] 10.1086/521702, 669, 1298
[Faria et al.,Faria et al.2022]Faria2022
Faria J. P., et al., 2022, @doi [] 10.1051/0004-6361/202142337, 658,
A115
[Fontanive & Bardalez GagliuffiFontanive
& Bardalez Gagliuffi2021]Fontanive2021
Fontanive C., Bardalez Gagliuffi D., 2021, @doi [Front. Astron. Space
Sci.] 10.3389/fspas.2021.625250, 8, 1
[Fontanive, Rice, Bonavita, Lopez,
Mužić & BillerFontanive et al.2019]Fontanive2019
Fontanive C., Rice K., Bonavita M., Lopez E., Mužić K.,
Biller B., 2019, @doi [] 10.1093/mnras/stz671, 485, 4967
[Foreman-Mackey, Hogg, Lang &
GoodmanForeman-Mackey et al.2013]Foreman-Mackey2013
Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, @doi []
10.1086/670067/XML, 125, 306
[FrancicFrancic1989]Francic1989
Francic S. P., 1989, @doi [] 10.1086/115186, https://ui.adsabs.harvard.edu/abs/1989AJ.....98..888F 98, 888
[Furlan et al.,Furlan
et al.2017]Furlan2017
Furlan E., et al., 2017, @doi [] 10.3847/1538-3881/153/2/71, 153, 71
[Gaidos, Mann, Kraus & IrelandGaidos
et al.2016]Gaidos2016
Gaidos E., Mann A. W., Kraus A. L., Ireland M., 2016, @doi []
10.1093/mnras/stw097, 457, 2877
[Gilliland, Cartier, Adams, Ciardi, Kalas &
WrightGilliland et al.2014]Gilliland2014
Gilliland R. L., Cartier K. M. S., Adams E. R., Ciardi D. R., Kalas P.,
Wright J. T., 2014, @doi [] 10.1088/0004-6256/149/1/24, 149, 24
[Goodman, Weare, Goodman & WeareGoodman
et al.2010]Goodman2010
Goodman J., Weare J., Goodman J., Weare J., 2010, @doi [Commun. Appl.
Math. Comp. Sci.] 10.2140/CAMCOS.2010.5.65, 5, 65
[Green, Schlafly, Zucker, Speagle &
FinkbeinerGreen et al.2019]Green2019
Green G. M., Schlafly E., Zucker C., Speagle J. S., Finkbeiner D., 2019,
@doi [] 10.3847/1538-4357/AB5362, 887, 93
[HaghighipourHaghighipour2006]Haghighipour2006
Haghighipour N., 2006, @doi [] 10.1086/503351, 644, 543
[Hatzes, Cochran, Endl, McArthur, Paulson,
Walker, Campbell & YangHatzes et al.2003]Hatzes2003
Hatzes A. P., Cochran W. D., Endl M., McArthur B., Paulson D. B., Walker
G. A. H., Campbell B., Yang S., 2003, @doi [] 10.1086/379281,
599, 1383
[Holman & WiegertHolman &
Wiegert1999]Holman1999
Holman M. J., Wiegert P. A., 1999, @doi [] 10.1086/300695, 117, 621
[Howell, Scott, Matson, Horch &
StephensHowell et al.2019]Howell2019
Howell S. B., Scott N. J., Matson R. A., Horch E. P., Stephens A., 2019,
@doi [] 10.3847/1538-3881/ab2f7b, 158, 113
[InnesInnes1915]Innes1915
Innes R. T. A., 1915, Circular of the Union Observatory Johannesburg, https://ui.adsabs.harvard.edu/abs/1915CiUO...30..235I 30, 235
[Jang-CondellJang-Condell2015]Jang-Condell2015
Jang-Condell H., 2015, @doi [] 10.1088/0004-637X/799/2/147, 799, 147
[Kaib, Raymond & DuncanKaib
et al.2013]Kaib2013
Kaib N. A., Raymond S. N., Duncan M., 2013, @doi []
10.1038/nature11780, 493, 381
[Koch et al.,Koch et al.2010]Koch2010
Koch D. G., et al., 2010, @doi [] 10.1088/2041-8205/713/2/L79, 713,
L79
[Kratter & PeretsKratter &
Perets2012]Kratter2012
Kratter K. M., Perets H. B., 2012, @doi []
10.1088/0004-637X/753/1/91, 753, 91
[Kraus, Ireland, Hillenbrand &
MartinacheKraus et al.2012]Kraus2012
Kraus A. L., Ireland M. J., Hillenbrand L. A., Martinache F., 2012,
@doi [] 10.1088/0004-637X/745/1/19, 745
[Kraus, Ireland, Huber, Mann & DupuyKraus
et al.2016]Kraus2016
Kraus A. L., Ireland M. J., Huber D., Mann A. W., Dupuy T. J., 2016,
@doi [] 10.3847/0004-6256/152/1/8, 152, 8
[Lavie et al.,Lavie
et al.2023]Lavie2023
Lavie B., et al., 2023, @doi [] 10.1051/0004-6361/202143007, https://ui.adsabs.harvard.edu/abs/2023A A...673A..69L 673, A69
[Law et al.,Law et al.2014]Law2014
Law N. M., et al., 2014, @doi [] 10.1088/0004-637X/791/1/35, 791, 35
[Lee, Offner, Kratter, Smullen & LiLee
et al.2019]Lee2019
Lee A. T., Offner S. S. R., Kratter K. M., Smullen R. A., Li P. S., 2019,
@doi [] 10.3847/1538-4357/ab584b, 887, 232
[Lester et al.,Lester
et al.2021]Lester2021
Lester K. V., et al., 2021, @doi [] 10.3847/1538-3881/ac0d06, 162, 75
[Lester et al.,Lester
et al.2023]Lester2023
Lester K. V., et al., 2023, @doi [] 10.3847/1538-3881/acf563, 166, 166
[Lillo-Box, Barrado & BouyLillo-Box
et al.2014]Lillo2014
Lillo-Box J., Barrado D., Bouy H., 2014, @doi []
10.1051/0004-6361/201423497, 566, A103
[Liu, Leggett, Golimowski, Chiu, Fan,
Geballe, Schneider & BrinkmannLiu et al.2006]Liu2006
Liu M. C., Leggett S. K., Golimowski D. A., Chiu K., Fan X.,
Geballe T. R., Schneider D. P., Brinkmann J., 2006, @doi []
10.1086/505561, https://ui.adsabs.harvard.edu/abs/2006ApJ...647.1393L 647, 1393
[Martin, Nixon, Lubow, Armitage, Price, Doǧan
& KingMartin et al.2014]Martin2014
Martin R. G., Nixon C., Lubow S. H., Armitage P. J., Price D. J., Doǧan
S., King A., 2014, @doi [] 10.1088/2041-8205/792/2/L33, 792,
1
[Mathur et al.,Mathur
et al.2017]Mathur2017
Mathur S., et al., 2017, @doi [] 10.3847/1538-4365/229/2/30, 229, 30
[Moe & Di StefanoMoe & Di
Stefano2017]Moe2017
Moe M., Di Stefano R., 2017, @doi [] 10.3847/1538-4365/aa6fb6,
230, 15
[Moe & KratterMoe &
Kratter2021]Moe2021
Moe M., Kratter K. M., 2021, @doi [] 10.1093/mnras/stab2328, 507,
3593
[Paxton, Bildsten, Dotter, Herwig, Lesaffre &
TimmesPaxton et al.2011]Paxton2011
Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F.,
2011, @doi [] 10.1088/0067-0049/192/1/3, 192, 3
[Paxton et al.,Paxton
et al.2013]Paxton2013
Paxton B., et al., 2013, @doi [] 10.1088/0067-0049/208/1/4, 208, 4
[Paxton et al.,Paxton
et al.2015]Paxton2015
Paxton B., et al., 2015, @doi [] 10.1088/0067-0049/220/1/15, 220, 15
[Pearce, Kraus, Dupuy, Mann, Newton, Tofflemire
& VanderburgPearce et al.2020]Pearce2020
Pearce L. A., Kraus A. L., Dupuy T. J., Mann A. W., Newton E. R.,
Tofflemire B. M., Vanderburg A., 2020, @doi []
10.3847/1538-4357/AB8389, 894, 115
[Pecaut & MamajekPecaut &
Mamajek2016]Pecaut2016
Pecaut M. J., Mamajek E. E., 2016, @doi []
10.1093/mnras/stw1300, https://ui.adsabs.harvard.edu/abs/2016MNRAS.461..794P 461, 794
[Raghavan et al.,Raghavan
et al.2010]Raghavan2010
Raghavan D., et al., 2010, @doi [] 10.1088/0067-0049/190/1/1, 190, 1
[Rajpurohit, Reylé, Allard, Homeier,
Schultheis, Bessell & RobinRajpurohit et al.2013]Rajpurohit2013
Rajpurohit A. S., Reylé C., Allard F., Homeier D., Schultheis M.,
Bessell M. S., Robin A. C., 2013, @doi []
10.1051/0004-6361/201321346, 556, A15
[Rice, Gerbig & VanderburgRice
et al.2024]Rice2024
Rice M., Gerbig K., Vanderburg A., 2024, @doi [arXiv e-prints]
10.48550/arXiv.2401.04173, https://ui.adsabs.harvard.edu/abs/2024arXiv240104173R p. arXiv:2401.04173
[Ricker et al.,Ricker
et al.2014]Ricker2014
Ricker G. R., et al., 2014, @doi [J. Astron. Telesc. Instruments, Syst.]
10.1117/1.JATIS.1.1.014003, 1, 014003
[Rodriguez, Duchêne, Tom, Kennedy,
Matthews, Greaves & ButnerRodriguez et al.2015]Rodriguez2015
Rodriguez D. R., Duchêne G., Tom H., Kennedy G. M., Matthews
B., Greaves J., Butner H., 2015, @doi []
10.1093/mnras/stv483, https://ui.adsabs.harvard.edu/abs/2015MNRAS.449.3160R 449, 3160
[SandersSanders1971]Sanders1971
Sanders W. L., 1971, , https://ui.adsabs.harvard.edu/abs/1971A A....14..226S 14, 226
[Santerne et al.,Santerne
et al.2012]Santerne2012
Santerne A., et al., 2012, @doi [] 10.1051/0004-6361/201219899, 544,
L12
[Seager & Mallen‐OrnelasSeager &
Mallen‐Ornelas2003]Seager2003
Seager S., Mallen‐Ornelas G., 2003, @doi [] 10.1086/346105, 585,
1038
[Service, Lu, Campbell, Sitarski, Ghez &
AndersonService et al.2016]Service2016
Service M., Lu J. R., Campbell R., Sitarski B. N., Ghez A. M., Anderson
J., 2016, @doi [] 10.1088/1538-3873/128/967/095004, 128, 095004
[Skrutskie et al.,Skrutskie
et al.2006]Skrutskie2006
Skrutskie M. F., et al., 2006, @doi [] 10.1086/498708/FULLTEXT/, 131,
1163
[Stalport, Matthews, Bourrier, Leleu,
Delisle & UdryStalport et al.2022]Stalport2022
Stalport M., Matthews E. C., Bourrier V., Leleu A., Delisle
J. B., Udry S., 2022, @doi [] 10.1051/0004-6361/202243971,
https://ui.adsabs.harvard.edu/abs/2022A A...667A.128S 667, A128
[Sullivan & KrausSullivan &
Kraus2022]Sullivan2022c
Sullivan K., Kraus A. L., 2022, @doi [] 10.3847/1538-3881/AC89ED,
164, 138
[Sullivan, Kraus & MannSullivan
et al.2022]Sullivan2022b
Sullivan K., Kraus A. L., Mann A. W., 2022, @doi []
10.3847/1538-4357/AC7BE9, 935, 141
[Sullivan et al.,Sullivan
et al.2023]Sullivan2023
Sullivan K., et al., 2023, @doi [] 10.3847/1538-3881/acbdf9, 165, 177
[Szabó et al.,Szabó
et al.2011]Szabo2011
Szabó G. M., et al., 2011, @doi [] 10.1088/2041-8205/736/1/L4,
736, L4
[Thompson et al.,Thompson
et al.2018]Thompson2018
Thompson S. E., et al., 2018, @doi [] 10.3847/1538-4365/AAB4F9, 235,
38
[TokovininTokovinin2017]Tokovinin2017
Tokovinin A., 2017, @doi [] 10.3847/1538-4357/aa7746, 844, 103
[Tokovinin & KiyaevaTokovinin &
Kiyaeva2015]Tokovinin2015
Tokovinin A., Kiyaeva O., 2015, @doi [] 10.1093/mnras/stv2825,
456, 2070
[Tokovinin & MoeTokovinin &
Moe2020]Tokovinin2020
Tokovinin A., Moe M., 2020, @doi [] 10.1093/mnras/stz3299, 491,
5158
[Toonen, Hamers & ZwartToonen
et al.2016]Toonen2016
Toonen S., Hamers A., Zwart S. P., 2016, @doi [Comp. Astro. Cosmo.]
10.1186/s40668-016-0019-0, 3, 6
[Vallenari et al.,Vallenari
et al.2023]Gaia2023
Vallenari A., et al., 2023, @doi [] 10.1051/0004-6361/202243940, 674,
A1
[Van Eylen & AlbrechtVan Eylen &
Albrecht2015]VanEylen2015
Van Eylen V., Albrecht S., 2015, @doi []
10.1088/0004-637X/808/2/126, 808, 126
[Vousden, Farr & MandelVousden
et al.2016]Vousden2016
Vousden W. D., Farr W. M., Mandel I., 2016, @doi []
10.1093/MNRAS/STV2422, 455, 1919
[Wang, Fischer, Xie & CiardiWang
et al.2014]Wang2014a
Wang J., Fischer D. A., Xie J.-W., Ciardi D. R., 2014, @doi []
10.1088/0004-637X/791/2/111, 791, 111
[Winters et al.,Winters
et al.2019]Winters2019
Winters J. G., et al., 2019, @doi [] 10.3847/1538-3881/ab364d, https://ui.adsabs.harvard.edu/abs/2019AJ....158..152W 158, 152
[Winters et al.,Winters
et al.2022]Winters2022
Winters J. G., et al., 2022, @doi [] 10.3847/1538-3881/ac50a9, 163,
168
[Wizinowich et al.,Wizinowich
et al.2006]Wizinowich2006
Wizinowich P., et al., 2006, @doi [] 10.1086/499290/XML, 118, 297
[WorleyWorley1967]Worley1967
Worley C., 1967, in Dommanget J., ed., IAU Colloq. Vol. 17, Evol. Double
Stars. p. 221
[Yelda, Lu, Ghez, Clarkson, Anderson, Do &
MatthewsYelda et al.2010]Yelda2010
Yelda S., Lu J. R., Ghez A. M., Clarkson W., Anderson J., Do T.,
Matthews K., 2010, @doi [] 10.1088/0004-637X/725/1/331, 725, 331
[Zhang et al.,Zhang et al.2023]Zhang2023
Zhang Z., et al., 2023, @doi [] 10.3847/1538-3881/aca88c, 165, 73
[Zhang et al.,Zhang
et al.2024]Zhang2024
Zhang J., et al., 2024, @doi [] 10.3847/1538-3881/ad1189, https://ui.adsabs.harvard.edu/abs/2024AJ....167...89Z 167, 89
[Ziegler et al.,Ziegler
et al.2017]Ziegler2017
Ziegler C., et al., 2017, @doi [] 10.3847/1538-3881/153/2/66, 153, 66
[Ziegler, Tokovinin, Latiolais, Briceño,
Law & MannZiegler et al.2021]Ziegler2021
Ziegler C., Tokovinin A., Latiolais M., Briceño C., Law N., Mann
A. W., 2021, @doi [] 10.3847/1538-3881/ac17f6, 162, 192
[van Dam et al.,van Dam
et al.2006]vanDam2006
van Dam M., et al., 2006, @doi [] 10.1086/499498/XML, 118, 310
Table: Relative astrometry measurements of our KOIs with two stellar companions from our Keck/NIRC2 adaptive optics imaging and aperture-masking interferometry.
Name | Epoch (UT) | Epoch (MJD) | Separation (mas) | Position Angle (°) | Δm (mag) | Filter
KOI-0005 AB 2012-08-14 56153.45 28.1 ± 1.5 142.8 ± 0.9 0.20 ± 0.09 K'
KOI-0005 AB 2013-08-20 56524.42 29.6 ± 1.5 146 ± 4 0.34 ± 0.09 K_cont
KOI-0005 AB 2014-07-28 56866.45 31.1 ± 1.3 151 ± 3 0.34 ± 0.10 K'
KOI-0005 AB 2015-07-22 57225.43 31.6 ± 2.2 149.6 ± 2.0 0.42 ± 0.08 K'
KOI-0005 AB 2017-06-28 57932.40 32.02 ± 0.16 155.9 ± 1.9 0.26 ± 0.08 K'
KOI-0005 AB 2019-06-12 58646.35 30.0 ± 0.4 156.9 ± 2.0 0.22 ± 0.08 K'
KOI-0005 AB 2020-06-18 59018.58 30.1 ± 1.0 162.3 ± 1.9 0.30 ± 0.06 K'
KOI-0005 AC 2012-08-14 56153.45 120.2 ± 1.3 305.8 ± 1.5 1.800 ± 0.021 K'
KOI-0005 AC 2013-08-20 56524.42 125.6 ± 1.4 305.7 ± 0.4 1.97 ± 0.04 K_cont
KOI-0005 AC 2014-07-28 56866.45 127.0 ± 0.9 305.1 ± 0.8 1.98 ± 0.08 K'
KOI-0005 AC 2015-07-22 57225.43 128.4 ± 1.4 306.04 ± 0.26 2.00 ± 0.07 K'
KOI-0005 AC 2017-06-28 57932.40 130.4 ± 0.7 305.8 ± 0.3 1.93 ± 0.04 K'
KOI-0005 AC 2019-06-12 58646.35 134.5 ± 1.2 306.66 ± 0.22 1.90 ± 0.04 K'
KOI-0005 AC 2020-06-18 59018.58 137.7 ± 0.5 306.75 ± 0.23 1.97 ± 0.03 K'
KOI-0013 A-BC 2013-06-13 56456.48 1157.2 ± 1.1 280.00 ± 0.08 0.16 ± 0.04 K'
KOI-0013 A-BC 2013-08-07 56511.35 1157.87 ± 0.23 279.948 ± 0.021 0.128 ± 0.007 K'
KOI-0013 A-BC 2020-06-18 59018.57 1156.2 ± 0.5 279.870 ± 0.010 0.204 ± 0.007 K'
KOI-0013 A-BC 2021-07-19 59414.47 1154.46 ± 0.24 279.840 ± 0.008 0.242 ± 0.008 K'
KOI-0288 AB 2012-08-14 56153.43 347.30 ± 0.09 319.38 ± 0.05 3.101 ± 0.009 K'
KOI-0288 AB 2014-07-28 56866.54 347.46 ± 0.26 319.60 ± 0.06 3.083 ± 0.009 K'
KOI-0288 AB 2020-06-18 59018.64 350.8 ± 1.8 319.98 ± 0.11 3.07 ± 0.04 K'
KOI-0307 AB 2019-06-12 58646.53 68.51 ± 0.27 241.4 ± 0.6 0.17 ± 0.06 K'
KOI-0307 AB 2021-07-19 59414.38 64.1 ± 0.5 239.2 ± 0.3 0.15 ± 0.04 K'
KOI-0307 AB 2022-07-05 59765.39 61.9 ± 0.5 239.70 ± 0.05 0.0600 ± 0.0006 K'
KOI-0652 AB 2014-06-12 56820.44 1210.2 ± 0.8 272.826 ± 0.016 0.70 ± 0.04 K_cont
KOI-0652 AB 2014-07-18 56856.49 1209.7 ± 1.3 272.83 ± 0.05 0.77 ± 0.08 K'
KOI-0652 AB 2014-08-13 56882.35 1209.24 ± 0.17 272.876 ± 0.016 0.717 ± 0.018 K'
KOI-0652 AB 2020-06-10 59010.59 1211.3 ± 0.4 272.840 ± 0.015 0.80 ± 0.04 K'
KOI-0652 AB 2022-07-06 59766.37 1213.9 ± 1.0 272.845 ± 0.023 0.811 ± 0.019 K'
KOI-0652 AB 2023-06-08 60103.61 1214.02 ± 0.16 272.903 ± 0.012 0.738 ± 0.025 K'
KOI-0652 BC 2014-06-12 56820.44 65.0 ± 1.2 290.9 ± 1.2 1.09 ± 0.07 K_cont
KOI-0652 BC 2014-07-18 56856.49 64.9 ± 0.6 289.7 ± 2.4 1.03 ± 0.08 K'
KOI-0652 BC 2014-08-13 56882.35 65.4 ± 0.4 290.43 ± 0.22 1.025 ± 0.021 K'
KOI-0652 BC 2020-06-10 59010.59 66.1 ± 0.4 291.96 ± 0.24 0.983 ± 0.022 K'
KOI-0652 BC 2022-07-06 59766.37 64.40 ± 0.27 292.4 ± 0.5 0.996 ± 0.011 K'
KOI-0652 BC 2023-06-08 60103.61 62.41 ± 0.27 295.4 ± 0.5 1.046 ± 0.025 K'
KOI-0854 AB 2013-07-17 56490.53 16.1 ± 1.0 209 ± 5 0.30 ± 0.23 K' + 9H
KOI-0854 AB 2016-09-20 57651.29 19.3 ± 0.6 235.4 ± 2.7 -0.050 ± 0.009 K' + 9H
KOI-0854 AC 2013-07-17 56490.53 154.6 ± 0.6 181.5 ± 0.5 3.65 ± 0.11 K'
KOI-0854 AC 2014-07-29 56867.40 153.2 ± 2.6 181.4 ± 0.9 3.81 ± 0.09 K'
KOI-0854 AC 2017-07-01 56867.50 162 ± 4 179.5 ± 2.7 3.82 ± 0.30 K'
KOI-0854 AC 2023-06-09 60104.48 159 ± 7 180.0 ± 1.5 3.60 ± 0.13 K'
KOI-1613 AB 2012-08-14 56153.39 211.69 ± 0.21 184.489 ± 0.029 1.044 ± 0.010 K'
KOI-1613 AB 2013-08-25 56529.27 210.7 ± 0.4 184.45 ± 0.10 1.057 ± 0.010 K'
KOI-1613 AB 2014-08-13 56882.45 209.11 ± 0.04 184.59 ± 0.05 1.0544 ± 0.0022 K'
KOI-1613 AB 2015-07-27 57230.48 206.8 ± 0.5 184.64 ± 0.06 1.14 ± 0.04 K'
KOI-1613 AB 2016-06-16 57555.57 206.31 ± 0.07 184.528 ± 0.024 1.0433 ± 0.0023 K'
KOI-1613 AB 2016-07-15 57584.48 206.15 ± 0.18 184.60 ± 0.04 1.070 ± 0.005 K'
KOI-1613 AB 2017-07-07 57941.34 204.58 ± 0.15 184.43 ± 0.04 1.053 ± 0.006 K'
KOI-1613 AB 2018-06-07 58276.57 203.27 ± 0.09 184.532 ± 0.021 1.053 ± 0.005 K'
KOI-1613 AB 2023-06-09 60104.35 194.87 ± 0.07 184.61 ± 0.04 1.0461 ± 0.0010 K'
KOI-1615 AB 2012-07-06 56114.62 31.8 ± 1.6 122.0 ± 1.6 1.81 ± 0.10 K' + 9H
KOI-1615 AB 2014-07-30 56868.55 30.2 ± 2.8 138.5 ± 2.8 2.23 ± 0.20 K' + 9H
KOI-1615 AB 2014-11-30 56991.20 23.4 ± 2.2 139 ± 4 1.76 ± 0.29 K' + 9H
KOI-1615 AB 2016-09-20 57651.27 17.5 ± 0.9 146 ± 3 0.81 ± 0.27 K' + 9H
KOI-1961 AB 2014-07-31 56869.49 34.60 ± 0.20 258.10 ± 0.20 0.155 ± 0.008 K' + 9H
KOI-1961 AB 2015-07-21 57224.39 36.99 ± 0.15 261.55 ± 0.29 0.190 ± 0.007 K' + 9H
KOI-1961 AB 2016-11-07 57699.21 40.1 ± 0.4 263.5 ± 0.8 0.234 ± 0.028 K' + 9H
KOI-1961 AB 2017-07-01 57935.50 42.08 ± 0.11 268.37 ± 0.29 0.160 ± 0.008 K' + 9H
KOI-1961 AB 2019-06-12 58646.39 45.17 ± 0.15 273.51 ± 0.17 0.209 ± 0.010 K' + 9H
KOI-1961 AB 2023-03-29 60032.60 46.52 ± 0.18 282.06 ± 0.18 0.159 ± 0.013 K' + 9H
KOI-2032 AB 2012-08-13 56152.42 1085.7 ± 0.5 138.39 ± 0.05 0.19 ± 0.07 K'
KOI-2032 AB 2014-08-13 56882.49 1086.3 ± 0.5 138.482 ± 0.026 0.218 ± 0.007 K'
KOI-2032 AB 2021-07-19 59414.33 1091.2 ± 1.4 138.35 ± 0.04 0.216 ± 0.011 K'
KOI-2032 AB 2023-06-08 60103.63 1091 ± 3 138.49 ± 0.14 0.14 ± 0.10 K'
KOI-2032 AC 2012-08-13 56152.42 1149.8 ± 0.4 138.09 ± 0.03 0.34 ± 0.05 K'
KOI-2032 AC 2014-08-13 56882.49 1148.0 ± 0.5 137.900 ± 0.022 0.443 ± 0.006 K'
KOI-2032 AC 2021-07-19 59414.33 1145.7 ± 1.7 137.67 ± 0.04 0.41 ± 0.04 K'
KOI-2032 AC 2023-06-08 60103.63 1140 ± 6 137.5 ± 0.1 0.56 ± 0.14 K'
KOI-2032 BC 2012-08-13 56152.42 63.91 ± 0.11 128.87 ± 0.26 0.167 ± 0.011 K'
KOI-2032 BC 2014-08-13 56882.49 62.84 ± 0.23 128.0 ± 0.5 -0.254 ± 0.027 K'
KOI-2032 BC 2021-07-19 59414.33 54.73 ± 0.21 120.2 ± 1.0 0.154 ± 0.024 K'
KOI-2032 BC 2023-06-08 60103.63 49.8 ± 0.6 121.0 ± 0.9 0.33 ± 0.05 K'
KOI-2117 AB 2015-07-25 57228.54 329.13 ± 0.18 111.11 ± 0.05 0.567 ± 0.012 K'
KOI-2117 AB 2019-07-05 58669.55 328.91 ± 0.09 111.337 ± 0.022 0.575 ± 0.009 K'
KOI-2117 AB 2023-06-09 60104.54 328.60 ± 0.12 111.56 ± 0.03 0.577 ± 0.007 K'
KOI-2517 AB 2019-06-26 58660.52 192.53 ± 0.20 154.58 ± 0.06 2.477 ± 0.010 K'
KOI-2626 AB 2013-07-06 56479.55 206.01 ± 0.16 212.88 ± 0.06 0.480 ± 0.011 K'
KOI-2626 AB 2013-07-18 56491.52 205.6 ± 0.1 212.854 ± 0.010 0.4589 ± 0.0008 K'
KOI-2626 AB 2014-07-28 56866.52 203.9 ± 0.4 212.59 ± 0.06 0.464 ± 0.005 K'
KOI-2626 AB 2014-07-29 56867.37 204.41 ± 0.26 212.60 ± 0.07 0.489 ± 0.020 K'
KOI-2626 AB 2015-06-21 57194.51 203.36 ± 0.21 212.26 ± 0.07 0.474 ± 0.009 K'
KOI-2626 AB 2017-06-29 57933.41 200.3 ± 0.8 211.4 ± 0.4 0.50 ± 0.03 K'
KOI-2626 AB 2019-06-12 58646.61 198.56 ± 0.10 211.07 ± 0.04 0.447 ± 0.004 K'
KOI-2626 AC 2013-07-06 56479.55 161.7 ± 0.3 184.79 ± 0.05 1.044 ± 0.008 K'
KOI-2626 AC 2013-07-18 56491.52 161.60 ± 0.17 184.71 ± 0.04 1.023 ± 0.012 K'
KOI-2626 AC 2014-07-28 56866.52 160.2 ± 0.6 184.37 ± 0.18 1.022 ± 0.015 K'
KOI-2626 AC 2014-07-29 56867.37 161.3 ± 0.3 184.77 ± 0.12 1.09 ± 0.03 K'
KOI-2626 AC 2015-06-21 57194.51 160.4 ± 0.5 184.39 ± 0.07 1.021 ± 0.025 K'
KOI-2626 AC 2017-06-29 57933.41 156.3 ± 0.7 183.6 ± 0.3 0.97 ± 0.05 K'
KOI-2626 AC 2019-06-12 58646.61 156.72 ± 0.21 184.14 ± 0.08 1.057 ± 0.009 K'
KOI-2626 BC 2013-07-06 56479.55 99.05 ± 0.18 83.10 ± 0.20 0.564 ± 0.012 K'
KOI-2626 BC 2013-07-18 56491.52 98.99 ± 0.13 83.22 ± 0.07 0.564 ± 0.013 K'
KOI-2626 BC 2014-07-28 56866.52 98.4 ± 0.4 82.93 ± 0.16 0.558 ± 0.018 K'
KOI-2626 BC 2014-07-29 56867.37 97.41 ± 0.27 83.22 ± 0.16 0.600 ± 0.014 K'
KOI-2626 BC 2015-06-21 57194.51 97.0 ± 0.3 82.85 ± 0.23 0.547 ± 0.028 K'
KOI-2626 BC 2017-06-29 57933.41 95.76 ± 0.21 81.0 ± 0.8 0.47 ± 0.08 K'
KOI-2626 BC 2019-06-12 58646.61 92.21 ± 0.22 81.42 ± 0.15 0.610 ± 0.011 K'
KOI-2971 AB 2015-07-25 57228.50 296.07 ± 0.21 273.75 ± 0.07 3.579 ± 0.013 K'
KOI-3158 AB 2015-06-22 57195.52 1842.5 ± 0.4 252.833 ± 0.020 2.070 ± 0.027 K_cont
KOI-3158 AB 2015-07-21 57224.45 1842.3 ± 0.4 252.823 ± 0.020 2.09 ± 0.06 K_cont
KOI-3158 AB 2016-06-16 57555.64 1841.6 ± 0.4 252.833 ± 0.020 2.16 ± 0.03 K_cont
KOI-3158 AB 2017-11-27 58084.18 1839.9 ± 0.4 252.912 ± 0.020 2.166 ± 0.019 K_cont
KOI-3158 AB 2018-06-07 58276.47 1840.2 ± 0.4 252.913 ± 0.020 2.068 ± 0.020 K_cont
KOI-3158 AB 2020-06-11 59011.63 1838.2 ± 0.4 253.022 ± 0.020 2.168 ± 0.024 K_cont
KOI-3196 AB 2013-08-06 56510.43 126.6 ± 0.6 74.5 ± 0.5 5.10 ± 0.11 K'
KOI-3196 AB 2013-08-20 56524.39 127 ± 6 74.2 ± 0.7 4.9 ± 0.1 K_cont
KOI-3196 AB 2014-07-31 56869.30 134 ± 4 70.6 ± 2.4 4.97 ± 0.17 K'
KOI-3196 AB 2023-06-09 60104.39 135.2 ± 1.0 68.6 ± 0.5 4.9 ± 0.1 K'
KOI-3444 A-BC 2014-08-13 56882.29 1083.17 ± 0.09 10.240 ± 0.017 2.466 ± 0.010 K'
KOI-3444 A-BC 2014-11-30 56991.22 1082.3 ± 0.6 10.212 ± 0.016 2.435 ± 0.020 K'
KOI-3444 A-BC 2015-05-28 57170.58 1084.9 ± 0.9 10.24 ± 0.07 2.502 ± 0.020 K'
KOI-3444 A-BC 2015-07-26 57229.50 1085.10 ± 0.20 10.256 ± 0.011 2.476 ± 0.019 K'
KOI-3444 A-BC 2016-06-16 57555.61 1087.70 ± 0.18 10.208 ± 0.008 2.418 ± 0.009 K'
KOI-3444 BC 2020-08-29 59090.43 53.66 ± 0.18 186.45 ± 0.28 0.232 ± 0.006 K'
KOI-3497 AB 2013-08-06 56510.51 843.4 ± 0.5 176.127 ± 0.024 1.05 ± 0.03 K'
KOI-3497 AB 2015-07-21 57224.43 840.9 ± 0.4 176.263 ± 0.029 1.041 ± 0.016 K'
KOI-3497 AB 2017-06-29 57933.41 837.5 ± 0.9 176.46 ± 0.06 1.118 ± 0.023 K'
KOI-3497 AB 2019-06-12 58646.41 835.6 ± 0.5 176.499 ± 0.024 1.13 ± 0.01 K'
KOI-3497 AB 2021-07-19 59414.43 831.6 ± 0.5 176.64 ± 0.04 1.054 ± 0.015 K'
KOI-3497 AC 2013-08-06 56510.51 767.12 ± 0.20 173.680 ± 0.025 1.679 ± 0.020 K'
KOI-3497 AC 2015-07-21 57224.43 770.6 ± 0.3 173.695 ± 0.029 1.643 ± 0.003 K'
KOI-3497 AC 2017-06-29 57933.41 773.3 ± 0.4 173.76 ± 0.04 1.682 ± 0.014 K'
KOI-3497 AC 2019-06-12 58646.41 777.5 ± 0.4 173.805 ± 0.027 1.742 ± 0.022 K'
KOI-3497 AC 2021-07-19 59414.43 780.9 ± 0.4 173.88 ± 0.03 1.687 ± 0.009 K'
KOI-3497 BC 2013-08-06 56510.51 83.7 ± 0.8 19.17 ± 0.21 0.624 ± 0.013 K'
KOI-3497 BC 2015-07-21 57224.43 79.0 ± 0.3 22.17 ± 0.12 0.602 ± 0.015 K'
KOI-3497 BC 2017-06-29 57933.41 74.6 ± 0.8 25.7 ± 0.5 0.564 ± 0.023 K'
KOI-3497 BC 2019-06-12 58646.41 69.4 ± 0.4 28.3 ± 0.4 0.613 ± 0.021 K'
KOI-3497 BC 2021-07-19 59414.43 63.9 ± 0.4 32.74 ± 0.27 0.633 ± 0.014 K'
KOI-4329 AB 2013-08-21 56525.37 1845.4 ± 0.5 118.230 ± 0.018 2.738 ± 0.029 K_cont
KOI-4329 AB 2019-06-13 58647.56 1843.79 ± 0.19 118.279 ± 0.005 2.945 ± 0.018 K'
KOI-4329 AB 2021-07-19 59414.55 1843.6 ± 0.7 118.293 ± 0.022 2.918 ± 0.020 K'
KOI-4329 AB 2023-06-09 60104.46 1844.7 ± 0.1 118.322 ± 0.003 2.828 ± 0.004 K'
KOI-4528 AB 2020-08-29 59018.59 68.1 ± 1.5 264.9 ± 1.0 0.34 ± 0.04 K'
KOI-4528 AC 2020-08-29 59018.59 182.4 ± 1.6 46.1 ± 0.4 1.207 ± 0.019 K'
KOI-4661 AB 2014-08-18 56887.32 3851.7 ± 0.6 198.014 ± 0.004 1.240 ± 0.022 K'
KOI-4661 AB 2021-06-29 59394.42 3848.6 ± 0.5 197.976 ± 0.004 1.443 ± 0.018 K'
KOI-4661 AC 2014-08-18 56887.32 153.6 ± 0.6 86.34 ± 0.17 2.238 ± 0.003 K'
KOI-4661 AC 2021-06-29 59394.42 156.4 ± 0.4 85.48 ± 0.15 2.257 ± 0.020 K'
KOI-4661 BC 2014-08-18 56887.32 3911.09 ± 0.06 20.1060 ± 0.0020 0.999 ± 0.025 K'
KOI-4661 BC 2021-06-29 59394.42 3911.1 ± 0.6 20.093 ± 0.005 0.814 ± 0.022 K'
KOI-5581 AB 2022-07-05 59765.48 167 ± 7 127.6 ± 1.4 3.75 ± 0.07 K'
KOI-5930 AB 2019-07-16 58680.37 1411.35 ± 0.24 148.859 ± 0.010 1.436 ± 0.023 K'
KOI-5930 AC 2019-07-16 58680.37 74 ± 4 87 ± 4 2.69 ± 0.27 K'
KOI-5930 BC 2019-07-16 58680.37 1379 ± 5 331.58 ± 0.19 1.25 ± 0.27 K'
KOI-7842 AB 2020-08-29 59090.39 78 ± 13 149.7 ± 1.1 2.12 ± 0.22 K'
§ ORBIT PLOTS
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the visual components of KOI-0013. The colour of the orbit indicates the eccentricity, and the positions of the unresolved binary companion KOI-0013 BC relative to the primary KOI-0013 A (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the inner binary of KOI-0652. The colour of the orbit indicates the eccentricity, and the positions of the companion KOI-0652 C relative to KOI-0652 B (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the inner binary of KOI-0854. The colour of the orbit indicates the eccentricity, and the positions of the companion KOI-0854 B relative to KOI-0854 A (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the inner binary of KOI-2032. The colour of the orbit indicates the eccentricity, and the positions of the companion KOI-2032 C relative to KOI-2032 B (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the inner binary of KOI-2626. The colour of the orbit indicates the eccentricity, and the positions of the companion KOI-2626 C relative to KOI-2626 B (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the outer companion of KOI-3444. The colour of the orbit indicates the eccentricity, and the positions of the primary KOI-3444 A relative to the barycenter of the inner binary KOI-3444 BC (shown with black stars; the separation of the binary stars relative to the companion is not to scale) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Left: 50 random orbits from the posterior sample for the orvara fit for the inner binary of KOI-3497. The colour of the orbit indicates the eccentricity, and the positions of the companion KOI-3497 C relative to KOI-3497 B (shown with a black star) are marked with white circles. Right: The measured astrometry for the position angle and relative separation over time, overlaid by 50 possible orbital solutions.
Figure: Complete set of orbital fits for the outer companions in the 6 visual triples. For each system, 50 orbits from the posterior sample for the lofti fit for the outer companion are shown relative to the inner binary (black stars; the separation of the binary stars relative to the companion is not to scale). The colour of the orbit indicates the eccentricity.
§ CORNER PLOTS
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-0005. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the visual components of KOI-0013. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-0652. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-0854. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-2032. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-2626. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the visual components of KOI-3158. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the outer companion relative to the inner binary of KOI-3444. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
Figure: Posteriors from our orvara orbital fit for the inner binary of KOI-3497. Details about each parameter, including credible intervals and the best-fit values of these parameters, are listed in Table <ref>.
|
http://arxiv.org/abs/2409.03471v1 | 20240905123352 | Role of anisotropic confining potential and elliptical driving in dynamics of a Ge hole qubit | [
"Bashab Dey",
"John Schliemann"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
§ ABSTRACT
The squeezing of a Ge planar quantum dot enhances the Rabi frequency of electric dipole spin resonance by several orders of magnitude due to a strong Direct Rashba spin-orbit interaction (DR-SOI) in such geometries [Phys. Rev. B 104, 115425 (2021); https://doi.org/10.1103/PhysRevB.104.115425]. We investigate the geometric effect of an elliptical (squeezed) confinement and its interplay with the polarization of the driving field in determining the Rabi frequency of a heavy-hole qubit in a planar Ge quantum dot.
To calculate the Rabi frequency, we consider only the p-linear SOIs, viz. electron-like Rashba, hole-like Rashba and hole-like Dresselhaus, which recent studies on planar Ge heterostructures claim to be the dominant ones. We derive approximate analytical expressions for the Rabi frequency using a Schrieffer-Wolff transformation for small SOI and driving strengths. Firstly, for an out-of-plane magnetic field of magnitude B, we identify an operating region with respect to B and the squeezing and polarization parameters
where the qubit can be driven to obtain 'clean' Rabi flips. On and close to the boundaries of this region, the higher orbital levels strongly interfere with the two-level qubit subspace and destroy the Rabi oscillations, thereby limiting how strongly the confinement can be squeezed. The Rabi frequency shows different behaviour for electron-like and hole-like Rashba SOIs: it vanishes for right (left) circular polarization in the presence of purely electron-like (hole-like) Rashba SOI in a circular confinement. Secondly, for an in-plane magnetic field, the Rabi frequency is maximal for linear polarization when the driving electric field is parallel (perpendicular) to the magnetic field in the presence of purely Rashba (Dresselhaus) SOI. For both orientations of the magnetic field, higher Rabi frequencies are achieved for squeezed configurations when the ellipses of the polarization and of the confinement equipotential have their major axes aligned but with different eccentricities.
Role of anisotropic confining potential and elliptical driving in dynamics of a Ge hole qubit
Bashab Dey and John Schliemann
Institute of Theoretical Physics, University of Regensburg, Regensburg, Germany
September 9, 2024
=====================================================================================================================
§ INTRODUCTION
Hole spin qubits have drawn immense interest in recent times due to several advantageous features over their electronic counterparts, such as stronger spin-orbit interaction (SOI) enabling faster electrical manipulation <cit.>, reduced contact-hyperfine interaction leading to longer decoherence times <cit.>, and the absence of valley degeneracy <cit.>. These qubits are based on the valence band states of group IV (Si, Ge) and III-V (GaAs, InSb, etc.) semiconductors <cit.>. Among them, germanium turns out to be a favorite due to the low effective mass of holes <cit.>, which allows larger dot sizes, isotopic purification <cit.>, which suppresses decoherence from nuclear spins, and stronger SOI than Si <cit.>, facilitating rapid qubit control. Ge hole qubits have shown significant advancements in recent years <cit.>, highlighted by the demonstration of single- and two-qubit control <cit.>, singlet-triplet encoding <cit.>, a four-qubit processor <cit.> and successful charge control in a sixteen-dot array <cit.>. These qubits are hosted in quantum dots based on planar Ge/SiGe heterostructures, nanowires and hut wires.
In planar Ge/SiGe quantum wells, the dot is formed by a strong confinement along the growth direction (say z) and weak lateral confinement created by the smoothly varying gate voltages. The low energy quasiparticles in these dots are the heavy hole states carrying effective spin J=3/2<cit.>. These states are primarily influenced by p-cubic Rashba SOI <cit.>, which includes cubic and spherically-symmetric terms, with the latter being more dominant. These terms arise from heavy hole (HH)/light hole (LH) mixings derived through second-order perturbation theory applied to the Luttinger-Kohn Hamiltonian <cit.> and depend on valence band anisotropies <cit.> and lateral confinement anharmonicities <cit.>.
Recent studies also suggest the presence of p-linear SOIs, both Rashba and Dresselhaus types, in Ge/SiGe heterostructures <cit.>. The p-linear Rashba SOI, attributed to the local C_2v interface <cit.> and determined through atomistic pseudopotential method calculations <cit.>, is believed to drive electric dipole spin resonance (EDSR) in planar Ge quantum dots observed in experiments <cit.> with in-plane magnetic fields. For an out-of-plane magnetic field, the less significant cubic symmetric component of p-cubic Rashba SOI is shown to be responsible for EDSR, resulting in slower spin rotations <cit.>.
Another form of weak p-linear Rashba SOI has been identified <cit.>, resulting from the interaction between the HH/LH manifold and remote conduction bands due to the structural inversion asymmetry of the heterostructure. Dresselhaus SOI was known to be absent in Ge due to its centrosymmetric structure. It has been reported that symmetry breaking at the Ge/GeSi interfaces gives rise to a p-linear Dresselhaus-type SOI <cit.>, which can be stronger than cubic Rashba SOI and may dominate the behavior of quasicircular dots under out-of-plane magnetic fields, assuming the strains are uniform. Furthermore, moving the dot across inhomogeneous strain fields combined with g-factor modulations can induce a specific kind of p-linear Rashba SOI that can speed up the Rabi oscillations <cit.>. Inhomogeneous and inseparable electric fields can also induce an SOI that causes Rabi rotations under in-plane magnetic fields <cit.>.
In Ge/Si (core/shell) nanowires, the hole states have a strong p-linear Direct Rashba spin-orbit interaction (DRSOI) <cit.> which can be used to leverage spin rotations about 100 times faster than the hole qubits in planar quantum dots. Unlike the conventional Rashba coupling which arises due to structural inversion asymmetry, the DRSOI results from the dipolar coupling between the quasidegenerate ground and excited states of the nanowire under a hard-wall boundary condition along the radial direction <cit.>. Its effect has been simulated in a squeezed (elongated) planar Ge quantum dot and large Rabi frequencies have been reported even at small driving amplitudes <cit.>. Hence, the DRSOI holds the prospect of designing lower power ultrafast quantum gates in squeezed geometries.
The mechanism of hole spin EDSR has been theoretically investigated in both single <cit.> and double <cit.> planar Ge quantum dots. A recent study has also examined the combined effects of p-linear and cubic Rashba SOIs, as well as the behaviour of photoinduced Rabi oscillations under strong circular driving (beyond second-order perturbation theory) in an isotropic planar Ge dot <cit.>. Although the DRSOI-induced EDSR has been studied recently in squeezed dots <cit.>, the specific impact of squeezing or anisotropy in planar Ge quantum dots and its interplay with the direction of the applied electric field on the Rabi frequency has not yet been addressed. In this study, we examine the Rabi oscillations of an anisotropic planar Ge quantum dot under the influence of a coherent laser beam with generic polarization, considering the recently discovered p-linear Rashba and Dresselhaus SOIs <cit.> but not the DRSOI. Although squeezing the dot may affect the SOI strengths and g-factors, we assume that they remain constant for the sake of simplicity <cit.>. Instead of gate voltages, we consider the driving force provided by the electric field of a coherent laser beam, as its polarization offers tunability and a broader understanding of the directional dependence of the Rabi frequency on the driving field.
We employ both analytical and numerical approaches to study the qubit dynamics. For an out-of-plane magnetic field, we use the exact Fock-Darwin states of an elliptical potential and study the dynamics analytically using a Schrieffer-Wolff projection to the lowest Zeeman-split block. Numerical simulations using Floquet theory reveal approximate `anisotropy cutoffs,' beyond which Rabi oscillations become heavily distorted as the excited states approach the qubit block. We demonstrate that increasing anisotropy (while keeping other system parameters constant) results in a significant rise in the Rabi frequency magnitude. The Rabi frequency is enhanced when the major axes of both the ellipses align in the same direction. We also calculate Rabi frequencies for in-plane magnetic fields, commonly used in experiments, and study their variation with the rotation of the magnetic field vector on the qubit plane. We analyze the results for both Rashba and Dresselhaus SOIs, identifying the role of squeezing in determining the Rabi frequency. We derive an analytical expression showing the condition that the eccentricities of the polarization and equipotential ellipses must satisfy to achieve maximum Rabi frequency.
The paper is organized as follows. In Sec. <ref>, we discuss the physics in the presence of an out-of-plane magnetic field. In Sec. <ref>, we present the theoretical model of the elliptical quantum dot and map it to the Fock-Darwin model, whose eigenstates constitute the set of basis states for our problem. In Secs. <ref> and <ref>, we derive the approximate analytical expressions of the Rabi frequency for electron- and hole-like SOIs respectively. In Sec. <ref>, we discuss the physics for an in-plane magnetic field. In Sec. <ref>, we model the quantum dot as an anisotropic harmonic oscillator. In Secs. <ref> and <ref>, we deduce the approximate analytical expressions of the Rabi frequency for electron- and hole-like SOIs respectively. In Sec. <ref>, we present and analyse the results of the Rabi frequency for realistic system parameters and driving strengths. In Secs. <ref> and <ref>, we analyse the behaviour of the Rabi frequency using the analytical results obtained in Secs. <ref> and <ref> for an out-of- and in-plane magnetic field respectively. In Sec. <ref>, we show results of the Rabi oscillations for the squeezing parameters where the analytical expressions of the Rabi frequency are inaccurate or cannot be obtained. Finally, we conclude our results in Sec. <ref>.
§ OUT-OF-PLANE MAGNETIC FIELD
§.§ Fock-Darwin Model
The Hamiltonian of a Ge heavy hole in an anisotropic planar quantum dot, as shown in Fig. <ref>, can be modelled as H=H_0+H_SOI where
H_0=p^2/2m+ 1/2m(ω_x^2 x^2+ω_y^2 y^2)
with m being the effective heavy-hole mass and H_SOI is the spin-orbit interaction term for the heavy-holes. The expression of H_SOI depends on the specific type of SOI considered in the problem (which we discuss in the subsequent sections). In presence of an out-of-plane magnetic field B_⊥=(0,0,B), the orbital motion interacts with the field through minimal coupling p→ P= p-|e| A( r) where A (r)=B ł(-y,x)̊/2 in the symmetric gauge and the spins couple directly with the field through the Zeeman interaction. The resulting Hamiltonian is H_⊥=H_FD+H_Z,⊥+H_SOI,⊥ where H_FD is the Fock-Darwin (FD) Hamiltonian responsible for confinement, H_Z is the Zeeman Hamiltonian required to create the two-level spin-qubit system and H_SOI,⊥ is the B-dependent (through minimal coupling) SOI that can cause EDSR upon periodic driving.
The FD Hamiltonian can be written as
H_FD=1/2m(p_x^2+p_y^2+Ω_x^2 x^2+ Ω_y^2 y^2 - mω_c L_z)
where Ω_x,y^2=m^2(ω_x,y^2+ω_c^2/4), ω_c=|e| B /m and L_z=x p_y-y p_x.
The above Hamiltonian is exactly solvable with the following coordinate transformations <cit.>:
x= cosχ q_1 - χ_2 sinχ p_2,
y= cosχ q_2 - χ_2 sinχ p_1 ,
p_x= χ_1 sinχ q_2 + cosχ p_1,
p_y= χ_1 sinχ q_1 + cosχ p_2,
where χ_1=-Ω/2, χ_2=1/χ_1 and χ=tan^-1[√(2) m ω_c Ω/(Ω_x^2-Ω_y^2)]/2 with Ω=√(Ω_x^2+Ω_y^2) and [q_i,q_j]=[p_i,p_j]=0, [q_i,p_j]=iħδ_i,j. Upon transformation, the Hamiltonian can be simplified as
H_FD=p_1^2/2m_1+p_2^2/2m_2+1/2m_1ω_1^2 q_1^2+1/2m_2ω_2^2 q_2^2
where m_1,2=m/α_1,2^2 and ω_1,2=α_1,2β_1,2/m with
α_1^2=Ω_x^2+3Ω_y^2+sgn[Ω_x^2-Ω_y^2] Ω_3^2/2 Ω^2,
α_2^2=3Ω_x^2+Ω_y^2- sgn[Ω_x^2-Ω_y^2] Ω_3^2/2 Ω^2,
β_1^2=1/4ł(3Ω_x^2+Ω_y^2+ sgn[Ω_x^2-Ω_y^2] Ω_3^2)̊,
β_2^2=1/4ł(Ω_x^2+3Ω_y^2-sgn[Ω_x^2-Ω_y^2] Ω_3^2)̊
and
Ω_3^2=ł[ (Ω_x^2 - Ω_y^2)^2 + 2m^2ω_c^2 Ω^2]̊^1/2.
Here, sgn is the signum function defined as sgn[x]=±1 for x≷0.
In terms of ladder operators
a_i=1/√(2)ł(q_i/𝒳_i+ip_i/𝒫_i)̊, a^†_i=1/√(2)ł(q_i/𝒳_i-ip_i/𝒫_i)̊
with 𝒳_i=√(ħ /(m_i ω_i)), 𝒫_i=√(ħ m_i ω_i){i=1,2}, the FD Hamiltonian can be rewritten as
H_FD=ħω_1( a_1^† a_1+1/2)+ħω_2(a_2^† a_2+1/2).
The Zeeman Hamiltonian can be defined as
H_Z,⊥=-ħω_Z/2σ_z
where ħω_Z≡ g_⊥μ_B B/2 is the Zeeman splitting with g_⊥ being the out-of-plane g-factor for the holes.
The eigenstates and eigenenergies of H_FD + H_Z,⊥ are |n_1, n_2,s⟩ and E_n_1,n_2,s=ħω_1(n_1+1/2)+ħω_2(n_2+1/2)-sgn[s]ħω_Z/2 respectively
where s=±3/2 and (n_1,n_2) represent the quantum numbers of the two independent harmonic oscillators along directions (q_1,q_2). We shall use the FD basis {|n_1, n_2, s⟩} to obtain the approximate analytical and exact numerical solutions to the time dependent Schrodinger equation upon driving by a coherent laser in presence of out-of-plane B.
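Before moving on, it may help to see the normal-mode construction above in numbers. The following short Python sketch (an illustration with ħ = 1 and arbitrary parameter values, not values taken from the text) evaluates ω_1 and ω_2 from the α_i, β_i and Ω_3 expressions and checks them against the isotropic Fock-Darwin limit ω_1,2=√(ω_0^2+ω_c^2/4)±ω_c/2 quoted later in the Results section.

import numpy as np

def fock_darwin_modes(m, wx, wy, wc):
    # Normal-mode frequencies (w1, w2) following the Omega, alpha, beta
    # expressions above; hbar = 1 and all quantities share one unit system.
    Ox2 = m**2 * (wx**2 + wc**2 / 4.0)                         # Omega_x^2
    Oy2 = m**2 * (wy**2 + wc**2 / 4.0)                         # Omega_y^2
    O2 = Ox2 + Oy2                                             # Omega^2
    sg = 1.0 if Ox2 > Oy2 else -1.0                            # sgn[Omega_x^2 - Omega_y^2]
    O32 = np.sqrt((Ox2 - Oy2)**2 + 2.0 * m**2 * wc**2 * O2)    # Omega_3^2
    a1 = np.sqrt((Ox2 + 3.0 * Oy2 + sg * O32) / (2.0 * O2))
    a2 = np.sqrt((3.0 * Ox2 + Oy2 - sg * O32) / (2.0 * O2))
    b1 = np.sqrt((3.0 * Ox2 + Oy2 + sg * O32) / 4.0)
    b2 = np.sqrt((Ox2 + 3.0 * Oy2 - sg * O32) / 4.0)
    return a1 * b1 / m, a2 * b2 / m

# Near-isotropic check: w_{1,2} should approach sqrt(w0^2 + wc^2/4) +/- wc/2.
m, w0, wc = 1.0, 1.0, 0.3
w1, w2 = fock_darwin_modes(m, w0 * (1.0 + 1e-6), w0, wc)
print(w1, np.sqrt(w0**2 + wc**2 / 4.0) + wc / 2.0)   # both ~1.161
print(w2, np.sqrt(w0**2 + wc**2 / 4.0) - wc / 2.0)   # both ~0.861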
§.§ EDSR with electron-like Rashba SOI
The conventional SOI known for the heavy holes in planar Ge heterostructures is p-cubic Rashba, given by
H_SOI^c=iα_R^(1) p_+ p_- p_+ σ_+ + iα_R^(2)p_+^3 σ_- +H.c.,
while the Dresselhaus SOI is absent due to bulk inversion symmetry of Ge. For an out-of-plane magnetic field, EDSR can occur only if α_R^(1)≠0<cit.>. However, the magnitude of α_R^(1) is very small in these systems which leads to extremely low Rabi frequencies. Hence, we ignore the p-cubic Rashba coupling for the rest of the paper. In Ref. <cit.>, it has been reported that the SOI responsible for the EDSR observed in experiments with planar Ge quantum dots is of p-linear Rashba type, which has the form
H_SOI^l=-iα_l(p_-σ_+-p_+σ_-).
This Rashba SOI has a similar form to that of the conduction electrons and hence we term it the `electron-like' Rashba SOI.
For an out-of-plane magnetic field, the SOI also becomes B-dependent through minimal coupling and simplifies as
H_SOI,⊥^l= α_lł(-f^(+)_1- a_1 + f^(-)_1- a_1^†+i f_2+^(-) a_2 -i f_2+^(+) a_2^†)̊σ_+
+ H.c.
Here, f^(a)_bc are real-valued functions defined as f^(±)_i±=f^(𝒫)_i±± f^(𝒳)_i± with
f^(𝒫)_i±=𝒫_i/√(2)ł(cosχ±m ω_c χ_2 sinχ/2)̊
and
f^(𝒳)_i±=𝒳_i/√(2)ł(χ_1 sinχ±m ω_c cosχ/2)̊.
Hence, the total Hamiltonian of the heavy hole in presence of out-of-plane magnetic field and electron-like Rashba SOI can be written as H^l_⊥=H_FD+H_Z,⊥+H_SOI,⊥^l. To observe EDSR, we drive the system with an electrical pulse provided by a coherent laser beam.
Let us consider a beam of generic polarization incident normally on the planar dot with the electric field vector E( r,t)=[E_0xsin(ω t + kz),E_0ycos (ω t + kz),0]. Then, the driving potential at the quantum dot plane (z=0) can be written in the length gauge as <cit.>
V( r,t)=-|e|∫_ r E· d r^'=-(F_0x x sinω t + F_0y y cosω t)
where F_0 x,y=|e|E_0 x,y. In terms of ladder operators, we have
V( r,t)=v_1(t) a_1+ v_2(t) a_2 + H.c.,
where
v_1(t)=-1/√(2)ł(𝒳_1 F_0xsinω t cosχ+i 𝒫_1 F_0ycosω t χ_2 sinχ)̊
and
v_2(t)=-1/√(2)ł(i 𝒫_2 F_0xsinω t χ_2 sinχ + 𝒳_2 F_0ycosω t cosχ)̊.
The total Hamiltonian including the periodic drive is H^l_⊥+V( r,t). On performing a Schrieffer-Wolff transformation [see Appendix <ref>], we get an effective EDSR Hamiltonian for the qubit as
[H^l_⊥]_eff(t) =-ł(ħω_Z+Δ^l_⊥/2)̊σ_z
+ħ/2 (ω_res,⊥^l ^i ω t +ω_off,⊥^l^-i ω t) σ_+ + H.c.
where
ω_res,⊥^l=1/√(2)χ_1×
{χ_1 cosχł[i 𝒳_1F_0x (S^(2)_1a-S^(2)_1b) +𝒳_2F_0y (-S^(2)_2a+S^(2)_2b)]̊
+sinχł[ i 𝒫_1 F_0y (S^(2)_1a+S^(2)_1b)+ 𝒫_2F_0x (S^(2)_2a+S^(2)_2b) ]̊},
ω_off,⊥^l=1/√(2)χ_1×
{χ_1 cosχł[-i 𝒳_1F_0x (S^(2)_1a-S^(2)_1b) +𝒳_2F_0y (-S^(2)_2a+S^(2)_2b)]̊
+sinχł[ i 𝒫_1 F_0y (S^(2)_1a+S^(2)_1b)- 𝒫_2F_0x (S^(2)_2a+S^(2)_2b) ]̊}
and
Δ_⊥^l=α_l^2/ħł[(f_1-^(+))^2/ω_1+ω_Z + (f_2+^(-))^2/ω_2+ω_Z-(f_1-^(-))^2/ω_1-ω_Z - (f_2+^(+))^2/ω_2-ω_Z]̊.
The expressions of S^(2)_1a, S^(2)_1b, S^(2)_2a and S^(2)_2b are provided in Appendix <ref>. For ω_off^⊥≪ω_Z, the term ∝^iω t in Eq. (<ref>) contributes to the Rabi oscillations with resonant frequency |ω^l_res,⊥| while the term ∝^-iω t gives the rapidly oscillating contributions which can be discarded by the rotating wave approximation. The resonance condition is ω=ω_Z+Δ_⊥^l/ħ.
The orientation of the ellipse of polarization can also be varied on the x-y plane (keeping the centre fixed). Let the ellipse be rotated through some angle θ about the z-axis of the squeezed confinement. We label θ as the `orientation' angle. The electric field then transforms as E_θ ( r,t)= R_θ E ( r, t) where R_θ is the standard rotation matrix about the z-axis defined as
R_θ=ł([ cosθ -sinθ; sinθ cosθ ])̊.
Then, the resonant Rabi frequency for an orientation angle θ is given by |ω_res,⊥^l(θ)| where
ω_res,⊥^l(θ)=ω_res,⊥^l cosθ +sinθ/√(2)χ_1×
{χ_1 cosχł[𝒳_1F_0y (S^(2)_1a-S^(2)_1b) +i𝒳_2F_0x (S^(2)_2a-S^(2)_2b)]̊
+sinχł[ 𝒫_1 F_0x (S^(2)_1a+S^(2)_1b)- i𝒫_2F_0y (S^(2)_2a+S^(2)_2b) ]̊}
where ω_res,⊥^l is defined in equation (<ref>).
§.§ EDSR with hole-like Dresselhaus and Rashba SOI
In Ref. <cit.>, p-linear Dresselhaus (H_D^(+)) and Rashba (H_R^(+)) SOIs have been derived for heavy holes in planar Ge/Si heterostructures where
H_D^(+)=α_D(p_xσ_x+p_yσ_y)=α_D(p_-σ_++p_+σ_-)
and
H_R^(+)=α_R(p_xσ_y+p_yσ_x)=-iα_R(p_+σ_+-p_-σ_-)
such that the net SOI is
H_SOI,⊥^(+) =α_D(p_-σ_+ + p_+σ_-) -i α_R (p_+σ_+-p_-σ_-)
=(α_D p_- -i α_R p_+)σ_+ + H.c.
Here, the `+' sign replaces the conventional `-' sign between the σ_x and σ_y terms present for electrons in the Rashba or Dresselhaus SOIs because spin 3/2 transforms differently from spin 1/2 under certain symmetry operations <cit.>.
In presence of an out-of-plane magnetic field, p_±→ P_±=P_x ± i P_y and hence we get the B-dependent hole-like SOI as
H_SOI,⊥^(+)=(h_1a a_1 + h_1b a^†_1 + h_2a a_2 + h_2b a^†_2)σ_+ + H.c.
where
h_1a=-ł(iα_D f_1-^(+)+α_R f_1-^(-))̊
h_1b=iα_D f_1-^(-)+α_R f_1-^(+)
h_2a=-ł(α_D f_2+^(-)+iα_R f_2+^(+))̊
h_2b=α_D f_2+^(+)+iα_R f_2+^(-)
where f^(a)_bc are defined in Eqs. (<ref>) and (<ref>). Using a Schrieffer-Wolff transformation and driving with V( r, t), we get the effective EDSR Hamiltonian as
[H_⊥^(+)]_eff(t)= -ł(ħω_Z + Δ_⊥^(+)/2)̊σ_z
+ł[ħ/2(ω_res,⊥^(+)^iω t+ω_off,⊥^(+)^-iω t)σ_+ + H.c.]̊
where
Δ_⊥^(+) = 1/ħł[|h_1a|^2/ω_1+ω_Z + |h_2a|^2/ω_2+ω_Z-|h_1b|^2/ω_1- ω_Z - |h_2b|^2/ω_2- ω_Z]̊,
and ω_res,⊥^(+) and ω_off,⊥^(+) have same expressions as ω^l_res,⊥ and ω^l_off,⊥ in (<ref>) and (<ref>) respectively but with new {S^(2)_lm} defined as:
S^(2)_1a=-h_1a/ħω_1+ħω_Z,
S^(2)_1b=h_1b/ħω_1-ħω_Z,
S^(2)_2a=-h_2a/ħω_2+ħω_Z
and
S^(2)_2b=h_2b/ħω_2-ħω_Z.
The resonant Rabi frequency is |ω_res,⊥^(+)| and the resonance condition ω=ω_Z+Δ_⊥^(+)/ħ.
§ IN-PLANE MAGNETIC FIELD
§.§ Model
Let us consider a generic in-plane magnetic field which makes an angle ϕ with the x-axis i.e. B=(B_x,B_y,0)=B(cosϕ, sinϕ,0). The vector potential can be chosen as A( r)=B(0,0, ycosϕ - x sinϕ), which does not couple to the orbital degree of freedom as the out of plane motion of the hole is quenched. Then, the 2D heavy-hole Hamiltonian is H_||=H_0+H_Z,||+H_SOI where H_0 is defined in (<ref>), H_SOI can be electron- or hole-like as defined in Eqs. (<ref>) and (<ref>) respectively, and
H_Z,||=-g_||μ_B/2(σ_x B_x - σ_y B_y)=-ħω_Z/2ł(^iϕσ_+ + ^-iϕσ_-)̊
with ω_Z=g_||μ_B B/2. Thus the in-plane g-factor is anisotropic i.e. g_yy=-g_xx=-g_||.
Consequently, the spin vector ⟨σ(t)⟩ of a heavy hole makes an angle 2ϕ or π-2ϕ with the direction of B in the |+⟩ or |-⟩ eigenstates respectively. This is in contrast with electronic qubits, where the spin vector of the |±⟩ states is aligned along/opposite to B.
Since we want to deduce an effective 2-level Rabi Hamiltonian upon driving by the laser, we first diagonalize (<ref>) by the unitary transformation: H̃_Z,||=U^† H_Z,|| U = -ħω_Z/2σ_z where
U=1/√(2)ł ( [ 1 1; ^-iϕ -^-iϕ ])̊.
Similarly, H̃_0=U^† H_0 U=H_0. In terms of ladder operators
a_x=1/√(2)ł(x/X_0+ip_x/P_x0)̊, (a_x)^†=a_x^†
and
a_y=1/√(2)ł(y/Y_0+ip_y/P_y0)̊, (a_y)^†=a_y^†
with X_0=√(ħ /(m ω_x)), P_x0=√(ħ m ω_x), Y_0=√(ħ /(m ω_y)) and P_y0=√(ħ m ω_y), we can write
H̃_0=ħω_xł(a_x^† a_x+1/2)̊+ħω_ył(a_y^† a_y+1/2)̊.
Hence, the eigenstates and eigenvalues of H̃_0+H̃_Z,|| are |n_x, n_y,s⟩ and E_n_x,n_y,s=ħω_x(n_x+1/2)+ħω_y(n_y+1/2)-sgn[s]ħω_Z/2 respectively
where s=±3/2 and (n_x,n_y) represent the quantum numbers of the two uncoupled harmonic oscillators along directions (x,y). For in-plane magnetic field, we shall use the oscillator basis {|n_x, n_y, s⟩} later to obtain the approximate analytical and exact numerical results of the Rabi frequency.
§.§ EDSR with electron-like Rashba SOI
For the electron-like Rashba SOI of Eq. (<ref>), the unitary transformation yields
H̃_SOI^l =U^† H_SOI^lU=-i α_l/2[ł( p_- ^-iϕ - p_+ ^iϕ)̊σ_z
+ł( p_- ^-iϕ + p_+ ^iϕ)̊ł(σ_- - σ_+)̊]
=-α_l/√(2)[ił{𝒫_xsinϕ (a_x^†-a_x) + 𝒫_ycosϕ (a_y^†-a_y)}̊σ_z
+ł{𝒫_x cosϕ (a_x^†-a_x)-𝒫_y sinϕ (a_y^†-a_y)}̊ł(σ_+ - σ_-)̊]
Similarly, the drive Ṽ( r,t)=U^† V( r,t) U=V( r,t) can be written as
Ṽ( r,t)=-1/√(2)[F_0x X_0 (a_x^† + a_x) sinω t + F_0y Y_0 (a_y^† + a_y) cosω t ].
The total Hamiltonian with driving is hence H̃_||^l=H̃_0+ H̃_Z,||+H̃_SOI^l+Ṽ( r,t).
Again, performing SW transformation, the effective EDSR Hamiltonian of the qubit is obtained as
[H_||^l]_eff= -ł(ħω_Z+Δ_||^l (ϕ)/2)̊σ_z
+ħ/2[ ω^l_res,|| (ϕ) ^i ω t
+ H.c. ]σ_+ + H.c.
where
Δ_||^l (ϕ)=α_R^2 /2ħ [P_0x^2 cos^2 ϕł( 1/ω_x+ω_Z - 1/ω_x-ω_Z)̊
+P_0y^2 sin^2 ϕł( 1/ω_y+ω_Z - 1/ω_y-ω_Z)̊]
and
ω_res, ||^l (ϕ)=iα_R /2 ħ [ F_0xcosϕł( 1/ω_x-ω_Z - 1/ω_x+ω_Z)̊
-i F_0ysinϕł( 1/ω_y-ω_Z - 1/ω_y+ω_Z)̊].
Thus, the resonant Rabi frequency is
|ω_res, ||^l (ϕ)|=α_R ω_Z/ħł[F_0x^2 cos^2 ϕ/(ω_x^2-ω_Z^2)^2 + F_0y^2 sin^2 ϕ/(ω_y^2-ω_Z^2)^2]̊^1/2
For linearly polarized radiation, |ω_res, ||^l (ϕ)| vanishes if E(t)⊥ B and is maximum when E(t) || B. For x-polarized beams, |ω_res, ||^l (ϕ)| peaks when B||ê_x and ω_x ≳ω_Z, whereas for y-polarized beams, |ω_res, ||^l (ϕ)| peaks when B||ê_y and ω_y≳ω_Z. The transformation γ→γ±π changes the sense of rotation of the elliptical polarization, while ϕ→ϕ±π flips the direction of B. We observe that |ω_res, ||^l (ϕ)| is independent of both the sense of rotation of E(t) and the B-flip operation.
For ω_x,ω_y≫ω_Z i.e. stronger confinement (smaller quantum dots) or low magnetic fields, the Rabi frequency is approximately
|ω_res, ||^l (ϕ)|≈α_R ω_Z/ħł[F_0x^2 /ω_x^4cos^2 ϕ + F_0y^2 /ω_y^4sin^2 ϕ]̊^1/2.
In such cases, if F_0x/F_0y=ω_x^2/ω_y^2, then ω_res, ||^l≈α_R F_0xω_Z/(ħω_x^2) =α_R F_0yω_Z/(ħω_y^2) is independent of the orientation of B. In other words, if the major axes of the polarization and potential ellipses are perpendicular to each other and their eccentricities e_E and e_C (respectively) satisfy the relation √(1-e_E^2)=1-e_C^2, the Rabi frequency is ϕ-independent.
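The limiting statements above are easy to verify numerically. The sketch below evaluates |ω_res,||^l (ϕ)| from the expression above for an illustrative set of dimensionless parameters (ħ = 1, values chosen only for demonstration): it shows the vanishing of the Rabi frequency for a linear drive perpendicular to B and the near ϕ-independence when F_0x/F_0y=ω_x^2/ω_y^2.

import numpy as np

aR = 0.005                       # illustrative p-linear Rashba strength, hbar = 1
wx, wy, wZ = 1.0, 0.6, 0.01      # confinement and Zeeman frequencies, wZ << wx, wy

def rabi_inplane_rashba(phi, F0x, F0y):
    # |w_res,||^l(phi)| for the electron-like Rashba SOI with an in-plane field
    return aR * wZ * np.sqrt(F0x**2 * np.cos(phi)**2 / (wx**2 - wZ**2)**2
                             + F0y**2 * np.sin(phi)**2 / (wy**2 - wZ**2)**2)

# (i) x-polarized drive, B along y (phi = pi/2): E(t) is perpendicular to B,
#     so the Rabi frequency vanishes.
print(rabi_inplane_rashba(np.pi / 2, F0x=1.0, F0y=0.0))

# (ii) F0x/F0y = wx^2/wy^2: nearly phi-independent Rabi frequency for wZ << wx, wy.
phi = np.linspace(0.0, np.pi, 7)
print(rabi_inplane_rashba(phi, F0x=wx**2, F0y=wy**2))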
§.§ EDSR with hole-like Rashba and Dresselhaus SOIs
For the hole-like Dresselhaus and Rashba SOIs of Eq. (<ref>), the unitary transformation yields
H̃_SOI^(+)=i/√(2)σ_z [P_0x (α_D cosϕ-α_R sinϕ)(a_x^†-a_x)
+P_0y (-α_D sinϕ+α_R cosϕ) (a_y^†-a_y) ]
-1/√(2)(σ_+-σ_-)[P_0x(α_D sinϕ + α_R cosϕ)(a_x^†-a_x)
+P_0y(α_D cosϕ + α_R sinϕ)(a_y^†-a_y)].
The total Hamiltonian with driving is hence H̃_||^(+)=H̃_0+ H̃_Z,||+H̃_SOI^(+)+Ṽ( r,t).
Again, performing SW transformation, the effective EDSR Hamiltonian of the qubit is obtained as
[H_||^(+)]_eff= -ł(ħω_Z+Δ_||^(+) (ϕ)/2)̊σ_z
+ħ/2[ ω^(+)_res,|| (ϕ) ^i ω t
+ H.c. ]σ_+ + H.c.
where
ω^(+)_res,|| (ϕ)=iω_Z/ħ×
ł[F_0x(α_D sinϕ +α_R cosϕ)/ω_x^2-ω_Z^2+iF_0y(α_D cosϕ +α_R sinϕ)/ω_y^2-ω_Z^2]̊
and
Δ_||^(+) (ϕ)
=1 /2ħ[P_0x^2 (α_D sinϕ+α_R cosϕ)^2ł( 1/ω_x+ω_Z - 1/ω_x-ω_Z)̊
+P_0y^2 (α_D cosϕ+α_R sinϕ)^2ł( 1/ω_y+ω_Z - 1/ω_y-ω_Z)̊]
=-1 /ħ[ P_0x^2 ω_Z/(ω_x^2-ω_Z^2) (α_D sinϕ+α_R cosϕ)^2
+ P_0y^2 ω_Z/(ω_y^2-ω_Z^2) (α_D cosϕ+α_R sinϕ)^2].
The resonant Rabi frequency is
|ω^(+)_res,|| (ϕ)|=ω_Z/ħ×
ł[F^2_0x(α_D sinϕ +α_R cosϕ)^2/(ω_x^2-ω_Z^2)^2+F^2_0y(α_D cosϕ +α_R sinϕ)^2/(ω_y^2-ω_Z^2)^2]̊^1/2
The resonance condition is ω=ω_Z+Δ_||^(+) (ϕ)/ħ.
We find that the expression, and hence the behaviour, of the resonant Rabi frequency is identical for purely electron- and hole-like p-linear Rashba SOIs (i.e. α_D=0). For a purely hole-like Dresselhaus SOI (i.e. α_R=0), on irradiation by a linearly polarized beam, the Rabi frequency vanishes if E(t) || B and is maximum if E(t) ⊥ B. For x-polarized beams, Rabi frequency peaks when B||ê_y and ω_x ≳ω_Z whereas for y-polarized beams, it peaks when B||ê_x and ω_y ≳ω_Z. Similar to the case of Rashba SOI, the Rabi frequency does not change on flipping B or the sense of rotation of E(t). Thus, the behaviour of the Rabi frequency for hole-like Dresselhaus SOI has stark differences from that of Rashba SOI. These features can hence act as probes to detect the nature of p-linear SOI present in the planar heterostructure and also estimate their relative strengths.
For circularly polarized radiation (F_0x=F_0y=F_0) and isotropic confinement (ω_x=ω_y=ω_0), we deduce the Rabi frequency from equation (<ref>) as
ł[|ω^(+)_res,|| (ϕ)|]̊_cir,iso =ω_Z F_0/ħ(ω_0^2-ω_Z^2)×
ł[α_D^2+α_R^2+2α_R α_D sin 2ϕ]̊^1/2
The above equation shows that the Rabi frequency is π-periodic in ϕ with the maximum value ω_Z F_0 (α_D+α_R)/ħ(ω_0^2-ω_Z^2) at ϕ=π/4 and minimum value ω_Z F_0 |α_D-α_R|/ħ(ω_0^2-ω_Z^2) at ϕ=3π/4. A similar ϕ dependence can be seen for a general polarization and confinement when α_D=α_R=α,
ł[|ω^(+)_res,|| (ϕ)|]̊_α=ω_Zα/ħ×
ł[ł(F^2_0x/(ω_x^2-ω_Z^2)^2+F^2_0y/(ω_y^2-ω_Z^2)^2)̊ł(1+sin 2ϕ)̊]̊^1/2.
In this case, no Rabi oscillations occur when ϕ=3π/4.
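A quick numerical check of the circular-drive, isotropic-dot expression above (again with ħ = 1 and purely illustrative numbers) confirms the π-periodicity in ϕ and the stated maximum and minimum values proportional to (α_D+α_R) and |α_D-α_R|.

import numpy as np

w0, wZ, F0 = 1.0, 0.01, 0.02     # illustrative dimensionless values, hbar = 1
aD, aR = 0.003, 0.005            # Dresselhaus and Rashba strengths

def rabi_hole_circ_iso(phi):
    # Hole-like SOIs, circular polarization, isotropic confinement
    return (wZ * F0 / (w0**2 - wZ**2)) * np.sqrt(aD**2 + aR**2
                                                 + 2.0 * aR * aD * np.sin(2.0 * phi))

pref = wZ * F0 / (w0**2 - wZ**2)
print(rabi_hole_circ_iso(np.pi / 4), pref * (aD + aR))          # maximum at phi = pi/4
print(rabi_hole_circ_iso(3 * np.pi / 4), pref * abs(aD - aR))   # minimum at phi = 3*pi/4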
§ RESULTS AND DISCUSSION
§.§ Analytical results
Let us parameterize the electric field amplitudes as E_0x=E_0 cosγ and E_0y=E_0 sinγ where γ controls the polarization of the beam. For example, γ=0,π/4,π/2 and 3π/4 denote x-polarized, left-circular, y-polarized and right-circular beams respectively. The driving amplitude F_0 = |e|√(E_0x^2+E_0y^2)=|e| E_0 is constant with respect to the variation of polarization. This allows us to see purely the polarization effect on the Rabi frequency through the tuning of γ without changing the driving strength. Similarly, we can also parameterize the confinement frequencies as ω_x=ω_0 cosζ and ω_y=ω_0 sinζ. We hence label γ and ζ as the `polarization' and `squeezing' angles respectively. The variation of polarization of the beam and contours of the confining potential with γ and ζ respectively are shown in Fig <ref>.
Let us define dimensionless quantities as ω̃_Z=ω_Z/ω_0, ω̃_c=ω_c/ω_0, α̃_l=α_l p_0/(ħω_0), α̃_R/D^(+)=α_R/D^(+) p_0/(ħω_0) and F̃_0=F_0 /(p_0ω_0) where p_0=√(ħ m ω_0). For a confinement length l_0=20 nm and using known values of parameters for Ge/Si quantum wells <cit.> i.e. m∼0.09 m_e, g_⊥≈15.7, g_||≈0.21,
α_l=2.01 meV Å/ħ, we get α̃_l=0.0047, ω̃_c=0.606 B, and ω̃_Z=0.428 B and 0.00572 B for out-of-plane and in-plane magnetic fields respectively, where B is the magnetic field strength in tesla.
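These numbers follow from a short back-of-the-envelope evaluation, sketched below. We assume l_0=√(ħ/(mω_0)) for the confinement length and evaluate the Zeeman scale as gμ_B B/(ħω_0), which reproduces the 0.428 B and 0.00572 B quoted above; everything else uses only the parameter values just listed and standard physical constants.

import numpy as np

hbar = 1.054571817e-34   # J s
me = 9.1093837015e-31    # kg
muB = 9.2740100783e-24   # J/T
q = 1.602176634e-19      # C

m = 0.09 * me            # heavy-hole effective mass
l0 = 20e-9               # confinement length (m)
w0 = hbar / (m * l0**2)  # omega_0 from l0 = sqrt(hbar/(m*omega_0))
E0 = hbar * w0           # hbar*omega_0

alpha_l = 2.01e-3 * q * 1e-10 / hbar   # 2.01 meV*Angstrom / hbar
p0 = hbar / l0                          # sqrt(hbar*m*omega_0)

print("alpha_l (dimensionless):", alpha_l * p0 / E0)        # ~0.0047
print("wc/w0 per tesla:", q / (m * w0))                      # ~0.61
print("wZ/w0 per tesla, out-of-plane:", 15.7 * muB / E0)     # ~0.43
print("wZ/w0 per tesla, in-plane:", 0.21 * muB / E0)         # ~0.0057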
§.§.§ Out-of-plane magnetic field
Electron-like Rashba SOI: Figure <ref> shows the dependence of |ω_res,⊥^l| on the angles γ and ζ. The two dark lines show that the resonant Rabi frequencies sharply peak at two particular values of squeezing angles, say ζ_1 and ζ_2, which are B-dependent and form a pair of complementary angles. This is due to the fact that the energy levels |0,0,-3/2⟩ and |0,1,3/2⟩ cross at ζ_1,2 [see Fig. <ref>] in the absence of the SOI and are quasidegenerate in its presence. As a result, some of the S^(2)_lm given in App. <ref> diverge and the perturbation theory breaks down. Hence, the SWT does not describe the physics correctly at these points. We shall see in the next section that the Rabi oscillations get heavily distorted close to the lines and completely lose their characteristics at ζ_1,2 (B). Hence, the region between but excluding the lines on the γ-ζ plane can be termed the `operating region' for the qubit to perform coherent Rabi oscillations. The fidelity of the operation is lower close to these lines. The lines approach each other with increasing B, thereby shrinking the operating region. There also exist curves on which the Rabi frequency is vanishingly small. The shape of these curves varies with the magnetic field strength. The range of polarization angles for which we get these diminished frequencies increases with the magnetic field.
Figure <ref> shows the dependence of |ω_res,⊥^l| on magnetic field B and squeezing angle ζ. The peaked values of the Rabi frequency trace out curves resembling parabolas on the B-ζ plane. This is also consistent with the existence of a complementary pair (ζ_1,ζ_2) for a given B. The region enclosed by the curves and the ζ axis is the operating region for the qubit on the B-ζ plane. The shape of these curves is independent of the polarization, implying that they only depend on the ellipse of the confinement. We can also see curves (light yellow) of diminishing Rabi frequencies whose shapes vary with the polarization. For certain polarizations, a part of the curve lies inside the operating region. For γ=π/4, the curve only touches the region tangentially, implying that there is always a reasonably high Rabi frequency when the system is driven with left circularly polarized light.
The variation of Rabi frequency with the squeezing angle ζ is shown in Fig. <ref> for various polarizations at different magnetic field strengths. The Rabi frequency increases (decreases) with ζ for x-polarized (y-polarized) light. This implies that higher Rabi frequency is favored when the ellipse of polarization tends to align with that of the confining potential. With increase in B, the Rabi frequency becomes vanishingly small at certain squeezing angles for all but left circularly polarized light (γ=π/4).
As expected, the variation of the Rabi frequency is symmetric about ζ=π/4, i.e. circular confinement, for both left and right circularly polarized light, as it should favor squeezing equally along both the x- and y-directions.
The variation of the Rabi frequency with the polarization angle γ is shown in Fig. <ref> for various squeezing angles at different magnetic field strengths. For each squeezing angle, the Rabi frequency is π-periodic in γ and diminishes for some γ=γ_ζ(B). With an isotropic confinement, the Rabi frequency vanishes for γ=3π/4 at all allowed values of B. Using the approach of Ref. <cit.>, we find the Rabi frequency for an isotropic dot and elliptical drive to be
|ω_res,⊥^l|=2α_l F_0 ω_Z/(ω_1-ω_Z)(ω_2+ω_Z)ł(cosγ + sinγ)̊
where ω_1,2=√(ω^2_0+ω_c^2/4)±ω_c/2.
From the above expression, we see that the Rabi frequency vanishes for right circular polarization i.e. γ=3π/4 and γ=7π/4 independent of other parameters.
The maximum Rabi frequency is obtained for values of ζ close to ζ_1 or ζ_2, i.e. highly squeezed dots within the operating region.
The variation of the Rabi frequency with the orientation angle θ for ζ=0.85π/2 and different polarizations is shown in Fig. <ref>. The Rabi frequency has oscillatory behaviour in θ with a π-periodicity for all polarizations except circular. Since circularly polarized radiation is invariant under rotation through θ (up to a phase), the Rabi frequency is independent of it. Driving with left circular light gives a higher Rabi frequency than the right circular one.
Hole-like Rashba SOI: Figure <ref> shows the variation of natural logarithm of |ω_res,⊥^(+)| for purely hole-like Rashba SOI i.e. α_R≠0 (=α_l) and α_D=0 with the angles γ and ζ. As expected, the operating region which only depends on the ellipse of confinement for a given B is identical to that obtained in the case of electron-like Rashba SOI. However, in contrast to the electron-like Rashba SOI, the curves representing the diminished Rabi frequencies do not change their shapes with B. The curves also have a `horizontally flipped' orientation with respect to that of electron-like Rashba SOI. Unlike the electron-like Rashba SOI, the range of polarization angle for which EDSR is suppressed remains constant with respect to change in the magnetic field.
The variation of |ω_res,⊥^(+)| with ζ is shown in Fig. <ref>. Similar to electron-like Rashba SOI, enhanced Rabi frequencies are observed for higher squeezing when the ellipse of the squeezed configuration is similar to that of the polarization. In contrast to the case of electron-like Rashba SOI, the Rabi frequency vanishes in an isotropic confinement for left circularly polarized light (γ=π/4) instead of right-circular one. The frequency never diminishes for right circularly polarized light at any squeezed configuration. Hence, the left and right circular polarization switch roles for electron- and hole-like Rashba SOIs. This feature can be used as an experimental probe to decipher the nature of Rashba SOI in heavy holes. Figure <ref> shows the variation of the Rabi frequency with polarization angle γ. The plots are similar to that of electron-like Rashba SOI except the fact that the point of diminished Rabi frequency for a given ζ does not change with B in this case.
Hole-like Dresselhaus SOI: The behaviour of Rabi frequency in presence of purely hole-like Dresselhaus SOI is identical to that of electron-like Rashba SOI.
§.§.§ In-plane magnetic field
Electron-like Rashba SOI: The variation of the natural logarithm of |ω_res,||^l| with γ and ζ for electron-like Rashba SOI is shown in Fig. <ref> for different magnetic field angles ϕ. Since g_||≪ g_⊥, we ramp up the magnetic field to 10 T in order to get sufficient Zeeman splitting. Unlike the case of an out-of-plane magnetic field, the operating region extends from ζ≈ 0 to ≈π/2 for all values of ϕ and moderate strengths of magnetic field ∼ 10 T. This is due to the fact that the Zeeman splitting is low for an in-plane magnetic field, allowing for crossing of the energy levels at ζ→0 and ζ→π/2. The Rabi frequency vanishes at γ=π/2, 3π/2 for ϕ=0 and at γ=0, π for ϕ=π/2, as shown by the light vertical lines. This is consistent with the fact that the Rabi frequency vanishes when E(t)⊥ B for Rashba SOI <cit.>.
In Fig. <ref>, we see that the points of vanishing Rabi frequency on the ϕ-γ plane are at [(2n+1)π/2, nπ] where n is an integer. The variation of the Rabi frequency is π-periodic in both γ and ϕ. For ζ<π/4, the maxima are located at [(2n+1)π/2, (2m+1)π/2] while for ζ>π/4, the maxima are located at [nπ, mπ] where n,m are integers. The dashed arrow shows the direction along which the maxima shift as the squeezing angle changes from ζ to π/2-ζ.
Hole-like Rashba and Dresselhaus SOIs: The behaviour of the Rabi frequency is identical for electron- and hole-like Rashba SOIs. For the hole-like Dresselhaus SOI, the Rabi frequency simply has a phase shift of π/2 in ϕ with respect to that of Rabi driving by the Rashba SOI. Consequently, the Rabi frequency vanishes when E(t) || B for the Dresselhaus SOI <cit.>.
§.§ Insights from numerical simulations
In this section, we present the numerical results of the time evolution of the qubit for low radiation amplitudes. Since the drive is periodic, we use Floquet theory to compute the time dynamics taking into account 30 energy levels of H_FD+H_Z,⊥ or H_0+H_Z,|| following the methodology given in Ref. <cit.>.
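To convey qualitatively why the oscillations deteriorate when a higher orbital level approaches the qubit subspace, a toy three-level propagation is sketched below. It replaces the 30-level Floquet calculation by plain piecewise-constant time stepping and uses invented level spacings and couplings; it is meant only to illustrate the suppression of the Rabi flips once a spectator level becomes nearly degenerate with, and statically mixed into, the upper qubit level.

import numpy as np

wq, g = 1.0, 0.002    # qubit splitting and drive matrix element (arbitrary units)

def max_populations(delta, c, tmax=3200.0, nt=80000):
    # Levels: |0>, |1> (qubit) and a spectator |2> detuned by delta from |1>
    # and statically mixed with it through c (an SOI-like coupling).
    H0 = np.diag([0.0, wq, wq + delta]).astype(complex)
    H0[1, 2] = H0[2, 1] = c
    Vd = np.zeros((3, 3), dtype=complex)
    Vd[0, 1] = Vd[1, 0] = g                      # drive couples |0> and |1>
    dt = tmax / nt
    psi = np.array([1.0, 0.0, 0.0], dtype=complex)
    p1max = p2max = 0.0
    for k in range(nt):
        H = H0 + Vd * np.sin(wq * k * dt)        # drive at the bare resonance
        E, U = np.linalg.eigh(H)                 # exact step for the frozen H
        psi = U @ (np.exp(-1j * E * dt) * (U.conj().T @ psi))
        p = np.abs(psi)**2
        p1max, p2max = max(p1max, p[1]), max(p2max, p[2])
    return p1max, p2max

print("well-separated spectator:", max_populations(delta=0.5, c=0.01))       # clean, nearly full flips
print("nearly degenerate spectator:", max_populations(delta=0.005, c=0.01))  # flips strongly suppressed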
The analytical Rabi frequency is found to be in excellent agreement with the numerical values for points within the operating region of the qubit. As ζ→ζ_1 or ζ_2 (within the operating region), the oscillations begin to lose their characteristic behaviour and nearly vanish [see Fig. <ref>]. Since energy levels cross at ζ_1 and ζ_2, the effective 2×2 Hamiltonian obtained using the SW transformation in the |0,0,±3/2⟩ block is no longer a good approximation, as the interference effects due to the third level become stronger near ζ=ζ_1 (or ζ_2).
§ CONCLUSION
We have studied the interplay of squeezing of the confining potential and polarization of the driving electric field on the dynamics of a single hole qubit in a planar germanium quantum dot in the presence of p-linear SOIs. The squeezing and polarization are parameterized by the angles ζ and γ respectively. We consider two orientations of magnetic field – in-plane and out-of-plane – which lead to distinct Zeeman couplings owing to the large difference in g_⊥ and g_|| and the anisotropic nature of g_||. We study the role of electron-like Rashba SOI and hole-like Rashba and Dresselhaus SOI on the Rabi frequencies for each orientation of magnetic field. For an out-of-plane magnetic field, we model the system with the Fock-Darwin Hamiltonian for an anisotropic harmonic potential. We get an operating region on the ζ-γ plane bounded by the lines ζ=ζ_1 and ζ=ζ_2 within which the qubit can be operated efficiently to obtain high-fidelity Rabi oscillations. The oscillations get heavily distorted close to and on these lines. This is attributed to the crossing of higher orbital levels with one of the Zeeman-split levels of the qubit. So, the qubit can no longer be effectively treated as a two-level system. The operating region shrinks with increasing B. Higher Rabi frequencies are obtained when the major axes of the ellipses of confinement and polarization are aligned in the same direction. Inside the operating region, curves of highly diminished Rabi frequencies emerge whose shapes are different for electron- and hole-like Rashba SOIs. The Rabi frequency vanishes for right (left) circular driving in the presence of purely electron-like (hole-like) Rashba SOI in a circular confinement. The behaviour of the Rabi frequency for the hole-like Dresselhaus SOI is identical to that for the electron-like Rashba SOI. The Rabi frequency has a sinusoidal dependence on the orientation angle θ of the ellipse of polarization.
For an in-plane magnetic field, the operating regions are approximately B-independent and ζ_1≈0 and ζ_2≈π/2 due to very small g_||, which corresponds to extremely squeezed configurations. The Rabi frequency vanishes when the driving electric field is linearly polarized with its electric vector perpendicular (parallel) to the static magnetic field in presence of purely electron- or hole-like Rashba (Dresselhaus) SOI. For ζ<π/4, the maximum Rabi frequency is obtained when the driving electric field is linearly polarized along y-axis with its vector parallel (perpendicular) to the static magnetic field in presence of purely electron- or hole-like Rashba (Dresselhaus) SOI. For ζ>π/4, the maximum Rabi frequency is obtained for a similar orientation but with the electric field polarization along the x-direction. In both the cases, the maximum value with respect to ζ occurs for ζ≈ 0 and ζ≈π/2, i.e. highly squeezed configurations.
Thus, we elucidate the role of squeezing of the confining potential and electric field polarization in the EDSR of a single Ge spin-hole qubit and highlight the operating region of the qubit for distortion-free Rabi oscillations. Although extreme squeezing sharply increases the Rabi frequency, the leakage of higher energy levels into the qubit subspace strongly interferes with the Rabi oscillations, which puts a limitation on the value of the squeezing parameter. Our results highlight the differences in behaviour of the Rabi frequencies for electron/hole-like Rashba and Dresselhaus SOIs in presence of both in-plane and out-of-plane magnetic fields. We have shown that the Rabi frequencies can be significantly enhanced by squeezing the dot (within the perturbative regime) and tuning polarization of the radiation, without the need of increasing the driving and SOI strengths. In conclusion, our work emphasizes the importance of the geometrical properties of the potential and driving field in EDSR mechanisms. This may offer valuable insights for experimental studies seeking optimal configurations for minimizing the spin-flip times without resorting to stronger electric pulses or SOI strengths, which could increase the decoherence.
§ SCHRIEFFER-WOLFF TRANSFORMATION
Although V( r,t) does not contain spin-mixing terms, it is the combination of the SOI and V( r,t) that brings about the desired spin rotations. This can be seen through a Schrieffer-Wolff transformation (SWT) <cit.>, where we get an effective Rabi Hamiltonian for the spins upon electrical driving by including the effect of the SOI perturbatively. In the following, we derive the effective EDSR Hamiltonian for the case of an out-of-plane magnetic field and electron-like SOI. A similar approach is to be followed for the other cases as well.
For a small α_l, the SWT removes the off-diagonal elements linear in the α_l,
H^l_SW,⊥ =^S (H_FD+ H_Z,⊥+H^l_SOI,⊥) ^-S
≈ H_FD+ H_Z,⊥+1/2[S,H^l_SOI,⊥].
where S^†=-S and [H_FD+ H_Z,⊥ , S]=H_SOI,⊥^l.
Taking the ansatz S=S^(1)σ_z+S^(2)σ_+-S^(2)^†σ_-, we get S^(1)=0 and
Ŝ^(2)= S^(2)_1aâ_1+S^(2)_1bâ_1^†+S^(2)_2aâ_2+S^(2)_2bâ_2^†
where
S^(2)_1a=α_l f_1-^(+)/ħω_1+ħω_Z,
S^(2)_1b=α_l f_1-^(-)/ħω_1-ħω_Z,
S^(2)_2a=-iα_l f_2+^(-)/ħω_2+ħω_Z,
and
S^(2)_2b=-iα_l f_2+^(+)/ħω_2-ħω_Z.
where f^(a)_bc are defined in Eqs. (<ref>) and (<ref>). Evaluating [S,H_SOI,⊥^l] and projecting Eq. (<ref>) into the lowest energy block spanned by the states |0,0,±3/2⟩, we get the 2× 2 diagonal Hamiltonian
[H^l_SW,⊥]_2×2=ł([ E_0 + E_0^(2) 0; 0 E_1 + E_1^(2) ])̊,
where E_0/1=ħ(ω_1+ω_2∓ω_Z)/2 are the Zeeman split energies and E_0/1^(2) are the second order energy corrections in α_l given by
E_0^(2)=-α_l^2/ħł[(f_1-^(+))^2/ω_1+ω_Z + (f_2+^(-))^2/ω_2+ω_Z]̊
and
E_1^(2)=-α_l^2/ħł[(f_1-^(-))^2/ω_1-ω_Z + (f_2+^(+))^2/ω_2-ω_Z]̊.
Equation (<ref>) constitutes the effective 2-level Hamiltonian of the spin qubit in this system in absence of an external drive or interaction with environment.
For weak electrical driving, the time-dependent SW Hamiltonian can be written up to first order in the driving strength as
H_SW,⊥^l(t) =H^l_SW,⊥+^S V( r, t) ^-S
≈ H^l_SW,⊥+V( r, t)+[S,V( r, t)].
Again, projecting H_SW,⊥^l(t) into the lowest energy block, we get a 2×2 Hamiltonian as
[H^l_SW,⊥]_2×2(t)=
ł[[ E_0 + E_0^(2) ħ/2(ω_res,⊥^l ^i ω t +ω_off,⊥^l^-i ω t); ħ/2{(ω_res,⊥^l)^* ^-i ω t +(ω_off,⊥^l)^*^i ω t} E_1 + E_1^(2) ]]̊.
Removing the global energy shifts,
we can write the effective EDSR Hamiltonian for the qubit as
[H^l_⊥]_eff(t) =-ł(ħω_Z+Δ^l_⊥/2)̊σ_z
+ħ/2 (ω_res,⊥^l ^i ω t +ω_off,⊥^l^-i ω t) σ_+ + H.c.
where ω_res,⊥^l, ω_off,⊥^l and Δ^l_⊥ are defined in Eqs. (<ref>), (<ref>) and (<ref>) respectively. Thus, through SWT, we get an effective Hamiltonian which resembles a Rabi problem with resonant Rabi frequency |ω_res,⊥^l| and resonance condition ω=ω_Z+Δ^l_⊥/ħ.
|
http://arxiv.org/abs/2409.03155v1 | 20240905011158 | Debate on Graph: a Flexible and Reliable Reasoning Framework for Large Language Models | [
"Jie Ma",
"Zhitao Gao",
"Qi Chai",
"Wangchun Sun",
"Pinghui Wang",
"Hongbin Pei",
"Jing Tao",
"Lingyun Song",
"Jun Liu",
"Chen Zhang",
"Lizhen Cui"
] | cs.CL | [
"cs.CL",
"cs.AI",
"I.2.4"
] |
The YMDB catalog: Young massive detached binaries for the determination of high-precision absolute stellar parameters. Full versions of Tables 2, 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/.
Pablo Martín-Ravelo
1,2
Roberto Gamen
3,4
Julia I. Arias
1
André-Nicolas Chené
2
Rodolfo H. Barbá In Memoriam (1962–2021)
1
Received: 20 June 2024 / Accepted: 05 July 2024
================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Large Language Models (LLMs) may suffer from hallucinations in real-world applications due to the lack of relevant knowledge. In contrast, knowledge graphs encompass extensive, multi-relational structures that store a vast array of symbolic facts. Consequently, integrating LLMs with knowledge graphs has been extensively explored, with Knowledge Graph Question Answering (KGQA) serving as a critical touchstone for the integration. This task requires LLMs to answer natural language questions by retrieving relevant triples from knowledge graphs. However, existing methods face two significant challenges: excessively long reasoning paths distracting from the answer generation, and false-positive relations hindering the path refinement. In this paper, we propose an iterative interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning and Debating over Graphs (DoG). Specifically, DoG employs a subgraph-focusing mechanism, allowing LLMs to perform answer trying after each reasoning step, thereby mitigating the impact of lengthy reasoning paths. On the other hand, DoG utilizes a multi-role debate team to gradually simplify complex questions, reducing the influence of false-positive relations. This debate mechanism ensures the reliability of the reasoning process. Experimental results on five public datasets demonstrate the effectiveness and superiority of our architecture. Notably, DoG outperforms the state-of-the-art method ToG by 23.7% and 9.1% in accuracy on WebQuestions and GrailQA, respectively. Furthermore, the integration experiments with various LLMs on the mentioned datasets highlight the flexibility of DoG. Code is available at <https://github.com/reml-group/DoG>.
§ INTRODUCTION
Large Language Models (LLMs), characterized by their substantial parameter amount <cit.> and training on extensive, diverse, and unlabeled data <cit.>, exhibit remarkable proficiency in a wide range of natural language understanding and generation tasks <cit.>. For example, GPT-4 <cit.> demonstrates human-level performance across a majority of professional and academic exams originally intended for humans. However, recent studies <cit.> have revealed that they may suffer from hallucinations in real-world applications due to a deficiency in relevant knowledge.
Knowledge graphs <cit.> are large-scale, multi-relational structures housing a plethora of symbolic facts stored as (head entity, relation, tail entity) triples. The incorporation of these structured facts may tackle the aforementioned issue of hallucinations in LLMs <cit.>. One approach to evaluating the integration of knowledge graphs with LLMs is through Knowledge Graph Question Answering (KGQA) <cit.>, which requires machines to answer natural language questions by retrieving relevant facts from knowledge graphs. Recent works <cit.> primarily follow an iterative inference paradigm, consisting of two steps: (1) identifying the initial entity in the question, and (2) retrieving and refining the inference path iteratively until reaching the answer or obtaining sufficient evidence to answer the question. Although they have achieved significant success, they still suffer from excessively long paths and false-positive relations.
Challenge 1: excessively long paths distracting from the answer generation. Existing methods <cit.> usually feed a lengthy evidence path consisting of many chained triples, like the one at the top of Fig. <ref>, into LLMs to perform answer generation in a single step, which may make it challenging for LLMs to discern the key points in the path. For instance, LLMs may focus on the tail entity and employ their internal prior knowledge to generate answers. This will result in answers that appear reasonable but are incorrect.
Challenge 2: false-positive relations hindering the path refinement. Current methods <cit.> typically focus on identifying relations within graphs that closely match or have the same meaning as those in the questions, even if the relations have already been identified in previous reasoning steps. For example, at the top of Fig. <ref>, these methods may select a relation that was already used in the previous reasoning step and is mentioned in the question to expand paths, rather than choosing the correct next relation for the current entity. This will lead to incomplete evidence paths.
To address these challenges, we propose an iterative interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning and Debating over Graphs, dubbed DoG. Unlike existing approaches <cit.> that seek to construct a complete evidence chain before answering questions, our architecture employs a subgraph-focusing mechanism that allows LLMs to perform answer trying after each reasoning step. For each filtered triple, DoG uses LLMs to assess whether sufficient information is available to answer the current question. In this way, the triple in each reasoning step, such as the one at the bottom of Fig. <ref>, can be deeply pondered by LLMs. If the triple does not support answering the current question, DoG employs a multi-role LLM team to debate and simplify the question based on the triple. The iterative process allows complex multi-hop questions to be gradually transformed into single-hop questions, which enables LLMs not to be disturbed by the relation that is retrieved in the previous reasoning step. For example, a relation linked to the previous topic entity will not disturb reasoning after the simplification procedure. This is inspired by how the human brain tackles complex questions, and it guides LLMs to reason on graphs through chain-of-thought <cit.>. The simplification process can also enhance the transparency of the reasoning process.
To verify the effectiveness and superiority of our architecture, we conduct thorough experiments on five public KGQA datasets: MetaQA <cit.>, WebQSP <cit.>, CWQ <cit.>, WebQuestions <cit.>, and GrailQA <cit.>. Our findings show that DoG achieves state-of-the-art results on all datasets, except for the 2-hop and 3-hop questions within MetaQA. Notably, DoG outperforms the strong baseline ToG <cit.> by 23.7% and 9.1% in accuracy on WebQuestions and GrailQA, respectively. In summary, our contributions are threefold.
* We propose a flexible and reliable reasoning framework, DoG, which enables LLMs to reason and debate over knowledge graphs and answer questions after thorough deliberation.
* We introduce a strategy, which transforms questions from complex to easy through the interactive learning of a multi-role LLM team, for handling complex reasoning on knowledge graphs. This guides LLMs to engage in step-by-step reasoning, thereby enhancing the reliability of the reasoning process.
* Extensive experiments and ablation studies are carried out on five public datasets to demonstrate the effectiveness and superiority of our architecture. Furthermore, we also conduct integration experiments with various LLMs to verify the flexibility of DoG.
§ RELATED WORK
The methods of LLM reasoning over knowledge graphs can be classified into batch triple recalling, and reasoning path refining from the perspective of evidence gathering.
Batch triple recalling. Knowledge graphs typically store an extensive amount of facts <cit.>. For instance, Freebase <cit.> contains over 1.9 billion triples, and even the smaller non-open-domain MetaQA <cit.> includes over 130,000 triples. The number of relevant triples can be substantial even when constrained by the entities in a given question. Injecting all these triples into the context window of LLMs to perform reasoning not only incurs a high encoding cost but also introduces significant noise <cit.>. To address this issue, previous studies <cit.> focus on how to filter suitable facts. For instance, KAPING <cit.> projects questions and triples into the same space to obtain relevant knowledge by semantic similarity. KG-GPT <cit.> further focuses on fine-grained question representations, decomposing multi-hop questions into sub-questions and matching the relations associated with entities in those sub-questions, then selecting the top-k relevant relations to form evidence triples. Similarly, KGR <cit.> splits the retrieved triples into several chunks and utilizes LLM to distinguish the critical triple relevant with questions.
Reasoning path refining. The paradigm of this kind of method <cit.> is first to identify the initial entity in the question, then to iteratively retrieve and refine the reasoning path until reaching the answer or obtaining sufficient evidence to answer the question, and finally to employ LLMs to generate answers based on the refined path. For example, <cit.> proposed an iterative reading-reasoning approach, which iterates an invoking-linearization-generation procedure. It utilizes LLMs to perform reasoning on the interface that is specifically designed for reading structured data until deriving the final answer. Similarly, <cit.> introduced a deep and responsible reasoning framework, which first conducts a beam search on a graph from the entity within questions and then acquires multiple reasoning paths as evidence for answer generation. It is noteworthy that these methods all treat the LLM as a tool for accomplishing specific tasks, conceptualizing it as function executors, and relying on in-context learning <cit.> or fine-tuning to refine its outputs <cit.>. However, some studies <cit.> have demonstrated that LLMs can be induced to exhibit human personality traits and role distinctions to undertake complex reasoning tasks.
Communicative Agents. The primary objective of agents is to collaboratively address complex tasks in a productive and efficient manner through autonomous communication and negotiation <cit.>. LLMs such as ChatGPT and Vicuna <cit.> are frequently employed as these communicative agents. Recently, numerous studies have investigated the application of these agents in various domains, including AI societies <cit.>, software development <cit.>, translation <cit.>, arithmetic problem-solving <cit.>, dialogue response generation <cit.>, and strategic planning among robots <cit.>. Specifically, <cit.> guided ChatGPT to emulate expert system reviewers, thereby improving the quality of its literature retrieval queries. <cit.> introduced a strategically designed role-playing prompt method to enhance reasoning abilities by assigning appropriate expert roles for tasks. Additionally, <cit.> assessed the changes in decision-making abilities when LLM assumes different personality traits. Inspired by these studies, we explore the benefit of multi-agent role differentiation and debates for complex reasoning on knowledge graphs.
§ METHOD
§.§ Task Formulation
Given a knowledge graph 𝒢 consisting of N triples, represented as { (e_i^l, r_l, e_i+1^l) | e_i ∈ℰ, r_l ∈ℛ, i ∈ [1, I], l ∈ [1, L] }, where e_i^l and e_i+1^l denote the head and tail entity, respectively, I is the number of entities, L denotes the number of relations, and r_l is the relation between entities, KGQA requires machines to answer natural language questions q based on retrieved evidence paths P = {p_j}_j=1^m with p_j representing a triple and m denoting the number of triples. In this paper, we leverage LLMs to reason over P and generate answers â word by word.
§.§ Overview
As depicted in Fig. <ref>, given a K-hop question q and the initial topic entity e_i^l within q, our framework first invokes knowledge graphs to retrieve the set of candidate relations R linked to e_i^l. Then, it enables LLMs to filter out the most relevant relation r̂_l from R based on in-context learning. Subsequently, the knowledge graph is invoked again to complete the triple information from (e_i^l, r̂_l, ?) to (e_i^l, r̂_l, e_i+1^l). Fourthly, DoG focuses on the current reasoning state and employs LLMs to decide on the subsequent action based on the completed triple: providing a direct answer to the question or performing deep thinking with further iterations. In the latter scenario, a multi-role LLM team leverages the mentioned triple to transform the K-hop question to a K-1 hop (slightly easier) one through debate, with the tail entity e_i+1^l being the subsequent topic entity for the simplified question in the next iteration. All of these debate steps are autonomously executed by the LLM team. The iteration will be ended until LLMs generate answers in the fourth step.
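The loop described above can be summarized by the following schematic Python sketch. The helper names (filter_relation, try_answer, debate_simplify) and the kg/llm objects are placeholders standing in for the prompt-based modules detailed in the subsections below; they are not identifiers from our released code.

def dog_answer(question, topic_entity, kg, llm, max_steps=5):
    """Schematic DoG iteration: retrieve, filter, try to answer, then simplify."""
    for _ in range(max_steps):
        relations = kg.get_relations(topic_entity)              # invoke the knowledge graph
        relation = llm.filter_relation(question, relations)     # relation filtering
        tail = kg.triple_filling(topic_entity, relation)        # complete (head, relation, ?)
        triple = (topic_entity, relation, tail)
        answer, answerable = llm.try_answer(question, triple)   # answer trying
        if answerable:
            return answer
        # multi-role debate turns the K-hop question into a (K-1)-hop one
        question = llm.debate_simplify(question, triple)
        topic_entity = tail                                     # next topic entity
    return llm.answer_from_parametric_knowledge(question)       # fallback after max_steps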
§.§ Knowledge Graph Invoking
Reasoning on graphs requires LLMs first to identify relevant knowledge triples. To facilitate this, we have designed two interactive interfaces specifically tailored to retrieve these triples from knowledge graphs. The interfaces are invoked as needed, depending on the requirements.
* get_relations(e_i^l): This interface is designed to retrieve the candidate relation set R associated with the entity e_i^l. For example, in Fig. <ref>, it is invoked to retrieve the candidate relation set of the topic entity in the question.
* triple_filling(e_i^l, r̂_l): This interface is responsible for obtaining the tail entity in (e_i^l, r̂_l, ?) given the head entity and the filtered relation. We will introduce relation filtering in the next subsection.
The underlying mechanisms of these interfaces are implemented through either SPARQL (for Freebase queries) or specific matching (for questions in MetaQA). To facilitate comprehension and generation by LLMs, all entities and relationships above the interfaces are expressed in natural language, with the conversion between a Machine ID (MID) and a corresponding friendly name carried out exclusively within the interfaces. The MID facilitates efficient access to comprehensive details related to the entity. More specifically, in Freebase, the MID is a unique identifier assigned to each entity, allowing for straightforward retrieval of entity-specific information. The friendly name of the MID is a natural language descriptor. For example, the MID of the friendly name is m.03_r3.
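A possible minimal backing for the two interfaces on top of a local Virtuoso/Freebase endpoint is sketched below with SPARQLWrapper. The endpoint URL, the ns: prefix handling, and the omission of the MID-to-friendly-name conversion are simplifying assumptions made only for illustration.

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8890/sparql"                 # assumed local Virtuoso deployment
PREFIX = "PREFIX ns: <http://rdf.freebase.com/ns/>\n"

def _run(query):
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(PREFIX + query)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]

def get_relations(entity_mid):
    # Candidate relations attached to an entity (outgoing direction shown here).
    rows = _run(f"SELECT DISTINCT ?r WHERE {{ ns:{entity_mid} ?r ?o . }}")
    return [row["r"]["value"] for row in rows]

def triple_filling(entity_mid, relation_uri):
    # Tail entities completing (head, relation, ?).
    rows = _run(f"SELECT ?t WHERE {{ ns:{entity_mid} <{relation_uri}> ?t . }}")
    return [row["t"]["value"] for row in rows]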
§.§ Relation Filtering
Through get_relations(e_i^l), we obtain a candidate relation set R associated with the initial entity in the question. Subsequently, DoG selects the optimal relation r̂_l from this set through in-context learning. The prompt and in-context examples are detailed in the In-context Learning subsection of the appendix. Specifically, DoG first utilizes LLMs to identify the first-hop problem to be solved in the given question q. Then, it allows LLMs to choose the optimal relation according to the mentioned sub-question. This serves as a guiding principle for relation selection, avoiding the constant reliance on the complete multi-hop question throughout the entire reasoning stage, as seen in previous studies <cit.>. We believe this short-sighted greedy strategy can guide a correct progression on the graph, alleviating the need to account for future inferential information regarding the multi-hop question. For example, for the question in Fig. <ref> “In what year was the movie Joe Anderson starring in released”, the first-hop question to be addressed is “Which film starred Joe Anderson?”. The linearized relation set contains a handful of candidate relations (“∼” represents a passive relationship), from which the optimal relation can be easily selected.
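Schematically, the relation-filtering call assembles a prompt along the lines of the sketch below; the exact wording and the ten in-context exemplars actually used are given in the appendix, so the strings here are only illustrative stand-ins.

def build_relation_filter_prompt(question, entity, relations, exemplars):
    # exemplars: the in-context examples (ten are used in practice); the
    # instruction text below paraphrases, rather than reproduces, the real prompt.
    instruction = (
        "First state the first-hop sub-question contained in the question, "
        "then pick the single best relation for answering it.\n"
        f"Question: {question}\n"
        f"Topic entity: {entity}\n"
        f"Candidate relations: {'; '.join(relations)}\n"
        "Answer with exactly one relation from the candidates."
    )
    return "\n\n".join(list(exemplars) + [instruction])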
§.§ Answer Trying
After obtaining the optimal relation, our architecture invokes the triple-filling interface triple_filling(e_i^l, r̂_l) to acquire a complete triple, such as the one shown in Fig. <ref>. Then, DoG utilizes LLMs to determine whether the retrieved triple can sufficiently support answering the question. If the triple is insufficient, DoG prompts LLMs to deeply contemplate the current question based on the provided triple. This allows DoG to generate answers based on a single triple, thus avoiding excessively long and potentially confusing paths composed of multiple triples. The prompt and in-context examples are detailed in the In-context Learning subsection of the appendix. Notably, if the maximum iteration limit is reached without successfully generating an answer, the parameterized knowledge of LLMs is utilized to respond.
§.§ Question Simplifying
Once LLMs determine that a question is unanswerable with the currently retrieved triple, this indicates that further exploration is required. Inspired by how humans tackle complex questions, our architecture employs a question-simplifying strategy to transform questions from K hop to K-1 hop based on the retrieved triple. Specifically, DoG utilizes a team of agents with distinct roles to engage in debate, ensuring the reliability of the reasoning process. The debate team consists of three roles.
* Question simplifying expert (R1). This expert provides initial simplifications for questions, which may contain apparent errors. For example, the original question in Fig. <ref> is initially simplified as “What are some notable films in which Joe Anderson has acted?". This is far from the intention of the original question.
* Critic (R2). The critic examines the simplification efforts of the above expert and offers suggestions for modifications. For instance, the above question is modified into “When did High Life which was starred by Joe Anderson release?".
* Linguist (R3). This role ensures that the simplified question is not only semantically correct but also free from redundant information of previously resolved sub-questions. For example, the mentioned question is further refined to “When did [High Life] release?".
Due to the interdependency and progressive nature of the roles played by the three agents, DoG employs a one-by-one discussion strategy <cit.>. Each agent, implemented by ChatGPT, takes turns contributing to the ongoing optimization of the simplified question, with the statements made by other agents serving as references for guiding subsequent remarks generation. After simplification, we obtain a slightly easier K-1 hop question, prompting LLMs to undergo iteration once again. In this way, the relation in the first-hop sub-question is removed in the simplified question, effectively avoiding the impact of false positive relations. The iteration process, from knowledge graph invocation to question simplification, continues until LLMs make an answerable decision in the answer-trying module. The prompt and in-context examples are shown in the In-context Learning subsection of the appendix.
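The one-by-one discussion can be organized with a few lines of orchestration code. The sketch below is a simplified illustration of the debate loop described above, using the OpenAI chat-completion API; the role prompts are paraphrases of the roles R1-R3 and not the exact instructions used in the experiments (those are given in the appendix).

from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

ROLE_PROMPTS = {
    "simplifying expert": "Propose a simplified (K-1)-hop version of the question, given the resolved triple.",
    "critic": "Examine the current simplification against the original question and suggest corrections.",
    "linguist": "Make the simplified question semantically correct and remove already-resolved sub-questions.",
}

def agent_turn(role, history):
    # One agent speaks; the accumulated debate record is passed as context.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": ROLE_PROMPTS[role]},
                  {"role": "user", "content": history}],
    )
    return response.choices[0].message.content

def simplify_question(question, triple, max_rounds=1):
    history = f"Original question: {question}\nResolved triple: {triple}"
    simplified = question
    for _ in range(max_rounds):
        for role in ROLE_PROMPTS:  # one-by-one discussion: each agent sees earlier remarks
            simplified = agent_turn(role, history)
            history += f"\n[{role}] {simplified}"
    return simplified  # the linguist's refinement closes the round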
§ EXPERIMENTS
§.§ Dataset and Evaluation
We select five public datasets to evaluate the reasoning ability over knowledge graphs: MetaQA <cit.>, WebQSP <cit.>, CWQ <cit.>, WebQuestions <cit.>, and GrailQA <cit.>. MetaQA comprises a movie ontology derived from the WikiMovies dataset <cit.> and contains three sets of natural language question-answer pairs: 1-hop, 2-hop, and 3-hop. WebQSP contains questions sourced from the WebQuestions dataset, which are answerable using Freebase. CWQ is designed for answering complex questions that require reasoning over multiple web snippets. GrailQA, which tests three levels of generalization including i.i.d., compositional, and zero-shot, covers 3,720 relations and 86 domains from Freebase. Following <cit.>, we uniformly sample 500 instances per type for the mentioned five datasets to reduce computational cost. We use Hits@1 to evaluate the reasoning performance of our framework and baselines, following previous works <cit.>. For the experiment of integrating DoG with GPT-4, we uniformly sample only 100 instances per type from the mentioned datasets to reduce costs.
§.§ Implementation Settings
We preprocess the MetaQA dataset to construct a structured knowledge graph, facilitating subsequent query and retrieval operations. A local Virtuoso server is deployed for datasets derived from the Freebase. We utilize the OpenAI API to call ChatGPT (gpt-3.5-turbo-0125) and GPT-4 (gpt-4-0613). Additionally, we employ Qwen-14B and Llama-3-8B, running on 8 V100 GPUs, to verify the flexibility of DoG. The maximum number of debate rounds for the multi-agent team is limited to three, with only the best unique relation being recalled. We implement in-context learning across multiple modules: specifically, 10 exemplars for Relation Filtering and Answer Trying, and one exemplar for Question Simplifying.
§.§ Baselines
Inspired by <cit.>, we compare DoG with previous state-of-the-art supervised learning and in-context learning-based methods, to verify its effectiveness and superiority. Supervised learning: KV-Mem <cit.>, GraftNet <cit.>, PullNet <cit.>, EmbedKGQA <cit.>, NSM <cit.>, TransferNet <cit.>, UniKGQA <cit.>. In-context learning: StructGPT <cit.>, KG-GPT <cit.>, KB-BINDER <cit.>, ToG <cit.>. The baselines are detailed in the Baseline Introduction subsection of the appendix.
§.§ Reasoning on Knowledge Graphs
§.§.§ Main Result
Table <ref> presents a comparison across five public datasets. Taking GPT-3.5 as an example, we observe that DoG enables it to achieve competitive results on MetaQA and the best results on the other four datasets compared with baselines. Specifically, DoG outperforms the best-supervised method, UniKGQA, by 11.4% on WebQSP. Additionally, it surpasses the best in-context learning method, ToG, by 23.7% and 9.1% on WebQuestions and GrailQA, respectively. These datasets comprise complex and compositional questions. Therefore, these results not only highlight the effectiveness and superiority of DoG but also confirm its capability for complex reasoning.
§.§.§ Flexibility Verification
We conduct experiments on the aforementioned datasets to explore whether DoG enables other LLMs, including QWen, Llama, and GPT-4, to achieve complex reasoning on knowledge graphs. Experimental results in Table <ref> show that DoG facilitates improvements in some cases compared to GPT-3.5. Specifically, DoG with Llama achieves a 1.6% improvement on WebQSP. It also allows GPT-4 to achieve the most significant improvement on the mentioned datasets. These results clearly demonstrate the flexibility and effectiveness of our architecture. We observe that the performance of DoG with Qwen is slightly lower than with other LLMs. This could be attributed to its marginally weaker complex reasoning capabilities compared to other LLMs.
§.§ Ablation Studies
We conduct ablation experiments on the aforementioned datasets to analyze the contribution of each component of DoG. The ablation results for DoG with GPT-3.5 are presented in Table <ref>. We perform experiments on the 2-hop and 3-hop splits of MetaQA, as the 1-hop questions do not require complex reasoning. Row 1 shows the results without the subgraph-focusing and question-simplifying components. In other words, this configuration allows LLMs to answer complex questions directly after collecting the whole set of evidence triples, rather than reasoning step by step. We observe a significant performance decrease compared to the results in Row 4, strongly demonstrating the effectiveness of the mentioned modules. Rows 2 and 3 aim to verify the contribution of the expert role in the debate team. The results show consistent improvements across five public datasets, suggesting that each agent plays a critical role in simplifying questions. This also highlights the importance of transforming complex questions into simpler ones for LLMs step-by-step reasoning on knowledge graphs. Row 5 aims to verify the necessity of the debating process, where the tasks of the three roles are performed by a single agent. The average result decreases by 10.3% compared to Row 4, strongly supporting the effectiveness of the debating mechanism.
§.§ Analyses for Debate Rounds
We conduct experiments to explore how the number of debate rounds affects LLM reasoning on knowledge graphs. Fig. <ref> shows the performance trend of DoG with GPT-3.5 as the number of debate rounds increases across the five datasets mentioned. We observe that DoG achieves the best results on the majority of datasets with just a single round of debates. Additionally, increasing the number of debate rounds leads to a performance decrease in some datasets. DoG utilizes a one-by-one discussion strategy, which makes each agent aware of the historical debate record. This makes the agents more susceptible to being influenced by the views of others, potentially leading to inaccurate decisions for question simplifications. We may also conclude that the agent is sufficiently strong to achieve the goal of instructions without needing iterative debates.
§.§ Exemplar Impacts
DoG leverages in-context learning to guide LLMs in performing relation filtering, question simplification, and answer trying during iterative reasoning. Specifically, DoG provides instructions and exemplars to help LLMs achieve these objectives. We conduct experiments on five public datasets to explore the impact of the number of exemplars on LLM reasoning. Fig. <ref> shows the analyses for the mentioned three modules. In Relation Filtering, we observe that reasoning performance improves as the number of exemplars increases in the majority of datasets. However, reasoning errors caused by relation filtering account for a large proportion, which we will discuss in the next subsection. In Question Simplifying, the performance improvement is not significant with the increase in the number of exemplars, likely due to the complexity of this task. Converting questions from complex to simple step-by-step may be challenging for LLMs, and they may not be able to infer strategies for addressing this issue from exemplars. In Answer Trying, we see that reasoning performance improves with the increase in the number of exemplars in most cases. In summary, the number of exemplars plays a critical role in decision-making, especially for less complex tasks. In contrast, for more complex tasks, detailed instructions may have a greater impact on LLM reasoning.
§.§ Error Analyses
To analyze the deficiency of DoG, we randomly select 50 failure cases from each dataset, including MetaQA, WebQSP, and GrailQA, for manual inspection. Fig. <ref> shows the proportion of factors contributing to these errors. We observe that relation filtering errors are quite common. This may be caused by too many relations linked to the entities in questions, making it difficult for LLMs to accurately filter the most relevant relation. Iteration stopping errors denote LLMs make inaccurate decisions in the answer-trying module, either terminating the iterative reasoning too early or too late. This type of error is particularly prevalent in GrailQA cases. Answer aliasing errors mean the generated answers do not have the same description or wording as the annotations, even though they are semantically consistent. This error can be mitigated by introducing a rich collection of aliases. Answer generation errors refer to that LLMs provide incorrect answers based on accurately retrieved triples and simplified questions. Question simplifying errors represent that LLMs fail to transform questions from complex to easy. Additionally, other errors account for 4% of the failure cases in each dataset. This type of error often occurs due to API access issues, an excessively long context, or exceeding the token limit per minute. More details can be found in the Failure Cases subsection of the appendix.
§ CONCLUSION AND FUTURE WORK
This paper proposes an iterative interactive framework, DoG, for knowledge graph question answering. It leverages the interactive learning and reasoning capabilities of LLMs to perform debating on knowledge graphs. Specifically, it employs a team of multi-role agents to transform questions from complex to simple, enabling LLMs to perform reliable step-by-step reasoning based on the retrieved knowledge triples. Extensive experiments across five public datasets demonstrate the effectiveness and superiority of DoG in the few-shot setting, outperforming nearly all in-context and supervised learning-based baselines. Additionally, the integration results with different LLMs verify its flexibility. In the future, we will explore enhancing relation filtering performance from knowledge graphs given the entity of questions.
§ APPENDIX
§.§ In-context Learning
Table <ref> shows the prompt, instruction, and exemplar utilized in the module within DoG.
§.§ Baseline Introduction
The brief introduction of supervised-based baselines is as follows.
* KV-Mem <cit.> is a key-value memory network that enhances the viability of reading knowledge sources, such as documents or knowledge bases, by utilizing different encodings during the addressing and output stages of the memory read operation.
* GraftNet <cit.> aims to provide answers based on a question-specific subgraph that includes text, entities, and relations. It employs heterogeneous update rules to handle knowledge base nodes differently from text nodes and utilizes a directed propagation method to constrain the propagation of embeddings within the graphs.
* PullNet <cit.> builds on the early GraftNet system but focuses on learning how to construct the subgraph. Unlike GraftNet, PullNet leverages a limited set of retrieval operations, with each operation expanding a graph node by acquiring new information from knowledge bases or corpora.
* EmbedKGQA <cit.> is the first work that utilizes knowledge graph embeddings to perform multi-hop question answering over sparse knowledge graphs.
* NSM <cit.> is a teacher-student network in which the student network aims to retrieve the correct answer to a question, while the teacher network learns intermediate supervision signals to enhance the reasoning capacity of the student network.
* TransferNet <cit.> is an effective and transparent model for multi-hop question answering that supports both label and text relations within a unified framework. TransferNet traverses entities across multiple steps. During each step, it focuses on different parts of the question, calculates activation scores for relations, and subsequently propagates the prior entity scores along these activated relations in a differentiable manner.
* UniKGQA <cit.> is a unified model designed for multi-hop question answering. It comprises a semantic matching module, leveraging a pre-trained language model for question-relation semantic matching, and a matching information propagation module that propagates this information along directed edges within knowledge graphs.
The brief description of in-context learning-based baselines is as follows.
* StructGPT <cit.> provides a specialized interface for gathering relevant evidence from structured data and utilizes large language models (LLMs) to focus on the reasoning task using the collected information. Specifically, it employs a linearization-generation procedure to enable LLMs to reason effectively on structured data, facilitated through the interface.
* KG-GPT <cit.> is a multi-purpose framework leveraging LLMs for tasks employing KGs. Specifically, it comprises three steps: sentence segmentation, graph retrieval, and inference, each aimed at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions, respectively.
* KB-BINDER <cit.> is a unified, training-free framework that uses LLMs to generate logical forms for specific questions by mimicking a few demonstrations, and then grounds these forms on a knowledge base using BM25 score matching.
* ToG <cit.> treats LLMs as agents that interactively explore relevant entities and relations on knowledge graphs, enabling them to perform reasoning based on the retrieved knowledge.
§.§ Failure Cases
Fig. <ref> illustrates the error case that has been analyzed in the Error Analysis subsection.
|
http://arxiv.org/abs/2409.02970v1 | 20240904081643 | Central Limit Theorem for Diophantine approximation on spheres | [
"Zouhair Ouaggag"
] | math.NT | [
"math.NT",
"math.DS"
] |
§ ABSTRACT
We prove a Central Limit Theorem and an effective estimate for the counting function of Diophantine approximants on the sphere S^n using homogeneous dynamics on the space of orthogonal lattices.
Keywords: intrinsic Diophantine approximation; Central Limit Theorem; Siegel transform; effective equidistribution.
§ INTRODUCTION AND MAIN RESULTS
It is well-known in metric Diophantine approximation that for any c>0 and Lebesgue-almost all α∈ℝ^m, there exist infinitely many solutions (p,q) ∈ℤ^m×ℕ to the inequality[|| ·|| will denote the Euclidean norm.]
‖α - p/q‖ < c/q^1+1/m .
A refinement of this problem is to count solutions up to a certain bound for the complexity q of the approximants, which leads to consider counting functions such as
N_T,c(α) := |{ (p,q) ∈ℤ^m×ℕ : 1 ≤ q < e^T and (<ref>) holds }| .
An accurate estimate of the counting function N_T,c was given in the work of W. Schmidt <cit.>, who proved for more general approximating functions, that for Lebesgue-almost all α∈ [0,1]^m,
N_T,c(α)=𝖢_c,m T + O_α,ε( T^1/2+ε) ,
for all ε>0, with a constant 𝖢_c,m>0 depending only on c and m.
In recent years, there has been significant interest in the problems of so-called intrinsic Diophantine approximation, where one considers approximation by rational points on algebraic varieties, addressing analogues of classical questions in the geometry of numbers and metric Diophantine approximation. Important progress has been achieved in this setting. In particular, Kleinbock and Merrill in <cit.> developed the theory of Diophantine approximation on spheres, which was subsequently generalized to quadratic surfaces with general signatures by Fishman, Kleinbock, Merrill and Simmons in <cit.>. These works established in particular analogues of the classical Dirichlet's and Khinchin's theorems. Then Alam and Ghosh in <cit.> proved an asymptotic formula for the number of rational approximants on spheres (<ref>). We also mention the works of Ghosh, Gorodnik and Nevo who developed the metric theory of Diophantine approximation on simple algebraic groups, providing estimates for uniform and almost sure Diophantine exponents in <cit.>, establishing analogues of Khinchin's and Jarnik's theorems in <cit.>, and deriving an asymptotic formula with an error term for the number of approximants for a range of uniform Diophantine exponents in <cit.>.
In this paper we are interested in the following intrinsic Diophantine approximation problem. Given T, c>0 and α∈ S^n, we consider the inequality (<ref>) (with the critical Dirichlet exponent for intrinsic Diophantine approximation on S^n)
‖α - p/q‖ < c/q ,
and the counting function given by (<ref>) for intrinsic rational approximants
𝖭_T,c(α) := |{ (p,q) ∈ℤ^n+1×ℕ : p/q∈S^n, 1 ≤ q < cosh T and (<ref>) holds }| .
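For small ranges of denominators, 𝖭_T,c can be evaluated directly by enumeration, which is a convenient sanity check for the asymptotics discussed below. The following brute-force sketch (written for S^2 with illustrative parameter values) simply lists the pairs (p,q) with p∈ℤ^3, ||p||=q and 1≤ q≤⌊cosh T⌋, and tests the inequality (<ref>); it is not the method used in this paper.

import itertools, math

def count_intrinsic_approximants(alpha, c, T, n=2):
    # Brute-force value of N_{T,c}(alpha) on S^n for small cosh(T):
    # count pairs (p, q) with p in Z^{n+1}, ||p|| = q, 1 <= q <= floor(cosh T),
    # and ||alpha - p/q|| < c/q.
    Q = math.floor(math.cosh(T))
    count = 0
    for q in range(1, Q + 1):
        for p in itertools.product(range(-q, q + 1), repeat=n + 1):
            if sum(x * x for x in p) != q * q:
                continue  # p/q must lie on the unit sphere
            if math.dist(alpha, [x / q for x in p]) < c / q:
                count += 1
    return count

# Example: a rational point of S^2 and a modest range of denominators.
print(count_intrinsic_approximants((3 / 5, 4 / 5, 0.0), c=1.0, T=3.0))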
§.§ Effective estimate for the counting function
Alam and Ghosh gave in <cit.> a first quantitative estimate of 𝖭_T,c, using Birkhoff pointwise ergodicity on the space of orthogonal lattices to show that there exists a computable constant C_c,n>0, depending only on c and n, such that for almost every α∈S^n,
lim_T →∞𝖭_T,c(α)/T = C_c,n .
With a different approach, using effective equidistribution of translated orbits and important non-divergence estimates on the space of orthogonal lattices due to Eskin, Margulis and Mozes in <cit.>, we gave in <cit.> an effective estimate for 𝖭_T,c, showing that there exists a constant γ <1 depending only on the dimension n, such that for almost every α∈S^n,
𝖭_T,c(α) = C_c,nT + O_α(T^γ) .
In order to improve the estimate of the error term in (<ref>) to the order T^1/2+ε as in (<ref>) for the Euclidean space, we were missing some analog of Roger's formula for the space of orthogonal lattices. A crucial result in this direction was given recently by Kelmer and Yu in <cit.>, using spectral theory of spherical Eisenstein series to give an estimate of the second moment of the Siegel transform (see Theorem <ref>).
They also derived an effective estimate for 𝖭_T,ψ(q), for more general quadratic surfaces S and general approximation function ψ:ℕ→ (0,+∞), decreasing and satisfying ∑_q≥ 1q^-1ψ(q)^n=∞, showing that in dimension n≢1 (mod 8), for almost every α∈S,
𝖭_T,ψ(q)(α) = C_nJ_ψ(T) + O_α,ψ(J_ψ(T)^n+3/n+4log (J_ψ(T))+I_ψ(T)) ,
with J_ψ(T):=∑_1≤ q < Tq^-1ψ(q)^n, I_ψ(T)=∑_1≤ q < Tq^-3ψ(q)^n+2 and C_n>0.
Developing our approach in <cit.> and using methods derived from <cit.> to analyse the second moment of the Siegel transform, we prove in this paper an effective estimate of 𝖭_T,c with an error term of the same order as in (<ref>), for all dimensions n≥ 3.
Let n≥ 3. For almost every α∈S^n, for all ε>0, we have
𝖭_T,c(α) = C_c,nT + O_α,ε(T^1/2+ε) .
Some remarks related to Theorem <ref>:
* The constant C_c,n>0 in (<ref>), (<ref>) and in Theorems <ref>, <ref> is equal to the volume of a domain F_1,c⊂ℝ^n+2 given explicitly in (<ref>).
* Although the estimate of the error term T^1/2+ε is optimal in the case of the Euclidean space, we can not conclude about the optimality of this estimate for the sphere. Nevertheless, our analysis of the limit distribution of 𝖭_T,c (see Theorem <ref>) suggests that T^1/2 is the correct normalisation and the error term would be optimal if one could show that the variance σ^2 is positive.
* Our method fails for dimension n=2 because of the escape of mass in the space of orthogonal lattices for this dimension (see Proposition <ref>).
§.§ Distribution of the counting function
Another interesting question is to study the limit distribution of 𝖭_T,c as a random variable on the sphere. If we consider counting functions of approximants with denominators q in ranges [cosh (t) , cosh (t+1) ], i.e. functions
𝒩_t,c := 𝖭_t+1,c- 𝖭_t,c
and if the random variables 𝒩_t_1,c and 𝒩_t_2,c "decorrelate" for large t_1, t_2 and |t_2-t_1|, then Theorem <ref> would follow by a law of large numbers for the random variable 𝖭_T,c=∑_t=0^T-1𝒩_t,c. This heuristic was developed by Björklund and Gorodnik in <cit.> to show that the counting function N_T,c for the Euclidean space satisfies a central limit theorem, using higher-order mixing in the space of unimodular lattices as the dynamical translation of quasi-independent random variables. Using a similar approach as in <cit.> and the recent results by Kelmer and Yu in <cit.> about the second moment of the Siegel transform, we show that the counting function 𝖭_T,c on the sphere also satisfies a central limit theorem.
Let n≥ 3. Then for every ξ∈ℝ,
μ_S^n( {α∈ S^n:𝖭_T,c(α)-C_c,n· T/T^1/2<ξ}) →Norm_σ(ξ) , as T→∞,
where Norm_σ denotes the normal distribution with variance σ^2≥ 0.
The variance σ^2 in Theorem <ref> is given explicitly in (<ref>).
§.§ Outline of the paper
Using the classical Dani correspondence, we first interpret the counting function 𝖭_T,c in terms of ergodic averages of a function over a subset of the space of unimodular lattices in ℝ^n+2, developing the approach in <cit.>, <cit.> and <cit.>. To do so, we embed the sphere S^n in the positive light cone 𝒞{ x ∈ℝ^n+1×ℝ_+ :Q(x)=0} of a quadratic form Q of inertia (n+1,1), and identify good approximants p/q∈ S^n for α∈ S^n with integer points (p,q) in Λ_0:=ℤ^n+2∩𝒞 whose images under certain rotations k_α∈ K=SO(n+1) lie in a specific domain E_T,c⊂𝒞 (we recall more details about this correspondence in Section <ref>). The number of solutions 𝖭_T,c is then related to the number of lattice points in the domain E_T,c, which can be approximated by a more convenient domain F_T,c and tessellated by the action of a hyperbolic subgroup {a_t, t∈ℝ}⊂ SO(Q).
For a bounded and compactly supported function f on 𝒞, we denote by f its light-cone Siegel transform, defined for any lattice Λ⊂𝒞 by
f(Λ):= ∑_x∈Λ∖{0}f(x) .
The counting function 𝖭_T,c can then be related to the light-cone Siegel transform of the characteristic function χ of an elementary domain F_1,c⊂𝒞, using averages of the form
𝖭_T,c(α) ≈∑_t=0^T-1χ∘ a_t (k_αΛ_0),
i.e. ergodic averages of the light-cone Siegel transform χ along K-orbits pushed by {a_t}. The analysis of these averages can then be carried out using dynamics on the space of orthogonal lattices.
Because the approximation (<ref>) is not precise (see "sandwiching" in (<ref>)), we need a version of the Borel–Cantelli argument for a family of functions. This argument was overlooked in our previous work <cit.> and is now considered in Proposition <ref>.
In order to show that 𝖭_T,c follows a Central Limit Theorem, we will use the method of cumulants (Section <ref>), which is equivalent to the more widely known method of moments. The normal distribution is characterized by vanishing cumulants of orders r≥ 3, which can be expressed in the dynamical language as higher-order of an averaging function of the form
𝖥_N ≈1/N^1/2∑_t=0^N-1χ∘ a_t .
The "quasi-independence" of sampling observables χ∘ a_t along K-orbits pushed by {a_t} corresponds to multiple equidistribution of these orbits in the space of orthogonal lattices, which we establish in Section <ref> (Proposition <ref>). However, using effective equidistribution requires to consider smooth and compactly supported test functions, whereas the Siegel transform has typically none of these regularities. We address this issue in two steps.
We first use that the Siegel transform f can be approximated by a truncated Siegel transform f^(L) in a way to control the approximation on translated K-orbits, i.e. control | f∘ a_t - f^(L)∘ a_t| with respect to the probability measure on these orbits (Proposition <ref>). In a second step (Proposition <ref>), we use that the characteristic function χ of the elementary domain F_1,c can be approximated by a family of smooth and compactly supported functions f_ε, again in a way to keep control of the approximated Siegel transform f_ε over translated K-orbits. In this process we also need non-divergence results for the Siegel transform with respect to the probability measure on these orbits (Propositions <ref> and <ref>). We also need to show that all these approximations still give the same limit distribution for the averaging function 𝖥_N (Section <ref>).
In Section <ref> we use exponential multiple equidistribution established previously to show that the cumulants of 𝖥_N of orders r≥ 3 vanish as N→∞. To do so, we follow the argument developed in <cit.> and <cit.>, analysing the joint cumulants through a decomposition into sub-sums of correlations corresponding to “separated" or “clustered" tuples t_1,…,t_r and controlling their size in terms of the parameters related to the smooth and truncated approximation of the counting function.
In Section <ref> we estimate the limit variance of 𝖥_N as N→∞ using recent results of Kelmer and Yu in <cit.> on the second moment of the Siegel transform. Convergence of the variance and vanishing of all cumulants of orders r≥ 3 complete the characterisation of the normal distribution for 𝖥_N. In Section <ref> we relate the distribution of 𝖥_N to the distribution of our counting function 𝖭_T,c and conclude the proof of the CLT-Theorem.
Our analysis of double correlations from Section <ref> allows us to derive an "almost-everywhere"-bound for the ergodic averages ∑_tχ∘ a_t (Proposition <ref>) and to improve the effective estimate for the counting function 𝖭_T,c (proof of Theorem <ref> in Section <ref>).
Acknowledgement I am grateful to Dubi Kelmer for sharing his ideas and for the useful discussions. Many thanks also to René Pfitscher for our discussions and his useful observations. I am very grateful to Alex Gorodnik for his guidance all along this research work.
§ DIOPHANTINE APPROXIMATION ON SPHERES AND DYNAMICS ON THE SPACE OF LATTICES
We recall the correspondence presented in <cit.> and <cit.> between Diophantine approximation on the sphere S^n and the dynamics of orthogonal lattices in ℝ^n+2.
We consider the quadratic form Q: ℝ^n+2→ℝ defined by
Q(x) := ∑_i=1^n+1 x_i^2 - x_n+2^2, for x= (x_1, … , x_n+2) ,
and the embedding of S^n in the positive light cone
𝒞 := { x ∈ℝ^n+2 : Q(x)=0, x_n+2 > 0 } ,
via α↦ (α,1), which yields a one-to-one correspondence between primitive integer points on the positive light cone, (p,q) ∈𝒞∩ℤ^n+2_prim, and rational points on the sphere, p/q∈ S^n.
We denote by G=SO(Q)^∘≅ SO(n+1,1)^∘ the connected component of the group of orientation-preserving linear transformations which preserve Q. We denote by Λ_0 := 𝒞∩ℤ^n+2 the set of integer points on the positive light cone. By a lattice Λ in 𝒞 we mean a set of the form gΛ_0 for some g∈ G.
If we denote by Γ the stabilizer of Λ_0 in G, then Γ is a lattice in G containing the subgroup SO(Q)_ℤ^∘ of integer points in G, as a finite index subgroup.
The space of lattices in 𝒞 can be identified with the homogeneous space 𝒳 := G/Γ, endowed with the G-invariant probability measure μ_𝒳.
Let K denote the subgroup of G that preserves the last coordinate in ℝ^n+2, i.e.
K= [ SO(n+1) ; 1 ]≅SO(n+1) ,
equipped with the Haar probability measure μ_K.
The sphere S^n can be realized as a quotient of K, endowed with a unique left K-invariant probability measure, giving a natural correspondence between full-measure sets in K and those in S^n.
For k ∈ K we define α_k ∈ S^n by k(α_k,1) = (0,…,0,1,1) ∈𝒞. For (p,q) ∈Λ_0, we write k(p,q)= (x_1,x_2, … , x_n+2) ∈𝒞, with x_n+2=q, and observe the following correspondence (<cit.>, Lemma 2.2.):
‖α_k - p/q‖ < c/q ⇔ ‖ q( α_k,1) -(p,q) ‖ < c,
⇔ ‖ qk(α_k,1) - k(p,q)‖ < c,
⇔ ‖(x_1,x_2, … , x_n, x_n+1-x_n+2,0)‖ < c,
⇔ 2x_n+2(x_n+2-x_n+1) < c^2 (since k(p,q) ∈𝒞).
Hence, if we denote
E_T,c := { x ∈𝒞 : 2x_n+2(x_n+2-x_n+1) <c^2, 1≤ x_n+2 < cosh T },
then we have:
𝖭_T,c(α_k) =| E_T,c∩ kΛ_0 |.
We denote 𝒴 := KΛ_0, equipped with the Haar probability measure μ_𝒴.
We also consider elements
a_t = [ I_n ; cosh t -sinh t; -sinh t cosh t ] ∈ G
and the corresponding one-parameter subgroup
A = { a_t : t ∈ℝ}
endowed with the natural measure dt.
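For later reference we record the effect of a_t on the last two coordinates, which is all that is needed below. Writing the action explicitly,
(a_tx)_n+1 = x_n+1cosh t - x_n+2sinh t , (a_tx)_n+2 = -x_n+1sinh t + x_n+2cosh t ,
so that
(a_tx)_n+2+(a_tx)_n+1 = e^-t(x_n+2+x_n+1) and (a_tx)_n+2-(a_tx)_n+1 = e^t(x_n+2-x_n+1) .
In particular x_n+2^2-x_n+1^2, and hence Q, is preserved, confirming that a_t∈ G, while the coordinates x_n+2± x_n+1 are scaled by e^∓ t; this elementary computation underlies the tessellation and the contraction estimates used below.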
We will denote by χ_E the characteristic function of a given set E and use the notation a ≍ b (resp. a ≪ b) when there exist positive constants C_1 and C_2 such that C_1b ≤ a ≤ C_2 b (resp. a ≤ C_2 b).
In order to use the dynamics of translates of 𝒴 for the Diophantine approximation problem (<ref>), we first approximate E_T,c by a domain offering a convenient tessellation under the action of the subgroup A. We start with a similar approach as in <cit.> and improve the approximation by F_T,c in order to satisfy the accuracy obtained later for the counting function.
Approximation of E_T,c
We consider the following domain on a light-cone
F_T,c := { x ∈𝒞 : x_n+2^2- x_n+1^2 < c^2 , c≤ x_n+2+ x_n+1 < ce^T},
and a sequence of domains (F_T,c,𝓁)_𝓁≥ 1 defined by
F_T,c,𝓁 := { x ∈𝒞 : x_n+2^2- x_n+1^2 < c_𝓁^2 , c≤ x_n+2+ x_n+1 < ce^T},
with c_𝓁 := c·( 𝓁/(𝓁+1))^1/2.
Up to the domains
C_0 := { x ∈𝒞: x_n+2≤c^2+c/2+1} and C_𝓁 := { x ∈𝒞: |x_n+1|/x_n+2≤𝓁/(𝓁+1)}∪ C_0 ,
we can approximate E_T,c by F_T,c in the following sense.
There exist positive constants T_0 and r_0 depending only on c such that, for all T≥ T_0 and all integers 𝓁≥ 1, we have
F_T-r_0,c,𝓁∖ C_𝓁 ⊆
E_T,c∖ C_0 ⊆ F_T+r_0,c .
For x∈ E_T,c∖ C_0,
2x_n+2(x_n+2-x_n+1)<c^2 and x_n+2≥ 1
imply x_n+2-x_n+1< c^2 ,
hence
x_n+2+x_n+1 > 2x_n+2-c^2 > c .
Also
x_n+1≤ x_n+2 implies x_n+2+x_n+1≤ 2x_n+2<2cosh T < c e^T+r_0,
for some r_0>0 depending only on c.
We also have
x_n+2^2-x_n+1^2= (x_n+2+x_n+1)(x_n+2-x_n+1) ≤ 2x_n+2(x_n+2-x_n+1) < c^2.
The inequalities (<ref>), (<ref>) and (<ref>) prove the second inclusion in (<ref>).
For x∈ F_T-r_0,c,𝓁∖ C_𝓁, we have
x_n+2^2-x_n+1^2<c_l^2<c^2 and x_n+2+x_n+1≥ c
imply x_n+2-x_n+1< c ,
hence
2x_n+2 < x_n+2+x_n+1+c < ce^T-r_0+c ≤ 2cosh T
for all T>T_0, for some T_0>0 depending on c, and for r_0 large enough.
We also have
x_n+1 > x_n+2-c > c^2+c/2+1 -c > 0
,
which implies
2x_n+2(x_n+2-x_n+1) = (x_n+2^2-x_n+1^2)2x_n+2/x_n+2+x_n+1
< c_l^22x_n+2/x_n+2+x_n+1
= c^2(𝓁/𝓁+1)2x_n+2/x_n+2+x_n+1
< c^2x_n+2+|x_n+1|/x_n+2+x_n+1
= c^2 .
The inequalities (<ref>) and (<ref>) prove the first inclusion in (<ref>).
It follows from (<ref>) and (<ref>) that, for T>0 large enough,
|(F_T-r_0,c,𝓁∖ C_𝓁)∩ kΛ_0| ≤ 𝖭_T,c(α_k) + O(1) ≤ |F_T+r_0,c∩ kΛ_0|.
We will need later an estimate of the counting error related to the “sandwiching" (<ref>). We estimate in the following lemma this error in terms of the integer parameter 𝓁≥ 1 to be specified later.
We have
vol(F_1,c) =vol( F_1,c,𝓁)+ O(𝓁^-1),
and |F_T-r_0,c,𝓁∩ C_𝓁∩ kℤ^n+2| = O(𝓁^1/2), uniformly in k∈ K.
Since
F_1,c∖ F_1,c,𝓁= { x ∈𝒞 : c_𝓁≤ (x_1^2+ … + x_n^2)^1/2 < c , c< x_n+2+ x_n+1 < ce } ,
we have
vol(F_1,c) = vol( F_1,c,𝓁)+ O( c-c_𝓁) ,
and
c^2-c_𝓁^2=c^2(1-𝓁/𝓁+1)=c^21/𝓁+1 ,
c-c_𝓁=c^2/c+c_𝓁1/𝓁+1=O(𝓁^-1) ,
hence the first estimate.
For the second estimate, we observe that for x ∈ C_𝓁∩ F_T-r_0,c,𝓁 we have
x_n+2^2 < c_𝓁^2+x_n+1^2 ≤ c_𝓁^2+( 𝓁/𝓁+1)^2x_n+2^2 ,
hence x_n+2 < 𝓁+1/(2𝓁+1)^1/2c_𝓁 ,
thus for 𝓁 large enough, F_T-r_0,c,𝓁∩ C_𝓁 is contained in {x∈𝒞 : x_n+2≪_c𝓁^1/2 and x_1^2+ … + x_n^2 < c_𝓁^2}. Moreover, since k∈ K preserves the coordinate x_n+2 and is Lipschitz continuous in the other coordinates, by compactness there exist C>0 such that for every k∈ K, k(F_T-r_0,c,𝓁∩ C_𝓁) is contained in {x∈𝒞 : x_n+2≪_c𝓁^1/2 and x_1^2+ … + x_n^2 < C}, which yields the uniform bound O(𝓁^1/2) for the number of its integer points.
Tessellation of F_T,c We observe further that the domain F_T,c can be tessellated using translates of the set F_1,c under the action of {a_t}. We have, for all N ≥ 1,
F_N,c= ⊔_j=0^N-1 a_-j(F_1,c).
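Indeed, since a_-j scales x_n+2+x_n+1 by e^j and x_n+2-x_n+1 by e^-j, leaving x_n+2^2-x_n+1^2 unchanged, we have for each 0≤ j≤ N-1
a_-j(F_1,c) = { x ∈𝒞 : x_n+2^2- x_n+1^2 < c^2 , ce^j≤ x_n+2+ x_n+1 < ce^(j+1)},
and these sets are pairwise disjoint with union F_N,c, which is the displayed tessellation.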
We denote by χ_1,c the characteristic function of F_1,c, and χ_1,c its Siegel transform defined by
χ_1,c(Λ) := ∑_z ∈Λ∖{0}χ_1,c (z), for all Λ∈𝒳.
The tessellation (<ref>) implies, for all T>0 and all Λ∈𝒳,
∑_t=0^⌊ T ⌋-1χ_1,c(a_tΛ) ≤ |F_T,c∩Λ| ≤ ∑_t=0^⌊ T ⌋χ_1,c(a_tΛ).
It follows from (<ref>), (<ref>) and (<ref>), for large enough T>0 and k ∈ K as in (<ref>),
∑_t=0^⌊ T-r_0 ⌋-1χ_1,c,𝓁(a_t kΛ_0) +O( 𝓁^1/2) ≤ 𝖭_T,c(α_k) +O(1) ≤ ∑_t=0^⌊ T+r_0 ⌋χ_1,c(a_t kΛ_0).
Thus, estimating 𝖭_T,c(α) amounts to analyzing ergodic sums of the form ∑_t=0^Nχ_1,c∘ a_t on 𝒴= KΛ_0. We will use for this purpose effective higher order equidistribution results for unimodular lattices, specialized to 𝒴, which we discuss on the following section.
§ ESTIMATES ON HIGHER ORDER CORRELATIONS
In this section we prove an effective equidistribution of K-orbits by relating it to effective equidistribution of unstable horospherical orbits established in a more general setting in <cit.>. We recall the notations
G = SO(Q)^∘≅SO(n+1,1)^∘,
K = [ SO(n+1) ; 1 ],
a_t = [ I_n ; cosh t -sinh t; -sinh t cosh t ] ∈ G, and A = { a_t : t ∈ℝ}.
We also consider the corresponding horospherical subgroups
U = { g∈ G : a_-tga_t→ e as t→∞},
U^- = { g∈ G : a_tga_-t→ e as t→∞},
H = { g∈ G : a_tg=ga_t},
and the Haar measures dμ_K, dt and dμ_U on K, A and U respectively.
It will be important in our argument later that the error term in the effective equidistribution is explicit in terms of the C^l-norm, for some l≥ 1, of the test functions on 𝒳. We introduce below the required notations.
Every Y ∈ Lie(G) defines a first order differential operator D_Y on C_c^∞(𝒳) by
D_Y(ϕ)(x) d/dtϕ(exp (tY)x)|_t=0.
If { Y_1,…, Y_r} is a basis of Lie(G), then every monomial Z=Y_1^l_1… Y_r^l_r defines a differential operator by
D_Z := D_Y_1^l_1… D_Y_r^l_r,
of degree deg(Z)=l_1+…+l_r. For integers l≥ 0 and ϕ∈ C_c^∞(𝒳), we write
||ϕ||_l := ||ϕ||_C^l= ∑_deg(Z)≤ l ||D_Z(ϕ)||_∞ .
A crucial ingredient for our analysis is the following effective equidistribution result for higher order correlations on translated U-orbits (Theorem <ref>) and the analogous result we derive for translated K-orbits (Proposition <ref>)
For every r≥ 1 there exist γ_r>0 and l_r≥ 1 such that, for every f ∈ C_c^∞(U) and φ_1,…φ_r ∈ C_c^∞(𝒳) and every compact subset L ⊂𝒳, there exists C>0 such that for every Λ∈ L and t_1,… t_r > 0, we have
| ∫_U f(u)(∏_i=1^rφ_i (a_t_i uΛ) ) dμ_U(u)- (∫_Ufdμ_U) (∏_i=1^r∫_𝒳φ_i dμ_𝒳) | ≤ C e^-γ_r D(t_1,…,t_r)||f||_l_r∏_i=1^r||φ_i||_l_r ,
where D(t_1,…,t_r)min{t_i,|t_i-t_j|:1≤ i≠ j≤ r}.
For every r≥ 1 there exist δ_r>0 and l_r≥ 1 such that, for every f ∈ C^∞(K) and φ_1,…φ_r ∈ C_c^∞(𝒳) and every compact subset L ⊂𝒳, there exists C>0 such that for every Λ∈ L and t_1,… t_r > 0, we have
| I_Λ,f,φ_1,…,φ_r(t_1,…,t_r) - (∫_Kfdμ_K) (∏_i=1^r∫_𝒳φ_i dμ_𝒳) | ≤ C e^-δ_r D(t_1,…,t_r)||f||_l_r∏_i=1^r||φ_i||_l_r ,
where I_Λ,f,φ_1,…,φ_r(t_1,…,t_r) := ∫_K f(k)(∏_i=1^rφ_i (a_t_i kΛ)) dμ_K(k).
We consider the centralizer of A in K,
M := cent_K(A)=K∩ H = [ SO(n) ; I_2 ]≅SO(n),
and the submanifold S⊂ K defined via the exponential map by
Lie(S)= {[ 0_n s ; -s^T 0; 0 ] : s∈ℝ^n}.
We have Lie(K)=Lie(M)⊕Lie(S) and the map M× S→ K is a diffeomorphism in a neighborhood of the identity, giving a unique decomposition k= m(k)s(k) and also a decomposition of the measure dμ_K, in the sense that ∫_K f dμ_K=∫_M× S f dμ_S dμ_M for any f bounded and compactly supported in this neighborhood, where we denote by dμ_M the Haar measure on M and by dμ_S a smooth measure defined on a neighborhood of the identity in S.
Further, we consider the decomposition of G as the product U^-HU in a neighborhood of the identity, giving a unique decomposition s=u^-(s)h(s)u(s). We verify that the coordinate map S→ U, s↦ u(s) is a diffeomorphism in a neighborhood of the identity. We first observe that
dim(S)=dim(K)-dim(M)=(n+1)n/2-(n-1)n/2=n=dim(U).
Moreover, for the product map p:U^-× H × U → G, (u^-,h,u)↦ u^-hu, the derivative at the identity is given by D(p)_e(x,y,z)=x+y+z, for all (x,y,z)∈Lie(U^-) ×Lie(H) ×Lie(U). Hence, for all w ∈Lie(G), the U-component of D(p)_e^-1(w) is zero if and only if w ∈Lie(U^-)+Lie(H). Since Lie(S) ∩(Lie(U^-)+Lie(H))= 0, the derivative of s↦ u(s) is injective. Since dim(S)=dim(U), this is a local diffeomorphism.
We denote B_𝓇^K the ball of radius 𝓇>0 centered at the identity in K and localize the problem to a neighborhood of the identity by considering the partition of unity 1= ∑_j=1^Nϕ_j(kk_j^-1) for all k ∈supp(f) and some k_j ∈ supp(f), with non-negative functions ϕ_j ∈ C^∞(K) such that supp(ϕ_j) ⊆ B_𝓇^K, || ϕ_j ||_l ≪𝓇^-ν and N ≪𝓇^-λ, for some ν, λ >0, and for 𝓇>0 small enough to be fixed later.
We write for simplicity k=m_ks_k=m_ku_s_k^-h_s_ku_s_k, the unique decompositions of k and s in a neighborhood of the identity in K and S. We also write f_j(k) f(kk_j) and Λ_j k_jΛ. We compute
I_Λ,f,φ_1,…,φ_r(t_1,…,t_r) = ∑_j=1^N ∫_K ϕ_j(k) f(kk_j)(∏_i=1^rφ_i(a_t_ikk_jΛ))dμ_K(k)
=∑_j=1^N ∫_K ϕ_j(k) f_j(k)(∏_i=1^rφ_i(m_ka_t_iu^-_s_ka_-t_ih_s_ka_t_iu_s_kΛ_j))dμ_K(k).
By Lipschitz continuity of the coordinate maps m_k, u^-_s_k and h_s_k on B_𝓇^K with 𝓇 small enough, there exists a constant C_1>0 such that for all k ∈ B_𝓇^K, we have
a_tu^-_s_ka_-t∈ B_C_1𝓇e^-2t^K and m_k,h_s_k∈ B_C_1𝓇^K.
By Lipschitz continuity of φ_1,…,φ_r, it follows
| I_Λ,f,φ_1,…,φ_r(t_1,…,t_r) - ∑_j=1^N ∫_K ϕ_j(k) f_j(k)(∏_i=1^rφ_i(a_t_iu_s_kΛ_j))dμ_K(k) |
= |∑_j=1^N ∫_K ϕ_j(k) f_j(k)(∏_i=1^rφ_i(a_t_iu_s_kΛ_j)-∏_i=1^rφ_i(m_ka_t_iu^-_s_ka_-t_ih_s_ka_t_iu_s_kΛ_j)dμ_K(k) |
≪_l 𝓇‖ f ‖_l‖∏_i=1^rφ_i‖_l ∫_K |∑_j=1^Nϕ_j(k)|dμ_K(k)
= 𝓇‖ f ‖_l‖∏_i=1^rφ_i‖_l (by K-invariance and since ϕ_j is a partition of unity)
≪_r 𝓇‖ f ‖_l∏_i=1^r‖φ_i‖_l.
We use now the decomposition of μ_K and apply the change of variable u ↦ s(u)=s_u, with a density ρ defined in a neighborhood of the identity in U by
∫_S Φ(s)dμ_S(s)=∫_UΦ(s(u))ρ(u)dμ_U(u) for all Φ∈ C_c(S) with supp(Φ)⊂ B_𝓇^S.
We have
∑_j=1^N ∫_K ϕ_j(k) f_j(k)(∏_i=1^rφ_i(a_t_iu_s_kΛ_j))dμ_K(k)
= ∑_j=1^N ∫_M× Sϕ_j(ms) f_j(ms)(∏_i=1^rφ_i(a_t_iu_sΛ_j))dμ_S(s)dμ_M(m)
= ∫_M(∑_j=1^N ∫_U ϕ_j(ms_u) f_j(ms_u)(∏_i=1^rφ_i(a_t_iuΛ_j))ρ(u) dμ_U(u))dμ_M(m).
Using Theorem <ref> with the function f_m,j(u) ϕ_j(ms_u)ρ(u)f_j(mu) and observing that ||f_m,j ||_l ≪ ||ϕ_j||_l||ρ||_l ||f_j||_l and that ||ρ||_l≪ 1, it follows that the integral (<ref>) is equal to
∫_M∑_j=1^N (∫_U ϕ_j(ms_u) f_j(ms_u)ρ(u)dμ_U(u)(∏_i=1^r ∫_𝒳φ_i dμ_𝒳) .
+ . O( e^-γ D(t_1,…,t_r)||ϕ_j||_l||f||_l∏_i=1^r||φ_i||_l))dμ_M(m)
= ∫_M(∑_j=1^N ∫_S ϕ_j(ms)f_j(ms)dμ_S(s))dμ_M(m) (∏_i=1^r ∫_𝒳φ_idμ_𝒳)
+ O( Ne^-γ D(t_1,…,t_r)||ϕ_j||_l||f||_l∏_i=1^r||φ_i||_l).
Using again the decomposition of μ_K, K-invariance and the partition of unity, we have
∫_M(∑_j=1^N ∫_S ϕ_j(ms)f_j(ms)dμ_S(s))dμ_M(m) = ∫_K(∑_j=1^N ϕ_j(kk_j^-1))f(k)dμ_K(k) = ∫_K f dμ_K
which simplifies the estimate (<ref>) to
( ∫_K fdμ_K) (∏_i=1^r ∫_𝒳φ_idμ_𝒳) + O( 𝓇^-λ e^-γ D(t_1,…,t_r)𝓇^-ν||f||_l∏_i=1^r||φ_i||_l).
Altogether we obtain
I_Λ,f,φ_1,…,φ_r(t_1,…,t_r) =(∫_K fdμ_K) (∏_i=1^r ∫_𝒳φ_idμ_𝒳) + O( ( 𝓇^-λ-ν e^-γ D(t_1,…,t_r)+𝓇)||f||_l∏_i=1^r||φ_i||_l).
We take 𝓇=e^-δ D(t_1,…,t_r) with δ = γ/1+λ+ν, which yields the claim.
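For completeness, the final optimization is elementary: with 𝓇=e^-δ D(t_1,…,t_r) and δ = γ/(1+λ+ν), the two error terms balance, since
𝓇^-λ-ν e^-γ D(t_1,…,t_r) = e^((λ+ν)δ-γ) D(t_1,…,t_r) = e^-δ D(t_1,…,t_r)=𝓇,
so the total error in the last display is ≪ e^-δ D(t_1,…,t_r)||f||_l∏_i=1^r||φ_i||_l, as claimed in the proposition.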
We will use the following simplified version of Proposition <ref>.
For every r≥ 1, there exist δ_r>0 and l_r≥ 1 such that for every φ_0,…φ_r ∈ C_c^∞(𝒳) and t_1,… t_r > 0, we have
∫_𝒴φ_0(y)(∏_i=1^rφ_i (a_t_i y)) dμ_𝒴(y) = ∫_𝒴φ_0 dμ_𝒴(∏_i=1^r∫_𝒳φ_i dμ_𝒳) + O(e^-δ_r D(t_1,…,t_r)∏_i=0^r||φ_i||_l_r) .
We recall in the following section some properties of the Siegel transform that we use later to analyse the ergodic averages ∑_t=0^N χ∘ a_t.
§ SIEGEL TRANSFORM AND APPROXIMATION OF THE COUNTING FUNCTION
§.§ Siegel transform
Given a bounded measurable function f:ℝ^n+2→ℝ with compact support, its (standard) Siegel transform on the space ℒ of unimodular lattices in ℝ^n+2 is defined by
f^st.(Λ) := ∑_z ∈Λ∖{0} f(z), for Λ∈ℒ.
Its restriction to 𝒳 is called the light-cone Siegel transform, defined for a bounded and compactly supported function f on 𝒞 by
f(Λ) := ∑_z ∈Λ∖{0} f(z), for Λ∈𝒳.
The Siegel transform of a bounded function is typically unbounded, but its growth rate is controlled by an explicit function α defined as follows.
Given a lattice Λ∈ℒ, we say that a subspace V of ℝ^n+2 is Λ-rational if the intersection V∩Λ is a lattice in V. If V is Λ-rational, we denote d_Λ(V) the covolume of V∩Λ in V. We define then
α(Λ) := sup{ d_Λ(V)^-1: V is a Λ-rational subspace of ℝ^n+2}.
It follows from Mahler's Compactness Criterion that α is a proper map ℒ→ [1, +∞). We recall below some important properties.
If f:ℝ^n+2→ℝ is a bounded function with compact support, then
|f^st.(Λ)| ≪_supp(f) ||f||_∞α(Λ), for all Λ∈ℒ.
We restrict this function to the space 𝒳 of lattices on the positive light cone and denote it also by α. An important property of α is its L^p-integrability in ℒ (see <cit.>) and also in 𝒳 with an explicit non-escape of mass.
The function α is in L^p(𝒳) for 1≤ p < n. In particular,
μ_𝒳 ({α≥ L }) ≪_p L^-p,
for all p<n.
We recall the analogue, for the space 𝒳, of the Siegel Mean Value Theorem in the space of unimodular lattices ℒ (see <cit.>).
If f:𝒞→ℝ is a bounded Riemann integrable function with compact support, then
∫_𝒳f(Λ) dμ_𝒳(Λ) = ∫_𝒞 f(z) dz
for some G-invariant measure dz on 𝒞.
§.§ Non-divergence estimates
We recall here important estimates for the Siegel transform f on translated K-orbits by analyzing the escape of mass on submanifolds a_t𝒴⊂𝒳.
Following the same argument as in <cit.> and using effective equidistribution of translated K-orbits and L^p-integrability of the function α, we verified in <cit.> an analogous non-escape of mass for a_t𝒴.
There exists κ >0 such that for every L≥ 1 and t≥κlog L,
μ_𝒴 ({ y ∈𝒴 : α(a_t y) ≥ L }) ≪_p L^-p, for all p<n.
A crucial ingredient in our argument later is the integrability of the Siegel transform f on a_t𝒴 uniformly in t. This is an important result of Eskin, Margulis and Mozes in <cit.> establishing the following estimate for the function α.
If n ≥ 2 and 0<p<2, then for any lattice Λ in ℝ^n+2,
sup_t> 0∫_Kα ( a_t k Λ)^p dμ_K(k) < ∞.
§.§ Truncated Siegel transform
The Siegel transform of a smooth compactly supported function is typically not bounded. To be able to apply equidistribution results, we truncate the Siegel transform using a smooth cut-off function η_L built on the function α. We use the same construction as in (<cit.>, Lemma 4.9), which yields the following lemma.
For every ξ∈ (0,1), there exists a family (η_L) in C_c^∞(𝒳) satisfying:
0 ≤η_L ≤ 1 , η_L = 1 on {α≤ξ^-1L } , η_L = 0 on {α > ξ L } , || η_L||_C^l≪ 1.
For a bounded function f:𝒞→ℝ with compact support, we define the truncated Siegel transform of f by
f^(L) := f·η_L.
We recall in the following proposition some properties of the truncated Siegel transform f^(L) which we use later in our arguments.
For a bounded measurable function f:𝒞→ℝ with compact support, the truncated Siegel transform f^(L) satisfies the following bounds:
||f^(L) ||_L^p_𝒳≤ ||f ||_L^p_𝒳≪_supp(f),p ||f ||_∞ , for all p<n,
sup_t≥ 0||f^(L)∘ a_t ||_L^p_𝒴≤sup_t≥ 0||f∘ a_t ||_L^p_𝒴 < ∞ , for all 1≤ p<2,
||f^(L) ||_∞≪_supp(f) L||f ||_∞,
||f - f^(L) ||_L^1_𝒳≪_supp(f),τ L^-(τ-1)||f ||_∞ , for all τ < n,
‖f-f^(L)‖_L^2_𝒳≪_supp(f),τ L^-τ-2/2||f ||_∞ , for all τ < n,
||f∘ a_t - f^(L)∘ a_t ||_L^p_𝒴≪_supp(f),τ L^-τ(2-p)/2p||f ||_∞ , for all 1≤ p<2, τ<n and t≥κlog L.
Moreover, if f ∈ C_c^∞(𝒞) then f^(L)∈ C_c^∞(𝒳) and satisfies
||f^(L) ||_C^l≪_supp(f) L||f ||_C^l , for all l≥ 1.
All but estimate (<ref>) were proven in <cit.>.
To show (<ref>), we apply Hölder's Inequality with 1 ≤ p < n and q=(1/2-1/p)^-1 and deduce
‖f-f^(L)‖_L^2_𝒳≪_supp(f) ||α ||_L_𝒳^p μ_𝒳({α≥ξ^-1L })^1/q ||f ||_∞.
Then Proposition <ref> implies
‖f-f^(L)‖_L^2_𝒳≪_supp(f),p L^-p-2/2||f ||_∞.
§.§ Smooth approximation
For simplicity we write χ := χ_F_1,c and χ_𝓁 := χ_F_1,c,𝓁 for the characteristic functions of the sets F_1,c and F_1,c,𝓁 respectively. We approximate χ and χ_𝓁 by a family of non-negative functions f_ε, f_𝓁,ε∈ C_c^∞(𝒞) with support in an ε-neighborhood of F_1,c and F_1,c,𝓁 respectively, such that
χ≤ f_ε≤ 1, ||f_ε - χ||_L^1_𝒞≪ε , ||f_ε - χ||_L^2_𝒞≪ε^1/2, ||f_ε||_C^l≪ε^-l,
and χ_𝓁≤ f_𝓁,ε≤ 1, ||f_𝓁,ε - χ_𝓁||_L^1_𝒞≪ε , ||f_𝓁,ε - χ_𝓁||_L^2_𝒞≪ε^1/2, ||f_𝓁,ε||_C^l≪ε^-l.
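Such families exist by the standard mollification argument: one may take, for instance, f_ε to be the convolution of the characteristic function of a slightly enlarged copy of F_1,c with a smooth bump function of total mass one supported in a ball of radius ≍ε. Then χ≤ f_ε≤ 1, the difference f_ε-χ is supported in an O(ε)-neighborhood of the boundary of F_1,c, whose measure is O(ε), which gives the L^1_𝒞 and L^2_𝒞 bounds, and each differentiation brings a factor ε^-1 from the bump function, which gives ||f_ε||_C^l≪ε^-l. The same construction applies to F_1,c,𝓁, with implied constants uniform in 𝓁≥ 1 since c_𝓁≤ c.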
We reformulate in the following proposition a previous result in <cit.>, in order to take into account the parameter 𝓁≥ 1, and show that the smooth approximation of χ, χ_𝓁 also yields a good approximation of their Siegel transforms χ, χ_𝓁 on translated K-orbits, uniformly in the parameter 𝓁≥ 1.
There exists θ>0 such that for every 𝓁≥ 1 and every ε>0,
∫_𝒴 |f_𝓁,ε∘ a_t - χ_𝓁∘ a_t| dμ_𝒴≪_c,nε +e^-θ t.
Let 𝓁≥ 1. We first recall the definition of the set F_1,c,𝓁,
F_1,c,𝓁 = { x ∈𝒞 : x_n+2^2- x_n+1^2 < c_𝓁^2 , c≤ x_n+2+ x_n+1 < ce },
with c_𝓁=c·(𝓁/(𝓁+1))^1/2,
and observe that there exists c_𝓁,ε>c_𝓁 such that c_𝓁,ε=c_𝓁+ O(ε) and f_𝓁,ε≤χ_𝓁,ε, where χ_𝓁,ε denotes the characteristic function of the set
{ x ∈𝒞 : c-ε≤ x_n+2+x_n+1≤ ce+ε , x_n+2^2-x_n+1^2 < c_𝓁,ε^2 }.
The difference χ_𝓁,ε-χ_𝓁 is bounded by the sum χ^(1)_𝓁,ε+χ^(2)_𝓁,ε+χ^(3)_𝓁,ε of the characteristic functions of the sets
{ x ∈𝒞 : c-ε≤ x_n+2+x_n+1≤ c , x_n+2^2-x_n+1^2 < c_𝓁,ε^2 },
{ x ∈𝒞 : ce ≤ x_n+2+x_n+1≤ ce+ε, x_n+2^2-x_n+1^2 < c_𝓁,ε^2 },
{ x ∈𝒞 : c ≤ x_n+2+x_n+1≤ ce , c_𝓁^2<x_n+2^2-x_n+1^2 < c_𝓁,ε^2 }.
Since 0≤χ_𝓁≤ f_𝓁,ε≤χ_𝓁,ε, it follows in particular
f_𝓁,ε (a_tΛ) - χ_𝓁(a_tΛ) ≤χ_𝓁,ε^(1)(a_tΛ)+χ_𝓁,ε^(2)(a_tΛ)+χ_𝓁,ε^(3)(a_tΛ).
We first consider χ^(1)_𝓁,ε. For x in the corresponding set, we also have
0≤ x_n+2-x_n+1 < c_𝓁,ε^2/(c-ε) and x_1^2+ … +x_n^2 < c_𝓁,ε^2.
We write I_0,𝓁,ε [0,c_𝓁,ε], I_1,𝓁,ε [-c_𝓁,ε^2/(c-ε),0], I_2,ε [c-ε,c], k=(k_1, … , k_n+2)^T ∈ K, and compute
∫_𝒴 |χ_𝓁,ε^(1)∘ a_t| dμ_𝒴 = ∫_Kχ_𝓁,ε^(1)(a_tkΛ_0) dμ_K(k) = ∫_K∑_z ∈Λ_0χ_𝓁,ε^(1)(a_tkz) dμ_K(k)
= ∑_z ∈Λ_0 ∫_Kχ^(1)_𝓁,ε( ⟨ k_1,z ⟩, …, ⟨ k_n,z ⟩, ⟨ k_n+1,z ⟩cosh t - z_n+2sinh t, -⟨ k_n+1,z ⟩sinh t + z_n+2cosh t ) dμ_K(k)
=∑_z ∈Λ_0∫_K χ_I_0,𝓁,ε( ||(⟨ k_1,z⟩ ,…, ⟨ k_n,z ⟩)|| ) χ_I_1,𝓁,ε(e^t(⟨ k_n+1,z ⟩ -z_n+2) ) χ_I_2,ε(e^-t( ⟨ k_n+1,z ⟩ +z_n+2) ) dμ_K(k).
We observe that the intersection (e^-tI_1,𝓁,ε+z_n+2) ∩ (e^tI_2,ε-z_n+2) is non-empty only if (c-ε)e^t≤ 2z_n+2≤ ce^t+c_𝓁,ε^2/c-εe^-t, i.e. z_n+2=ce^t/2 +O_c(ε e^t+e^-t) where the implicit constant is independent from 𝓁≥ 1. Moreover, writing each z ∈Λ_0 as z = z_n+2k_zv_0 with some k_z ∈ K and v_0=(0,…,0,1,1)∈𝒞, and using invariance under k_z, we have
∫_𝒴 |χ_𝓁,ε^(1)∘ a_t| dμ_𝒴
≤∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t)∫_K
χ_I_0,𝓁,ε( z_n+2||⟨ k_1,v_0⟩ ,…, ⟨ k_n,v_0 ⟩ || )
χ_e^-tI_1,𝓁,ε(z_n+2(⟨ k_n+1,v_0 ⟩ -1) )·
·χ_e^tI_2,ε(z_n+2(⟨ k_n+1,v_0 ⟩ +1) )
dμ_K(k)
≤∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t)∫_K
χ_e^-t2/c-εI_0,𝓁,ε(||⟨ k_1,v_0⟩ ,…, ⟨ k_n,v_0 ⟩ || )
χ_e^-2t2/c-ε I_1,𝓁,ε(⟨ k_n+1,v_0⟩ - 1 ) ·
·χ_2/c-εI_2,ε(⟨ k_n+1,v_0 ⟩ +1 )
dμ_K(k)
≤∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t)μ_K ( { k ∈ K : |k_i,n+1| ≪_c e^-t, i=1,…,n ,
|k_n+1,n+1-1| ≪_c min(e^-2t,ε).
})
≤∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t)μ_K ( { k ∈ K : ||kv_0-v_0 ||≪_c e^-t})
≪_n ∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t)μ_S^n( { v ∈S^n : ||v-v_0 ||≪ e^-t})
≪_c,n∑_z∈Λ_0
z_n+2=ce^t/2 +O(ε e^t+e^-t) e^-nt.
We use further that there exist positive constants C and θ such that, for all n≥ 2, we have
|{z ∈𝒞∩ℤ^n+2 : 0≤ z_n+2<T }| = CT^n+O(T^n-θ),
hence
|{z ∈𝒞∩ℤ^n+2 : (c-ε)e^t ≤ 2z_n+2<ce^t+c_𝓁,ε^2/c-εe^-t}| ≤ε e^nt+O_c(e^(n-θ)t).
It follows
∫_𝒴 |χ_𝓁,ε^(1)∘ a_t| dμ_𝒴≪_c,nε + e^-θ t.
We proceed similarly for χ^(3)_𝓁,ε. For x in the corresponding set, we also have
c_𝓁^2/ce≤ x_n+2-x_n+1 < c_𝓁,ε^2 and c_𝓁^2<x_1^2+ … +x_n^2 < c_𝓁,ε^2.
We write I'_0,𝓁,ε [c_𝓁,c_𝓁,ε], I'_1,𝓁,ε [-c_𝓁,ε^2,-c_𝓁^2/ce], I'_2 [c,ce] and compute similarly
∫_𝒴 |χ_𝓁,ε^(3)∘ a_t| dμ_𝒴 = ∫_K∑_z ∈Λ_0χ_𝓁,ε^(3)(a_tkz) dμ_K(k)
=∑_z ∈Λ_0∫_K
χ_I'_0,𝓁,ε( ||⟨ k_1,z⟩ ,…, ⟨ k_n,z ⟩ || )
χ_I'_1,𝓁,ε(e^t(⟨ k_n+1,z ⟩ -z_n+2) )
χ_I'_2(e^-t( ⟨ k_n+1,z ⟩ +z_n+2) )
dμ_K(k).
We observe again that the intersection (e^-tI'_1,𝓁,ε+z_n+2) ∩ (e^tI'_2-z_n+2) is non-empty only if C_1e^t ≤ z_n+2≤ C_2e^t for some positive constants C_1 and C_2 depending only on c>0. Moreover, writing each z ∈Λ_0 as z = z_n+2k_zv_0 with some k_z ∈ K and v_0=(0,…,0,1,1)∈𝒞, and using invariance under k_z, we have
∫_𝒴 |χ_𝓁,ε^(3)∘ a_t| dμ_𝒴
≤∑_z∈Λ_0
z_n+2≍_c e^t∫_K
χ_I'_0,𝓁,ε( z_n+2||⟨ k_1,v_0⟩ ,…, ⟨ k_n,v_0 ⟩ || )
χ_e^-tI'_1,𝓁,ε(z_n+2(⟨ k_n+1,v_0 ⟩ -1) )·
·χ_e^tI'_2(z_n+2(⟨ k_n+1,v_0 ⟩ +1) )
dμ_K(k)
≤∑_z∈Λ_0
z_n+2≍_c e^t∫_K
χ_e^-t1/C_1I'_0,𝓁,ε(||⟨ k_1,v_0⟩ ,…, ⟨ k_n,v_0 ⟩ || )
χ_e^-2t1/C_1I'_1,𝓁,ε(⟨ k_n+1,v_0⟩ - 1 ) ·
·χ_1/C_1I'_2(⟨ k_n+1,v_0 ⟩ +1 )
dμ_K(k)
≤∑_z∈Λ_0
z_n+2≍_c e^tμ_K ( { k ∈ K : ||kv_0-v_0 ||≪_c ε
e^-t})
≪_n ∑_z∈Λ_0
z_n+2≍_c e^tμ_S^n( { v ∈S^n : ||v-v_0 ||≪_c ε e^-t})
≪_c,n∑_z∈Λ_0
z_n+2≍_c e^tε^n e^-nt.
We use again the estimate
|{z ∈𝒞∩ℤ^n+2 : z_n+2≍ e^t}|= O(e^nt),
hence
∫_𝒴 |χ_𝓁,ε^(3)∘ a_t| dμ_𝒴 ≪_c,n ε.
The bound for ||χ_𝓁,ε^(2)∘ a_t ||_L^1_𝒴 is obtained similarly to the one for χ_𝓁,ε^(1).
Altogether we obtain,
||f̂_𝓁,ε∘ a_t -χ̂_𝓁∘ a_t ||_L^1_𝒴≪_c,nε+ e^-θ t.
§.§ Averaging function
As explained in Section <ref>, analysing the counting function 𝖭_T,c reduces to analysing ergodic averages of the form ∑_t χ∘ a_t. We define for this purpose the following averaging function.
𝖥_N := 1/√(N)∑_t=0^N-1( χ∘ a_t -μ_𝒴(χ∘ a_t) ) .
To study the distribution of 𝖥_N we shall use in the following arguments the basic observation that if we approximate 𝖥_N by a sequence 𝖥̃_N in such a way that ||𝖥_N-𝖥̃_N ||_L^1_𝒴→ 0 and the limit distribution of 𝖥̃_N is continuous, then 𝖥_N and 𝖥̃_N have the same convergence in distribution.
Truncated averages
We first observe that 𝖥_N has the same convergence in distribution as the truncated averages
𝖥_N,M := 1/√(N-M)∑_t=M^N-1( χ∘ a_t -μ_𝒴(χ∘ a_t) ),
for some M=M(N)→∞ to be specified later.
Indeed, we have
‖𝖥_N -𝖥_N,M‖_L^1_𝒴 ≤1/√(N)∑_t=0^M-1‖χ∘ a_t -μ_𝒴(χ∘ a_t) ‖_L^1_𝒴
+ ( 1/√(N-M)-1/√(N)) ∑_t=M^N-1‖χ∘ a_t -μ_𝒴(χ∘ a_t) ‖_L^1_𝒴
≪M/√(N)sup_t≥ 0‖χ∘ a_t ‖_L^1_𝒴,
hence, by (<ref>) and provided that
M=o(N^1/2)
we have
‖𝖥_N -𝖥_N,M‖_L^1_𝒴→ 0, as N→∞.
Averages for the Siegel transform of a smooth approximation
Further, we observe that the averages 𝖥_N,M has the same convergence in distribution if the characteristic function χ is replaced by the smooth approximation f_ε introduced earlier. Indeed, if we consider the averages
𝖥_N,M^(ε) := 1/√(N-M)∑_t=M^N-1( f_ε∘ a_t -μ_𝒴(f_ε∘ a_t) ),
with the parameter ε = ε(N), ε(N) → 0 to be specified later, then Proposition <ref> implies
‖𝖥_N,M- 𝖥_N,M^(ε)‖_L^1_𝒴≤2/√(N-M)∑_t=M^N-1‖f_ε∘ a_t -χ∘ a_t ‖_L^1_𝒴≪ (N-M)^1/2(ε+e^-θ M) .
We will choose ε and M such that
(N-M)^1/2ε→ 0 and (N-M)^1/2e^-θ M→ 0 ,
which yields
‖𝖥_N,M- 𝖥_N,M^(ε)‖_L^1_𝒴→ 0 as N→∞ .
Averages for the truncated Siegel transform
Finally, we also have the same convergence in distribution for the averages of the truncated Siegel transform
𝖥_N,M^(ε, L) := 1/√(N-M)∑_t=M^N-1( f_ε^(L)∘ a_t -μ_𝒴(f_ε^(L)∘ a_t) ),
defined for parameters ε(N) → 0 and L(N) →∞ to be specified later.
We assume that
M≫log L
such that Proposition <ref> applies when t≥ M. Since the family of functions f_ε is uniformly bounded by a compactly supported function, the estimate (<ref>) gives
‖𝖥_N,M^(ε) -𝖥_N,M^(ε, L)‖_L^1_𝒴 ≤1/√(N-M)∑_t=M^N-1‖(f_ε∘ a_t -f_ε^(L)∘ a_t)-μ_𝒴(f_ε∘ a_t -f_ε^(L)∘ a_t) ‖_L^1_𝒴
≤2/√(N-M)∑_t=M^N-1‖f_ε∘ a_t -f_ε^(L)∘ a_t ‖_L^1_𝒴
≪_τ (N-M)^1/2L^-τ/2, for all τ<n.
We will choose L(N) →∞ such that
N-M =o(L^p) for some p<n,
to obtain
‖𝖥_N,M^(ε) -𝖥_N,M^(ε, L)‖_L^1_𝒴→ 0, as N→∞.
Hence if we prove the CLT for the sequence (𝖥_N,M^(ε, L)), then the CLT for (𝖥_N) would follow.
§ CUMULANTS OF THE COUNTING FUNCTION
§.§ The method of cumulants
We recall in this section the general approach of the method of cumulants (presented in <cit.> and <cit.>) to establish the convergence to a normal distribution using a characterisation by the cumulants.
Given a probability space (X,μ) and bounded measurable functions φ_1,⋯,φ_r on X, we define their joint cumulant as
Cum_μ^(r)(φ_1,⋯,φ_r)= ∑_𝒫(-1)^|𝒫|-1(|𝒫|-1)!∏_I∈𝒫∫_X(∏_i∈ Iφ_i )dμ ,
where the sum is over all partitions 𝒫 of the set {1,⋯,r}. For a bounded measurable function φ on X we write
Cum_μ^(r)(φ)=Cum_μ^(r)(φ,⋯,φ) .
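Unwinding the definition in the lowest orders may help fix ideas:
Cum_μ^(2)(φ_1,φ_2)= ∫_Xφ_1φ_2 dμ - (∫_Xφ_1 dμ)(∫_Xφ_2 dμ) ,
which is the covariance, and
Cum_μ^(3)(φ)= ∫_Xφ^3 dμ -3(∫_Xφ^2 dμ)(∫_Xφ dμ) +2(∫_Xφ dμ)^3 .
Since all cumulants of order r≥ 3 of a Gaussian random variable vanish, the criterion below can be read as an effective way of verifying this characterization for the averages introduced in the previous section.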
We will use the following classical CLT-criterion (see <cit.>).
Let (f_N)_N≥ 1 be a sequence of real-valued bounded measurable functions such that
∫_X f_N dμ=0 , σ^2lim_N→∞∫_X f_N^2 dμ < ∞
and
lim_N→∞Cum_μ^(r)(f_N)=0 , for all r≥ 3 .
Then for every ξ∈ℝ,
μ( { f_N<ξ}) →Norm_σ(ξ) as N→∞ .
The method of cumulants is equivalent to the more widely known “method of moments", but the cumulants offer the following convenient cancellation property.
For a partition 𝒬 of {1,⋯,r }, we define the conditional joint cumulant with respect to 𝒬 by
Cum_μ^(r)(φ_1,⋯,φ_r|𝒬)= ∑_𝒫(-1)^|𝒫|-1(|𝒫|-1)!∏_I∈𝒫∏_J∈𝒬∫_X(∏_i∈ I∩ Jφ_i )dμ .
<cit.>
For any partition 𝒬 with |𝒬|≥ 2,
Cum_μ^(r)(φ_1,⋯,φ_r|𝒬)=0 ,
for all φ_1,⋯,φ_r ∈ L^∞(X,μ).
§.§ Estimating the cumulants
It will be convenient to write
ψ^(ε,L)_t(y):=f_ε^(L)(a_ty)-μ_𝒴(f_ε^(L)∘ a_t),
so that the averaging function is
𝖥_N,M^(ε, L)=1/√(N-M)∑_t=M^N-1ψ^(ε,L)_t with ∫_𝒴𝖥_N,M^(ε, L) dμ_𝒴=0.
Our aim in this section is to estimate the following joint cumulants for r≥ 3,
Cum_μ_𝒴^(r)(𝖥_N,M^(ε, L))=1/(N-M)^r/2∑_t_1,…,t_r=M^N-1Cum_μ_𝒴^(r)(ψ^(ε,L)_t_1,…, ψ^(ε,L)_t_r).
We reproduce below the argument as developed in <cit.> and <cit.>, taking into account the dependence on the parameters L and ε coming from the truncated Siegel transform and the smooth approximation respectively. The main idea in estimating these joint cumulants is to decompose (<ref>) into sub-sums corresponding to “separated" or “clustered" tuples t_1,…,t_r and to control their sizes.
§.§.§ Separated and clustered times t_1,…,t_r
It will be convenient to consider {0,…,N-1}^r as a subset of ℝ_+^r+1 with the embedding (t_1,…,t_r)→ (0,t_1,…,t_r).
Following the approach developed in <cit.>, we define for
non-empty subsets I and J of {0,…, r} and t = (t_0,…,t_r) ∈ℝ_+^r+1,
ρ^I(t) := max{ |t_i-t_j| : i,j ∈ I } and ρ_I,J(t):= min{ |t_i-t_j| : i ∈ I, j ∈ J },
and if 𝒬 is a partition of {0,…,r}, we set
ρ^𝒬(t) := max{ρ^I(t) : I ∈𝒬} and ρ_𝒬(t) := min{ρ_I,J(t) : I ≠ J, I, J ∈𝒬}.
For 0 ≤α < β, we define
Δ_𝒬(α,β) := {t∈ℝ_+^r+1 : ρ^𝒬(t) ≤α, ρ_𝒬(t) > β}
and
Δ(α):= {t∈ℝ_+^r+1 : ρ(t_i,t_j) ≤α for all i,j}.
The following decomposition of ℝ_+^r+1 was established in <cit.>: given
0=α_0<β_1<α_1=(3+r)β_1<β_2<⋯ <β_r<α_r=(3+r)β_r<β_r+1,
we have
ℝ_+^r+1 = Δ(β_r+1) ∪( ⋃_j=0^r⋃_|𝒬| ≥ 2Δ_𝒬(α_j,β_j+1) ),
where the union is taken over the partitions 𝒬 of {0,…,r} with |𝒬|≥ 2. Upon taking restrictions, we also have
{M,…,N-1}^r =Ω(β_r+1;M,N) ∪( ⋃_j=0^r⋃_|𝒬| ≥ 2Ω_𝒬(α_j,β_j+1;M,N) ),
for all N> M ≥ 0, where
Ω(β_r+1;M,N) :={M,…,N-1}^r ∩Δ(β_r+1),
Ω_𝒬(α_j,β_j+1;M,N) :={M,…,N-1}^r ∩Δ_𝒬(α_j,β_j+1).
In order to estimate the cumulant (<ref>), we shall separately estimate the sums over Ω(β_r+1;M,N) and Ω_𝒬(α_j,β_j+1;M,N); the exact choices of the sequences (α_j) and (β_j) will be fixed later.
We may first choose
M>β_r+1
so that Ω(β_r+1;M,N)=∅ and does not contribute to the sum.
§.§.§ Case 1: Summing over (t_1,…,t_r)∈Ω_𝒬(α_j,β_j+1;M,N) with 𝒬={{0},{1,…,r}}.
We shall first show that, in this case, we have
Cum_μ_𝒴^(r)(ψ^(ε,L)_t_1,…,ψ^(ε,L)_t_r)≈Cum_μ_𝒳^(r)(ϕ^(ε,L)∘ a_t_1,…,ϕ^(ε,L)∘ a_t_r)
where ϕ^(ε,L):= f_ε^(L)-μ_𝒳(f_ε^(L)).
This reduces to estimating the integrals
∫_(∏_i∈ Iψ^(ε,L)_t_i) dμ_
= ∑_J⊂ I (-1)^|I\ J|(∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_)
∏_i∈ I\ J(∫_ (f_ε^(L)∘ a_t_i) dμ_).
If (t_1,…,t_r)∈Ω_𝒬(α_j,β_j+1;M,N), and thus
|t_i_1-t_i_2|≤α_j and t_i_1≥β_j+1 for all 1≤ i_1,i_2≤ r,
it follows from Corollary <ref> with r=1 that there exists δ>0 such that
∫_𝒴 (f_ε^(L)∘ a_t_i) dμ_𝒴
=μ_𝒳(f_ε^(L))+O(e^-δβ_j+1 ||f_ε^(L)||_C^l).
For a fixed J ⊂ I, we define
Φ^(ε,L):=∏_i∈ Jf_ε^(L)∘ a_t_i-t_1,
and note that for some ξ=ξ(n,l)>0, we have
Φ^(ε,L)_C^l≪∏_i∈ Jf_ε^(L)∘ a_t_i-t_1_C^l≪ e^|J|ξ α_j f_ε^(L)_C^l^|J|.
If we again apply Corollary <ref> to the function Φ^(ε,L), we obtain
∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_ =∫_ (Φ^(ε,L)∘ a_t_1) dμ_
=∫_Φ^(ε,L) dμ_+O(e^-δβ_j+1 Φ^(ε,L)_C^l)
=∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_+O(e^-δβ_j+1 e^rξ α_j f_ε^(L)_C^l^|J|),
where we used that μ_ is invariant under the transformation a_t.
Let us now choose the exponents α_j and β_j+1 so that
δβ_j+1-rξα_j>0.
Combining (<ref>), (<ref>) and (<ref>), we deduce that
∫_(∏_i∈ Iψ^(ε,L)_t_i) dμ_
= ∑_J⊂ I (-1)^|I\ J|(∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_)
μ_(f_ε^(L))^|I\ J|
+O(e^-δβ_j+1 e^rξ α_j f_ε^(L)_C^l^|I|)
= ∫_∏_i∈ I(f_ε^(L)∘ a_t_i-μ_(f_ε^(L))) dμ_
+O(e^-(δβ_j+1-rξα_j ) f_ε^(L)_C^l^|I|),
and thus, for any partition 𝒫,
∏_I∈𝒫∫_𝒴(∏_i∈ Iψ^(ε,L)_t_i) dμ_𝒴
=
∏_I∈𝒫∫_𝒳(∏_i∈ Iϕ^(ε,L)∘ a_t_i) dμ_𝒳
+O(e^-(δβ_j+1-rξα_j ) ||f_ε^(L)||_C^l^r),
and consequently,
Cum_μ_𝒴^(r)(ψ^(ε,L)_t_1,…,ψ^(ε,L)_t_r)= Cum_μ_𝒳^(r)(ϕ^(ε,L)∘ a_t_1,…,ϕ^(ε,L)∘ a_t_r)
+O(e^-(δβ_j+1-rξ α_j) ||f_ε^(L)||_C^l^r)
whenever (t_1,…,t_r)∈Ω_𝒬(α_j,β_j+1;M,N) with 𝒬={{0},{1,…,r}},
from which (<ref>) follows.
We now claim that
|Cum_μ_𝒳^(r)(ϕ^(ε,L)∘ a_t_1,…,ϕ^(ε,L)∘ a_t_r)| ≪ ||f_ε^(L)||_C^0^(r-(n-1))^+ ||f_ε^(L)||_L^n-1(𝒳)^min(r,n-1),
where we use the notation x^+=max(x,0). The implied constants in (<ref>) and below depend only on supp(f_ε), so that they are uniform in ε. By the definition of the cumulant, to prove (<ref>), it suffices to show
that for every z≥ 1 and indices i_1,…,i_z,
∫_|(ϕ^(ε, L)∘ a_t_i_1)⋯(ϕ^(ε,L)∘ a_t_i_z)| dμ_≪f_ε^(L)_C^0^(z-(n-1))^+f_ε^(L)_L^n-1()^min(z,n-1).
Using
the generalized Hölder inequality, we deduce that when z≤ n-1,
∫_|(ϕ^(ε,L)∘ a_t_i_1)⋯(ϕ^(ε,L)∘ a_t_i_z)| dμ_ ≤ϕ^(ε,L)∘ a_t_i_1_L^n-1()⋯ϕ^(ε,L)∘ a_t_i_z_L^n-1()
≪f_ε^(L)_L^n-1()^z.
Also when z>n-1,
∫_|(ϕ^(ε,L)∘ a_t_i_1)⋯(ϕ^(ε,L)∘ a_t_i_z)| dμ_
≤ ϕ^(ε,L)_C^0^z-(n-1)∫_|(ϕ^(ε,L)∘ a_t_i_1)⋯(ϕ^(ε,L)∘ a_t_i_n-1)| dμ_
≪ f_ε^(L)_C^0^z-(n-1)f_ε^(L)_L^n-1()^n-1.
This implies (<ref>) and (<ref>).
Finally we recall that if (t_1,…,t_r)∈Ω_𝒬(α_j,β_j+1;M,N) with 𝒬={{0},{1,…,r}}, then we have |t_i_1-t_i_2|≤α_j
for all i_1≠ i_2, and thus
|Ω_(α_j,β_j+1;M,N)|≪ (N-M)α_j^r-1.
Combining (<ref>), (<ref>) and (<ref>) in (<ref>), and using Proposition <ref> with (<ref>), we conclude that
1/(N-M)^r/2∑_t∈Ω_(α_j,β_j+1;M,N)Cum_μ_𝒴^(r)(ψ^(ε,L)_t_1,…, ψ^(ε,L)_t_r)
≪ (N-M)^r/2 e^-(δβ_j+1- rα_jξ) f_ε^(L)_C^l^r +(N-M)^1-r/2α_j^r-1f_ε^(L)_C^0^(r-(n-1))^+f_ε^(L)_L^n-1()^min(r,n-1)
≪ (N-M)^r/2 e^-(δβ_j+1- rα_jξ) L^r ε^-rl
+ (N-M)^1-r/2α_j^r-1
L^(r-(n-1))^+.
§.§.§ Case 2: Summing over (t_1,…,t_r)∈Ω_𝒬(α_j,β_j+1;M,N) with |𝒬|≥ 2 and
𝒬≠{{0},{1,…,r}}.
In this case, the partition 𝒬 defines a non-trivial partition 𝒬'={I_0,…,I_ℓ} of {1,…,r} such that
for all (t_1,…, t_r)∈Ω_𝒬(α_j,β_j+1;M,N), we have
|t_i_1-t_i_2|≤α_j if i_1∼_𝒬' i_2
|t_i_1-t_i_2|> β_j+1 if i_1≁_𝒬' i_2,
and
t_i≤α_j for all i∈ I_0, t_i>β_j+1 for all i∉ I_0.
In particular,
D(t_i_1,…, t_i_ℓ)≥β_j+1,
Let I be an arbitrary subset of {1,…,r}; we shall show that
∫_( ∏_i∈ Iψ_t_i^(ε,L)) dμ_≈∏_h=0^ℓ(∫_(∏_i∈ I∩ I_hψ_t_i^(ε,L)) dμ_),
where we henceforth shall use the convention that the product is equal to one when I∩ I_h=∅.
Let us estimate the right hand side of (<ref>). We begin by setting
Φ_0^(ε,L):=∏_i∈ I∩ I_0ψ_t_i^(ε,L).
It is easy to see that there exists ξ=ξ(n,l)>0 such that
Φ_0^(ε,L)_C^l≪∏_i∈ I∩ I_0f_ε^(L)∘ a_t_i-μ_(f_ε^(L)∘ a_t_i)_C^l≪ e^|I∩ I_0|ξ α_j f_ε^(L)_C^l^|I∩ I_0|.
To prove (<ref>),
we expand ψ_t_i^(ε,L)=f_ε^(L)∘ a_t_i-μ_(f_ε^(L)∘ a_t_i)
for i∈ I\ I_0 and get
∫_(∏_i∈ Iψ^(ε,L)_t_i) dμ_ = ∑_J⊂ I\ I_0 (-1)^|I\ (J∪ I_0)|·
· (∫_Φ_0^(ε,L)(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_)
∏_i∈ I\ (J∪ I_0)(∫_ (f_ε^(L)∘ a_t_i) dμ_).
We recall that when i∉ I_0, we have t_i≥β_j+1, and thus it follows from Corollary <ref> with r=1 that
∫_ (f_ε^(L)∘ a_t_i) dμ_
=μ_(f_ε^(L))+O(e^-δβ_j+1 f_ε^(L)_C^l), with i∉ I_0.
To estimate the other integrals in (<ref>), we also apply Corollary <ref>.
Let us first fix a subset J ⊂ I ∖ I_0 and for each 1 ≤ h ≤ℓ, we pick i_h∈ I_h,
and set
Φ_h^(ε,L):=∏_i∈ J∩ I_hf_ε^(L)∘ a_t_i-t_i_h.
Then
∫_Φ_0^(ε,L)(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_=
∫_Φ_0^(ε,L)( ∏_h=1^ℓΦ_h^(ε,L)∘ a_t_i_h) dμ_.
We note that for i∈ I_h, we have |t_i-t_i_h|≤α_j, and
thus there exists ξ=ξ(n,l)>0 such that
Φ_h^(ε,L)_C^l≪∏_i∈ J∩ I_hf_ε^(L)∘ a_t_i-t_i_h_C^l≪ e^|J∩ I_h|ξ α_j f_ε^(L)_C^l^|J∩ I_h|.
Using (<ref>), Corollary <ref> implies that
∫_Φ_0^(ε,L)( ∏_h=1^ℓΦ_h^(ε,L)∘ a_t_i_h) dμ_
= (∫_Φ_0^(ε,L) dμ_)∏_h=1^ℓ(∫_Φ_h^(ε,L) dμ_)
+O( e^-δβ_j+1 ∏_h=0^ℓΦ_h^(ε,L)_C^l).
Using (<ref>) and (<ref>) and invariance of the measure μ_, we deduce that
∫_Φ_0^(ε,L)( ∏_h=1^ℓΦ_h^(ε,L)∘ a_t_i_h) dμ_
= (∫_Φ_0^(ε,L) dμ_)∏_h=1^ℓ(∫_𝒳(∏_i∈ J∩ I_hf_ε^(L)∘ a_t_i) dμ_)
+O( e^-(δβ_j+1-rξα_j) f_ε^(L)_C^l^|(I∩ I_0)∪ J|).
Hence, we conclude that
∫_Φ_0^(ε,L)(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_
= (∫_𝒴Φ_0^(ε,L) dμ_)∏_h=1^ℓ(∫_𝒳(∏_i∈ J∩ I_hf_ε^(L)∘ a_t_i) dμ_)
+O( e^-(δβ_j+1-rξα_j) f_ε^(L)_C^l^|(I∩ I_0)∪ J|).
We shall choose the parameters α_j and β_j+1 so that
δβ_j+1-rξα_j>0.
Substituting (<ref>) and (<ref>) in (<ref>), we deduce that
∫_(∏_i∈ Iψ_t_i^(ε,L)) dμ_
= ∑_J⊂ I\ I_0 (-1)^|I\ (J∪ I_0)|(∫_𝒴Φ_0^(ε,L) dμ_)∏_h=1^ℓ(∫_𝒳(∏_i∈ J∩ I_hf_ε^(L)∘ a_t_i) dμ_) μ_(f_ε^(L))^|I\ (J∪ I_0)|
+O ( e^-(δβ_j+1-rξα_j) f_ε^(L)_C^l^|I|).
Next, we estimate the right hand side of (<ref>).
Let us fix 1 ≤ h ≤ℓ and for a subset J ⊂ I ∩ I_h, we define
Φ_J^(ε,L):=∏_i∈ Jf_ε^(L)∘ a_t_i-t_i_h.
As in (<ref>), for some ξ>0,
Φ_J^(ε,L)_C^k≪∏_i∈ Jf_ε^(L)∘ a_t_i-t_i_h_C^l≪ e^|J|ξ α_j f_ε^(L)_C^l^|J|.
Applying Corollary <ref> to the function Φ_J^(ε,L) and using that t_i_h>β_j+1, we get
∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_ =∫_ (Φ_J^(ε,L)∘ a_t_i_h) dμ_
=∫_Φ_J^(ε,L) dμ_+O(e^-δβ_j+1 Φ_J^(ε,L)_C^l)
=∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_+O(e^-δβ_j+1 e^rξ α_j f_ε^(L)_C^l^|J|),
where we have used a-invariance of μ_.
Combining (<ref>) and (<ref>), we deduce that
∫_(∏_i∈ I∩ I_hψ_t_i^(ε,L)) dμ_
= ∑_J⊂ I∩ I_h (-1)^|(I∩ I_h)\ J|(∫_(∏_i∈ Jf_ε^(L)∘ a_t_i) dμ_)
μ_(f_ε^(L))^|(I∩ I_h)\ J|
+O(e^-δβ_j+1 e^rξ α_j f_ε^(L)_C^l^|I∩ I_h|)
= ∫_∏_i∈ I∩ I_h(f_ε^(L)∘ a_t_i-μ_(f_ε^(L))) dμ_
+O(e^-(δβ_j+1-rξα_j ) f_ε^(L)_C^l^|I∩ I_h|),
which implies
∏_h=0^ℓ(∫_(∏_i∈ I∩ I_hψ_t_i^(ε,L)) dμ_)
= (∫_𝒴Φ_0^(ε,L) dμ_) ∏_h=1^ℓ(∫_∏_i∈ I∩ I_h(f_ε^(L)∘ a_t_i-μ_(f_ε^(L))) dμ_)
+O(e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^r).
Furthermore, multiplying out the products over I∩ I_h, we get
∏_h=0^ℓ(∫_(∏_i∈ I∩ I_hψ_t_i^(ε,L)) dμ_)
= (∫_𝒴Φ_0^(ε,L) dμ_)
∑_J⊂ I\ I_0 (-1)^|I\ (I_0∪ J)|∏_h=1^ℓ(∫_∏_i∈ I_h∩ Jf_ε^(L)∘ a_t_i dμ_)
μ_(f_ε^(L))^|I\ (I_0∪ J)|
+O(e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^|I|).
Comparing (<ref>) and (<ref>), we finally conclude that
∫_(∏_i∈ Iψ_t_i^(ε,L)) dμ_ = ∏_h=0^ℓ(∫_(∏_i∈ I∩ I_hψ_t_i^(ε,L)) dμ_)
+O(e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^|I|)
when (t_1,…,t_r)∈Ω_(α_j,β_j+1;M,N).
This establishes (<ref>) with an explicit error term.
This estimate implies that for the partition 𝒬'={I_0,…, I_ℓ},
Cum_μ_^(r)(ψ_t_1^(ε,L),…,ψ_t_r^(ε,L))=
Cum_μ_^(r)(ψ^(ε,L)_t_1,…,ψ^(ε,L)_t_r|𝒬')+
O(e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^r).
By Proposition <ref>,
Cum_μ_^(r)(ψ^(ε,L)_t_1,…,ψ^(ε,L)_t_r|𝒬')=0,
so it follows that for all (t_1,…, t_r)∈Ω_Q(α_j,β_j+1;M,N),
|Cum_μ_^(r)(ψ^(ε,L)_t_1,…,ψ_t_r)|
≪ e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^r.
It follows
1/(N-M)^r/2∑_(t_1,…, t_r)∈Ω_(α_j,β_j+1;M,N) |Cum_μ_^(r)(ψ^(ε, L)_t_1,…,ψ^(ε, L)_t_r)|
≪ (N-M)^r/2e^-(δβ_j+1 - rξα_j) f_ε^(L)_C^l^r
≪ (N-M)^r/2e^-(δβ_j+1 - rξα_j) L^r ε^-rl,
where we used Lemma <ref> and (<ref>).
§.§.§ Final estimates on the cumulants
Finally, we combine the established bounds to get the following estimate
|Cum_μ_^(r)(𝖥_N,M^(ε, L))|≪ (N-M)^1-r/2(max_j α_j^r-1) L^(r-(n-1))^+
+
(N-M)^r/2(max_j e^-(δβ_j+1 - rξα_j)) L^r ε^-rl.
This estimate holds provided that (<ref>) and (<ref>) hold, namely when
α_j=(3+r)β_j<β_j+1 and δβ_j+1-rξα_j>0 for j=1,…,r.
Given any γ>0, we define the parameters β_j inductively by the formula
β_1=γ and β_j+1=max(γ+(3+r)β_j, γ+δ^-1r(3+r)ξβ_j).
It easily follows by induction that β_r+1≪_r γ, and choosing
M≫_r γ
we deduce from (<ref>) that
|Cum_μ_^(r)(𝖥_N,M^(ε, L))|
≪ (N-M)^r/2 e^-δγ L^rε^-rl
+(N-M)^1-r/2γ^r-1 L^(r-(n-1))^+.
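The growth of the β_j, and the fact that the constraints on (α_j,β_j+1) remain satisfied, can be illustrated numerically; in the sketch below the values of δ and ξ are placeholders, since these constants are not specified explicitly here, and the computation plays no role in the argument.

```python
# Illustration of beta_1 = gamma, beta_{j+1} = max(gamma + (3+r)beta_j,
# gamma + delta^{-1} r (3+r) xi beta_j): the ratio beta_{r+1}/gamma stays bounded
# (by a constant depending only on r), alpha_j = (3+r)beta_j < beta_{j+1}, and
# delta*beta_{j+1} - r*xi*alpha_j > 0.  delta and xi are placeholder values.
delta, xi, r = 0.1, 2.0, 4

def beta_sequence(gamma):
    betas = [gamma]                                   # beta_1 = gamma
    for _ in range(r):
        b = betas[-1]
        betas.append(max(gamma + (3 + r) * b, gamma + (r * (3 + r) * xi / delta) * b))
    return betas                                      # [beta_1, ..., beta_{r+1}]

for gamma in [1.0, 10.0, 100.0]:
    betas = beta_sequence(gamma)
    alphas = [(3 + r) * b for b in betas[:-1]]
    assert all(a < b_next for a, b_next in zip(alphas, betas[1:]))
    assert all(delta * b_next - r * xi * a > 0 for a, b_next in zip(alphas, betas[1:]))
    print(gamma, betas[-1] / gamma)                   # the ratio is independent of gamma
```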
We observe that since n≥ 3,
(r-(n-1))^+/n<r/2-1
for all r≥ 3.
Hence, we can choose q>1/n such that
q(r-(n-1))^+<r/2-1
for all r≥ 3.
Then we select
L=(N-M)^q,
so that, in particular, the condition (<ref>) is satisfied.
We recall that δ=δ(r) and l=l(r) and write (<ref>) as
|Cum_μ_^(r)(𝖥_N,M^(ε, L))|
≪ (N-M)^r/2+rq e^-δγ ε^-rl
+(N-M)^q(r-(n-1))^+-(r/2-1)γ^r-1 .
Choosing γ of the form
γ=c_r·log (N-M)
with sufficiently large c_r>0, and assuming
(N-M)^r/2L^rε^-rl=o(e^δγ)
we conclude that
Cum_μ_^(r)(𝖥_N,M^(ε, L))→ 0 as N→∞
for all r≥ 3.
The choice of the parameters L, ε, M, K and γ satisfying all the conditions mentioned earlier is discussed at the beginning of section <ref>.
§ ESTIMATING THE VARIANCE
In this section we shall show the convergence of the variance of the averaging function 𝖥_N,M^(ε, L), given by
𝖥_N,M^(ε, L)_L^2()^2 = 1/N-M∑_t_1=M^N-1∑_t_2=M^N-1∫_ψ_t_1^(ε, L)ψ_t_2^(ε, L) dμ_,
with
ψ_t^(ε, L) := f_ε^(L)∘ a_t -μ_𝒴(f_ε^(L)∘ a_t).
We first observe that this expression is symmetric with respect to t_1 and t_2.
Writing t_1=s+t and t_2=s with 0≤ t≤ N-M-1 and M≤ s ≤ N-t-1,
we obtain that
𝖥_N,M^(ε, L)_L^2()^2
= Θ_N,M^(ε,L)(0)
+2∑_t=1^N-M-1Θ_N,M^(ε,L)(t),
where
Θ_N,M^(ε,L)(t):=1/N-M∑_s=M^N-1-t∫_ψ_s+t^(ε, L)ψ_s^(ε, L) dμ_,
with
∫_ψ_s+t^(ε, L)ψ_s^(ε, L) dμ_
=
∫_ (f_ε^(L)∘ a_s+t)(f_ε^(L)∘ a_s) dμ_ -
μ_(f_ε^(L)∘ a_s+t)μ_(f_ε^(L)∘ a_s).
We shall first show that with a suitable choice of parameters ε and L we have:
𝖥_N,M^(ε, L)_L^2()^2
=Θ_∞^(ε, L) (0) + 2∑_t=1^N-1Θ_∞^(ε, L) (t)+o(1),
where
Θ_∞^(ε, L)(t):=∫_ (f_ε^(L)∘ a_t)f_ε^(L) dμ_ -μ_(f_ε^(L))^2.
To estimate Θ_N,M(t), we introduce an additional parameter K=K(N)→∞ (to be specified later) with K≤ M and consider separately the cases when t< K and when t≥ K.
First, we consider the case when t≥ K.
By Corollary <ref>, we have
∫_ (f_ε^(L)∘ a_s+t)(f_ε^(L)∘ a_s) dμ_ =μ_(f_ε^(L))^2
+O(e^-δmin(s,t)f_ε^(L)_C^l^2 ).
and also
∫_ (f_ε^(L)∘ a_s) dμ_ =μ_ (f_ε^(L)) +O( e^-δ sf_ε^(L)_C^l).
Hence, combining (<ref>) and (<ref>), we deduce that
∫_ψ_s+t^(ε, L)ψ_s^(ε, L) dμ_
=O (e^-δmin(s,t)f_ε^(L)_C^l^2).
Since
∑_t=K^N-M-1(∑_s=M^N-1-te^-δmin(s,t))
≤∑_t=K^N-1∑_s=M^N-1 (e^-δ t+e^-δ s)≪ N e^-δ K,
we conclude that
∑_t=K^N-M-1Θ_N,M(t) ≪ e^-δ K f_ε^(L)_C^l^2
≪ e^-δ K L^2 ε^-2l,
where we used Lemma <ref> and (<ref>).
The implied constants here and below in the proof depend only on supp(f_ε), which is uniformly bounded, hence the dependence is only on the constant c from the Diophantine approximation (<ref>).
Let us now consider the case t<K. We observe that Corollary <ref> (for r = 1)
applied to the function ϕ_t:= (f_ε^(L)∘ a_t)f_ε^(L) yields,
∫_ (f_ε^(L)∘ a_s+t)(f_ε^(L)∘ a_s) dμ_ =∫_ (ϕ_t∘ a_s) dμ_
=∫_ϕ_t dμ_+O(e^-δ s ϕ_t_C^l).
Furthermore, for some ξ=ξ(n,l)>0, we have
ϕ_t_C^l≪f_ε^(L)∘ a_t_C^l f_ε^(L)_C^l≪ e^ξ t f_ε^(L)_C^l^2.
Therefore, we deduce that
∫_ (f_ε^(L)∘ a_s+t)(f_ε^(L)∘ a_s) dμ_
=∫_ (f_ε^(L)∘ a_t)f_ε^(L) dμ_ +O (e^-δ s e^ξ t f_ε^(L)_C^l^2 ).
Combining this estimate with (<ref>), we obtain that
∫_ψ_s+t^(ε,L)ψ_s^(ε,L) dμ_
=Θ_∞^(ε, L)(t)
+ O (e^-δ s e^ξ t f_ε^(L)_C^l^2 ).
Using further the estimates from Lemma <ref> and (<ref>), it follows, for the case t<K,
Θ_N,M^(ε,L)(t)= N-M-t/N-MΘ_∞^(ε,L)(t)+O ((N-M)^-1 e^-δ M e^ξ t f_ε^(L)_C^l^2 )
= Θ_∞^(ε,L)(t)
+O ((N-M)^-1tf_ε^(L)_L^2()^2+ (N-M)^-1 e^-δ M e^ξ t f_ε^(L)_C^l^2 )
= Θ_∞^(ε,L)(t)
+O ((N-M)^-1t+ (N-M)^-1 e^-δ M e^ξ t ε^-2lL^2 ).
It follows
Θ_N,M^(ε,L)(0)+2∑_t=1^K-1Θ_N,M^(ε,L)(t) = Θ_∞^(ε,L)(0)+2∑_t=1^K-1Θ_∞^(ε,L) (t)
+O ((N-M)^-1K^2+ (N-M)^-1 e^-δ M e^ξ Kε^-2l L^2).
Combining (<ref>) and (<ref>), it follows from (<ref>) that
𝖥_N,M^(ε,L)_L^2()^2
= Θ_∞^(ε,L)(0)+2∑_t=1^K-1Θ_∞^(ε,L) (t)
+ O ((N-M)^-1K^2 + ((N-M)^-1 e^-δ M e^ξ K+e^-δ K) L^2 ε^-2l).
We will choose later in (<ref>) to (<ref>) the parameters K(N), M(N), ε(N) and L(N) so that
e^-δ K L^2ε^-2l→ 0,
(N-M)^-1 e^-δ M e^ξ K L^2ε^-2l→ 0,
(N-M)^-1K^2 → 0,
as N→∞, which gives
𝖥_N,M^(ε,L)_L^2()^2 =Θ_∞^(ε,L) (0) + 2∑_t=1^K-1Θ_∞^(ε,L) (t)+o(1).
We shall show next that with a suitable choice of parameters we have
𝖥_N,M^(ε,L)_L^2()^2 =Θ_∞^(ε) (0) + 2∑_t=1^K-1Θ_∞^(ε) (t)+o(1),
where
Θ_∞^(ε)(t):=∫_ (f_ε∘ a_t)f_ε dμ_ -μ_(f_ε)^2.
Using again the estimates from Lemma <ref>
||f_ε - f_ε^(L) ||_L^1_𝒳≪_supp(f_ε),τ L^-(τ-1) and ‖f_ε-f_ε^(L)‖_L^2_𝒳≪_supp(f_ε),τ L^-(τ-2)/2
we have (since the supports of the functions f_ε are uniformly bounded)
μ_(f_ε^(L)) =μ_(f_ε)+O_τ( L^-(τ-1)),
∫_ (f_ε^(L)∘ a_t)f_ε^(L) dμ_ = ∫_ (f_ε∘ a_t)f_ε dμ_ + O_τ( L^-(τ-2)/2),
which yields
Θ_∞^(ε,L)(t)= Θ_∞^(ε)(t) + O_τ( L^-(τ-2)/2),
and (<ref>) then gives
𝖥_N,M^(ε,L)_L^2()^2
= Θ_∞^(ε)(0)+2∑_t=1^K-1Θ_∞^(ε) (t)
+ O ((N-M)^-1K^2 + ((N-M)^-1 e^-δ M e^ξ K+e^-δ K) L^2 ε^-2l +KL^-(τ-2)/2).
We will choose the parameters K(N) and L(N) such that
KL^-(τ-2)/2→ 0 as N→∞, for some τ<n,
which gives (<ref>).
In order to analyse further the correlations ∫_ (f_ε∘ a_t)f_ε dμ_ and show the convergence of the series ∑_t=1^K-1Θ_∞^(ε) (t), we shall use results from a recent work by Kelmer and Yu in <cit.>, where incomplete Eisenstein series are used to analyse the second moment of the light-cone Siegel transform. We recall briefly in the following section some preliminaries to this approach.
§.§ Moment formulas of incomplete Eisenstein series
Before recalling the approach and results of Kelmer and Yu, we reproduce below some preliminaries from <cit.> about Eisenstein's series and adapt the notations to our coordinate system from Section <ref>.
We will denote in this section the elements in the subgroup A by
a_y:= [ I_n ; (y+y^-1)/2 -(y-y^-1)/2; -(y-y^-1)/2 (y+y^-1)/2 ] , for y>0,
the ℝ-split torus with a_y acting on e_0=(0,…,1,1)∈ℝ^n+2 as e_0a_y=y^-1e_0 and
K:={k=[ k̃ ; 1 ]: k̃∈SO_n+1(ℝ)}
a maximal compact subgroup. Let L be the stabilizer of e_0 in G and let P be the parabolic subgroup fixing the line spanned by e_0. More precisely, P=UAM and L=UM with
M={m=([ m̃ ; 1 ; 1 ]): m̃∈SO_n(ℝ)}
the centralizer of A in K.
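The explicit form of a_y displayed above (with the fractions written out as (y+y^-1)/2 and (y-y^-1)/2) can be checked numerically. The following short numpy sketch is only an illustration; in particular, the signature convention diag(1,…,1,−1) used for the quadratic form in the check is an assumption made for this purpose.

```python
import numpy as np

n, y = 3, 2.7                                   # sample values
a, b = (y + 1 / y) / 2, (y - 1 / y) / 2
B = np.array([[a, -b], [-b, a]])                # lower-right 2x2 block of a_y
A = np.block([[np.eye(n), np.zeros((n, 2))],
              [np.zeros((2, n)), B]])
e0 = np.zeros(n + 2); e0[-2:] = 1.0             # e_0 = (0, ..., 0, 1, 1)

assert np.allclose(e0 @ A, e0 / y)              # e_0 a_y = y^{-1} e_0 (row-vector action)
J = np.diag([1.0] * (n + 1) + [-1.0])           # assumed form x_1^2 + ... + x_{n+1}^2 - x_{n+2}^2
assert np.allclose(A @ J @ A.T, J)              # a_y preserves the quadratic form
```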
Any g∈ G can be written as g=u_xa_yk with k∈ K and in these coordinates the Haar measure of G is given (up to scaling) by
dμ_G(g)=y^-(n+1) d x d y dμ_K(k),
where dx is the usual Lebesgue measure on ℝ^n and μ_K is the probability Haar measure of K.
The subgroup L is unimodular with its Haar measure given by
dμ_L(u_xm)=dxdμ_M(m),
where μ_M is the probability Haar measure of M≅SO_n(ℝ).
Since L is the stabilizer of e_0 and G acts transitively on 𝒞, we can identify 𝒞 with the homogeneous space L \ G, which gives a natural right G-invariant measure on 𝒞. Explicitly, further identifying L\ G with A× M\ K gives natural polar coordinates on 𝒞: Every x∈𝒞 can be written uniquely as x=e_0a_yk for some y>0 and k∈ M\ K. In these coordinates the measure
d μ_𝒞(e_0a_yk):=y^-(n+1) d ydμ_M\ K(k)
is such an invariant measure. Here μ_M\ K is the unique right K-invariant probability measure on the homogeneous space M\ K which is homeomorphic to the unit sphere S^n. The measure μ_𝒞 is unique up to scaling, related to the G-invariant measure dz introduced in Proposition <ref> by dz=ω_Q dμ_𝒞.
We have further the Langlands decomposition P=UAM (with the unipotent subgroup U given by the Iwasawa decomposition G=UAK) and L=UM. The cusps of Γ are the Γ-conjugacy classes of rational parabolic subgroups of G. Let m be the number of these cusps and P_1,…,P_m a set of representatives of these classes, each of which has a Langlands decomposition P_i=U_iA_iM_i, i=1,…,m. We denote by Γ_P_i:=Γ∩ P_i and by Γ_U_i:=Γ∩ U_i, where Γ_U_i is by definition a finite index subgroup of Γ_P_i (see <cit.>).
For each P_i we fix the scaling matrix τ_i=k_ia_y_i, where k_i∈ K is such that P_i=k_iPk_i^-1 and where y_i>0 is the unique number such that μ_L(τ_i^-1Γ_P_iτ_i\ L)=1.
We define the (spherical) Eisenstein series corresponding to the i-th cusp for Re(s)>n and g ∈ G by the convergent series
E_i(s,g) := ∑_γ∈Γ_P_i\Γ y(τ_i^-1γ g)^s,
where y(g) is given by the Iwasawa decomposition g=u_xa_y(g)k∈ UA K.
For each 1≤ j≤ m the constant term of E_i(s,g) with respect to the j-th cusp is defined by
c_ij(s,g):=1/vol(τ_j^-1Γ_U_jτ_j\ U)∫_τ_j^-1Γ_U_jτ_j\ UE_i(s, τ_j u_x g) dx,
which is known to be of the form
c_ij(s,g)=δ_ijy(g)^s+φ_ij(s)y(g)^n-s
for some holomorphic function φ_ij defined for Re(s)>n.
The series E_i(s, g) (and hence also φ_ij) has a meromorphic continuation to the whole s-plane, which on the half plane Re(s)≥n/2 is holomorphic except for a simple pole at s=n (called the trivial pole) and possibly finitely many simple poles on the interval (n/2, n) (called exceptional poles). We denote by C_Γ⊆ (n/2, n) the finite set of exceptional poles of all Eisenstein series of Γ.
The residue of E_i(s,g) at s=n is a constant which is the same for Eisenstein series at all cusps, given by the reciprocal of the measure of the homogeneous space Γ\ G, that is, for each 1≤ i≤ m and g∈ G,
ω_Γ:=Res_s=nE_i(s,g)=μ_G(Γ\ G)^-1,
For any bounded and compactly supported function f:𝒞→ℂ, the incomplete Eisenstein series attached to f at P_i is defined for any g ∈ G by
E_i(g,f) := ∑_γ∈Γ_P_i\Γ f(e_0τ_i^-1γ g).
Since E_i(·,f) is left Γ-invariant, it can be viewed as a function on the homogeneous space 𝒳. The light-cone Siegel transform of f can then be expressed in terms of incomplete Eisenstein series as follows.
There exist constants λ_1,…,λ_m >0 such that for any bounded and compactly supported function f:𝒞→ℂ,
f̂ = ∑_i=1^m E_i(·,f_λ_i),
where f_λ(x):=f(λ^-1x) for any λ>0.
By the classical spherical harmonic analysis, the function space L^2(S^n) decomposes into irreducible SO_n+1(ℝ)-representations as follows:
L^2(S^n)=⊕_d≥ 0L^2(S^n,d),
where L^2(S^n, d) is the space of degree d harmonic polynomials in n+1 variables restricted to S^n. This in turn induces the following decomposition of L^2(M\ K) into irreducible K-representations
L^2(M\ K)=⊕_d≥ 0 L^2(M\ K, d),
where L^2(M\ K, d) is the pre-image of L^2(S^n,d) under the isomorphism between L^2(M\ K) and L^2(S^n).
For each d≥ 0, we fix an orthonormal basis
{ψ_d,l: 0≤ l≤_ℂL^2(M\ K, d)-1}
for L^2(M\ K, d).
For any f: 𝒞→ℂ bounded and compactly supported, let
f_d,l(y):=∫_M\ Kf(e_0a_yk)ψ_d,l(k)dμ_M\ K(k).
so that f has a spherical expansion
f(e_0a_yk)=∑_d, l ≥ 0f_d,l(y)ψ_d,l(k)
in L^2 and also pointwise if f is smooth.
For any function f on ℝ^+, we denote by
f(s):=∫_0^∞f(y)y^-(s+1) d y, for s ∈ℂ
its Mellin transform, whenever this defining integral is absolutely convergent.
Using the spherical expansion we define the following bilinear form for any f,f': 𝒞→ℂ bounded and compactly supported and any s∈ (n2, n),
M_f,f'(s):=∑_d,l≥ 0P_d(s)f_d,l(s)f'_d,l(s),
with P_0(s):=1 and P_d(s):=∏_i=0^d-1(n-s+i)/(s+i) if d≥ 1.
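As a purely numerical illustration of the factor P_d(s) (not used in the proofs), the snippet below evaluates the product formula and shows that P_d(s)/(d+1)^n-2s stabilizes as d grows, consistent with the estimates for P_d(s) recorded in the lemmas below; n=3 and s=2.2 are sample values.

```python
n, s = 3, 2.2                                   # sample values with s in (n/2, n)

def P(d):
    val = 1.0
    for i in range(d):
        val *= (n - s + i) / (s + i)            # P_d(s) = prod_{i<d} (n-s+i)/(s+i)
    return val

for d in [10, 100, 1000, 10000]:
    print(d, P(d) / (d + 1) ** (n - 2 * s))     # the ratio converges to a positive constant
```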
The following lemmas give estimates related to the operator M_f,f' which will be useful later for the analysis of Θ_∞(t). We write for simplicity M_f:=M_f,f.
Let f be a bounded function on the light-cone with bounded support. For every s∈(n/2,n), we have
| M_f(s)| ≪_s,supp(f)‖ f ‖_2^2 .
By definition of the Mellin transform and using the spherical expansion f(e_0a_yk)=∑_d,lf_d,l(y)ψ_d,l(k), we have for any s∈ (n/2,n)
M_f(s) = ∑_d,lP_d(s)|f_d,l(s)|^2
= ∑_d,lP_d(s) |∫_0^+∞f_d,l(y)y^-(s+1) dy|^2
= ∑_d,lP_d(s) |∫_0^+∞(∫_M\ Kf(e_0a_yk)ψ_d,l(k) dk)y^-(s+1) dy|^2.
Using the decomposition = ℝ_+ × M\ K given by x=e_0a_yk with the spherical coordinates y∈ℝ_+ and k∈ M\ K, we can write f(e_0a_yk)=ϕ_y(k)ρ(y), where (ϕ_y)_y>0 is a family of bounded function on M\ K and ρ is the characteristic function of an interval away from y=0 (since by the parametrization of 𝒞 we have e_0a_y=y^-1e_0).
We also introduce the projection operator pr_d:L^2(M\ K)→ L^2(M\ K,d) on the space of degree d harmonic polynomials in n+1 variables restricted to M\ K and write f_d:=pr_d(f) for f∈ L^2(M\ K). Using that (ψ_d,l)_l≥ 0 is an orthonormal basis of L^2(M\ K,d) for every d≥ 0, it follows
M_f(s) = ∑_d,l≥ 0P_d(s) |∫_0^+∞(∫_M\ Kϕ_y(k)ρ(y)ψ_d,l(k) dk)y^-(s+1) dy|^2
=∑_d≥ 0P_d(s)∫_0^+∞∫_0^+∞⟨ϕ_y_1_d, ϕ_y_2_d⟩__M\ K ρ(y_1)ρ(y_2)y_1^-(s+1)y_2^-(s+1) dy_1dy_2.
Using that |P_d(s) |≪ 1, Cauchy-Schwarz inequality, the decomposition in spherical harmonics given by L^2(M\ K)= ⊕_d≥ 0 L^2(M\ K,d), that ρ^2=ρ and that the support of ρ is an interval away from y=0, we obtain
|M_f(s)| ≪∫_0^+∞∫_0^+∞∑_d≥ 0‖ϕ_y_1_d‖_2‖ϕ_y_2_d‖_2 ρ(y_1)ρ(y_2)y_1^-(s+1)y_2^-(s+1) dy_1dy_2
≤∫_0^+∞∫_0^+∞‖ϕ_y_1‖_2 ‖ϕ_y_2‖_2 ρ(y_1)ρ(y_2)y_1^-(s+1)y_2^-(s+1) dy_1dy_2
= (∫_0^+∞‖ϕ_y‖_2 ρ(y)y^-(s+1) dy)^2
≤(∫_0^+∞‖ϕ_y‖_2^2 ρ(y)^2y^-(n+1) dy)(∫_0^+∞ρ(y)^2y^n-2s-1 dy)
≪_s,supp(ρ)(∫_0^+∞∫_M\ K|ϕ_y(k)ρ(y)|^2y^-(n+1) dk dy)
= ‖ f ‖_2^2.
For any s∈ (n/2,n), we have P_d(s)≍_s (d+1)^n-2s.
It will be useful for our argument later to have an estimate of P_d(s) also for s∈ℂ with real part in (n/2,n).
For any s=r+it ∈ℂ, with r∈(n/2,n), we have |P_d(s)|≪_δ |t|^1/2(d+1)^n-2r+δ for any δ>0.
We have
|P_d(s) |^2
=∏_k=0^d-1|n-s+k|^2/|s+k|^2
=∏_k=0^d-1(n-r+k)^2/(r+k)^2·1+t^2/(n-r+k)^2/1+t^2/(r+k)^2
≪ (d+1)^2(n-2r)·∏_k=0^d-11+t^2/(n-r+k)^2/1+t^2/(r+k)^2 (by Lemma <ref>)
Further we have
log( ∏_k=0^d-11+t^2/(n-r+k)^2/1+t^2/(r+k)^2) = ∑_k=1^d-1( log(1+t^2/(n-r+k)^2)- log(1+t^2/(r+k)^2))
+log(1+t^2/(n-r)^2/1+t^2/r^2)
≪∑_k=1^d-1t^2/1+t^2/(r+k)^2( 1/(n-r+k)^2-1/(r+k)^2)+O(1)
≪∑_k=1^d-1t^2k^2/k^2+t^2·1/k^3+O(1)
≪∑_k≤α(t)1/k+∑_k= α(t)+1^d-1t^2/(k^2+t^2)k +O(1)
≪logα(t)+t^2/α(t)^2+t^2log (d+1)+O(1).
Choosing α(t)=⌈β |t|⌉ with β >0 large enough, we obtain
∏_k=0^d-1(1+t^2/(n-r+k)^2)/(1+t^2/(r+k)^2)≪β |t| · (d+1)^1/(β^2+1),
hence |P_d(s)|≪_δ |t|^1/2·(d+1)^n-2r+δ for any δ>0.
We have further (see details in <cit.>)
M_f_λ_i,f'_λ_j(s) =λ_i^sλ_j^sM_f,f'(s), for any s∈ (n/2,n),
μ_ (f_λ) = λ^n μ_ (f), for any λ>0,
and denote by
ω_Q := Res_s=nE_Q(s,g) = ω_Γ∑_i=1^mλ_i^n, with ω_Γ=μ_G(G/Γ)^-1
c_Q:= ω_Γ||Res_s=s_nE_Q(s,g)||_L^2_𝓍^2 = ω_Γ∑_i,j=1^mλ_i^s_nλ_j^s_nRes_s=s_nφ_ij(s),
where E_Q(s,g) is the light-cone Eisenstein series of the quadratic form Q defined by
E_Q(s,g):= ∑_i=1^mλ_i E_i(s,g),
which has at most one exceptional pole at s_n=⌊(n+2)/2⌋ (see <cit.>).
The correlations of incomplete Eisenstein series can then be estimated as follows.
For any 1≤ i,j≤ m, there exists a bounded linear operator _ij: L^2()→ L^2() with operator norm || _ij||_op≤ 1 such that for any f,f' ∈ C_c^∞(𝒞),
⟨ E_i(·,f),E_j(·,f') ⟩= ω_Γ^2 μ_(f)μ_(f') +ω_Γ⟨δ_ijf+_ij(f),f'⟩+ω_Γ∑_s_l ∈ C_ΓM_f,f'(s_l)Res_s=s_lφ_i,j(s) ,
where the two inner products are with respect to μ_ and μ_ respectively.
From Theorem <ref> Kelmer and Yu derived the following mean value theorem and effective estimate of the second moment of the light-cone Siegel transform.
Let f:𝒞→ℂ be a measurable, bounded and compactly supported function. Then we have
∫_𝒳f̂ d μ_𝒳 = ω_Q μ_(f).
Assume further that f is smooth; then
∫_𝒳|f̂|^2 d μ_𝒳 = |ω_Q μ_(f)|^2+c_Q M_f,f(s_n) +O( μ_( |f|^2)),
where s_n:= ⌊(n+2)/2⌋, the term M_f,f(s) is a quadratic form on f given by (<ref>) and c_Q is given by (<ref>).
We generalize the second moment formula in Theorem <ref> to measurable, bounded and compactly supported functions in the following proposition, using an argument similar to that in <cit.>.
Let f be a measurable, bounded and compactly supported function on 𝒞. Then we have
∫_𝒳|f̂|^2 d μ_𝒳 = |ω_Q μ_(f)|^2+c_Q M_f,f(s_n) +O( μ_( |f|^2)).
There exists a sequence (f_i)_i∈ℕ in C_c^∞(𝒞) converging to f in L^1(𝒞) and in L^2(𝒞). By the mean value identity (<ref>) it follows that (f̂_i)_i∈ℕ converges to f̂ in L^1(𝒳), hence also pointwise almost everywhere for some subsequence. To show that this convergence is also in L^2, we use Theorem <ref> and write for any i,j ∈ℕ
f̂_i - f̂_j _L_𝒳^2 ≪ f_i - f_j _L^1_𝒞^2 + f_i - f_j _L^2_𝒞^2 + M_f_i-f_j(s_n)
≪ f_i - f_j _L^1_𝒞^2 + f_i - f_j _L^2_𝒞^2 (by Lemma <ref>).
Since (f_i)_i∈ℕ is a Cauchy sequence in L^1(𝒞)∩ L^2(𝒞), it follows that (f̂_i)_i∈ℕ is a Cauchy sequence in L^2(𝒳) and therefore converges to f̂ in L^2(𝒳). Hence the second moment formula for f follows from the second moment formula for (f_i)_i∈ℕ.
We shall show next that
𝖥_N,M^(ε,L)_L^2()^2 =Θ_∞ (0) + 2∑_t=1^K-1Θ_∞ (t)+o(1),
where
Θ_∞(t):=∫_ (χ∘ a_t)χ dμ_ -μ_(χ)^2.
Using Proposition <ref>, Lemma <ref> and the estimates in (<ref>), we have for any t≥ 0
|Θ_∞(t) - Θ_∞^(ε)(t) | ≤∫_|(f_ε∘ a_t )f_ε- (χ∘ a_t )χ| dμ_ + | μ_( f_ε)^2- μ_( χ)^2|
≪f_ε-χ_L_^2(f_ε_L_^2 +χ__L_^2)+ f_ε-χ_L_^1
≪f_ε-χ_L_^2+ f_ε-χ_L_^1
≪μ_( | f_ε - χ|)^2+ |M_f_ε-χ(s_n)|+O( μ_(|f_ε-χ|^2)) + ε
≪ε,
and (<ref>) then gives
𝖥_N,M^(ε,L)_L^2()^2
= Θ_∞(0)+2∑_t=1^K-1Θ_∞(t)
+ O (N^-1(M+K)K + (N^-1 e^-δ M e^ξ K+e^-δ K) L^2 ε^-2l +KL^-(τ-2)/2 + ε K ),
which implies (<ref>), provided
ε K → 0, as N→∞.
It remains to show that the series ∑_t=1^K-1Θ_∞ (t) converges as K→∞.
We write
Θ_∞ (t) = ∫_χ_t·χ dμ_ - μ_( χ_t)μ_( χ).
with χ_t:=χ∘ a_t.
Using Lemma <ref>, we can express the correlations in (<ref>) in terms of incomplete Eisenstein series attached to χ_t and χ:
∫_χ_t·χ dμ_ = ∑_i,j=1^m⟨ E_i(·,χ_t,λ_i),E_j(·,χ_λ_j) ⟩,
where the inner product is with respect to μ_.
It follows from (<ref>) that
∫_χ_t·χ dμ_ = ∑_i,j=1^m⟨ E_i(·,χ_t,λ_i),E_j(·,χ_λ_j) ⟩_μ_
= ∑_i,j=1^m( ω_Γ^2 μ_(χ_t,λ_i)μ_(χ_λ_j) +ω_Γ⟨δ_ijχ_t,λ_i+_ij(χ_t,λ_i),χ_λ_j⟩_μ_+ω_Γ∑_s_l ∈ C_ΓM_χ_t,λ_i,χ_λ_j(s_l)Res_s=s_lφ_ij(s))
= ω_Q^2 μ_(χ_t)μ_(χ) +ω_Q ⟨χ_t, χ⟩_μ_ +∑_i,j=1^m⟨_ij(χ_t,λ_i),χ_λ_j⟩_μ_+c_Q M_χ_t,χ(s_n).
Since χ_t and χ have disjoint supports for all t≥ 1, it follows from the mean value identity in Theorem <ref> and from (<ref>) that (<ref>) reduces for all t≥ 1 to
Θ_∞(t) = ∑_i,j=1^m⟨_ij(χ_t,λ_i),χ_λ_j⟩_μ_+c_Q M_χ_t,χ(s_n).
We shall estimate next the terms M_χ_t,χ(s_n) and ⟨_ij(χ_t,λ_i),χ_λ_j⟩_μ_.
For every s∈ (n/2,n), there exists σ>0 (depending on n and s) such that for every t≥ 1 we have
|M_χ_t,χ(s)| ≪ e^-σ t.
By definition of the Mellin transform, we have for any s∈ (n/2,n)
M_χ_t,χ(s) = ∑_d,lP_d(s)χ__d,l(s)(χ_t)_d,l(s)
= ∑_d,lP_d(s) (∫_0^+∞χ__d,l(y)y^-(s+1) dy)(∫_0^+∞(χ_t)_d,l(y)y^-(s+1) dy)
= ∑_d,lP_d(s) (∫_0^+∞(∫_M\ Kχ(e_0a_yk)ψ_d,l(k) dk)y^-(s+1) dy)·
·(∫_0^+∞(∫_M\ Kχ_t(e_0a_yk)ψ_d,l(k) dk)y^-(s+1) dy).
Using the decomposition = ℝ_+ × M\ K given by x=e_0a_yk with the spherical coordinates y∈ℝ_+ and k∈ M\ K, we write
χ(e_0a_yk)=ϕ_y(k)ρ(y) and χ_t(e_0a_yk)=ϕ_t,y(k)ρ_t(y),
where ρ and ρ_t are the characteristic functions of intervals given by the projections on the last coordinate of the domains F_1,c and a_e^-t(F_1,c) respectively (i.e. intervals of the form [α,β] and [α e^-t,β e^-t] for some positive constants α and β depending on the diophantine constant c from (<ref>)), while ϕ_y and ϕ_y,t are characteristic functions on M\ K resulting from the decomposition = ℝ_+ × M\ K.
We also introduce the projection operator on the space of degree d harmonic polynomials in n+1 variables restricted to M\ K and write
pr_d:L^2(M\ K)→ L^2(M\ K,d) and f_d:=pr_d(f), for any f∈ L^2(M\ K).
Using that (ψ_d,l)_l≥ 0 is an orthonormal basis of L^2(M\ K,d) for every d≥ 0, it follows
M_χ_t,χ(s) = ∑_d,l≥ 0P_d(s) (∫_0^+∞(∫_M\ Kϕ_y(k)ρ(y)ψ_d,l(k) dk)y^-(s+1) dy)·
·(∫_0^+∞(∫_M\ Kϕ_t,y(k)ρ_t(y)ψ_d,l(k) dk)y^-(s+1) dy)
=∫_0^+∞∫_0^+∞∑_d≥ 0P_d(s)⟨(ϕ_y_1)_d, (ϕ_t,y_2)_d⟩__M\ K ρ(y_1)ρ_t(y_2)y_1^-(s+1)y_2^-(s+1) dy_1dy_2.
We separate the summation above into two parts with a parameter D≥ 1 to be fixed later, and estimate the two parts using a similar approach as in <cit.>, in particular the estimate of P_d(s) from Lemma <ref> and the following inequality from spherical harmonic analysis
|| ϕ_d||_2 ≪ (d+1)^(n-1)/2||ϕ||_1, for any ϕ∈ L^2(M\ K).
For the first part of the summation, we use orthogonality of the projections pr_d and Cauchy-Schwarz inequality to obtain
|∑_d≤ D P_d(s)⟨(ϕ_y_1)_d, (ϕ_t,y_2)_d⟩__M\ K|
≪∑_d≤ D (d+1)^n-2s||ϕ_y_1||_2||ϕ_t,y_2||_2
≪∑_d≤ D (d+1)^n-2s+n-1/2||ϕ_y_1||_2||ϕ_t,y_2||_1
≪ D^n-2s+n-1/2+1||ϕ_y_1||_2||ϕ_t,y_2||_1.
For the second part of the summation, we use Cauchy-Schwarz inequality for the sum and the convergence given by L^2(M\ K)= ⊕_d≥ 0 L^2(M\ K,d), which gives
|∑_d> D P_d(s)⟨(ϕ_y_1)_d, (ϕ_t,y_2)_d⟩__M\ K| ≪∑_d> D (d+1)^n-2s||(ϕ_y_1)_d||_2||(ϕ_t,y_2)_d||_2
≤ D^n-2s∑_d≥ 0||ϕ_y_1_d||_2||ϕ_t,y_2_d||_2
≤ D^n-2s||ϕ_y_1||_2||ϕ_t,y_2||_2.
Using that ||ϕ||_2=||ϕ||_1^1/2 for any characteristic function ϕ, we optimize both estimates (<ref>) and (<ref>) by taking D=max (1,||ϕ_t,y_2||_1^-1/n+1 ) and obtain
|∑_d≥ 0 P_d(s)⟨(ϕ_y_1)_d, (ϕ_t,y_2)_d⟩__M\ K| ≪ ||ϕ_y_1||_2||ϕ_t,y_2||_1^1/2+(2s-n)/(n+1).
We note that 1/2+(2s-n)/(n+1)>s/n for any s∈ (n/2,n) and n≥ 2, and write σ:= 1/2+(2s-n)/(n+1)-s/n>0.
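The choice of D and the positivity of σ can also be double-checked symbolically. The following sympy sketch (an illustration only, with x standing for ||ϕ_t,y_2||_1) verifies that the two bounds above coincide for D = x^-1/(n+1) and that the resulting exponent exceeds s/n on (n/2,n) when n≥ 2.

```python
import sympy as sp

n, s = sp.symbols('n s', positive=True)
d = -1 / (n + 1)                                  # exponent of x in D = x^{-1/(n+1)}
e1 = d * (n - 2*s + (n - 1)/2 + 1) + 1            # exponent of x in D^{n-2s+(n-1)/2+1} * x
e2 = d * (n - 2*s) + sp.Rational(1, 2)            # exponent of x in D^{n-2s} * x^{1/2}
target = sp.Rational(1, 2) + (2*s - n) / (n + 1)

assert sp.simplify(e1 - target) == 0              # the two bounds balance
assert sp.simplify(e2 - target) == 0
gap = target - s / n                              # this is sigma
assert sp.simplify(gap - (n - 1)*(2*s - n) / (2*n*(n + 1))) == 0  # > 0 for n >= 2, s > n/2
```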
We observe further that, since e_0a_y=y^-1e_0 and since F_1,c⊆{x ∈𝒞: x_1^2+… + x_n+2^2 <c^2 }, we have for any y>0
ϕ_y _1 = ∫_M \ Kχ (e_0a_yk)dk
=∫_M \ Kχ (y^-1e_0k)dk
≤∫_M \ Kχ_{x_1^2+… +x_n^2 <y^2c^2}(e_0k)dk
≪_c y^n.
From (<ref>) and (<ref>), using that supp(ρ) is bounded away from 0 and supp(ρ_t) =[α e^-t,β e^-t], it follows
| M_χ_t,χ(s)| ≪(∫_0^+∞y_1^n/2y_1^-(s+1)ρ(y_1) dy_1)(∫_0^+∞y_2^s+nσy_2^-(s+1)ρ_t(y_2) dy_2)
≪_supp(ρ)∫_0^+∞y_2^-1+nσ ρ_t(y_2)dy_2
≪∫_α e^-t^β e^-ty_2^-1+nσ dy_2
≪ e^-nσ t.
We give next an estimate of the term ∑_i,j=1^m⟨_ij(χ_λ_i),χ_t,λ_j⟩_μ_. In order to simplify the notations, we will omit without loss of generality the scaling coefficients λ_1,…, λ_m and consider only a single cusp.
There exists γ >0 such that for every t≥ 1 we have
|⟨(χ),χ_t⟩_μ_| ≪ e^-γ t.
For a smooth and compactly supported function f∈ C_c^∞(), the operator _ can be expressed explicitly in terms of the Mellin transform of the spherical harmonic coefficients of f as follows (see <cit.>):
_(f)(e_0a_yk) = ∑_d,l(1/2π i∫_( n/2) P_d(s)φ(s)f_d,l(s)y^n-sds)ψ_d,l(k)
where the contour integration is along the line of complex numbers with real part n/2.
We shall approximate χ by a smooth and compactly supported function f_ε_t in the sense of (<ref>), with a parameter ε_t>0 to be fixed later. We note that ε_t is independent of the parameter ε=ε(N) introduced in (<ref>). Since this operator is a bounded linear operator on L^2() with operator norm at most 1, we have:
| ⟨_(χ),χ_t⟩| = | ⟨_(χ -f_ε_t)+_(f_ε_t),χ_t⟩|
≤χ -f_ε_t_2χ_t _2+| ⟨_(f_ε_t),χ_t⟩|
≪ε_t^1/2 + | ⟨_(f_ε_t),χ_t⟩| .
Using the decomposition dμ_=y^-(n+1)dydk, the spherical expansion f(e_0a_yk)=∑_d,lf_d,l(y)ψ_d,l(k) and the decomposition L^2(M\ K)= ⊕_d≥ 0 L^2(M\ K,d), it follows:
⟨_(f_ε_t),χ_t⟩_μ_
= ∫_M\ K∫_0^+∞(∑_d,l1/2π i( ∫_(n/2)P_d(s)φ(s)(f_ε_t)_d,l(s)y^n-sds)ψ_d,l(k))·
·(∑_d',l'(χ_t)_d',l'(y)ψ_d',l'(k))y^-(n+1)dydk
= ∫_M\ K1/2π i∫_(n/2)( ∑_d,lP_d(s)φ(s)(f_ε_t)_d,l(s)ψ_d,l(k))·
·( ∑_d',l'( ∫_0^+∞(χ_t)_d',l'(y)y^-(s+1)dy)ψ_d',l'(k))dsdk
=∑_d,l1/2π i(∫_(n/2)P_d(s)φ(s)(f_ε_t)_d,l(s)(χ_t)_d,l(s) ds ).
We use again the same decomposition as in (<ref>)
χ_t(e_0a_yk)=ϕ_t,y(k)ρ_t(y)
and introduce the function
F_ε_t(y,k):= f_ε_t(e_0a_yk).
Moreover, using the fact that there is at most one exceptional pole at s_n=⌊n+2/2⌋ in (n/2,n), we can move the contour of integration to the line ( n/2+𝓇) for some 𝓇>0 small enough. By expanding the integrand similarly to (<ref>) we have
⟨_(f_ε_t),χ_t⟩_μ_
=1/2π i∫_(n/2+𝓇)φ(s)∑_d≥0P_d(s)(∫_ℝ_+∫_ℝ_+⟨ F_ε_t(y_1,· ), (ϕ_t,y_2)_d⟩_μ_M\ K ρ_t(y_2)y_1^-(s+1)y_2^-(s̅+1) dy_1dy_2 ) ds
=1/2π i∫_(n/2+𝓇)φ(s)∑_d≥0P_d(s)
(∫_ℝ_+∫_ℝ_+⟨∂^l F_ε_t/∂^l y_1, (ϕ_t,y_2)_d⟩_μ_M\ K ρ_t(y_2)y_1^-(s+1)+l/∏_j=0^l-1(s+j)y_2^-(s̅+1) dy_1dy_2 ) ds,
where we applied an integration by parts for the l-th partial derivative with respect to y_1, with l≥ 1 (even) large enough to be fixed later.
We use the same computation as in the proof of Lemma <ref> with the estimate |P_d(s)|≪ (d+1)^n-2𝓇+δ, for s=r+it ∈ℂ, from Lemma <ref>, and choose a fixed 𝓇>0 small enough such that n/2+𝓇<s_n. It follows
|⟨_(f_ε_t),χ⟩_μ_|
≪∫_(n/2+𝓇)|φ(s)s^-l| (∫_ℝ_+∫_ℝ_+∑_d≥0|P_d(s)||⟨(∂^l F_ε_t/∂^l y_1)_d, (ϕ_t,y_2)_d⟩_μ_M\ K| ·.
· ρ_t(y_2)y_1^-(n/2+𝓇+1)+ly_2^-(n/2+𝓇+1) dy_1dy_2 )ds
≪_δ∫_(n/2+𝓇)|φ(s)s^-l+1/2|ds (∫_ℝ_+∫_ℝ_+∑_d≥0(d+1)^n-2𝓇+δ|⟨(∂^l F_ε_t/∂^l y_1)_d, (ϕ_t,y_2)_d⟩_μ_M\ K| ·.
· ρ_t(y_2)y_1^-(n/2+𝓇+1)+ly_2^-(n/2+𝓇+1) dy_1dy_2 ).
Using further the same estimate as in (<ref>) and the fact that I:=pr_ℝ_+(supp(F_ε_t)) is uniformly bounded away from y=0, we have
∫_ℝ_+∫_ℝ_+∑_d≥0(d+1)^n-2𝓇+δ|⟨(∂^l F_ε_t/∂^l y_1)_d, (ϕ_t,y_2)_d⟩_μ_M\ K| ρ_t(y_2)y_1^-(n/2+𝓇+1)+ly_2^-(n/2+𝓇+1) dy_1dy_2
≪(∫_ℝ_+‖∂^l F_ε_t/∂^l y_1‖_L^2_M\ K y_1^-(n/2+𝓇+1)+ldy_1)(∫_ℝ_+‖ϕ_t,y_2‖_L^1_M\ K^1/2+2𝓇-δ/n+1 ρ_t(y_2)y_2^-(n/2+𝓇+1) dy_2)
≪‖ f_ε_t‖_C^l(∫_I y_1^-(n/2+𝓇+1)+ldy_1)(∫_ℝ_+y_2^n(1/2+2𝓇-δ/n+1) ρ_t(y_2)y_2^-(n/2+𝓇+1) dy_2)
≪_𝓇‖ f_ε_t‖_C^l(∫_ℝ_+ ρ_t(y_2)y_2^-1-δn/n+1+𝓇n-1/n+1 dy_2)
≪ε_t^-l ∫_α e^-t^β e^-ty_2^-1-δ n/(n+1)+𝓇(n-1)/(n+1) dy_2
≪ε_t^-le^-t(𝓇(n-1)/(n+1)-δ n/(n+1)) .
We write σ:=𝓇(n-1)/(n+1)-δ n/(n+1) and choose 0<δ<𝓇(n-1)/(n+1) such that σ >0. We use further the following estimates for the scattering matrix φ(s) near the critical line (n/2) in terms of a function W(t)≥ 1, with s=r+it, introduced in <cit.>:
|φ(s)|^2=1+O( (r-n/2)W(t)) , for n/2<r<r_0<n,
and ∫_0^T W(t) dt≪ T^n+1 , (see Propositions 7.11 and 7.13 in <cit.>).
For fixed r=n/2+𝓇 and l≥ 1 large enough, using Cauchy-Schwarz inequality then integration by parts, we have
(∫_(n/2+𝓇)|φ(s)s^-l+1/2|ds )^2 ≤(∫_(n/2+𝓇)|φ(s)s^-l/2|^2ds )(∫_(n/2+𝓇)|s^-l+1|ds )
= (∫_ℝ(1+O(𝓇W(t)))|t^2+r^2|^-l/2dt)(∫_ℝ|t^2+r^2|^-l+1/2 dt )
≪_𝓇,l∫_ℝW(t)|t^2+r^2|^-l/2dt
≪∫_ℝ |t|^n+2|t^2+r^2|^-l+2/2dt = O(1) with some fixed l> n+1 .
All together we obtain
| ⟨_(χ),χ_t⟩| ≪ε_t^1/2 + ε_t^-le^-σ t,
and choose ε_t = e^-2γ t with γ :=σ/(1+2l).
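This value of γ is exactly the one balancing the two error terms: with ε_t = e^-2γ t both ε_t^1/2 and ε_t^-le^-σ t equal e^-γ t. A one-line symbolic confirmation (illustrative only):

```python
import sympy as sp

sigma, l = sp.symbols('sigma l', positive=True)
gamma = sigma / (1 + 2*l)
# exponent of t in eps_t^{1/2} is -gamma; in eps_t^{-l} e^{-sigma t} it is 2*l*gamma - sigma
assert sp.simplify((2*l*gamma - sigma) - (-gamma)) == 0
```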
Putting together (<ref>), (<ref>) and the estimates showed in Lemmas <ref> and <ref>, we obtain
‖𝖥_N^(ε,L)‖_L^2_𝒴^2 = ∑_t=-K+1^K-1Θ_∞(t) + o(1)
= ∑_t=-K+1^K-1(∑_i,j=1^m⟨_ij(χ_t,λ_i),χ_λ_j⟩_μ_+c_Q M_χ_t,χ(s_n)) + o(1)
and
‖𝖥_N^(ε,L)‖_L^2_𝒴^2⟶σ^2 := ∑_i,j=1^m⟨_ij(χ_∞,λ_i),χ_λ_j⟩_μ_+c_Q M_χ_∞,χ(s_n) as N→∞,
where we denote by χ_∞ the characteristic function of the domain
{ x ∈𝒞 : x_n+2^2- x_n+1^2 < c^2} = ⋃_t=-∞^∞a_-t(F_1,c).
§ PROOF OF THE CLT FOR THE COUNTING FUNCTION
Using the characterisation by the cumulants (Proposition <ref>), we first show that the sequences (𝖥_N,M^(ε,L))_N≥ 1, hence also the sequence (𝖥_N)_N≥ 1, converge in distribution to the normal law Norm_σ.
Let m≥ 2. For every ξ∈ℝ,
μ_𝒴({y ∈𝒴:𝖥_N(y)<ξ}) →Norm_σ(ξ)
as N→∞, for some variance σ^2<∞.
By Proposition <ref> and considering that 𝖥_N and 𝖥_N,M^(ε,L) have the same limit distribution, it is enough to show that
Cum_μ_𝒴^(r)(𝖥_N,M^(ε,L))→ 0 as N→∞
when r≥ 3, and
‖𝖥_N,M^(ε,L)‖_L^2_𝒴^2→σ^2 as N→∞ .
We showed in sections <ref> and <ref> that these two conditions hold provided that the parameters
ε=ε(N), L=L(N), M=M(N), γ=γ(N), K=K(N)≤ M(N)
satisfy the conditions we recall here
M =o(N^1/2) ,<ref>
(N-M)^1/2ε → 0 ,<ref>
(N-M)^1/2e^-θ M → 0 ,<ref>
M ≫log L ,<ref>
(N-M) =o(L^p) , for some p<n ,<ref>
M ≫_r γ ,<ref>
(N-M)^r/2L^rε^-rl =o(e^δγ) ,<ref>
e^-δ K L^2ε^-2l → 0 , <ref>
(N-M)^-1 e^-δ M e^ξ K L^2ε^-2l → 0 ,<ref>
(N-M)^-1K^2 → 0<ref>,
KL^-(τ-2)/2 → 0 , for some τ<n <ref>
ε K → 0<ref>.
One verifies easily that the following choice of parameters, with n≥ 3,
M =(log N) (loglog N),
ε =(N-M)^-q_1, for some q_1>1/2,
L =(N-M)^q_2 for some q_2>0 large enough to satisfy (<ref>),
K =c_1log (N-M) for some c_1>0 large enough to satisfy (<ref>),
γ =c_rlog (N-M) for some c_r>0 large enough to satisfy (<ref>)
verify the required conditions.
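As a rough sanity check of this bookkeeping (illustrative only): the constants δ=δ(r), l=l(r), θ and ξ are not specified explicitly here, so the script below uses placeholder values and only verifies the purely polynomial exponent inequalities; the remaining conditions involve M=(log N)(loglog N) and hold because e^-θ M=N^-θloglog N decays faster than any power of N, while log L, K and γ are all O(log N)=o(M).

```python
n, r = 3, 4
delta, l = 0.1, 5                                   # placeholder values for delta(r), l(r)
q1 = 0.6                                            # (N-M)^{1/2} eps -> 0 needs q1 > 1/2
q2 = 0.5                                            # (N-M) = o(L^p) for some p < n needs q2 > 1/n
c1 = (2*q2 + 2*l*q1) / delta + 1                    # e^{-delta K} L^2 eps^{-2l} -> 0
cr = (r/2 + r*q2 + r*l*q1) / delta + 1              # (N-M)^{r/2} L^r eps^{-rl} = o(e^{delta gamma})

assert q1 > 0.5
assert q2 > 1.0 / n
assert delta * c1 > 2*q2 + 2*l*q1
assert delta * cr > r/2 + r*q2 + r*l*q1
print("sample exponents:", q1, q2, round(c1, 1), round(cr, 1))
```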
Hence, Theorem <ref> follows from Proposition <ref>.
Next we relate the function 𝖥_N to the counting function 𝖭_T,c and show that (𝖥_N)_N≥ 1 has the same limit distribution as (𝖣_T)_T>0 defined in the following.
For k∈ K and α_k ∈ S^n defined by k(α_k,1)=(0,…,0,1,1)∈ S^n, we consider
𝖣_T(k) :=(𝖭_T,c(α_k)-C_c,n· T)/T^1/2
where C_c,n:=∫_𝒞χ dμ_𝒞=vol(F_1,c).
For all N≥ 1, we have
∑_t=0^N-1∫_𝒴χ∘ a_t dμ_𝒴 = C_c,nN+O(1) .
By the mean value identity in (<ref>), we have
C_c,n=∫_𝒞χ dμ_𝒞= ∫_𝒳χ̂d μ_𝒳.
It follows
| ∑_t=0^N-1∫_𝒴χ∘ a_t dμ_𝒴 - C_c,nN| = | ∑_t=0^N-1∫_𝒴χ∘ a_t dμ_𝒴 - ∑_t=0^N-1∫_𝒳χdμ_𝒳|
≤∑_t=0^N-1∫_𝒴| χ∘ a_t- μ_𝒳(χ) |dμ_𝒴.
Introducing a parameter L_t>0 such that L_t→∞ and using the estimates for the truncated Siegel transform from Proposition <ref>, we have for any 2≤τ <n and t≥κlog L_t,
‖(χ∘ a_t - μ_𝒳(χ)) -(χ^(L_t)∘ a_t - μ_𝒳(χ^(L_t) )) ‖_L^1(𝒴) ≤‖χ∘ a_t -χ^(L_t)∘ a_t‖_L^1(𝒴) + μ_𝒳( | χ- χ^(L_t)|)
≪ L_t^-τ/2+ L_t^-(τ-1)
≪ L_t^-τ/2.
Introducing further a parameter ε_t>0 such that ε_t→ 0 and using the estimates for the smooth approximation of χ from Proposition <ref> and from (<ref>) we have
‖(χ^(L_t)∘ a_t - μ_𝒳(χ^(L_t))) -(f_ε_t^(L_t)∘ a_t - μ_𝒳(f_ε_t^(L_t) )) ‖_L^1(𝒴)
≤‖χ^(L_t)∘ a_t - f_ε_t^(L_t)∘ a_t‖_L^1(𝒴) + μ_𝒳( | χ^(L_t)- f_ε_t^(L_t)|)
≪ε_t +e^-θ t.
Using further the effective equidistribution estimate from Proposition <ref>, we have
‖f_ε_t^(L_t)∘ a_t - μ_𝒳(f_ε_t^(L_t) ) ‖_L^1(𝒴) ≪ e^-δ t‖f_ε_t^(L_t)‖_C^l
≪ e^-δ tε_t^-lL_t.
We choose L_t=t^a and ε_t=t^-b for some a>2/τ and b>1/l, then fix an integer N_0=N_0(κ,a)≥ 1 such that t≥κlog L_t for all t≥ N_0.
Altogether we obtain
| ∑_t=0^N-1∫_𝒴χ∘ a_t dμ_𝒴 - C_c,nN| ≪| ∑_t=0^N_0-1∫_𝒴χ∘ a_t dμ_𝒴 - C_c,nN|
+ ∑_t=N_0^N-1( L_t^-τ/2+ε_t+ e^-θ t + e^-δ tε_t^-lL_t)
= O(1) .
We will also need the following estimate related to the approximation in (<ref>). We recall our equivalent notations χ=χ_1,c=χ_F_1,c and use in the following Lemma the notation χ_1,c for clarity.
We have
∫_K | 𝖭_N,c(α_k)-∑_t=0^N-1χ_1,c∘ a_t(kΛ_0) | dμ_K(k) = O(N^1/3) .
From (<ref>) we have
∫_K | 𝖭_N,c(α_k)-∑_t=0^N-1χ_1,c∘ a_t(kΛ_0) | dμ_K(k)
≤∫_K | ∑_t=0^⌊ N+r_0 ⌋χ_1,c∘ a_t(kΛ_0)-∑_t=0^⌊ N-r_0 ⌋-1χ_1,c_𝓁∘ a_t(kΛ_0) |dμ_K(k)+O(𝓁^1/2)
≤∑_t=0^⌊ N-r_0 ⌋-1∫_𝒴χ__F_1,c∖ F_1,c_𝓁∘ a_tdμ_𝒴+ ∑_t=⌊ N-r_0 ⌋^⌊ N+r_0 ⌋∫_𝒴χ_1,c∘ a_tdμ_𝒴 +O(𝓁^1/2).
By Lemma <ref> we have
∑_t=⌊ N-r_0 ⌋^⌊ N+r_0 ⌋∫_𝒴χ_1,c∘ a_t=O(1).
Further, we estimate the volume of the set
F_1,c∖ F_1,c_𝓁= { x ∈𝒞 : c_𝓁≤ (x_1^2+ … + x_n^2)^1/2 < c , c< x_n+2+ x_n+1 < ce } ,
using that |c-c_𝓁|=O(𝓁^-1), which gives
∫_𝒞χ__F_1,c∖ F_1,c_𝓁dμ_𝒞 =O(𝓁^-1).
By Lemma <ref> we have
∑_t=0^⌊ N-r_0 ⌋-1∫_𝒴χ__F_1,c∖ F_1,c_𝓁∘ a_tdμ_𝒴= O( N𝓁^-1+1),
which yields the claim with 𝓁=⌊ N^2/3⌋.
It follows from Lemmas <ref> and <ref> that
∫_K| 𝖣_N(k) -𝖥_N(kΛ_0) |dμ_K(k)
= 1/N^1/2∫_K| 𝖭_N,c(α_k)-∑_t=0^N-1χ∘ a_t(kΛ_0) +∑_t=0^N-1∫_𝒴χ∘ a_t -C_c,nN |dμ_K(k)
=o(1) ,
hence (𝖣_N) and (𝖥_N) have the same limit distribution, i.e. for all ξ∈ℝ we have
| {k ∈ K:𝖣_N(k)<ξ}|→Norm_σ (ξ) , as N→∞.
If we take N_T=⌊ T ⌋, then N_T ≤ T< N_T+1, hence
𝖣_T(k)= (𝖭_T,c(α_k)-C_c,nT)/T^1/2≤(𝖭_N_T+1,c(α_k)-C_c,nN_T)/T^1/2=a_T𝖣_N_T+1+b_T , with a_T=((N_T+1)/T)^1/2→ 1 and b_T=C_c,nT^-1/2→ 0.
It follows
| {k ∈ K:𝖣_T(k)<ξ}| ≥ | {k ∈ K:𝖣_N_T+1(k)<(ξ-b_T)/a_T }|.
Therefore, for any ε>0 and sufficiently large T,
| {k ∈ K:𝖣_T(k)<ξ}| ≥ | {k ∈ K:𝖣_N_T+1(k)<ξ-ε}|,
thus
lim inf_T→∞| {k ∈ K:𝖣_T(k)<ξ}| ≥Norm_σ(ξ-ε) ,
for all ε>0, which implies
lim inf_T→∞| {k ∈ K:𝖣_T(k)<ξ}| ≥Norm_σ(ξ) .
One shows similarly
lim sup_T→∞| {k ∈ K:𝖣_T(k)<ξ}| ≤Norm_σ(ξ) ,
which finishes the proof of Theorem <ref>.
§ PROOF OF THE EFFECTIVE ESTIMATE FOR THE COUNTING FUNCTION
To obtain an effective estimate for the counting function 𝖭_T,c, the central argument in our approach is to derive an almost-everywhere-bound for averages ∑_t=0^T-1(χ∘ a_t-μ_𝒳(χ)) from an L^2-bound on these averages. We generalized this argument in <cit.> to L^p-bounds, p>1, following the approach in <cit.> based on an original idea of Schmidt in <cit.>. We generalize in the following proposition our result from <cit.> in order to take into account the approximation of χ_1,c by the sequence (χ_1,c,𝓁)_𝓁≥ 1 coming from the sandwiching (<ref>).
Let (Y,ν) be a probability space, and let (f_𝓁)_𝓁≥ 1 be a sequence of measurable functions f_𝓁: Y ×ℕ→ℝ. Suppose there exist p> 1 and C>0 such that, for all 𝓁≥1 and any integers 0≤ a<b, we have
∫_Y| ∑_t=a^b-1f_𝓁(y,t) |^pdν(y) ≤ C(b-a) .
Then, there exists C_p>0, depending only on p>1, such that for any subsequence (f_𝓁_N)_N≥ 1 there exists a full-measure set Y_0⊆ Y such that for all y∈ Y_0, all ε >0, there exists N_y≥1 such that for all N≥ N_y, we have
∑_t=0^N-1 f_𝓁_N(y,t) ≤ C_p
N^1/plog ^1+1/p+ ε/p N.
In the argument as formulated in <cit.>, the estimate in (<ref>) is satisfied for all pairs (a,b) of the form (2^i j,2^i(j+1)) coming from the dyadic decomposition of N-1. In our previous work (see proof of Proposition 4.2 in <cit.>), we observed that for our argument it is actually enough to consider only a reduced selection of such pairs, denoted below by L(N), which still builds a partition of [1,N)∩ℕ and allows moreover to satisfy the conditions t≥κlog L and t≥ -1/θlogε required by Proposition <ref> and Proposition <ref>, for t ∈{a,b-1} and for parameters L and ε to be defined as functions of (a,b).
For non-negative integers a<b we write [a..b):= [a,b)∩ℕ. For an integer s≥ 2 we consider the following set of dyadic subsets,
L_s:={[2^i..2^i+1) : 0≤ i≤ s-2 }∪{[2^ij..2^i(j+1)) : 2^ij≥ 2^s-1, 2^i(j+1)≤ 2^s }∪{ [0..1) },
where the sets of the first type [2^i..2^i+1), 0≤ i≤ s-2, together with [0..1), are a decomposition of the set [0..2^s-1).
We observe that for any integer N≥ 2 with 2^s-1< N ≤ 2^s, the set [0..N) is the disjoint union of at most 2s-1 subsets in L_s (namely [0..1), the s-1 subsets of the first type and at most s-1 sets of the second type which can be constructed from the binary expansion of N-1). We denote by L(N) this set of subsets, i.e. [0..N)=⊔_I∈ L(N)I.
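One possible implementation of L(N) (via the greedy aligned dyadic decomposition of [2^s-1..N), which amounts to reading off the binary expansion of N-1) is sketched below, together with a direct check of the partition property and of the bound |L(N)|≤ 2s-1 for 3≤ N≤ 2048; this is only an illustration of the remark above.

```python
def L_of_N(N):
    s = 1
    while 2 ** s < N:                                 # smallest s with 2^{s-1} < N <= 2^s
        s += 1
    intervals = [(0, 1)] + [(2 ** i, 2 ** (i + 1)) for i in range(s - 1)]
    m = 2 ** (s - 1)                                  # greedy aligned dyadic decomposition of [2^{s-1}..N)
    while m < N:
        i = 0
        while m % (2 ** (i + 1)) == 0 and m + 2 ** (i + 1) <= N:
            i += 1
        intervals.append((m, m + 2 ** i))
        m += 2 ** i
    return s, intervals

for N in range(3, 2049):
    s, ivs = L_of_N(N)
    covered = sorted(t for a, b in ivs for t in range(a, b))
    assert covered == list(range(N))                  # the intervals partition [0..N)
    assert len(ivs) <= 2 * s - 1                      # at most 2s-1 pieces
    assert all(b <= 2 ** s for a, b in ivs)           # each piece indeed belongs to L_s
```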
In the following lemmas, the notations and assumptions are the same as in Proposition <ref> .
For every l≥ 1, we have
∑_I∈ L_s∫_Y|∑_t∈ I f_l(y,t) |^p dν(y) ≤ Cs2^s.
Since L_s is a subset of the set of all dyadic sets [2^ij..2^i(j+1)) where i,j are non-negative integers and 2^i(j+1)≤ 2^s, we have for any l≥ 1
∑_I∈ L_s∫_Y|∑_t∈ I f_l(y,t) |^p dν(y) ≤∑_i=0^s-1∑_j=0^2^s-i-1∫_Y|∑_t∈ I f_l(y,t) |^p dν(y)
≤∑_i=0^s-1∑_j=0^2^s-i-1 C2^i
≤ Cs2^s.
For every ε >0, there exists a sequence of measurable subsets { Y_s}_s ≥ 1 of Y such that:
* ν(Y_s) ≤ C s^-(1+pε).
* For every integer N≥ 2 with 2^s-1≤ N-1<2^s and for every y ∉ Y_s one has
| ∑_t=0^N-1f_𝓁_N(y,t) | ≪_p
2^s/ps^1+1/p+ε.
For every s≥1, consider the function f_s:Y×ℕ→ℝ defined by
f_s(y,t) := max_2^s-1<N≤2^s∑_I∈ L_s| ∑_t∈ If_𝓁_N(y,t)|^p,
and the measurable set
Y_s=
{ y ∈ Y : f_s(y,t) > 2^s s^2+pε}.
The first assertion follows from Lemma <ref> and Markov's Inequality.
Further, for any N≥ 2 such that 2^s-1≤ N-1<2^s and any y∉ Y_s, using the partition [0..N)=_I∈ L(N)I with L(N) of cardinality at most 2s-1, we have
|∑_t=0^N-1 f_𝓁_N(y,t)|^p = |∑_I∈ L(N)∑_t∈ I f_𝓁_N(y,t)|^p
≤ (2s-1)^p-1∑_I∈ L(N)|∑_t∈ I f_𝓁_N(y,t)|^p (by Hölder's Inequality)
≤ (2s-1)^p-1f_s(y,t)
≪_p s^1+p+pε2^s , (since y∉ Y_s)
which yields the claim by raising to the power 1/p.
Let ε>0 and choose a sequence of measurable subsets {Y_s}_s≥ 1 as in (<ref>). Observe that
∑_s=1^∞ν(Y_s) ≤∑_s=1^∞ C s^-(1+pε) < ∞.
The Borel-Cantelli lemma implies that there exists a full-measure subset Y(ε)⊂ Y such that for every y ∈ Y(ε) there exists s_y ∈ℕ such that for all s > s_y we have y ∉ Y_s.
Let N≥ 2 and s= 1 + ⌊log_2 (N-1) ⌋, so that 2^s-1≤ N-1 < 2^s. Then, for N-1 ≥ 2^s_y we have s> s_y and y ∉ Y_s, thus
|∑_t=0^N-1f_𝓁_N(y,t) | ≪_p 2^s/ps^1+1/p +ε
≤
(2N)^1/plog^1+1/p+ ε(2N).
This implies the claim for y ∈∩_m∈ℕY(1/m).
We now apply Proposition <ref> to the counting function ∑_tχ∘ a_t, where we write for simplicity χ=χ_F_1,c for the characteristic function of the set F_1,c defined in (<ref>). We denote by vol(F_1,c) the average of the Siegel transform from Proposition <ref> for the function χ given by
vol(F_1,c):= ∫_𝒞χ(z)dz = ∫_𝒳χ̂(Λ)dμ_𝒳(Λ).
Let n≥ 3. For all ε>0, for almost every k ∈ K we have
∑_t=0^N-1χ(a_t k Λ_0) = N vol(F_1,c) + O_k,ε (N^1/2+ε).
Using Proposition <ref>, it is enough to show that for every 1<p<2, for all pairs of integers (a,b) from the dyadic decomposition of N specified in Remark <ref>, we have
|| ∑_t=a^b-1( χ-μ_𝒳(χ))∘ a_t ||_L^p(𝒴)^p≪ (b-a) .
Let 1<p<2. Using the estimates for the truncated Siegel transform from Proposition <ref>, we have, for 2p/(3p-2)≤τ < n and t≥κlog L,
|| ( χ∘ a_t-μ_𝒳(χ))- ( χ^(L)∘ a_t-μ_𝒳(χ^(L)))||_L^p_𝒴 ≤|| χ∘ a_t- χ^(L)∘ a_t||_L^p_𝒴+ ∫_𝒳|χ-χ^(L)|
≪ L^-τ(2-p)/(2p) + L^-(τ-1)
≪ L^-τ(2-p)/(2p).
Further, using Proposition <ref> and the estimates from Proposition <ref> and (<ref>), we have for t≥ -1/θlogε,
‖( χ^(L)∘ a_t-μ_𝒳(χ^(L))) - ( f_ε^(L)∘ a_t-μ_𝒳(f_ε^(L)))‖_L^p_𝒴≤|| χ^(L)∘ a_t- f_ε^(L)∘ a_t||_L^p_𝒴+ ∫_𝒳|χ^(L)-f_ε^(L)|
≤||(χ -f_ε)^(L)∘ a_t ||_∞^p-1/p·||(χ -f_ε)^(L)∘ a_t ||_L^1_𝒴^1/p+ ∫_𝒞|χ-f_ε|
≪ L^p-1/p ε^1/p + ε
≪ L^p-1/p ε^1/p.
Further, using effective equidistribution for smooth and compactly supported functions (Proposition <ref> for r=1), we have
‖∑_t=a^b-1( f_ε^(L)-μ_𝒳(f_ε^(L)))∘ a_t ‖_L^p_𝒴≤‖∑_t=a^b-1( f_ε^(L)-μ_𝒳(f_ε^(L)))∘ a_t ‖_L^2_𝒴
≤‖∑_t=a^b-1( f_ε^(L)∘ a_t-μ_𝒴(f_ε^(L)∘ a_t)) ‖_L^2_𝒴 + |∑_t=a^b-1( μ_𝒴(f_ε^(L)∘ a_t)-μ_𝒳(f_ε^(L))) |
≪ (b-a)^1/2‖𝖥_b,a^(ε,L)‖_L^2_𝒴 + ∑_t=a^b-1 e^-δ t‖f_ε^(L)‖_l,
where 𝖥_b,a^(ε, L) = 1/√(b-a)∑_t=a^b-1( f_ε^(L)∘ a_t -μ_𝒴(f_ε^(L)∘ a_t) ) as defined in (<ref>) and estimated in (<ref>) by
‖𝖥_b,a^(ε, L)‖_L^2_𝒴 = ∑_-K^KΘ_∞(t)
+ O( (b-a)^-1K^2+((b-a)^-1e^-δ ae^ξ K+e^-δ K)L^2ε^-2l+KL^-τ-2/2+Kε).
Putting (<ref>), (<ref>), (<ref>) and (<ref>) together, and considering moreover that ∑_-K^KΘ_∞(t) is bounded uniformly in K (by the convergence showed in Section <ref>), we obtain
|| ∑_t=a^b-1( χ-∫_𝒳χ)∘ a_t ||_L^p(𝒴)
≪ (b-a)(L^-τ(2-p)/2p+L^p-1/pε^1/p) + e^-δ aLε^-l
+(b-a)^1/2( 1+(b-a)^-1K^2+((b-a)^-1e^-δ ae^ξ K+e^-δ K)L^2ε^-2l+KL^-τ-2/2+Kε).
In order to bound the first term in (<ref>) by (b-a)^1/p, we choose
L=(b-a)^q_2 , with q_2= 2(p-1)/(τ (2-p)),
and ε= L^-(τ(2-p)+2(p-1))/2= (b-a)^-q_1, with q_1=q_2(τ(2-p)+2(p-1))/2= (p-1)(1+q_2).
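The exponent bookkeeping behind this choice can be confirmed symbolically: with L=(b-a)^q_2 and ε=(b-a)^-q_1 as above, both (b-a)L^-τ(2-p)/(2p) and (b-a)L^(p-1)/pε^1/p reduce to (b-a)^1/p. A short sympy check (illustrative only):

```python
import sympy as sp

p, tau = sp.symbols('p tau', positive=True)
q2 = 2*(p - 1) / (tau*(2 - p))
q1 = (p - 1)*(1 + q2)
exp1 = 1 - q2 * tau*(2 - p) / (2*p)               # exponent of (b-a) in (b-a) L^{-tau(2-p)/(2p)}
exp2 = 1 + q2*(p - 1)/p - q1/p                    # exponent of (b-a) in (b-a) L^{(p-1)/p} eps^{1/p}
assert sp.simplify(exp1 - 1/p) == 0
assert sp.simplify(exp2 - 1/p) == 0
```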
We choose further
K=c_p log (b-a), with some c_p>0,
and, in order to satisfy the condition K≤ a, we verify that for all but finitely many pairs (a,b) from the dyadic decomposition of N, i.e. for pairs of the forms (2^i,2^i+1) and (2^ij,2^i(j+1)) with i≥ i_0(c_p)≥ 1, we have
K=c_p log(2^i) ≤ a.
With this choice of L, ε and K, we verify next that the terms in (<ref>) are bounded by (b-a)^1/p, for all but finitely many pairs (a,b) from the dyadic decomposition. We have indeed
(b-a)^-1/2e^-δ ae^ξ KL^2ε^-2l≤ e^-δ 2^i_0(b-a)^-1/2+ξ c_p+2q_2+2lq_1≤ (b-a)^1/p,
and for some c_p>0 large enough, we also have
(b-a)^1/2e^-δ KL^2ε^-2l≤ (b-a)^1/2-δ c_p+2q_2+2lq_1≤ (b-a)^1/p.
One verifies easily that the other terms in (<ref>) are also bounded by (b-a)^1/p.
Finally, we verify that the conditions t≥κlog L and t≥ -1/θlogε are also verified, since we have, for all but finitely many pairs (a,b),
κlog L = κ q_2 log (b-a) = κ q_2 log (2^i) ≤ a ≤ t
and -1/θlogε = q_1/θlog (b-a) = q_1/θlog (2^i) ≤ a ≤ t.
We obtain, for n>τ>2,
|| ∑_t=a^b-1( χ-∫_𝒳χ)∘ a_t ||_L^p_𝒴≪_p (b-a)^1/p,
which ends the proof.
Since the bound in Proposition <ref> is uniform for 𝓁≥ 1, since all the implicit constants in the estimates used in Proposition <ref> depend only on the support of χ and since the supports of χ_𝓁 are uniformly bounded for 𝓁≥ 1, the same argument as in Proposition <ref> with Proposition <ref> applied to χ_𝓁-μ_𝒳(χ_𝓁) yields the asymptotics given in the following proposition.
Let n≥ 3. For any subsequence (𝓁_N)_N≥ 1, for almost every k ∈ K and all ε>0, we have
∑_t=0^N-1χ_𝓁_N(a_t k Λ_0) = N vol(F_1,c,𝓁_N) + O_k,ε (N^1/2+ε).
Combining Propositions <ref> and <ref> with the estimate (<ref>) we have, for almost every k∈ K, for all T> T_k for some T_k≥ 2,
T vol(F_1,c,𝓁)+O(T ^1/2+ε)+O(𝓁^1/2)≤𝖭_T,c(α_k) +O(1) ≤ Tvol(F_1,c)+O(T ^1/2+ε).
From (<ref>) we also have
Tvol(F_1,c,𝓁)+O(T ^1/2+ε) + O(𝓁^1/2)
= T ( vol(F_1,c) + O(𝓁^-1)) +O(T ^1/2+ε)+ O(𝓁^1/2)
= T vol(F_1,c) + O(Tl^-1+T^1/2+ε+l^1/2).
By choosing the subsequence 𝓁_N=⌊ N^2/3⌋, with N=⌊ T ⌋, we obtain
𝖭_T,c(α_k)
=Tvol(F_1,c)+O(T ^1/2+ε).
Since full-measure sets in K correspond to full-measure sets in S^n, we conclude that this last estimate holds for almost every α∈ S^n.
|
http://arxiv.org/abs/2409.02234v1 | 20240903190257 | Motives, mapping class groups, and monodromy | [
"Daniel Litt"
] | math.AG | [
"math.AG",
"math.GT",
"math.NT",
"32S40, 14C30, 57K20"
] |
§ ABSTRACT
We survey some recent developments at the interface of algebraic geometry, surface topology, and the theory of ordinary differential equations. Motivated by “non-abelian" analogues of standard conjectures on the cohomology of algebraic varieties, we study mapping class group actions on character varieties and their algebro-geometric avatar—isomonodromy differential equations—from the point of view of both complex and arithmetic geometry. We then collect some open questions and conjectures on these topics. These notes are an extended version of my talk at the April 2024 Current Developments in Mathematics conference at Harvard.
Motives, mapping class groups, and monodromy
Daniel Litt
September 9, 2024
==================================================================================================================================================================
§ INTRODUCTION
The goal of these notes is to explain a number of relationships between the topology of algebraic varieties, representation theory and dynamics of mapping class groups, and the theory of algebraic differential equations. I have tried to survey a few different classical and modern perspectives on these subjects, and to put together some conjectures and questions that might give the reader a sense of some of the directions the area is heading.
The basic question motivating this work is: how is the geometry of an algebraic variety X reflected in the structure of its fundamental group, π_1(X), and in the representation theory of π_1(X)? We will see shortly that this question is the modern descendent of some very classical questions about ordinary differential equations, braid groups and mapping class groups, hypergeometric functions, etc. As is traditional in algebraic geometry, we view it as a special case of a much more general question about families of algebraic varieties: given (say) a smooth proper morphism
f: X→ S,
and points x∈ X, s=f(x)∈ S, how is the geometry of f reflected in the exact sequence
π_1(X_s, x)→π_1(X, x)→π_1(S, s)→ 1,
in the induced outer action of π_1(S, s) on π_1(X_s), and in the induced action of π_1(S,s) on conjugacy classes of representations of π_1(X_s)?
There are a number of basic examples of morphisms f as above that the reader would do well to keep in mind: namely the maps
𝒞_g,n→ℳ_g,n
from the universal n-pointed curve of genus g to the Deligne-Mumford moduli space of smooth n-pointed curves of genus g. We will return to this fundamental example throughout this survey, as in this case the questions we consider are closely related to important classical questions in surface topology, through the natural identification of π_1(ℳ_g,n) with the (pure) mapping class group of an n-pointed surface of genus g, and to important classical questions in the theory of ordinary differential equations, when g=0.
§.§ A reader's guide
We hope the material covered here will appeal to mathematicians with interests in algebraic and arithmetic geometry, dynamics, or surface topology. We have tried to write <ref>, which discusses a classical and elementary question about dynamics of 2× 2 matrices (which arises when one specializes the more general questions considered here to the case of the variety X=ℂℙ^1∖{x_1, ⋯, x_n}), with a broad mathematical audience in mind. <ref> discusses the generalization of this question to surfaces of arbitrary genus: namely, the analysis of finite orbits of the mapping class group action on the character variety of a n-punctured surface of genus g. While the methods of proof are somewhat technical—relying as they do on non-abelian Hodge theory and input from the Langlands program—we hope that the questions considered, and their answers, will still be of broad interest. This section also introduces the connection to certain algebraic differential equations, the so-called isomonodromy differential equations, examples of which include the Painlevé VI equation and the Schlesinger system.
<ref> and <ref> will primarily be of interest to algebraic and arithmetic geometers. In <ref> we give a conjectural (arithmetic) answer to all questions about finite orbits of the actions of fundamental groups of algebraic varieties on character varieties, and algebraic solutions to isomonodromy equations, and sketch a proof for “Picard-Fuchs" initial conditions. In <ref>, we give some philosophical motivation for these questions by analogy to standard conjectures on algebraic cycles (the Hodge conjecture, Tate conjecture, and so on) and enumerate a number of questions on the arithmetic and algebraic geometry of character varieties, suggested by this analogy.
Finally, in <ref>, we return to questions about mapping class groups and their representations, and explain the connection with a number of basic open questions about vector bundles on algebraic curves. This last section should be of interest to both complex algebraic geometers and surface topologists, and we have done our best to make it accessible to readers from either of these backgrounds.
The last two sections, <ref> and <ref>, are filled with questions and conjectures. We hope that these will give the reader a sense of where the subject is heading.
§.§ Acknowledgments
Everything new in this paper is joint work, with various subsets of {Josh Lam, Aaron Landesman, Will Sawin}. I am extremely grateful to them for the many, many ideas they have contributed to this work. I would also like to acknowledge the enormous intellectual debt the work here owes to Hélène Esnault, Michael Groechenig, Nick Katz, Mark Kisin, and Carlos Simpson. I am also very grateful to Josh Lam, Aaron Landesman, and Salim Tayou for many useful comments. In particular many of the subjects covered here (particularly those in <ref>) are discussed from a somewhat different point of view in <cit.>, which we enthusiastically recommend. This work was supported by the NSERC Discovery Grant, “Anabelian methods in arithmetic and algebraic geometry" and by a Sloan Research Fellowship.
§ SOME QUESTIONS ABOUT N-TUPLES OF MATRICES
To bring things down to earth, we start with an example that will demonstrate many of the features of the general situation discussed in <ref>. Take X to be the simplest algebraic variety with interesting fundamental group, i.e.
X=ℂℙ^1∖{x_1, ⋯, x_n},
where x_1, ⋯, x_n are distinct points.
The fundamental group of X has the presentation
π_1(X)=⟨γ_1, ⋯, γ_n |∏_i γ_i=1⟩,
with γ_i a loop around x_i (as in <ref>), and hence a representation
π_1(X)→GL_r(ℂ)
is the same[Explicitly, set A_i to be the image of γ_i in GL_r(ℂ).] as an n-tuple of r× r invertible matrices (A_1, ⋯, A_n) such that
∏_i=1^n A_i=id.
Set
Y(0, n, r)={(A_1, ⋯, A_n)∈GL_r(ℂ)^n such that ∏_i=1^n A_i=id}/GL_r(ℂ)
where GL_r(ℂ) acts on an n-tuple (A_1, ⋯, A_n) by simultaneous conjugation, i.e.
B· (A_1, ⋯, A_n)=(BA_1B^-1, ⋯, BA_nB^-1).
That is, Y(0, n, r) is the set of conjugacy classes of n-tuples of r× r invertible complex matrices whose product is the identity matrix, or equivalently the set of conjugacy classes of r-dimensional representations of π_1(X).[Here the index 0 in the notation Y(0,n,r) indicates that this is the genus 0 version of a more general problem, which we will encounter later in the notes.]
Given conjugacy classes C_1, ⋯, C_n⊂GL_r(ℂ) we write C for the tuple (C_1, ⋯, C_n), and set
Y(C)={(A_1, ⋯, A_n) | ∏_i=1^n A_i=id and A_i∈ C_i for all i}/simultaneous conjugation
That is, Y(C)⊂ Y(0,n,r) is the set of (simultaneous) conjugacy classes of n-tuples of matrices (A_1, ⋯, A_n) satisfying the equation (<ref>),
subject to the constraint that A_i lies in C_i for each i, or equivalently the set of conjugacy classes of representations
ρ: π_1(X)→GL_r(ℂ)
such that ρ(γ_i)∈ C_i for each i.
We denote by Y(0,n,r)^irr, (resp. Y(C)^irr) the subsets of Y(0,n,r) (resp. Y(C)) corresponding to irreducible representations of π_1(X). Note that these sets are naturally topological spaces (and indeed, they have a lot more structure, as we will see later). For example, Y(C) inherits a natural (quotient) topology: it is a quotient of the subset of C_1×⋯× C_n consisting of tuples satisfying (<ref>).
Three questions immediately present themselves:
* (Existence) For which C is Y(C)^irr non-empty? This question is a form of the Deligne-Simpson problem, surveyed nicely in <cit.>, and has a number of variants; for example, one might ask that the A_i all lie in the unitary group, or in some other subgroup of GL_r(ℂ).
* (Uniqueness) For which C is Y(C)^irr a singleton? That is, when is a solution to (<ref>) determined uniquely (up to simultaneous conjugation) by the conjugacy classes C_i of the matrices A_i? For reasons that will soon become clear, the question of classifying tuples C such that Y(C) is a singleton is typically referred to as the classification of rigid local systems, and was studied by Katz in his book of the same name, <cit.>.
* (Geometry and dynamics) When Y(C) is not a singleton, what does it look like? In particular, Y(C) has a huge group of symmetries: given one solution (A_1, ⋯, A_n) to (<ref>), one may produce others via the operations
σ_i: (A_1, ⋯, A_n)↦ (A_1, ⋯, A_iA_i+1A_i^-1, A_i, ⋯, A_n),
where i ranges from 1 to n-1. The group B_n:=⟨σ_1, ⋯, σ_n-1⟩ acts on Y(0,n,r), permuting the C_i, hence an index n! subgroup preserves each Y(C). The study of the dynamics of this action goes back to work of Markoff <cit.> and Fricke and Klein <cit.> in the 19th century; we will be interested in the most basic questions about this action, e.g. classifying finite orbits, invariant subvarieties, and so on.
Note that the question of classifying finite orbits generalizes (2) above: if Y(C)^irr is a singleton, it necessarily has finite orbit under the action of the group B_n. (A short computational illustration of this action is given just after this list.)
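The following short numpy sketch (illustrative only; positions are 0-indexed in the code, and the random matrices are sample data) verifies numerically that the operations σ_i preserve the relation A_1⋯ A_n=id and satisfy the braid relation σ_iσ_i+1σ_i=σ_i+1σ_iσ_i+1 on tuples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 2                                       # n-tuples of r x r matrices

def sigma(i, A):                                  # sigma_i acts on positions i, i+1 (0-indexed)
    A = list(A)
    A[i], A[i + 1] = A[i] @ A[i + 1] @ np.linalg.inv(A[i]), A[i]
    return A

def prod(A):
    P = np.eye(r)
    for M in A:
        P = P @ M
    return P

A = [rng.normal(size=(r, r)) for _ in range(n - 1)]
A.append(np.linalg.inv(prod(A)))                  # force A_1 ... A_n = id

assert np.allclose(prod(sigma(0, A)), np.eye(r))  # the product is preserved
lhs = sigma(0, sigma(1, sigma(0, A)))
rhs = sigma(1, sigma(0, sigma(1, A)))
assert all(np.allclose(X, Y) for X, Y in zip(lhs, rhs))   # braid relation holds on tuples
```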
The existence question (1) and uniqueness question (2) above are reasonably well-understood, as we shortly explain. Question (3) has a long history and we are quite far from any kind of answer, even in the case of 2× 2 matrices. Our purpose in this section is to summarize some recent progress on this very special case.
§.§ Existence and uniqueness
We will not say much about the Deligne-Simpson problem—i.e. the problem of determining whether or not Y(C)^irr is non-empty—except to note that what we know about it (in particular, a complete solution in case the conjugacy classes satisfy a suitable genericity condition), due to work of Simpson <cit.>, Kostov <cit.>, Crawley-Boevey <cit.>, and others, is closely tied to algebraic geometry. For example, the existence of points of Y(C) corresponding to irreducible unitary representations of π_1(X) is equivalent to the existence of stable parabolic bundles on ℂℙ^1 with prescribed local data, and this point of view leads to a complete solution in this case, in terms of the quantum Schubert calculus of the Grassmannian; the case of rank 2 was worked out by Biswas <cit.>, and the general case by Agnihotri-Woodward <cit.> and Belkale <cit.>.
Regarding uniqueness, we briefly summarize Katz's classification <cit.> of tuples of conjugacy classes C for which Y(C)^irr is a singleton, as his method will have some relevance later. We say a tuple of conjugacy classes (C_1, ⋯, C_n)⊂GL_r(ℂ)^n is rigid if Y(C)^irr is a singleton; we will also refer to the corresponding conjugacy class of irreducible representations
ρ: π_1(X)→GL_r(ℂ)
with ρ(γ_i)∈ C_i as a rigid representation, and the corresponding local system on X as a rigid local system.
For each λ∈ℂ^×∖{1}, Katz produces a functor
MC_λ: Rep(π_1(X))→Rep(π_1(X))
with the following properties:
* If ρ is a rigid irreducible π_1(X)-representation, MC_λ(ρ) is rigid and irreducible.
* The functors MC_λ, MC_λ^-1 are quasi-inverse.
* If ρ is a rigid irreducible π_1(X)-representation of rank at least 2, there exists a rank one representation
χ: π_1(X)→ℂ^×
and a scalar λ∈ℂ^×∖{1} such that
rkMC_λ(ρ⊗χ)< rkρ.
If a representation ρ corresponds to a tuple of matrices (A_1, ⋯, A_n), we will also write MC_λ(A_1, ⋯, A_n) for a tuple of matrices corresponding to MC_λ(ρ). Note that MC_λ does not in general preserve the dimension of a representation.
The upshot of this construction is as follows: given a rigid irreducible π_1(X)-representation ρ, one may find a sequence of rank one representations χ_i: π_1(X)→ℂ^× and λ_i∈ℂ^×∖{1}, i=1,⋯, such that, setting ρ_0=ρ and ρ_i=MC_λ_i(ρ_i-1⊗χ_i), we have
1≤rk(ρ_i)<rk(ρ_i-1)
and hence eventually rk(ρ_t)=1 for some t. As the operations MC_λ and -⊗χ are invertible (and rank one representations are necessarily rigid, as a 1× 1 matrix is determined by its conjugacy class), this tells us that every rigid irreducible is “generated" by rank one representations under these operations.
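To see the shape of the algorithm in the smallest nontrivial case (we only sketch this; it is not needed in what follows): take n=3 and ρ irreducible of rank 2, which, as noted later in this section, is automatically rigid. By property (3), after twisting by a suitable rank one χ some MC_λ strictly drops the rank, and since the rank of a nonzero representation is at least 1, we must have
rk MC_λ(ρ⊗χ)=1.
Inverting the two operations exhibits ρ, i.e. a hypergeometric monodromy representation, as a twist of a middle convolution of a rank one representation.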
It turns out that the conjugacy class of MC_λ(ρ)(γ_i) depends only on λ and the conjugacy class of ρ(γ_i). Thus given a tuple of conjugacy classes C_1, ⋯, C_n ⊂SL_r(ℂ), there exists another (explicit) tuple C_1', ⋯, C_n'⊂SL_r'(ℂ) such that MC_λ induces a map
Y(C)→ Y(C').
Katz's description of the functor MC_λ is algebro-geometric in nature; we explain a variant in <ref>. There are now a number of different expositions of Katz's classification from various more algebraic points of view, notably <cit.> and <cit.>, which make the middle convolution operation completely explicit and computable.
§.§ Dynamics
We now turn to the dynamics of the ⟨σ_1, ⋯,σ_n-1⟩ action on Y(0, n, r). How does this dynamics arise? A hint can be found in the observation that the two tuples
σ_iσ_i+1σ_i· (A_1, ⋯, A_n) and σ_i+1σ_iσ_i+1· (A_1, ⋯, A_n)
are conjugate for 1≤ i≤ n-1, and σ_i, σ_j commute for |i-j|≥ 2. That is, the action of ⟨σ_1, ⋯, σ_n-1⟩ on Y(0,n, r) factors through the quotient
⟨σ_1, ⋯, σ_n-1|σ_iσ_i+1σ_i=σ_i+1σ_iσ_i+1 and σ_iσ_j=σ_jσ_i for |i-j|≥ 2⟩,
which is the usual Artin presentation of the braid group, which is (up to quotienting by the center) the mapping class group
Mod_0,n:=π_0(Homeo^+(ℂℙ^1∖{x_1, ⋯, x_n}))
of orientation-preserving self-homeomorphisms of ℂℙ^1∖{x_1, ⋯, x_n}, up to isotopy. This latter group has an obvious outer action on π_1(ℂℙ^1∖{x_1, ⋯, x_n}),[Here Mod_0,n only has an outer action, as opposed to an honest action, as a self-homeomorphism of ℂℙ^1∖{x_1, ⋯, x_n} will typically not preserve a basepoint.] induced by the action of Homeo^+(ℂℙ^1∖{x_1, ⋯, x_n}) on ℂℙ^1∖{x_1, ⋯, x_n} and hence acts on conjugacy classes of representations of π_1(ℂℙ^1∖{x_1, ⋯, x_n}), i.e. points of Y(0,n,r).
There is an evident surjection
Mod_0,n→ S_n
given by sending σ_i to the transposition (i, i+1) (or equivalently induced by the action of Homeo^+(ℂℙ^1∖{x_1, ⋯, x_n}) on {x_1, ⋯, x_n}). Set PMod_0,n to be the kernel of this surjection—the pure mapping class group. It turns out that PMod_0,n is the fundamental group of the moduli space ℳ_0,n of genus zero curves with n marked points, and the outer action
PMod_0,n→Out(π_1(ℂℙ^1∖{x_1, ⋯, x_n}))
is induced by the short exact sequence (a special case of the Birman exact sequence)
1→π_1(ℂℙ^1∖{x_1, ⋯, x_n})→π_1(ℳ_0,n+1)→π_1(ℳ_0,n)→ 1
corresponding to the fibration
ℳ_0,n+1→ℳ_0,n
given by forgetting the n+1-st marked point. So in particular the analysis of the outer action of PMod_0,n on π_1(ℂℙ^1∖{x_1, ⋯, x_n}) fits into the paradigm with which we started these notes in <ref>, namely that of analyzing the geometry of an algebraic map in terms of the induced structure of the map of fundamental groups. Note that PMod_0,n acts on each Y(C).
We begin with the most basic possible question about this action: what are the finite orbits of the action of Mod_0,n (equivalently, of PMod_0,n), on Y(0,n,r)?
§.§.§ The cases n=0,1,2,3
For n=0,1, there is almost nothing to say—the fundamental group of ℂℙ^1∖{x_1, ⋯, x_n} is trivial. For n=2 the fundamental group is ℤ, and Mod_0,2 is finite. For n=3 the fundamental group of ℂℙ^1∖{x_1, ⋯, x_3} is the free group on two generators, and hence has many interesting representations. But in this case Mod_0,3 is still finite—isomorphic to the symmetric group S_3—and so all orbits are finite. (One may see this geometrically—the group PMod_0,3 is the fundamental group of ℳ_0,3, which is a point.)
In fact, in this last case all irreducible 2-dimensional representations are rigid in the sense of <ref>; while the dynamics are not interesting, this is the source of the (extremely rich) theory of hypergeometric functions, and the corresponding representations of π_1(ℂℙ^1∖{x_1, x_2, x_3}) are precisely given by the monodromy of the hypergeometric functions _2F_1(a,b,c | z) (see e.g. <cit.>).
§.§.§ The case n=4
So the first interesting case of our dynamical question—that of classifying finite Mod_0,n-orbits on Y(0,n,r)—is the case when n=4, and this case is very interesting indeed. The dynamics of this situation were originally studied by Markoff in the 19th century <cit.> in a different guise.
Markoff studied integer solutions to the cubic equation
x^2+y^2+z^2-3xyz=0.
Given one solution (x,y,z) to this equation (for example (x,y,z)=(1,1,1)) one may produce more by “Vieta jumping": fixing y and z, (<ref>) becomes quadratic in x, and hence admits another solution with the same y, z coordinates, namely
T_x(x,y,z)=(3yz-x, y,z).
Analogously we define
T_y(x,y,z)=(x, 3xz-y, z), T_z(x,y,z)=(x, y, 3xy-z);
the group ⟨ T_x, T_y, T_z⟩ acts on the affine cubic surface S defined by (<ref>). It turns out that the complex points of the surface S parametrize semisimple representations in Y(C), where
C_1=C_2=C_3=[[ 0 -1; 1 0 ]], C_4=[[ 1 1; 0 1 ]].
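For concreteness, here is how Vieta jumping generates the first few Markoff triples starting from (1,1,1) (a standard computation, recorded only as an illustration):
T_x(1,1,1)=(2,1,1), T_y(2,1,1)=(2,5,1), T_z(2,5,1)=(2,5,29),
and indeed 2^2+5^2+29^2=4+25+841=870=3· 2· 5· 29.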
More generally, if C_1,⋯, C_4⊂SL_2(ℂ), one may parametrize points of Y(C) corresponding to semisimple representations of π_1(ℂℙ^1∖{x_1,⋯, x_4}) by the complex points of
X_A,B,C,D={(x,y,z) | x^2+y^2+z^2+xyz=Ax+By+Cz+D}
where
A=tr(C_1)tr(C_2)+tr(C_3)tr(C_4),
B=tr(C_2)tr(C_3)+tr(C_1)tr(C_4),
C=tr(C_1)tr(C_3)+tr(C_2)tr(C_4), and
D=4-tr(C_1)^2-tr(C_2)^2-tr(C_3)^2-tr(C_4)^2-∏_i=1^4 tr(C_i).
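As a quick consistency check (a short computation, not carried out above): in the Markoff example displayed earlier we have tr(C_1)=tr(C_2)=tr(C_3)=0 and tr(C_4)=2, so
A=B=C=0, D=4-0-0-0-4-0=0,
and X_{0,0,0,0} is the cubic x^2+y^2+z^2+xyz=0; substituting (x,y,z)=(-3x',-3y',-3z') and dividing by 9 recovers Markoff's equation (<ref>).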
One may similarly define an action of the group ℤ/2ℤ*ℤ/2ℤ*ℤ/2ℤ on X_A,B,C,D by Vieta jumping; it turns out that, up to finite index, this is the same as the natural action of Mod_0,4 on Y(C) discussed above <cit.>.
Markoff was not interested in finite orbits—rather, he showed that the integral solutions to (<ref>) form a single orbit under these dynamics. We will return to questions about integral points later in these notes, in <ref>; instead we now turn to the origin of our question about finite orbits.
§.§.§ n=4 and the Painlevé VI equation
The Painlevé VI equation PVI(α, β,γ, δ), discovered by Richard Fuchs <cit.>, is given by:
d^2y/dt^2 = (1/2)(1/y + 1/(y-1) + 1/(y-t))(dy/dt)^2
- (1/t + 1/(t-1) + 1/(y-t)) dy/dt
+ (y(y-1)(y-t)/(t^2(t-1)^2))(α + β t/y^2 + γ (t-1)/(y-1)^2 + δ t(t-1)/(y-t)^2)
where y is a function of t and α, β, γ, δ are complex numbers.
Painlevé famously claimed, in his Stockholm lectures <cit.>, that solutions to this equation are given by “new transcendents": that is, functions that could not be expressed in terms of classical functions. While this is true for generic values of the parameters (α, β, γ, δ), Painlevé's argument was not rigorous by modern standards, and correct proofs, largely due to Umemura and his school (see e.g. <cit.>), rely on a classification of the (rare) algebraic solutions (i.e. algebraic functions satisfying (<ref>)) that do exist.
We shall see later in these notes (in <ref>) that each algebraic solution corresponds to a certain finite orbit of Mod_0,4 on Y(0, 4, 2), and conversely, each finite orbit gives rise to a (countable) family of algebraic solutions to (<ref>).
Algebraic solutions to the Painlevé VI equations (equivalently, finite orbits of Mod_0,4 on Y(0,4,2)) have been classified by Lisovyy and Tykhyy <cit.>. Their classification builds on work of many people, including Andreev-Kitaev <cit.>, Boalch <cit.>, Doran <cit.>, Dubrovin-Mazzocco <cit.>, Hitchin <cit.>, and Kitaev <cit.>, and relies on an effective form of the Manin-Mumford conjecture for tori (originally due to Lang <cit.>) and a somewhat involved computer computation. Up to a slightly complicated equivalence relation, which we will not summarize here, there are:
* four continuous families of finite orbits,
* one countably infinite (discrete) family of finite orbits, of unbounded size, and
* forty-five exceptional finite orbits.
The countably infinite family of orbits mentioned above have representatives given by
A_1= [ 1+x_2x_3/x_1 -x_2^2/x_1; x_3^2/x_1 1-x_2x_3/x_1 ],
A_2=[ 1 -x_1; 0 1 ],
A_3=[ 1 0; x_1 1 ],
A_4=(A_1A_2A_3)^-1
where
x_1=2cos(π(α+β)/2), x_2=2 sin(πα/2), x_3= 2 sin(πβ/2)
for α, β∈ℚ. See <cit.> for a discussion of this example, and the rest of that paper for an involved analysis of some related arithmetic questions. See also <ref> for a brief further discussion of this example.
The upshot is that, at least at first glance, the classification is somewhat complicated!
§.§.§ The case n>4
Not much was known about finite orbits of Mod_0,n on Y(0, n, 2) when n>4, with the exception of a very interesting paper of Tykhyy <cit.> which gives a computer-aided classification when n=5 (though I have not yet understood the extent to which there is a rigorous proof that this classification is complete). Aside from this there are a few sporadic constructions of finite orbits, e.g. <cit.>. As before this question can be understood as asking for a classification of algebraic solutions to a certain non-linear ODE.
In general finite orbits of the action of Mod_0,n on Y(0,n,r) correspond to algebraic solutions to the system
∂ B_i/∂λ_j=[B_i, B_j]/(λ_i-λ_j) if i≠ j
∑_i ∂ B_i/∂λ_j=0
where the B_i are 𝔤𝔩_r(ℂ)-valued functions (see <ref>).
In this language the question was clearly of interest to Painlevé, Schlesinger, Gambier, Garnier, etc. (see for example Garnier's 1912 paper <cit.>, where Garnier writes down, among other things, many classical solutions to (<ref>)), but to my knowledge the idea that a complete classification might be possible first appeared in print in work of Dubrovin-Mazzocco <cit.>.
It turns out that one can give a quite clean classification of the finite orbits of the PMod_0,n-action on Y(C), when at least one of the conjugacy classes C_i of C has infinite order, using algebro-geometric techniques, as we now explain. We make the following convenient definition:
Say that a representation
ρ: π_1(ℂℙ^1∖{x_1, ⋯, x_n})→SL_2(ℂ)
(equivalently, an n-tuple of matrices A_1, ⋯, A_n∈SL_2(ℂ) such that ∏_i A_i=id) is interesting if
* Its image is Zariski-dense, i.e. it has infinite image and the image can't be conjugated into one of the subgroups
[ * *; 0 * ] or [ * 0; 0 * ]∪[ 0 *; * 0 ],
* No A_i is a scalar matrix, and
* The point of Y(C) corresponding to ρ has finite orbit under the action of PMod_0,n, and it is isolated as a finite orbit of PMod_0,n. That is, if Γ⊂PMod_0,n is the stabilizer of [ρ]∈ Y(C), [ρ] is an isolated point of Y(C)^Γ, with the natural topology inherited from Y(C) via its presentation as a quotient of C_1×⋯× C_n.
We say a local system on ℂℙ^1∖{x_1, ⋯, x_n} is interesting if its monodromy representation is interesting.
The careful reader may have noticed that we are now considering SL_2(ℂ)-representations rather than GL_2(ℂ)-representations. There is no loss of generality here: given (A_1, ⋯, A_n)∈ Y(0,n,2) with finite Mod_0,n-orbit, one may always scale the matrices by appropriately chosen scalars so that they lie in SL_2(ℂ), while preserving the property that the tuple have finite Mod_0,n-orbit.
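To spell out the rescaling (an elementary observation, included for completeness): choose scalars μ_i with μ_i^2det(A_i)=1. Since ∏_idet(A_i)=det(id)=1, we get (∏_iμ_i)^2=1, and after replacing one μ_i by -μ_i if necessary we may assume ∏_iμ_i=1, so that
∏_i(μ_iA_i)=(∏_iμ_i)·∏_iA_i=id
and each μ_iA_i lies in SL_2(ℂ).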
The point of the conditions of <ref> is that representations not satisfying these conditions have been classified classically. Let us say a word about each of the conditions before proceeding to our partial classification of interesting representations.
* The non-Zariski-dense subgroups of SL_2(ℂ) are either finite, can be conjugated into the Borel subgroup
[ * *; 0 * ],
or can be conjugated into the (infinite) dihedral group
[ * 0; 0 * ]∪[ 0 *; * 0 ].
The conjugacy class of a representation
ρ: π_1(ℂℙ^1∖{x_1, ⋯, x_n})→SL_2(ℂ)
with finite image always has finite orbit under Mod_0,n; the finite subgroups of SL_2(ℂ) were essentially classified by Euclid (in his classification of Platonic solids), and explicitly classified by Schwarz <cit.>.
The representations that can be conjugated into the Borel subgroup are precisely the representations which are not irreducible. The finite orbits of such representations were classified by Cousin-Moussard <cit.>, with an essentially (but perhaps not obviously) equivalent classification found by McMullen <cit.>.
Finally, the representations that can be conjugated into the infinite dihedral group were classified by Tykhyy <cit.>; alternately one can deduce this classification from the main result of <cit.>.
* The assumption that no A_i is a scalar matrix is harmless, as we now explain. A scalar matrix in SL_2(ℂ) is ±id. If A_i=id, we may remove it from our tuple of matrices, replacing n with n-1. If A_i=-id, let j≠ i be such that A_j is non-scalar, and consider the new tuple of matrices A_k' where A_k'=A_k for k≠ i, j, A_i'=-A_i, A_j'=-A_j. This new tuple has A_i'=id, and hence we may remove it as before. So it is easy to satisfy this assumption by multiplying by scalars and removing some of our matrices.
* It is much less obvious that the condition of <ref>(3) is at all relevant to the problem. That said, results of Corlette-Simpson <cit.> show that Zariski-dense representations ρ with finite PMod_0,n-orbit, not satisfying this condition, have a very special form: they are so-called “pullback solutions" studied by Doran <cit.>, Kitaev <cit.> and others and classified by Diarra <cit.>. We will discuss these to some extent in <ref>.
By the discussion above, the only finite orbits that remain to be classified are the interesting ones. We can almost do so.
Suppose a conjugacy class of tuples of matrices (A_1, ⋯, A_n)∈ Y(0, n,2) is interesting. If some A_i has infinite order, then there exists λ∈ℂ^×∖{1} and α_1, ⋯, α_n∈ℂ^× such that
(α_1A_1, ⋯, α_n A_n)=MC_λ(B_1, ⋯, B_n),
where each B_i∈ GL_n-2(ℂ), the group ⟨ B_1, ⋯, B_n ⟩ is an irreducible finite complex reflection group, and B_1, ⋯, B_n-1 are pseudoreflections.
Here MC_λ is the middle convolution operation introduced by Katz in <cit.> and discussed earlier in <ref>.
Note that if n>4, the B_i are not 2× 2 matrices—they are (n-2)×(n-2) matrices. So we have classified finite Mod_0,n-orbits on Y(0,n,2) in terms of certain finite subgroups of GL_n-2(ℂ). In what sense is this actually a classification? The point is that finite complex reflection groups were classified by Shephard and Todd <cit.> in 1954. We briefly recall their definition and classification; see <cit.> for a modern exposition.
A matrix A∈GL_r(ℂ) is a pseudoreflection if it has finite order and rk(A-id)=1. A subgroup G⊂GL_r(ℂ) is a finite complex reflection group if G is finite and generated by pseudoreflections. A finite complex reflection group G⊂GL_r(ℂ) is irreducible if the corresponding rank r representation of G is irreducible.
There is one infinite class of finite complex reflection groups, denoted G(m,p,n)⊂ GL_n(ℂ), where p divides m. The group G(m, 1, n) consists of all n× n matrices with exactly one non-zero entry in each row and column, where that non-zero entry is an m-th root of unity. The group G(m, p, n)⊂ G(m, 1,n) is the subgroup consisting of matrices whose non-zero entries multiply to an m/p-th root of unity.
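Some familiar examples, recorded here only for orientation (these are standard facts): G(1,1,n) is the symmetric group S_n acting by permutation matrices; G(2,1,n) is the group of signed permutation matrices, i.e. the Weyl group of type B_n; and G(m,m,2)⊂GL_2(ℂ) is a dihedral group of order 2m, generated for instance by
[ 0 1; 1 0 ] and [ 0 ζ; ζ^{-1} 0 ]
with ζ a primitive m-th root of unity.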
There are 34 exceptional irreducible finite complex reflection groups not conjugate to one of the G(m, p, n), including the Weyl groups W(E_6), W(E_7), W(E_8), the Valentiner group, the group PSL_2(𝔽_7) with its natural 3-dimensional representation, the automorphism group of the icosahedron, and so on.
In other words, the interesting finite orbits of Mod_0,n on Y(0, n, 2) are classified in terms of the symmetry groups of some interesting “complex polytopes.” It is worth noting that <ref> recovers and generalizes the classification of Dubrovin-Mazzocco <cit.> when n=4; this is perhaps surprising given that their motivation comes from a completely different direction (namely, as I understand it, the theory of Frobenius manifolds).
The five matrices
A_1=[ -1 1; 0 -1 ], A_2= [ -1 0; -1 -1 ], A_3 = [ -1-√(5)/2 1; -3+√(5)/2 -3+√(5)/2 ], A_4= [ 1-√(5)/2 3-√(5)/2; -3+√(5)/2 -5+√(5)/2 ]
A_5=(A_1A_2A_3A_4)^-1
give rise to a finite Mod_0,5-orbit on Y(0,5,2) <cit.>. Under the correspondence of <ref>, they are related by middle convolution to the automorphism group of the icosahedron, viewed as a subgroup of GL_3(ℂ) (namely W(H_3), in Shephard-Todd's notation).
The only condition that prevents <ref> from being a complete classification is the requirement that some A_i has infinite order. As we will soon see, this condition is algebro-geometrically natural (see <ref>), but it would be of great interest to find a classification without it.
Here is a very concrete corollary, which we only know how to prove using <ref> and the Shephard-Todd classification:
Suppose (A_1, ⋯, A_n) is an interesting tuple, and some A_i has infinite order. Then n≤ 6.
The corollary above is sharp; there exist examples of tuples with n=6 (see e.g. <cit.> or <cit.>).
§.§ An algebro-geometric interpretation
As we prepare to explain the idea of the proof of <ref>, let us consider an a priori slightly different question to that of classifying finite Mod_0,n-orbits on Y(0,n,r).
Let X be a smooth complex algebraic variety. A complex local system 𝕍 on X is of geometric origin if there exists a dense open subset U⊂ X, a smooth proper morphism π: Y→ U, and an integer i≥ 0 such that 𝕍 is a summand of R^iπ_*ℂ.[Here R^iπ_*ℂ is the local system on U whose fiber at x∈ U is H^i(Y_x, ℂ).]
One interpretation of our fundamental question from the introduction—how does the geometry of a variety X influence the structure of its fundamental group—is: can one classify local systems of geometric origin on X? As we will see in <ref>, this is in some sense the non-abelian analogue of the question of understanding algebraic cycles on X, and it has conjectural answers analogous to the Hodge conjecture, the Tate conjecture, and so on.
Let us return to our very special case:
Let x_1, ⋯, x_n⊂ℂℙ^1 be n generic points. Can one classify local systems of rank 2 on ℂℙ^1∖{x_1, ⋯, x_n} that are of geometric origin?
It turns out that this question is the same as classifying finite Mod_0,n-orbits on Y(0,n,2), as we now explain.
The following is immediate from the proof of <cit.> (note that the condition that g≥ 1 is unnecessary in our setting, as we are working with SL_2(ℂ)-representations):
A representation ρ: π_1(ℂℙ^1∖{x_1, ⋯, x_n})→GL_r(ℂ) has conjugacy class with finite Mod_0,n-orbit if and only if there exists
* a family of n-punctured curves of genus 0, π: 𝒳→ℳ, with the induced map ℳ→ℳ_0,n dominant, and
* a local system 𝕍 on 𝒳 whose restriction to a fiber of π has monodromy conjugate to ρ.
Thus classifying finite Mod_0,n-orbits on Y(0,n,r) is the same as classifying local systems on families of curves 𝒳 as in <ref>. Now a result of Corlette-Simpson <cit.> and Loray-Pereira-Touzet <cit.> (see also <ref> in these notes) tells us that Zariski-dense SL_2-local systems on smooth quasi-projective varieties come in two (not necessarily disjoint) flavors:
* rigid[Here the notion of rigidity is a generalization of what was discussed earlier, in section <ref>. Namely, these are local systems on 𝒳 with no non-trivial deformations. We will discuss these later, in <ref>. These local systems correspond in our setting to those satisfying <ref>(3).] local systems, which are of geometric origin, and
* local systems pulled back from Deligne-Mumford curves (equivalently, the corresponding PGL_2-local system is pulled back from an orbifold curve).
It turns out that the local systems on 𝒳 not of geometric origin can be classified by elementary means, as we now explain. So (after the next section) only local systems of geometric origin will remain to be classified.
§.§.§ Pullback families
Let us first turn to the local systems 𝕍 of type (2) above, i.e. those pulled back from curves. We will refer to these as being of pullback type; we learned this construction from the paper <cit.>. In this case, there exists an orbifold curve C, a projective local system 𝕎 on C, and a map f:𝒳→ C so that the restriction of f^*𝕎 to a fiber of π is isomorphic to ℙ𝕍. In particular, this monodromy is non-trivial, so C is dominated by any fiber of π, and hence has genus zero.
Put another way, there exists a divisor D⊂ℂℙ^1 and a projective local system 𝕎 with Zariski-dense monodromy on ℂℙ^1∖ D, such that for general {x_1, ⋯, x_n}⊂ℂℙ^1, there exists g: ℂℙ^1→ℂℙ^1 so that g^*𝕎 is unramified away from {x_1, ⋯, x_n}. We must have g({x_1, ⋯, x_n})⊂ D. Moreover g must be ramified over at least n-3 points away from D, by counting parameters (i.e., the dimension of ℳ_0,n is n-3), and in particular, assuming n>3, we have d=deg(g)>1.
That g^*𝕎 is unramified away from {x_1, ⋯, x_n} tells us that for y∈ D, any point of g^-1(y) must be either among the x_i, or must be ramified (of order divisible by the order of the local monodromy of 𝕎 about y). Now Riemann-Hurwitz gives
2≤ 2d-n+3-(d·#D-n)/2=2d-n/2-(d·#D)/2+3
and hence (as n≥ 4),
1≤ d(2-#D/2).
So #D=3, and we may without loss of generality assume D={0,1,∞}.
In summary, classifying Zariski-dense local systems of pullback type on ℂℙ^1∖{x_1, ⋯, x_n} is the same as classifying projective local systems 𝕎 on ℂℙ^1∖{0,1,∞}, and covers g:ℂℙ^1→ℂℙ^1 branched over {0,1,∞} and n-3 auxiliary points, so that at most n points x with g(x)∈{0,1,∞} have ramification order not divisible by the local monodromy of 𝕎. This was done by Diarra <cit.>, again by a Riemann-Hurwitz computation.
§.§.§ Local systems of geometric origin
Having handled local systems of pullback type, all that remains in our classification of interesting finite Mod_0,n-orbits on Y(0,n,2) is to classify those local systems on 𝒳 as in <ref> of geometric origin. Before discussing this classification, let's observe that this is really the same as <ref>. Indeed, given a family π: 𝒳→ℳ as in <ref> and a local system of geometric origin on 𝒳, the restriction to a general fiber will give a local system of geometric origin on ℂℙ^1∖{x_1, ⋯, x_n} with the x_i general. Conversely, by “spreading out",[That is, if our local system appears in the cohomology of a family 𝒴→ℂℙ^1∖{x_1, ⋯, x_n}, with the x_i general, then 𝒴 extends over a family of n-punctured curves of genus 0 whose base dominates ℳ_0,n.] a local system of geometric origin on ℂℙ^1∖{x_1, ⋯, x_n} (with the x_i general) can be extended to a family as in <ref>.
Here is one form of the classification:
Let x_1, ⋯, x_n∈ℂℙ^1 be general points. Then any non-isotrivial rank two local system 𝕍 on ℂℙ^1∖{x_1, ⋯, x_n} of geometric origin, with infinite monodromy about one of the x_i and non-scalar monodromy at each x_i, has the form
𝕍=MC_λ(𝕎)⊗𝕃,
where
* λ∈ℂ^×∖{1},
* 𝕎 has rank n-2 and monodromy given by a finite complex reflection group,
* and 𝕃 is a rank one local system of finite order on ℂℙ^1∖{x_1, ⋯, x_n}.
In fact Corlette-Simpson show <cit.> that any non-isotrivial rank 2 local system of geometric origin on a smooth variety X arises in the cohomology of an abelian scheme of GL_2-type over X. In this way <ref> classifies abelian schemes of GL_2-type over ℂℙ^1∖{x_1, ⋯, x_n}, with the x_i generic, which do not have potentially good reduction everywhere. This last condition is a geometric interpretation of the condition that the local monodromy about some x_i have infinite order, and (at least in my view) makes this condition more natural.
§.§.§ Middle convolution
For completeness (and before sketching the proof), we give a definition and discussion of middle convolution. The non-algebro-geometrically inclined reader may wish to skip this section.
Let {x_1, ⋯, x_n}⊂ℂℙ^1 be a finite set of points, with ∞ among the x_i, and set X=ℂℙ^1∖{x_1, ⋯, x_n}. Consider the diagram
X× X∖Δ ↪^j ℂℙ^1× X
π_1: X× X∖Δ→ X,   α: X× X∖Δ→ℂ^×,   π_2: ℂℙ^1× X→ X
where Δ is the diagonal, j is the natural inclusion, π_1, π_2 are projections onto the first and second factor respectively, and α is the map (a,b)↦ a-b. For λ a non-zero complex number, let χ_λ be the rank one local system on ℂ^× with local monodromy about 0 given by λ. Then given a local system 𝕍 on X, we define
MC_λ(𝕍):=R^1π_2*j_*(π_1^*𝕍⊗α^*χ_λ).
This definition may appear intimidating at first glance, but it is in fact quite computable; see e.g. <cit.> for a beautiful explanation of how to perform these computations. In the case of interest, namely when 𝕍 has finite monodromy, it even has a fairly simple geometric interpretation. Indeed, if G is the monodromy group of 𝕍, and λ is a c-th root of unity, then
MC_λ(𝕍)
appears in the cohomology of a family of curves
𝒴→ℂℙ^1∖{x_1, ⋯, x_n},
where for s∈ℂℙ^1∖{x_1, ⋯, x_n}, the curve 𝒴_s is a G×ℤ/cℤ-cover of ℂℙ^1 branched at s, x_1, ⋯, x_n.
In fact one may be completely explicit; Amal Vayalinkal <cit.> has taught a computer how to perform the middle convolution and used it to explicitly enumerate the possible finite complex reflection groups that may appear in the classifications of <ref> and <ref>.
§.§.§ Completing the argument
We now briefly sketch the proof of <ref> and <ref>.
Suppose 𝕍 is an interesting local system on X=ℂℙ^1∖{x_1, ⋯, x_n}; in particular, by Corlette-Simpson <cit.> it is of geometric origin. We wish to find 𝕎 with finite monodromy, 𝕃 of rank one and finite order, and λ∈ℂ^× a root of unity, such that
𝕍=MC_λ(𝕎)⊗𝕃.
As MC_λ has (quasi)-inverse MC_λ^-1, it suffices to find λ^-1∈ℂ^× a root of unity and 𝕃 of rank one and finite order such that
𝕎:=MC_λ^-1(𝕍⊗𝕃^∨)
has finite monodromy.
Here 𝕍 is assumed to be of geometric origin, and the operations -⊗𝕃^∨ and MC_λ^-1 preserve this property. Thus whatever 𝕎 we produce is of geometric origin itself, and hence has all the attendant structures of local systems of geometric origin. Crucially for us, the local system 𝕎 necessarily:
* is defined over the ring of integers 𝒪_K of a number field K, and
* underlies a polarizable 𝒪_K-variation of Hodge structure.
By <cit.>, a local system 𝕌 which is defined over the ring of integers 𝒪_K of a number field K, and such that for each embedding 𝒪_K↪ℂ the associated complex local system 𝕌⊗_𝒪_Kℂ is unitary, necessarily has finite monodromy. Thus it suffices to choose λ^-1, 𝕃, so that 𝕎 is unitary under all such complex embeddings of 𝒪_K.
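Let us briefly recall why this criterion holds (a standard argument, sketched here for convenience): the entries of the monodromy matrices of 𝕌 lie in 𝒪_K, and unitarity under every complex embedding bounds all of their archimedean absolute values by 1. An algebraic integer of bounded degree all of whose conjugates are bounded has bounded integer coefficients in its minimal polynomial, so only finitely many such entries, and hence only finitely many such matrices, can occur; the monodromy group is therefore finite.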
The main idea here is to use the polarization on the Hodge structure carried by 𝕎 — if the Hodge filtration on 𝕎 has length 1, the polarization is necessarily definite, and hence 𝕎 is unitary. That a choice of λ^-1, 𝕃 with this property exists is something of a combinatorial miracle <cit.>, and depends on an analysis of the structure of the Hodge filtration on 𝕍 <cit.>, ultimately relying on the fact that the points x_i are generic.
Having chosen λ^-1, 𝕃 appropriately so that 𝕎 has finite monodromy, one checks the monodromy is in fact given by a finite complex reflection group just by observing that the local monodromy about each x_i≠∞ is a pseudoreflection, which is a local calculation.
§.§ Some questions
These results leave a number of questions unresolved; we briefly record two such questions here for the reader who will depart prematurely—there are many, many more such questions later in these notes.
Can one classify conjugacy classes of tuples of matrices (A_1, ⋯, A_n)∈ Y(0,n,2) with finite Mod_0,n-orbit, without the condition that some A_i have infinite order?
Can one say anything about finite Mod_0,n-orbits in Y(0,n,r) with r>2?
We will give a number of conjectural answers and partial results towards this latter question, and its generalizations, in the coming sections. It is worth noting that a few of the ideas here—in particular the intervention of some hidden rigidity, through the use of Corlette-Simpson's work <cit.>, and the properties of local systems of geometric origin—will appear again and again throughout these notes.
§ GEOMETRY AND DYNAMICS OF CHARACTER VARIETIES IN HIGHER GENUS
We now consider (Riemann) surfaces not necessarily of genus zero. Our goal in this setting will be to discuss analogues of the questions considered in <ref>. We will take this opportunity to introduce some of the players that will take center stage in the second half of these notes—in particular, we will explain the relationship between the questions we are studying here and the theory of ordinary differential equations.
§.§ A more general setup
Let Σ_g,n be an orientable (topological) surface of genus g, with n punctures, and let
Mod_g,n=π_0(Homeo^+(Σ_g,n))
be the component group of the group of orientation-preserving self-homeomorphisms of Σ_g,n. Again, Mod_g,n has an algebro-geometric interpretation if 2-2g-n<0: the index n! subgroup PMod_g,n preserving the punctures is the (orbifold) fundamental group of the moduli space ℳ_g,n of Riemann surfaces of genus g with n marked points. The group Mod_g,0=π_1(ℳ_g) has a simple group-theoretic interpretation: it is of index 2 in Out(π_1(Σ_g)) (in particular, it is the subgroup acting on H_1(Σ_g, ℤ) with determinant 1).
Generalizing our previous definitions, we set
Y(g, n, r)=Hom(π_1(Σ_g,n), GL_r(ℂ))/conjugation.
As π_1(Σ_g,n) has the standard presentation
π_1(Σ_g,n)=⟨ a_1, ⋯, a_g, b_1, ⋯, b_g, c_1, ⋯, c_n |∏_i=1^g [a_i, b_i]∏_j=1^n c_j⟩,
we may think of Y(g,n,r) as the set of (simultaneous conjugacy classes of) tuples of r× r matrices (A_1, ⋯, A_g, B_1, ⋯, B_g, C_1, ⋯, C_n) such that
∏_i=1^g [A_i, B_i]∏_j=1^n C_j=id.
Again the natural outer action of Mod_g,n on π_1(Σ_g,n) induces an action
Mod_g,n↷ Y(g,n,r).
The most basic question one can ask about this action (whose answer we are quite far from understanding in general, though we will discuss some conjectural answers in the next sections) is:
What are the finite orbits of this action?
One can again make the action of Mod_g,n on Y(g,n,r) completely explicit <cit.> but the formulas are perhaps less illuminating than those in the genus zero setting.
To our knowledge, prior to the work we explain in this section, there were two basic conjectural (partial) answers to <ref>.
[We will discuss their actual conjecture and some variants later in these notes, in <ref>.]
The finite orbits of the Mod_g,n-action on Y(g,n,r) are Zariski-dense in Y(g,n,r).[To make sense of this one should view Y(g,n,r) as having the topology induced by the Zariski topology on 2g+n-tuples of matrices satisfying (<ref>).]
For g≫ r, the finite Mod_g,n-orbits in Y(g,n,r) correspond exactly to those representations
π_1(Σ_g,n)→GL_r(ℂ)
with finite image.[Note that representations with finite image necessarily have finite Mod_g,n-orbit.]
These two conjectures are in tension with one another; the first should be viewed as saying there are many finite orbits, and the second as saying that there are not too many. And indeed, they contradict one another when r≥ 2. To see this, we use the following theorem of Jordan:
There exists a constant n(r)>0 such that for any finite subgroup G of GL_r(ℂ), G contains an abelian subgroup of index at most n(r).
In particular, if A, B∈GL_r(ℂ) generate a finite subgroup, then [A^{n(r)!}, B^{n(r)!}]=id. So in particular the polynomial
tr([ρ(a_1)^{n(r)!}, ρ(b_1)^{n(r)!}])-r
(with a_1, b_1 as in <ref>) vanishes on the locus of ρ with finite image; but it is not hard to see that it is not identically zero on all of Y(g, n,r).
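To see this non-vanishing concretely (one choice among many; this is only an illustration): write N=n(r)!, take ρ(a_1)=diag(λ, λ^{-1},1,⋯,1), and take ρ(b_1) block-diagonal with upper-left 2× 2 block a rotation by π/(2N), so that ρ(b_1)^N has upper-left block [ 0 1; -1 0 ], and with the identity elsewhere; the remaining generators can be chosen to satisfy (<ref>), e.g. by sending a_2↦ρ(b_1), b_2↦ρ(a_1) and all other generators to the identity when g≥ 2. A direct computation then gives
tr([ρ(a_1)^N, ρ(b_1)^N])=λ^{2N}+λ^{-2N}+(r-2),
which differs from r whenever λ^{2N}≠ 1.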
It turns out that <ref> is false and <ref> is true.
If g≥ r^2 and
ρ: π_1(Σ_g,n)→GL_r(ℂ)
is a representation whose conjugacy class has finite orbit under Mod_g,n, then ρ has finite image.
As we will see, the proof of <ref> relies on the full force of non-abelian Hodge theory and the Langlands program for function fields. The theorem was proven earlier in rank r=2 in the beautiful paper <cit.>, by completely different (and much more elementary and explicit) methods. It would be of great interest to find a proof of <ref> of a similarly explicit nature.
One may again, as in <ref>, give an algebro-geometric interpretation of the condition that a representation has finite Mod_g,n-orbit.
An irreducible representation ρ: π_1(Σ_g,n)→GL_r(ℂ) has conjugacy class with finite Mod_g,n-orbit if and only if there exists
* a family of n-punctured curves of genus g, π: 𝒳→ℳ, with the induced map ℳ→ℳ_g,n dominant, and
* a local system 𝕍 on 𝒳 whose restriction to a fiber of π has monodromy conjugate to ρ.
The following is immediate:
Let C be a very general Riemann surface of genus g≥ r^2 with n≥ 0 very general marked points {x_1, ⋯, x_n}. Then any local system of geometric origin on C∖{x_1, ⋯, x_n} of rank r has finite monodromy.
Indeed, any local system of geometric origin (defined in <ref>) on a very general n-times punctured curve would spread out over the total space of a family as in <ref>.
One may improve the bound in <ref> to g≥ r^2/4; see <cit.>. We do not know how to similarly improve <ref>.
It is not at all clear to us to what extent the bound in <ref> is sharp, but some bound is indeed necessary—for all g and n with 2-2g-n<0 there exist many fascinating representations of π_1(Σ_g,n) with finite Mod_g,n-orbit. It would be very interesting to find some kind of classification of these; we make some conjectures in this direction later in these notes, in <ref> and <ref>.
[The Kodaira-Parshin trick, see <cit.>, <cit.>, <cit.>, <cit.>]
By <ref>, one way to construct representations of π_1(Σ_g,n) with finite Mod_g,n-orbit is to construct local systems on the total space of a family of n-punctured curves of genus g, 𝒳→ℳ, so that the induced map ℳ→ℳ_g,n is dominant. There is a famous way to do so: the Kodaira-Parshin trick.
Let 𝒞_g,n be the universal curve over ℳ_g,n. Loosely speaking, the idea of the Kodaira-Parshin trick is to construct a family of curves q: 𝒴→𝒞_g,n whose fiber over a point [C, x_1, ⋯, x_n+1]∈𝒞_g,n is a disjoint union of G-covers of C branched over x_1, ⋯, x_n+1, where G is some finite group. The local system R^1q_*ℂ then provides a (typically very interesting) local system on 𝒞_g,n, and hence, by <ref>, a π_1(Σ_g,n)-representation with finite Mod_g,n-orbit.
There are other ways to construct such representations, using e.g. TQFT methods; see <cit.> for a discussion.
We will shortly sketch the proof of <ref>—but before doing so we will take the opportunity to introduce some of the ideas and themes that will motivate the rest of these notes. The fundamental idea is that Y(g,n,r) has several different avatars, whose structure is the non-abelian analogue of the structures on the cohomology of an algebraic variety (e.g. Hodge structures on Betti/de Rham cohomology, the Gauss-Manin connection on the cohomology of a family, the Galois action on ℓ-adic cohomology, and so on). As in <ref>, we will see that rigid local systems play a special role in the argument; this observation will also motivate some conjectures in <ref> and <ref>.
§.§ Isomonodromy
Before diving into the proof of <ref>, we will need yet another interpretation of the Mod_g,n-action on Y(g,n,r), closely related to our earlier discussion of the Painlevé VI equation (<ref>) and the Schlesinger system (<ref>). The crucial idea is to pass through the Riemann-Hilbert correspondence, between local systems and ordinary differential equations.
§.§.§ The Riemann-Hilbert correspondence
Let X be a complex manifold and D⊂ X a smooth divisor. A flat bundle (ℰ, ∇) on X with regular singularities along D is a holomorphic vector bundle ℰ on X, equipped with a ℂ-linear map ∇: ℰ→ℰ⊗Ω^1_X(log D) satisfying the Leibniz rule, i.e.
∇(f· s)=f∇(s)+s⊗ df
for U⊂ X an open subset, f∈𝒪_X(U), and s∈ℰ(U), and such that
∇∘∇: ℰ→ℰ⊗Ω^2_X(log D)
is identically zero.[ If one views ∇ as a rule for differentiating sections to ℰ along vector fields, the expression ∇∘∇=0 is a fancy way of saying that mixed partials commute.] (Here Ω^i_X(log D) is the sheaf of holomorphic i-forms on X with logarithmic poles along D, see e.g. <cit.>.)
The fiber of the sheaf Ω^1_X(log D) at a point x∈ D is canonically trivialized by the 1-form dz/z, where z is any local equation for D at x. A local computation shows that the composite map
ℰ∇⟶ℰ⊗Ω^1_X(log D)→ℰ⊗ (Ω^1_X(log D)/Ω^1_X)≃ℰ|_D
is 𝒪_X-linear and hence factors through ℰ|_D. We denote the fiber of the induced map ℰ|_D→ℰ|_D at x∈ D by Res_x(∇) and refer to it as the residue of the connection ∇.
We denote by MIC(X, D) the category of flat vector bundles on X with regular singularities along D (where morphisms are given by the evident commutative squares).[Here MIC stands for modules with integrable connection. Many of the notions discussed here can be generalized, e.g. to the case where D has normal crossings, but we choose not to do so for simplicity.]
There is an evident functor from MIC(X,D) to the category of local systems on X∖ D, given by
(ℰ, ∇)↦(∇|_X∖ D),
often referred to as the Riemann-Hilbert correspondence. If D=∅, this functor is an equivalence of categories. In general it is faithful and essentially surjective but not full. Indeed, given a local system on X∖ D there are always many ways to extend it to an object of MIC(X,D) if D is non-empty.
[Fuchsian ODEs]
To bring things down to earth, let's consider the case where X=ℂℙ^1 and D=x_1+⋯+ x_n, say with ∞ not among the x_i. Take ℰ=𝒪_X^r. Then a connection on ℰ with regular singularities along D has the form
∇=d-∑_i=1^n (A_i/(z-x_i)) dz
where the A_i∈𝔤𝔩_r(ℂ) are r× r matrices, and
∑_i=1^n A_i=0
(this last condition ensures there is no singularity at infinity). The condition that ∇ s=0 is precisely the linear ordinary differential equation
∂ s/∂ z=∑_i=1^n (A_i/(z-x_i)) s.
The matrix -A_i is the residue of ∇ at x_i.
To obtain a local system from this ODE, choose some basepoint x∉D, and consider a basis s_1, ⋯, s_r of local solutions to the ODE at x. Given a loop γ: S^1→ X∖ D based at x, one may analytically continue these solutions around the loop γ; the output only depends on the homotopy type of γ. This yields a new basis of solutions γ^*(s_1, ⋯, s_r). But we have just defined an action of homotopy classes of based loops in X∖ D on bases of local solutions to our ODE. Equivalently, this is a representation of π_1(X∖ D, x) on the space of local solutions to our ODE, or equivalently a local system of rank r whose fiber at x is the space of local solutions to our ODE in a small neighborhood of x.
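One standard fact worth recording here (only as orientation, and with a caveat): if the eigenvalues of A_i do not differ by nonzero integers, then the local monodromy of this local system around x_i is conjugate to
exp(± 2π√(-1)· A_i)
(the sign depending on orientation conventions); in particular, up to this non-resonance caveat, the conjugacy class of the residue at x_i determines the local monodromy there.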
§.§.§ Isomonodromic deformation
We now consider the following question: given an object (ℰ,∇) of MIC(X,D), what happens as we deform the complex structure on (X,D)? The topology of X∖ D does not change, and objects of MIC(X,D), loosely speaking, correspond to topological objects, namely local systems. So perhaps it is plausible that there is a canonical deformation of (ℰ,∇) along any deformation of (X,D). Indeed this is the case, as we now explain.
Let U be a contractible complex manifold and π: 𝒳→ U a family of compact Riemann surfaces (i.e. a proper holomorphic submersion of relative dimension one, with connected fibers). Let 𝒟⊂𝒳 be a divisor in 𝒳, finite étale over U. Let 0∈ U be a point and set (X,D)=(𝒳_0, 𝒟_0) to be the fiber over 0. Let (ℰ_0, ∇_0) be a flat bundle on X with regular singularities along D.
An isomonodromic deformation (ℰ, ∇, φ) is a vector bundle ℰ on 𝒳 equipped with a relative connection
∇:ℰ→ℰ⊗Ω^1_𝒳/U(log𝒟),
and equipped with an isomorphism φ: ℰ|_X∼→ℰ_0 such that
* ∇ admits an extension to a global flat connection ℰ→ℰ⊗Ω^1_𝒳(log𝒟), and
* φ induces an isomorphism ( ℰ, ∇)|_X ∼→ (ℰ_0, ∇_0).
It is not hard to see that isomonodromic deformations exist and are unique up to canonical isomorphism. (See e.g. <cit.>.) What do they mean? Condition (2) of <ref> simply says that (ℰ,∇) is in fact a deformation of (ℰ_0,∇_0). We claim that condition (1) means that the monodromy of this deformation is constant. Indeed, as U is contractible, the inclusion of X∖ D into 𝒳∖𝒟 induces an isomorphism of fundamental groups π_1(X∖ D)∼→π_1(𝒳∖𝒟), and hence the restriction map induces a bijection between (isomorphism classes of) local systems on 𝒳∖𝒟 and those on X∖ D. Moreover (and we leave this to the reader), the conjugacy classes of the residues of ∇ are locally constant along 𝒟. Thus the isomonodromic deformation (ℰ, ∇) is the deformation of (ℰ_0, ∇_0) with constant monodromy and residues (hence the name).
We return to the situation of <ref>, and consider the behavior of the connection
∇=d-∑_i=1^n (A_i/(z-x_i)) dz
under isomonodromic deformation, i.e. after perturbing the x_i. So we view the A_i as functions of the parameters x=(x_1, ⋯, x_n), and consider a connection of the form
∇= d-∑_i=1^n (A_i(x)/(z-x_i)) d(z-x_i)+∑_i C_i dx_i,
i.e. a connection with regular singularities along the evident divisors where z=x_i. Note that ∑ A_i(x)=0 by our assumption that there is no pole at ∞.
The condition that ∇ be isomonodromic is simply the condition that ∇∘∇=0. What condition does this impose on the A_i? For i≠ j, considering the coefficient of d(z-x_i)∧ d(z-x_j) gives precisely the first line of (<ref>); differentiating the identity ∑ A_i(x)=0 gives the second. In other words, the Schlesinger equations precisely control isomonodromic deformations of ODEs as in <ref> — so-called Fuchsian ODEs.
The Painlevé VI equation may be obtained from the Schlesinger equation when the A_i∈𝔰𝔩_2(ℂ) and n=4 by a change of coordinates; see
<cit.> for a complete derivation of the Schlesinger equations, and <cit.> for details of the connection to Painlevé VI.
In general, isomonodromic deformations of flat vector bundles on Riemann surfaces are controlled by certain non-linear algebraic ODE; we have just made this explicit in genus zero, and one can do so in higher genus as well <cit.>. The question of analyzing finite mapping class group orbits on Y(g,n,r) is precisely the question of understanding algebraic solutions to these non-linear ODE. We will return to this from an arithmetic point of view in the next section.
The first key fact that goes into the proof of <ref> is an analysis of the properties of flat bundles under isomonodromic deformation; we will return to questions along these lines in <ref>.
Let X be a compact Riemann surface and D⊂ X a reduced effective divisor. Let (ℰ,∇) be a flat vector bundle on X with regular singularities along D and irreducible monodromy. If g≥rk(ℰ)^2/4, there exists a small perturbation (X',D') of the complex structure on (X,D) such that the isomonodromic deformation (ℰ', ∇') of (ℰ, ∇) to (X', D') has ℰ' (parabolically) semistable.
Here we say a vector bundle ℰ is semistable if for all nonzero sub-bundles ℱ⊂ℰ, we have
deg(ℱ)/rk(ℱ)≤deg(ℰ)/rk(ℰ).
The word “parabolically" appearing in <ref> refers to the natural parabolic structure on a bundle with connection—see <cit.> for details. Vector bundles with parabolic structure (henceforth parabolic bundles) admit another notion of degree and hence another notion of stability; we elide this issue for the rest of these notes.
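For a basic example of the stability condition (ignoring parabolic structures): on a curve, the bundle 𝒪⊕𝒪(D) with D a nonzero effective divisor is not semistable, since the sub-bundle 𝒪(D) has slope
deg(D)>deg(D)/2,
while any direct sum of degree zero line bundles is semistable.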
It is natural to ask if one can achieve stability in <ref>, rather than just semistability; this is ongoing work of Andy Ramirez-Cote <cit.>.
<ref> is technical indeed but it has a crucial implication:
Let π: 𝒳→ℳ be a family of n-punctured curves of genus g as in <ref>, with ℳ connected, and let X be any fiber of π. Any local system of rank r on 𝒳, with g≥ r^2/4, which underlies a polarizable complex variation of Hodge structure, restricts to a unitary local system on X.
Let 𝕍 be a local system on 𝒳 as in the statement, with (ℰ, ∇, F^∙) the associated flat bundle with (decreasing) Hodge filtration F^∙. Then ℰ|_X has (parabolic) degree zero. For simplicity we assume that the monodromy of 𝕍|_X is irreducible; in this case we show that 𝕍 itself is unitary. Let i be maximal such that F^i is non-zero; suppose that F^i≠ℰ. Then by <cit.>, F^i|_X has positive (parabolic) degree, so (as X was an arbitrary fiber of π) ℰ|_X' is not semistable for any fiber X' of π. But this contradicts <ref> — there is no way to perturb X to make ℰ|_X semistable.
So the Hodge filtration has length one, whence the polarization on 𝕍 is a definite Hermitian form. But this form is preserved by the monodromy, which is hence unitary.
We will return to questions about the behavior of flat bundles under isomonodromic deformation later in these notes (in <ref>), but before doing so we will introduce the other main ingredient of the proof of <ref>.
§.§ Big monodromy, rigidity, and vanishing theorems
Thus far we have largely studied the action of Mod_g,n on Y(g,n,r), the space of rank r representations of π_1(Σ_g,n). In <ref>, I suggested that this study is analogous to classical questions in algebraic geometry (the Hodge conjecture, the Tate conjecture, and so forth); we will discuss this further in <ref>. Part of this analogy is the idea that Y(g,n,r) ought to be viewed as a non-abelian analogue of first cohomology. Indeed,
Y(g,n,1)=H^1(Σ_g,n, ℂ^×).
So perhaps it is unsurprising that classical questions about monodromy actions on cohomology will intervene in our study of the mapping class group action on Y(g,n,r).
Let
ρ: π_1(Σ_g,n)→GL_r(ℂ)
be an irreducible representation whose conjugacy class has finite orbit under Mod_g,n, say with stabilizer Γ⊂Mod_g,n a subgroup of finite index. There is a natural Γ-representation associated to ρ, namely the action of Γ on the tangent space
T_[ρ]Y(g,n,r)=H^1(π_1(Σ_g,n), ad(ρ)).
(See e.g. <cit.> for a discussion of this cohomological interpretation of tangent spaces.)
The main result on monodromy actions on cohomology that we will use here, which follows from an analysis of derivatives of period maps—discussed further in <ref>—is:
Let π: 𝒳→ℳ be a family of n-punctured curves of genus g, so that the induced map ℳ→ℳ_g,n is dominant. Let 𝕍 be a unitary local system on 𝒳 of rank less than g. Then
H^0(ℳ, R^1π_*𝕍)=0.
Put another way, let X be a fiber of π; then π_1(ℳ) admits a natural action on H^1(X, 𝕍|_X). The theorem says that this action has no nonzero invariants (and moreover, since the statement remains the same on passing to covers of ℳ, no nonzero finite orbits). As we will see later in <ref>, this sort of statement is closely connected to major open questions in surface topology.
Let us explain the relevance to us. Consider the case where 𝕍=ad(𝕎), with 𝕎 unitary and irreducible and rk(ad(𝕎))< g. Letting ρ be the monodromy of 𝕎|_X, we see that the conjugacy class of ρ has finite orbit under Mod_g,n, say with stabilizer Γ⊂Mod_g,n. And the action of π_1(ℳ) on H^1(X, ad(𝕎)|_X) factors through the Γ-action on T_[ρ]Y(g,n,r)=H^1(π_1(Σ_g,n), ad(ρ)), so we may study it by analyzing H^0(ℳ, R^1π_*ad𝕎), which vanishes, by <ref>.
That this space is zero says that ρ is isolated as a Γ-fixed point in Y(g,n,r). Indeed, if Γ fixed a positive-dimensional subvariety Z of Y(g,n,r) passing through [ρ], then T_[ρ]Z⊂ T_[ρ]Y(g,n,r) would be Γ-fixed as well. (Compare to <ref>(3).)
One can ask finer questions about the monodromy of local systems of the form R^1π_*𝕍 as in <ref> (for example, what is their precise monodromy group?), and we will do so later, in <ref>.
§.§ Idea of the proof
We now turn to the idea of the proof of <ref>. We will make several simplifying assumptions over the course of the sketch, which hopefully the reader will forgive.
Let
ρ: π_1(Σ_g,n)→GL_r(ℂ)
be a representation whose conjugacy class has finite Mod_g,n-orbit. We would like to show that if g≥ r^2, then ρ has finite image. We assume for simplicity that ρ is irreducible. Then by <ref>, there exists a family of n-punctured curves of genus g,
π: 𝒳→ℳ,
with the associated map ℳ→ℳ_g,n dominant, and a local system 𝕍 on 𝒳, such that the monodromy of the restriction of 𝕍 to a fiber of π is given by ρ.
By work of Mochizuki <cit.>, 𝕍 can be deformed to a polarizable complex variation of Hodge structure 𝕍' (see <ref> for a variant of this result and a sketch of the proof). We assume for simplicity that 𝕍' is irreducible when restricted to a fiber of π.[This assumption is extremely strong, and in fact to give a correct proof one must circumvent it. Doing so is the source of many of the technical considerations in <cit.>.] By <ref>, 𝕍' is in fact unitary. Now by <ref>, applied to ad(𝕍'), 𝕍' is cohomologically rigid, i.e. it admits no non-trivial infinitesimal deformations.
This observation has two consequences:
* as we have deformed 𝕍 to a representation with no non-trivial deformations, we must have that 𝕍 and 𝕍' are isomorphic to one another, and
* 𝕍' (and hence 𝕍) are defined over 𝒪_K, the ring of integers of some number field K, by work of Esnault-Groechenig <cit.> (which ultimately relies on the Langlands program over function fields).[We discuss this work further in <ref>.]
We may now apply a similar argument to that used in the proof of <ref> and <ref>. Again by <cit.>, it suffices to show that for each embedding
ι: 𝒪_K↪ℂ,
the local system 𝕍⊗_ι, 𝒪_Kℂ is unitary. But we know 𝕍 is rigid (as this is an algebraic property, independent of a choice of complex embedding), hence by Mochizuki <cit.>, 𝕍⊗_ι, 𝒪_Kℂ underlies a complex variation of Hodge structure. Now by <ref>, 𝕍⊗_ι, 𝒪_Kℂ is unitary, completing the proof.
Our simplifying assumptions (that the various local systems appearing in the argument are irreducible) hide substantial complications in the actual argument, which we elide here. See <cit.> for details.
We have in this section classified all finite Mod_g,n-orbits on Y(g,n,r), when g≥ r^2, and in <ref> we classified finite PMod_0,n-orbits on Y(C), where C=(C_1, ⋯, C_n) contains some conjugacy class C_i of infinite order. Much remains to do—we understand little of the dynamics of the Mod_g,n-action on Y(g,n,r) when r≫ g. In the next section, we give a conjectural arithmetic characterization of finite orbits of this action and its generalizations, and prove some important special cases of this conjecture; in the following two sections we discuss other conjectural characterizations from the points of view of algebraic geometry and more classical surface topology.
§ ALGEBRAIC DIFFERENTIAL EQUATIONS AND DYNAMICS ON CHARACTER VARIETIES
We turn now to a much more general setting. Let S be smooth and let
f: 𝒳→ S
be a smooth proper morphism with connected fibers[Essentially all of the results in this section apply when 𝒳 is equipped with a simple normal crossings divisor over S, but we omit this here to avoid notational complication, and some mild complications around the Riemann-Hilbert correspondence in the presence of a simple normal crossings divisor. The reader will lose almost nothing by assuming f has relative dimension one, i.e. that it is a family of curves.] over the complex numbers. Let X_o be a general fiber of f, say over some point o∈ S(ℂ). We let ℳ_B(X_o, r) be the (stack) quotient
Hom(π_1(X_o), GL_r(ℂ))/conjugation
and M_B(X_o, r) the quotient in the sense of GIT (the latter being an affine complex variety). As before we are interested in the natural action of π_1(S,o) on M_B(X_o, r), and in its action on the set of isomorphism classes of objects of ℳ_B(X_o, r) (i.e. conjugacy classes representations of π_1(X_o)). The goal of this section is to give a conjectural classification of all finite orbits of this action, motivated by the Grothendieck-Katz p-curvature conjecture, in terms of arithmetic invariants of the corresponding flat vector bundles, and to sketch a proof of this conjecture in important cases of interest.
As we have previously discussed in special cases (e.g. in <ref> and <ref>), these finite orbits are related to algebraic solutions to certain non-linear differential equations. We now explain this in general.
§.§ Dynamics and foliations
Let S̃ be the universal cover of S (which is typically only a complex manifold, and does not have the structure of an algebraic variety). We may consider the projection
π: M_B(𝒳/S,r):=(S̃× M_B(X_o, r))/π_1(S, o)→ S,
where here π_1(S, o) acts on S̃ freely by deck transformations and on M_B(X_o, r) through its outer action on π_1(X_o). This quotient naturally has the structure of a local system of schemes in the sense of <cit.>, which loosely speaking means that one has a notion of flat sections to π. Explicitly, a section to π is flat if and only if, locally on S, it lifts to a constant section to the projection S̃× M_B(X_o,r)→ S̃, i.e. it corresponds to a family of representations whose conjugacy class is constant. One may similarly construct a moduli stack ℳ_B(𝒳/S, r), see <cit.> for details.
On the smooth locus[For example, if f has relative dimension one, this locus contains the open subset of M_B(𝒳/S) corresponding to irreducible representations of π_1(X_o).] π^sm of π this structure has a simple interpretation; it is nothing more than a horizontal foliation, i.e. a splitting of the short exact sequence
0→ T_π^sm→ T_M_B(𝒳/S)|_π^sm→π^*T_S|_π^sm→ 0,
compatible with the Lie algebra structure on these sheaves. A representation ρ of π_1(X_o) has finite π_1(S, o)-orbit if and only if the leaf of this foliation through [ρ] is finite over S, by definition.
The above construction is highly transcendental in nature, relying as it does on the universal cover of S and on the topological fundamental groups π_1(X_o), π_1(S, o). Nonetheless the construction has an algebraic avatar.
There is a moduli stack ℳ_dR(𝒳/S, r) over S, which loosely speaking represents the functor that sends an S-scheme T to the groupoid of rank r flat bundles on 𝒳_T/T. The Riemann-Hilbert correspondence gives an analytic isomorphism between ℳ_dR(𝒳/S, r)^an and ℳ_B(𝒳/S, r) <cit.>. It turns out that the structure of a local system of schemes on M_B(𝒳/S, r) may be interpreted algebraically on ℳ_dR(𝒳/S, r); the latter stack is a crystal over S. This is the non-abelian analogue of the algebraic nature of the Gauss-Manin connection on the de Rham cohomology of a family of varieties <cit.>. For details see <cit.>; we now give a brief explanation of how this works at the level of horizontal foliations. The idea is that flat sections to this morphism correspond to isomonodromic deformations in the sense of <ref>; we make the corresponding foliation explicit at smooth points corresponding to irreducible representations. In order to do so we need to introduce some notation to give a cohomological description of the tangent spaces to ℳ_dR(𝒳/S, r).
§.§.§ The Atiyah-de Rham complex
The deformation theory of a pair (X, ℰ), with X a smooth variety and ℰ a vector bundle on X, is controlled by a vector bundle called the Atiyah bundle.
The sheaf of first-order differential operators on ℰ, Diff^1(ℰ, ℰ), is the sheaf of ℂ-linear maps δ: ℰ→ℰ such that the map
s↦δ_f(s):=δ(f s)-fδ(s)
is 𝒪_X-linear for all local sections s of ℰ and f of 𝒪_X. The Atiyah bundle At(ℰ)⊂Diff^1(ℰ, ℰ) is the sheaf of first-order differential operators δ such that δ_f is given by multiplication by a section of 𝒪_X, for all f.
Direct computation shows that for a local section δ to At(ℰ), the assignment τ_δ: f↦δ_f is in fact an 𝒪_X-valued derivation; let
τ: At(ℰ)→ T_X
δ↦τ_δ
be the corresponding map.
By construction there is a short exact sequence
0→End(ℰ)→At(ℰ)τ→ T_X→ 0,
called the Atiyah exact sequence. The data of a connection ∇ on ℰ is the same as the data of an 𝒪_X-linear splitting q^∇ of this sequence, where q^∇(v)(s) is given by contracting v with ∇(s); the connection is flat exactly when q^∇ is compatible with the natural Lie brackets on At(ℰ) and T_X.
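For example (a trivial but instructive case, spelled out here for concreteness): if ℰ=𝒪_X, then End(ℰ)=𝒪_X and every first-order operator as above has the form δ(s)=hs+v(s) with h a function and v a vector field, so that δ_f is multiplication by v(f) and
At(𝒪_X)≅𝒪_X⊕ T_X.
The splitting corresponding to the trivial connection d is just the inclusion of the T_X summand.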
Now suppose we are given a flat connection ∇ on ℰ, and consider the complex
At(ℰ)^∙_dR: At(ℰ)→End(ℰ)⊗Ω^1_X∇→End(ℰ)⊗Ω^2_X∇→⋯
where the first differential is given by taking the commutator of a differential operator with ∇ (see <cit.> for a precise formula) and the rest are given by the connection on End(ℰ) induced by ∇. We refer to this complex as the Atiyah-de Rham complex of (ℰ, ∇). There is a short exact sequence of complexes
0→End(ℰ)_dR^∙→At(ℰ)^∙_dR→ T_X→ 0,
where End(ℰ)_dR^∙ is the de Rham complex of End(ℰ) with its induced connection.
There is a natural identification between
ℍ^1(At_dR^∙(ℰ))
and the space of first-order deformations of (X, ℰ, ∇). Under this identification, the map
ℍ^1(At_dR^∙(ℰ))→ H^1(T_X)
induced by (<ref>) sends a deformation of (X, ℰ,∇) to the corresponding deformation of X (under the natural Kodaira-Spencer identification of first-order deformations of X with H^1(T_X)).
Note that the natural map
At(ℰ)^∙_dR→ T_X
has a splitting, given by the splitting q^∇ of the Atiyah exact sequence discussed above; that this is a map of complexes follows from the flatness of ∇. The induced splitting of the map
ℍ^1(At_dR^∙(ℰ))→ H^1(T_X)
is the source of the foliation on ℳ_dR(𝒳/S, r)/S, as we now explain. Let s∈ S be a point and set X=𝒳_s. Then the Kodaira-Spencer map yields a map T_sS→ H^1(X, T_X). Let (ℰ,∇)∈MIC(X) be an irreducible flat bundle corresponding to a smooth point of ℳ_dR(𝒳/S, r). Then the tangent space to ℳ_dR(𝒳/S, r) fits into a pullback square
T_[(X, ℰ, ∇)]ℳ_dR(𝒳/S, r) → ℍ^1(At_dR^∙(ℰ))
            ↓                        ↓
          T_sS          →         H^1(T_X)
where the left vertical map is the differential of the natural map ℳ_dR(𝒳/S, r)→ S.
Thus the natural splitting q^∇ to the right vertical map induces a section to the left vertical map, i.e. a horizontal foliation on ℳ_dR(𝒳/S, r) over S. This is the isomonodromy foliation — its leaves correspond precisely to isomonodromic deformations as before (see <cit.> for the case where 𝒳/S is the universal curve over ℳ_g; the general case follows identically). By comparison to the analytically isomorphic space ℳ_B(𝒳/S,r), the leaves of this foliation that are finite over S correspond precisely to the representations with finite π_1(S,s)-orbit, under the Riemann-Hilbert correspondence. These are, by e.g. <cit.> or <cit.>, precisely the algebraic leaves of this foliation.
§.§ Algebraic solutions to differential equations
We are thus led to consider the following question:
How can one characterize the algebraic solutions to an algebraic differential equation?
In the case of linear ODEs, this question has a classical (conjectural) answer, due to Grothendieck and Katz: the Grothendieck-Katz p-curvature conjecture. We recall their conjecture and then discuss its analogue for the isomonodromy foliation discussed above.
§.§.§ The p-curvature conjecture
Let A∈Mat_r× r(ℚ̄(z)) be a matrix of rational functions with algebraic coefficients, and consider the linear ODE
(d/dz-A)f⃗(z)=0.
We are interested in understanding when this ODE admits a basis of algebraic solutions, i.e. when, if one takes the formal power series expansion of the solutions to this ODE at a point where A has no poles, the resulting power series are all algebraic over ℚ(z). Equivalently—when A has only simple poles along a divisor D⊂ℂℙ^1—we are, under the Riemann-Hilbert correspondence, interested in understanding when the corresponding representation of π_1(ℂℙ^1∖ D) has finite image.
The above linear ODE has a basis of algebraic solutions if and only if
(d/dz-A)^p≡ 0 (mod p)
for almost all primes p.
Here the condition of the conjecture makes sense because the entries of A have only finitely many denominators, so the given expression can be reduced mod p for almost all primes p.
Let us consider the differential equation
(d/dz-a/z)f(z)=0,
for a∈ℚ̄. The local solutions to this ODE have the form cz^a; this is an algebraic function if and only if a∈ℚ. On the other hand,
(d/dz-a/z)^pz^n=(n-a)(n-a-1)⋯ (n-a-p+1)z^n-p
is identically zero mod p for almost all p iff p splits completely in ℚ(a) for almost all p; this happens if and only if a∈ℚ, by the Chebotarev density theorem.
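For readers who want to experiment, here is a throwaway numerical sanity check of this example (illustrative only, and not part of the argument; the specific choices p=13 and a=3/5 are arbitrary). For a rational a whose denominator is prime to p, the p consecutive factors in the displayed product cover every residue class mod p, so the product is divisible by p:

```python
# Illustrative only: for a = u/v with p not dividing v, the coefficient
# (n-a)(n-a-1)...(n-a-p+1) of z^(n-p) in (d/dz - a/z)^p z^n is divisible by p,
# since the p consecutive factors hit every residue class mod p.
from fractions import Fraction

def coeff(a, n, p):
    """Coefficient of z^(n-p) in (d/dz - a/z)^p z^n, via z^m -> (m - a) z^(m-1)."""
    c, m = Fraction(1), Fraction(n)
    for _ in range(p):
        c *= m - a
        m -= 1
    return c

p, a = 13, Fraction(3, 5)  # arbitrary choices with p prime to the denominator of a
assert all(coeff(a, n, p).numerator % p == 0 for n in range(25))
print("coefficient divisible by", p, "for all tested n")
```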
There is a more intrinsic formulation of <ref>, which makes sense on general smooth bases. To formulate it, we recall the notion of p-curvature.
Let k be a field of characteristic p>0 and X/k a smooth k-scheme. Let (ℰ, ∇) be a flat bundle on X/k, where we view ∇ as a k-linear map T_X→End_k(ℰ). The p-curvature morphism
ψ_p: F_abs^*T_X→End_𝒪_X(ℰ)
is the map induced by
v↦∇(v)^p- ∇(v^p).
Here v^p is a section to T_X — in characteristic p>0, the p-th power of a derivation is itself a derivation. If X is an open subset of ℙ^1 and v=d/dz, then v^p=0, and so the vanishing of p-curvature is simply the condition ∇(d/dz)^p=0, which we saw before in <ref>. One may think of ψ_p as a measure of the failure of the map ∇: T_X→End_k(ℰ) to commute with taking p-th powers (just as the curvature ∇∘∇ is a measure of the failure of a connection to commute with the natural Lie algebra structures on T_X, End_k(ℰ); see <ref>).
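By way of illustration (an easy computation, not needed in what follows): take X=𝔾_m=Spec k[z, z^-1] and (ℰ, ∇)=(𝒪_X, d-a dz/z) for some a∈ k, the reduction mod p of the rank one example considered above. For v=z d/dz one has v^p=v, and ∇(v) acts on z^n as multiplication by n-a, so
ψ_p(v): z^n↦((n-a)^p-(n-a))z^n=(a-a^p)z^n.
Thus the p-curvature vanishes if and only if a lies in the prime field 𝔽_p, in agreement with the discussion of the example above.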
Now let R be a finitely-generated integral ℤ-algebra with fraction field K, X/R a smooth scheme, and (ℰ, ∇) a flat vector bundle on X/R.
The flat vector bundle (ℰ, ∇)_K∈MIC(X_K) admits a basis of flat algebraic sections[It follows that for some (equivalently all) embedding(s) K↪ℂ, the corresponding flat bundle (ℰ, ∇)_ℂ has finite monodromy. In fact the arithmetic condition of <ref> (on vanishing of p-curvature) implies (non-trivially!) that given a simple normal crossings compactification X̄ of X, (ℰ,∇) extends to a flat connection on X̄ with regular singularities along the boundary. It follows that—to prove the p-curvature conjecture—it suffices to show that the arithmetic hypotheses of the conjecture imply that (ℰ, ∇)_ℂ has finite monodromy.] if and only if there exists a dense open subset U⊂Spec(R) such that for all closed points 𝔭∈ U, the p-curvature of (ℰ,∇)_𝔭 is identically zero.
This conjecture is more or less completely open; the primary evidence we have for it, due to Katz, is that it is true when (ℰ,∇) is a Picard-Fuchs equation.
Let X be a smooth variety. A flat vector bundle (ℰ, ∇) is a Picard-Fuchs equation if there exists a dense open subset U⊂ X, a smooth proper morphism π: Y→ U, and an integer i≥ 0 such that
(ℰ, ∇)|_U≃ (R^iπ_*(Ω^∙_dR, Y/U), ∇_GM),
where ∇_GM is the Gauss-Manin connection.
With notation as in <ref>, suppose (ℰ, ∇)_K is a Picard-Fuchs equation. Then the p-curvature conjecture is true for (ℰ,∇).
Katz's theorem is in fact a bit more general than the result stated above, but it has some fairly strong restrictions. For example, we do not even know that the p-curvature conjecture holds true for direct summands of Picard-Fuchs equations, except under restrictive hypotheses. But see <cit.> for some results in this direction.
Katz's proof of <ref> inspires many of the arguments in these notes, including those in <ref>; we briefly recall it as we will shortly see a “non-abelian" analogue of his approach. See <cit.> for another brief exposition.
The flat bundle (ℰ, ∇), arising as it does as a Picard-Fuchs equation, carries a decreasing Griffiths-transverse Hodge filtration F^∙; here Griffiths transversality means that
∇(F^i)⊂ F^i-1⊗Ω^1_X
for all i. The failure of ∇ to preserve this filtration is thus measured by a collection of (𝒪-linear) maps
gr^i∇: gr^i_F^∙ℰ→gr^i-1_F^∙ℰ⊗Ω^1_X.
It suffices to show that these maps are zero; indeed, in this case the monodromy of (ℰ, ∇) preserves the Hodge filtration F^∙ and hence the Hodge decomposition. But then the monodromy is unitary, as the monodromy also preserves the polarization on the underlying local system, which is definite on each graded piece of the Hodge decomposition. As (ℰ, ∇) is a Picard-Fuchs equation, its monodromy is defined over ℤ; combined with unitarity, this implies the monodromy is finite.
To show that the maps gr^i∇ are identically zero, Katz compares their reductions mod p to the associated graded of the p-curvature maps ψ_p with respect to the conjugate filtration, which are zero by assumption. This comparison is a lengthy computation about which we will say nothing.
There are a few other cases in which the p-curvature conjecture is known. The case where the corresponding monodromy representation is solvable was resolved by Chudnovsky-Chudnovsky <cit.>, Bost <cit.>, and André <cit.>, and the case where it underlies a rigid ℤ-local system, by Esnault-Groechenig <cit.>. There are a number of other interesting related works and special cases known, e.g. <cit.>; in fact the two papers by Shankar and Patel-Shankar-Whang cited here are what originally interested the author in this subject.
§.§ Finite orbits
We now formulate a variant of the p-curvature conjecture for the isomonodromy foliation on ℳ_dR(𝒳/S)→ S; the upshot of this will be a (conjectural) complete classification of finite π_1(S,s) orbits on conjugacy classes of representations of π_1(X_s).
Let R be a finitely-generated integral ℤ-algebra with fraction field K and 𝒳→ S a smooth projective morphism of R-schemes. Let s∈ S(R) be an R-point, X=𝒳_s, and (ℰ, ∇) a flat bundle on X/R. Then the leaf of the isomonodromy foliation through [(X, ℰ,∇)]∈ℳ_dR(𝒳/S) is algebraic if and only if it is integral. This occurs if and only if it is p-integral to order ω(p), for almost all primes p.
Here ω: Primes→ℤ is any function growing faster than any ϵ p for all ϵ>0, i.e.
lim_p→∞ω(p)/p=∞.
The last sentence (about p-integrality to order ω(p)) is a bit technical, and we largely ignore it for the rest of this note. However, see <ref>, which is our primary motivation for including this condition; the point is to give a variant of the p-curvature conjecture for ℳ_dR(𝒳/S) which implies the classical p-curvature conjecture, <ref>.
There are a number of imprecisions in this statement—notably ℳ_dR(𝒳/S,r) is not in general smooth, or even a scheme. One can make it precise, again using the language of crystals. We have also not explained what it means for the leaf to be integral, (resp. p-integral to order n). Loosely speaking, this latter simply means that the Taylor coefficients of the power series defining the formal leaf of the foliation are integral (resp. the coefficients of monomials of degree at most n are p-integral).
Let us make friends with this conjecture by first formulating a variant in the spirit of <ref>, for general (possibly non-linear) ODE, and then specialize to the case of the isomonodromy foliation.
Let
f^(n)(z)=F(z, f(z), ⋯, f^(n-1)(z))
be an ordinary differential equation, with F∈ℚ(z_0, ⋯, z_n). Then the Taylor series expansion of a solution
f(z)∈ℚ[[z]], f(z)=∑_n≥ 0 a_nz^n
to this ODE with (0, f(0), ⋯, f^(n-1)(0)) a non-singular point of F is algebraic if and only if there exists N>0 such that a_n∈ℤ[1/N] for all n. This occurs if and only if a_1, ⋯, a_ω(p) are p-integral for almost all p, where ω is some function such that
lim_p→∞ω(p)/p=∞.
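To get a feel for the statement, compare the ODE f'=f^2, whose solution with f(0)=1 is the algebraic function 1/(1-z), with the ODE f'=f, whose solution e^z is transcendental. The following throwaway script (illustrative only) computes the Taylor coefficients of both solutions by the obvious recursions: in the first case every coefficient equals 1, while in the second the denominators n! grow without bound, so for large p the coefficients a_1, ⋯, a_ω(p) fail to be p-integral (as ω(p)>p eventually).

```python
# Illustrative only: Taylor coefficients (exact rationals) of the solutions with
# f(0) = 1 of the ODEs f' = f^2 and f' = f, computed by the obvious recursions.
from fractions import Fraction

def coeffs_f_squared(N):
    # f' = f^2:  (n+1) a_{n+1} = sum_{i=0}^{n} a_i a_{n-i}
    a = [Fraction(1)]
    for n in range(N):
        a.append(sum(a[i] * a[n - i] for i in range(n + 1)) / (n + 1))
    return a

def coeffs_exponential(N):
    # f' = f:  (n+1) a_{n+1} = a_n
    a = [Fraction(1)]
    for n in range(N):
        a.append(a[n] / (n + 1))
    return a

N = 15
print([c.denominator for c in coeffs_f_squared(N)])    # all 1: f = 1/(1-z) is algebraic
print([c.denominator for c in coeffs_exponential(N)])  # 1, 1, 2, 6, 24, ...: f = e^z
```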
Let us make <ref> precise now:
Let R be a finitely-generated integral ℤ-algebra with fraction field K⊂ℂ and 𝒳→ S a smooth projective morphism of R-schemes. Let s∈ S(R) be an R-point, X=𝒳_s, and (ℰ, ∇) a flat bundle on X/R. Let s_K be the K-point of S associated to s, Ŝ_K the formal scheme obtained by completing S_K at s_K, and Ŝ_R the formal scheme obtained by completing S at s. Then the following are equivalent:
* there exists an element N∈ R such that the isomonodromic deformation of (ℰ, ∇) over Ŝ_K descends to Ŝ_R[1/N]
* For each embedding (equivalently, some embedding) K↪ℂ, the conjugacy class of the monodromy representation
ρ: π_1(X(ℂ)^an)→GL_r(ℂ)
associated to (ℰ, ∇)_ℂ has finite orbit under π_1(S(ℂ)^an, s).
Note that in this precisification we have elided the ω(p)-integrality condition; we leave formulating this to the reader, or see <cit.>.
This conjecture is an arithmetic answer to the general question that has concerned us in these notes. And it is closely related to the classical p-curvature conjecture:
Suppose <ref> is true for 𝒳/S=𝒞_g/ℳ_g (the universal curve of genus g over the moduli space of genus g curves) for all g≫ 0. Then <ref> is true in general.
It is well-known that the p-curvature conjecture can be reduced to the case of flat bundles on smooth proper curves. Now let (ℰ, ∇) be a flat bundle of rank r on a smooth proper curve X with vanishing p-curvature for almost all p, and choose a finite étale cover Y→ X so that the genus g of Y is large enough that <ref> holds true for 𝒞_g/ℳ_g, and in particular is at least r^2. By a direct computation with Taylor series, the leaf of the isomonodromy foliation through [(Y, (ℰ,∇)|_Y)]∈ℳ_dR(𝒞_g/ℳ_g) is p-integral to order ω(p). Then <ref> implies that the monodromy of (ℰ, ∇)|_Y has finite orbit under
Mod_g=π_1(ℳ_g);
hence by <ref>, (ℰ, ∇) has finite monodromy.
Here the key input was that the hypothesis of <ref> is stable under pullback. In the sketch above we deduced the statement from <ref>, but in <cit.> we give a much more elementary argument.
Our main evidence for <ref> is the following analogue of <ref>:
<ref> is true if (ℰ,∇)_K is a Picard-Fuchs equation in the sense of <ref>.
For example, when we consider 𝒞_g/ℳ_g, <ref> completely characterizes finite Mod_g-orbits in Y(g,0,r), as long as they consist of Picard-Fuchs equations for a single complex structure on the surface Σ_g.
We also prove a version of <ref> in the non-proper setting in <cit.>—e.g. that of the Painlevé VI and Schlesinger equations—but because we have not described the isomonodromy foliation in this setting, we do not discuss it further here.
In examples, the hypothesis that the flat bundle in question be a Picard-Fuchs equation does not seem unduly restrictive. For example, the following is a consequence of our discussion in <ref>:
Let (C_1, ⋯, C_n) be an n-tuple of quasi-unipotent conjugacy classes in SL_2(ℂ). Any finite orbit of the PMod_0,n-action on Y(C)^irr corresponds to a local system of geometric origin.
§.§ The idea of the proof
The proof of <ref> is too technical for us to do much more than gesture at it here (and we imagine many readers feel the same way about the statement). That said, we will try to sketch some of the key ideas. The key Hodge-theoretic input is the following result of Deligne:
Let X be a smooth complex algebraic variety and r a positive integer. The set of isomorphism classes of rank r complex local systems on X which underlie a polarizable ℤ-variation of Hodge structure is finite.
We immediately deduce:
Let X be a smooth complex variety, and Γ⊂Out(π_1(X(ℂ)^an)) a subgroup. Let 𝕍 be a complex local system on X, and suppose that for each γ∈Γ, the local system 𝕍^γ underlies a ℤ-variation of Hodge structure. Then the orbit of the isomorphism class of 𝕍 under Γ is finite.
In particular, in the setting and notation of <ref> this gives us a local condition for a Picard-Fuchs equation on X to have monodromy with finite orbit under π_1(S(ℂ)^an, s), as we now explain.
With notation as in <ref>, suppose that (ℰ, ∇) underlies a ℤ-variation of Hodge structure on X, with Hodge filtration F^∙. Let X̂_K be the formal scheme obtained by completing 𝒳_K along X_K, and Ŝ_K the formal completion of S_K at s_K. Let
(ℰ̂, ∇̂: ℰ̂→ℰ̂⊗Ω^1_X̂_K/Ŝ_K)
be the isomonodromic deformation of (ℰ,∇) to X̂_K. (Here 𝒳_K, X_K are the schemes 𝒳⊗_R K, X⊗_R K.)
If the Hodge filtration F^∙ extends to a Griffiths-transverse filtration on (ℰ̂, ∇̂), then the conjugacy class of the monodromy representation of (ℰ, ∇) has finite orbit under π_1(S,s).
Let S̃ be the universal cover of S(ℂ)^an, and let 𝒳_S̃ be the base-change of 𝒳(ℂ)^an to S̃. Choosing a lift s̃∈S̃ of s, the inclusion X→𝒳_S̃ as the fiber over s̃ induces an isomorphism of fundamental groups. Hence we have a local system 𝕍 on 𝒳_S̃ with monodromy the same as that of (ℰ,∇). Let NL(ℰ, ∇)⊂S̃ be the set of s'∈S̃ such that the restriction of 𝕍 to the fiber of 𝒳_S̃→S̃ at s' underlies a polarizable ℤ-variation of Hodge structure. By <cit.> this is a closed analytic subset of S̃.
But the hypothesis that F^∙ extends to a formal neighborhood of X in 𝒳 implies that NL(ℰ, ∇) in fact contains an open subset of S̃; hence it is all of S̃. Translating this statement through the language of deck transformations, this implies that every conjugate of the monodromy representation of (ℰ, ∇) under π_1(S,s) underlies a polarizable ℤ-variation of Hodge structure; hence there are finitely many such conjugates by <ref>.
The notation NL(ℰ,∇) stands for “Noether-Lefschetz" — these loci are more or less what Simpson terms “non-Abelian Noether-Lefschetz loci." They are the non-abelian analogue of Hodge loci, and Simpson asks <cit.> if their images in S are disjoint unions of algebraic subvarieties; this question is the non-abelian analogue of the famous theorem of Cattani-Kaplan-Deligne on algebraicity of Hodge loci. See <cit.> for some partial results in this direction.
Before explaining the idea of the proof, we need to briefly discuss the p-curvature of the isomonodromy foliation.
We define the p-curvature of a foliation:
Let k be a field of characteristic p>0, X/k a smooth variety, and ℱ⊂ T_X a foliation, i.e. a subbundle closed under the Lie bracket. The p-curvature of ℱ is the morphism
ψ_p: F_abs^*ℱ→ T_X/ℱ
induced by v↦ v^p.
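For instance (an elementary example, included only to illustrate the definition): consider the foliations on (open subsets of) the affine plane spanned by the vector fields x∂_x+y∂_y and ∂_x+y∂_y. In characteristic zero the leaves of the first are the algebraic curves y=cx, while the leaves of the second are the non-algebraic curves y=ce^x. Correspondingly, in characteristic p one computes (x∂_x+y∂_y)^p=x∂_x+y∂_y, so the first foliation has vanishing p-curvature, whereas (∂_x+y∂_y)^p=y∂_y, which is not a section of the second foliation, so its p-curvature is non-zero.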
Let π:ℳ_dR(𝒳/S, r)→ S be the structure morphism. The isomonodromy foliation (at least over the smooth locus) is a splitting of the tangent exact sequence for π, and hence its p-curvature should be a morphism
F_abs^*π^*T_S→ T_ℳ_dR(𝒳/S, r)/S.
At a characteristic p>0 point [(X_s, ℰ, ∇)] of ℳ_dR(𝒳/S, r), this is a map
ψ_p(X_s, ℰ, ∇): F^*_absT_sS→ℍ^1(End(ℰ)^∙_dR).
The map ψ_p(X_s, ℰ, ∇) is given by the composition
F_abs^*T_sS→ H^1(X'_s, T_X'_s)→ℍ^1((F_abs^*T_X_s)^∙_dR)ψ_p→ℍ^1(End(ℰ)^∙_dR)
where X'_s is the Frobenius twist of X_s, the first map is the Frobenius pullback of the Kodaira-Spencer map, the second is induced by the natural inclusion T_X'_s↪ F_abs^*T_X_s, and the third is induced by the p-curvature map for ℰ. Here (F_abs^*T_X_s)^∙_dR is the de Rham complex of F_abs^*T_X_s with its canonical (Frobenius-pullback) connection, for which the flat sections are precisely T_X'_s.
The above can be taken as a definition of the p-curvature of the isomonodromy foliation for those who prefer not to work with crystals; that said, one may make sense of p-curvature for a crystal in positive characteristic <cit.>, and if one does so, the above is a computation of the p-curvature of ℳ_dR(𝒳/S, r)/S. The formula above is likely not very illuminating, but it is at least explicit.
It will be no surprise, now, that the idea of the proof of <ref> is to verify the hypothesis of <ref>. That the Hodge filtration extends to first order is more or less an immediate consequence of Katz's comparison <cit.> of the Kodaira-Spencer map gr^i(∇) with the associated graded of the p-curvature with respect to the conjugate filtration, discussed in our sketch of the proof of <ref>, combined with our formula for the p-curvature of the isomonodromy foliation, <ref>, and some elementary deformation theory.
To go beyond first order, some new ideas are needed—in particular, one needs to
* (iteratively) extend the conjugate filtration to a well-chosen mod p isomonodromic deformation of (ℰ,∇),
* compare the Kodaira-Spencer map to the associated graded of the p-curvature of this deformation of the conjugate filtration, and
* check that the p-curvature of the isomonodromy foliation vanishes along the chosen mod p isomonodromic deformation of (ℰ,∇).
Items (1) and (2) may be handled by judicious use of Ogus-Vologodsky's non-abelian Hodge theory <cit.>. Item (3) relies on this work in combination with a slight extension of the theory of the Higgs-de Rham flow of Lan-Sheng-Zuo <cit.>, as explained by Esnault-Groechenig <cit.>. Both of these ideas are beyond the scope of these notes.
There is an analogue of the p-curvature conjecture for foliations, due to Ekedahl-Shephard–Barron-Taylor <cit.>, which gives a condition under which every leaf of a foliation ought to be algebraic. This conjecture has been considered before in the context of the isomonodromy foliation in the PhD theses of Papaioannou and Menzies <cit.>. A variant of our <ref> has been considered for arbitrary rank one foliations in the preprint <cit.>.
The approach to the proof of <ref> is inspired in part by ideas of Katzarkov and Pantev's paper <cit.>, which proves a non-abelian analogue of the Theorem of the Fixed Part in the setting of a smooth projective morphism (the quasi-projective case was proven later by <cit.>). In fact one can deduce the main theorems of those papers from <ref>, as will be explained in <cit.>.
§ ANALOGUES OF CONJECTURES ON ALGEBRAIC CYCLES
The rest of these notes aim to collect a number of conjectures on the arithmetic and geometry of local systems on algebraic varieties, and some related questions in algebraic geometry and surface topology. We hope that they will be a fruitful starting point for young people interested in these subjects.
We begin, however, with some philosophy.
§.§ Non-abelian cohomology
Let π: 𝒳→ S be a smooth proper morphism. The guiding principle here, due, we believe, to Simpson (see e.g. <cit.>), is that the spaces ℳ_dR(𝒳/S, r), ℳ_B(𝒳/S, r), parametrizing local systems on the fibers of π, are the non-abelian analogues of the cohomology sheaves
R^iπ_*Ω_𝒳/S, dR^∙, R^iπ_*ℂ,
the de Rham and singular cohomology of the fibers of π. Thus any structure that exists on classical cohomology groups, and any conjecture as to their behavior, should have a non-abelian analogue. For example, the Hodge and Tate conjectures, characterizing cohomology classes corresponding to algebraic cycles, have analogues characterizing local systems of geometric origin.
The characterization of local systems of geometric origin (defined as in <ref>) can be thought of as some way of making precise our goal from <ref>, of understanding something about the topology of algebraic maps in terms of representations of fundamental groups of algebraic varieties.
[The Hodge conjecture]
Let X be a smooth projective variety over the complex numbers. The Hodge conjecture predicts that the image of the cycle class map
cl^i: CH^i(X)⊗ℚ→ H^2i_sing(X(ℂ)^an, ℂ(i))
is the ℚ-span of the intersection
H^2i_sing(X(ℂ)^an, ℤ(i))∩ H^0,0(X(ℂ)^an, ℂ(i)).[Here the Tate twist ℂ(i) has the effect of lowering the weight by 2i, so the target has index (0,0) instead of index (i,i), as in some common statements of the Hodge conjecture.]
We rewrite this as follows. There is a natural action of ℂ^× on H^2i_sing(X(ℂ)^an, ℂ(i)), where for v∈ H^p,q(X(ℂ)^an, ℂ(i)), λ· v=λ^pv. Then the Hodge conjecture says that the image of the cycle class map is spanned by
H^2i_sing(X(ℂ)^an, ℤ(i))∩ H^2i(X(ℂ)^an, ℂ(i))^ℂ^×.
The non-abelian analogue of this statement is <cit.>:
ℤ-local systems on X underlying (polarizable) complex variations of Hodge structure are of geometric origin.
We write this in a form analogous to that of the usual Hodge conjecture, above. Namely, we let M_Dol(X, r) be the (coarse) moduli space of rank r semistable Higgs bundles of degree zero. Here a Higgs bundle is a pair (ℰ, θ: ℰ→ℰ⊗Ω^1_X) with θ an 𝒪_X-linear map such that the natural composition
θ∘θ: ℰ→ℰ⊗Ω^2_X
is identically zero (see <cit.> for details). The map θ is referred to as a Higgs field.
There is a natural ℂ^×-action on M_Dol(X,r), given by scaling the Higgs field:
λ·(ℰ, θ)=(ℰ, λθ).
Moreover there is a natural (real-analytic!) homeomorphism M_Dol(X, r)≃ M_B(X,r) <cit.>. Now Simpson's <cit.> may be rephrased as saying that the points of
M_B(X,r)(ℤ)∩ M_Dol(X,r)^ℂ^×
correspond to local systems of geometric origin, where we make sense of the intersection here using the homeomorphism of the previous sentence.
There is little evidence for this conjecture (though I believe it!), aside from the important case of polarizable ℤ-variations of Hodge structure of weight 1, which come from Abelian schemes, and rigid local systems, where much is known by work of Katz <cit.>, Esnault-Groechenig <cit.> and others; see <ref> for further discussion.
[The Tate conjecture]
Let X be a smooth projective variety over a finitely generated field K, K^s a separable closure of K, and ℓ a prime different from the characteristic of K. One form of the Tate conjecture is that the natural cycle class map
cl^i: CH^i(X_K^s)⊗ℚ_ℓ→ H^2i(X_K^s, ℚ_ℓ(i))
has image precisely
⋃_K'/K H^2i(X_K^s, ℚ_ℓ(i))^Gal(K^s/K'),
where the limit is taken over all finite extensions of K in K^s. That is, the image of the cycle class map in ℓ-adic cohomology is precisely the set of elements of H^2i(X_K^s, ℚ_ℓ(i)) with finite Galois orbit. Again this conjecture has a non-abelian analogue:
Let 𝕍 be an irreducible ℓ-adic local system on X_K^s. Then 𝕍 is of geometric origin if and only if its isomorphism class has finite orbit under Gal(K^s/K).
That is, the ℓ-adic local systems on X of geometric origin and rank r are precisely
⋃_K'/K M_B(X,r)(ℚ_ℓ)^Gal(K^s/K').
Our primary evidence for this conjecture comes from the case where X is a curve and K is finite, where the conjecture follows from work of Lafforgue <cit.>.
If X is any variety over an algebraically closed field L (say of characteristic zero), and 𝕍 is an ℓ-adic local system on X, we may descend X to a finitely generated field K and 𝕍 to X_K^s. If the isomorphism class of 𝕍 has finite orbit under Gal(K^s/K) we say it is arithmetic. See <cit.> for some discussion of this property.
Prior to Petrov's work <cit.>, the standard version of <ref> included a p-adic Hodge theory condition, namely that 𝕍 be de Rham (see e.g. <cit.>). Inspired by the analogy with the Tate conjecture, I conjectured and Petrov proved that this condition was redundant.
Much of my interest in local systems on topological surfaces with finite orbit under the mapping class group comes from <ref>; I view this conjecture as an arithmetic analogue of our topological questions about classifying such orbits, where the mapping class group is analogous to the absolute Galois group of K. See <ref> for a concrete relationship between the two, conditional on a plausible conjecture in surface topology (namely <ref>).
[The non-abelian Ogus conjecture]
We mention one more analogue along these lines. The Ogus conjecture <cit.> predicts which classes in algebraic de Rham cohomology arise from cycles in terms of the conjugate filtration. The analogous conjecture for local systems is:
Let X be a variety over a finitely generated integral ℤ-algebra R, with fraction field K of characteristic zero. Let (ℰ,∇) be a flat vector bundle on X/R. Then (ℰ, ∇)|_X_K is of geometric origin if and only if there exists a dense open subset U⊂Spec(R) such that for all 𝔭∈ U, the p-curvature of (ℰ,∇)|_X_𝔭 is nilpotent.
Here we view the p-curvature ψ_p of (ℰ,∇)|_X_𝔭 as a map
ψ_p: ℰ|_X_𝔭→ℰ|_X_𝔭⊗ F_abs^*Ω^1_X_𝔭
by adjunction; we say it is nilpotent if there exists a filtration on ℰ|_X_𝔭 such that the p-curvature vanishes on the associated graded vector bundle. For local systems of geometric origin, one may take this filtration to be the conjugate filtration.
There are other conjectural characterizations of local systems of geometric origin. For example, arguably <cit.> give analogues of the conjecture that absolute Hodge cycles are cycle classes. We also remark that Krishnamoorthy-Sheng conjecture <cit.> that periodic de Rham bundles[Here periodicity refers to their behavior under the Higgs-de Rham flow of Lan-Sheng-Zuo <cit.>.] on curves are of geometric origin; I am not sure if there is an “abelian" analogue of this conjecture, characterizing the image of the cycle class map in similar terms, in the literature.
The conjectures listed here and their variants are quite beautiful, but we have little evidence for them. In fact there are few cases where these conjectural equivalent conditions for a local system to be of geometric origin are even known to be equivalent to one another!
Can any of the conditions here expected to be equivalent to being of geometric origin be proven to imply one another? For example, can one show that, after choosing an isomorphism ℂ≃ℚ̄_ℓ, an arithmetic ℚ̄_ℓ-local system underlies an integral variation of Hodge structure?
The only result I know along these lines is <ref>, which shows that certain variations of Hodge structure are arithmetic in a weak sense, as we now explain.
Let 𝒳→ S be a smooth proper morphism of complex varieties. Let s∈ S be very general, and set X=𝒳_s. Let η be the generic point of S and η̄ a geometric generic point. Let 𝕍 be a complex local system on X underlying a polarizable ℤ-variation of Hodge structure, and for each ℓ let 𝕍_ℓ be the corresponding ℓ-adic local system. Fix a specialization isomorphism sp: π_1^ét(𝒳_η̄)→π_1^ét(X_s). Then the isomorphism class of the local system sp^*𝕍_ℓ on 𝒳_η̄, obtained by pulling back 𝕍_ℓ through this specialization map, has finite orbit under Gal(η̄/η).
The upshot of the above proposition is that ℤ-variations of Hodge structure on very general fibers of a smooth proper morphism 𝒳→ S have a weak arithmeticity property—we do not know how to show they have finite orbit under the Galois group of a finitely-generated field over which they spread out, but they do have finite orbit under a “geometric subgroup” of this Galois group, coming from the geometric fundamental group of S.
This is a reinterpretation of <ref>, closely following the proof of <ref>. As s is very general, we obtain from the closedness of non-abelian Noether-Lefschetz loci <cit.> that the Hodge filtration on the flat bundle corresponding to 𝕍 extends to its isomonodromic deformation, whence 𝕍 has finite π_1(S,s)-orbit by <ref>. The rest of the statement is a translation of this fact into the language of étale fundamental groups.
Below we include a table listing some of the “abelian" aspects of cohomology, and their non-abelian analogues.
Abelian | Non-abelian
de Rham cohomology | ℳ_dR(X, r)
Betti cohomology | ℳ_B(X, r)
Dolbeault cohomology | ℳ_Dol(X, r)
F^1H^1_dR(X, ℂ) | ℂ-VHS locus
Hodge classes | ℤ-VHS locus
The Hodge conjecture | <ref>
The Tate conjecture | <ref>
<ref> | <ref>
H^1(X, U(1))⊂ H^1(X, ℂ^×) Zariski-dense | <ref>(1)
H^1(X, ℤ^×)⊂ H^1(X, ℂ^×) Zariski-dense | <ref>(2)
§.§ A case study
While we know little about the conjectures in the previous section, there is one case we more or less understand completely, due to beautiful conjectures of Sun-Yang-Zuo <cit.>, proven independently by Lin-Sheng-Wang <cit.> and Yang-Zuo <cit.>, and by completely different methods by Lam and myself <cit.>. We give a brief exposition of a variant of these conjectures, with an eye towards verifying our conjectural characterization of local systems of geometric origin in a very special case.
We return to our friend
X=ℂℙ^1∖{x_1, ⋯, x_n},
this time with n=4. By applying a fractional linear transformation we may assume
x_1=0, x_2=1, x_3=∞
and set x_4=λ. Our goal will be to classify rank 2 local systems of geometric origin on X with local monodromy in the conjugacy class
C_1=C_2=C_3=[[ 1 1; 0 1 ]]
at 0,1,λ and in the conjugacy class
C_4=[[ -1 1; 0 -1 ]]
at ∞. By <ref> we already know how to do so when λ is generic: these local systems are related, by middle convolution, to local systems of rank 2 on X with monodromy a finite complex reflection group.
In fact this is true not just for λ generic. As in <ref>, let Y(C) be the space of isomorphism classes of local systems on X with local monodromy at x_i in the conjugacy class C_i. We have:
Let p: E_λ→ℙ^1 be the double cover branched at 0,1,∞, λ. Let 𝕍 be an irreducible local system in Y(C). Then there exists a rank one local system 𝕃 on the elliptic curve E_λ such that
𝕍=MC_-1(p_*𝕃|_X)=MC_-1(p_*𝕃^∨|_X).
This construction yields a bijection between dual pairs of local systems of rank 1 on E_λ (of order not dividing 2), and irreducible local systems 𝕍 in Y(C).
A local computation shows that MC_-1(𝕍) has local monodromy conjugate to
[ 1 0; 0 -1 ]
at each of 0,1,∞,λ; hence the pullback p^*MC_-1(𝕍) extends to all of E_λ. As 𝕍 is irreducible, the same is true for MC_-1(𝕍), whence p^*MC_-1(𝕍) is semisimple, hence (as π_1(E_λ) is abelian) it is a direct sum of two rank one local systems 𝕃_1, 𝕃_2. We must have that 𝕃_1⊕𝕃_2 is self-dual, as it is pulled back from X, hence isomorphic to its pullback under inversion on E_λ.
Now MC_-1(𝕍) must be isomorphic to either p_*𝕃_1|_X or p_*𝕃_2|_X. Hence one of the 𝕃_i (WLOG 𝕃_1) must have order greater than 2, as otherwise MC_-1(𝕍) would be reducible. But then 𝕃_1 is not self-dual, hence must be isomorphic to 𝕃_2^∨. Now the result follows from the invertibility of MC_-1.
Here MC is the middle convolution operation discussed in <ref>.
One may easily deduce the following from the fact that rank one local systems of geometric origin have finite order, the fact that MC_λ for λ a root of unity preserves the property of being of geometric origin, and the invertibility of the middle convolution:
An irreducible local system 𝕍 in Y(C) is of geometric origin if and only if it has the form
𝕍=MC_-1(p_*𝕃|_X)
for 𝕃 a non-trivial rank one local system of finite order.
As we have a very good understanding of the Galois action on rank one local systems on an elliptic curve, their p-curvature, and so on, one may (not entirely trivially) deduce:
<ref>, <ref>, <ref>, <cit.>, and <cit.> are true for local systems in Y(C).
Note that 𝕍 is of geometric origin if and only if the same is true for MC_-1(𝕍); this latter is of geometric origin if and only if the same is true for 𝕃 as in <ref>. But middle convolution preserves the properties of being an 𝒪_K-variation of Hodge structure, being arithmetic, having nilpotent p-curvature, and being an absolute ℚ-point (in the sense of <cit.>). <cit.> amounts to showing that MC_-1 commutes with the Higgs-de Rham flow; for this see <cit.> in the special case under consideration.
We say nothing else about the Higgs-de Rham flow in these notes; let me just remark for experts that it would be interesting to show that the middle convolution commutes with the Higgs-de Rham flow in general.
§.§ Rigid local systems
<ref> makes a strong prediction, which was explicitly conjectured by Simpson; most of our evidence for the conjectures in <ref> comes from studying this prediction. Before stating it, we make a definition:
Let X be a smooth projective variety. An irreducible local system 𝕍 of rank r on X is rigid if it corresponds to an isolated point of M_B(X,r).
Any rigid local system is of geometric origin.
Indeed, a rigid local system is necessarily arithmetic in the sense of <ref>, as there are finitely many rigid local systems (since M_B(X,r) is of finite type) and they are permuted by the absolute Galois group of any finitely-generated field to which X descends.
We have substantial evidence for <ref>. Simpson showed <cit.> that rigid local systems underlie K-variations of Hodge structure, for K some number field. The papers <cit.> and <cit.> show that the conjecture holds for SL_r(ℂ)-local systems with r≤ 3.
More recently, a spectacular series of papers by Esnault and Groechenig <cit.> has verified a number of predictions of <ref>. For example, local systems of geometric origin are direct summands of ℤ-local systems, and hence are defined not just over a number field K but in fact over its ring of integers 𝒪_K. We do not know how to prove this for rigid local systems in general, but Esnault and Groechenig do so in an important special case (as used crucially in the proof of <ref>).
Let X be a smooth projective variety. An irreducible complex local system 𝕍 on X is cohomologically rigid if H^1(X, ad(𝕍))=0.
<ref> has a natural geometric interpretation as a strengthening of <ref> — a cohomologically rigid local system corresponds to a reduced isolated point of M_B(X, r).
A cohomologically rigid local system is defined over the ring of integers 𝒪_K of some number field K.
It is natural to ask if there exist rigid local systems which are not cohomologically rigid. An example was found by de Jong, Esnault, and Groechenig <cit.>. However, this example does have a form of cohomological rigidity, as we now explain. Let G be an algebraic group over ℂ, with Lie algebra 𝔤. Let X be a smooth projective variety and 𝕍 a G(ℂ)-local system on X. We say that 𝕍 is G-cohomologically rigid if H^1(X, 𝔤)=0, where we view 𝔤 as a local system on X by composing the monodromy of 𝕍 with the adjoint representation of G.
Klevdal and Patrikis <cit.> show that G-cohomologically rigid local systems are integral, building on Esnault-Groechenig's work in the case that G has identity component SL_r. We believe the examples of de Jong, Esnault, and Groechenig are G-cohomologically rigid, where we take G to be the Zariski closure of their monodromy.
Do there exist irreducible local systems which are rigid but not G-cohomologically rigid, where G is the Zariski-closure of their monodromy group?
We mention two other pieces of evidence for <ref>. First, Katz <cit.> shows, as a consequence of his classification of rigid local systems on ℙ^1∖{x_1, ⋯, x_n} (discussed in <ref>), that such rigid local systems on ℙ^1∖{x_1, ⋯, x_n}, with quasi-unipotent local monodromy at the x_i, are of geometric origin. Second, Esnault and Groechenig show <cit.> that such rigid local systems have nilpotent p-curvature, i.e. they satisfy <ref>.
All the conjectures and results in this section have analogues for smooth quasi-projective varieties; in this case one studies local systems with fixed quasi-unipotent monodromy about the boundary divisors of a strict normal crossings compactification. In particular, Katz's results on ℙ^1∖{x_1, ⋯, x_n} fall into this paradigm, and the results of Esnault-Groechenig and Klevdal-Patrikis mentioned above hold in this generality.
§.§ Special points and dense subloci
As we have just seen, the conjectures of <ref>, and <ref>, “explain" the zero-dimensional components of the moduli of local systems on a smooth projective variety X—they all (conjecturally) correspond to local systems on X of geometric origin. It is natural to seek structural properties of the moduli of local systems on X which generalize these conjectures to the entire moduli space.
One such conjecture was proposed separately by Esnault-Kerz and Budur-Wang:
Let X be a smooth variety. The local systems of geometric origin are Zariski-dense in M_B(X, r).
Of course this conjecture would imply <ref>, by considering density of local systems of geometric origin in the zero-dimensional components of M_B(X,r). Aside from a desire to generalize <ref>, we believe that Esnault and Kerz were motivated by desired applications to hard Lefschetz theorems for local systems, while Budur and Wang were motivated by analogies with the André-Oort conjecture.
Taking X to be a generic curve, <ref> implies <ref>, as a local system of geometric origin on a generic curve will spread out to a family of curves as in <ref>, and hence have finite mapping class group orbit. As <ref> is false by <ref>, the same is true for <ref>; this was part of our motivation for proving <ref>.
Nonetheless we have found the philosophy behind <ref> to be inspirational. The goal of this section is to propose some ways that it might be salvaged. One plausible approach, pursued already in <cit.> and <cit.>, is to consider local systems having some, but not all, of the properties of local systems of geometric origin, and to try to prove their Zariski-density in the moduli of all local systems. For example:
Let X be a quasi-projective complex variety with X a simple normal crossings compactification.
The local systems on X with quasi-unipotent local monodromy about the components of X∖ X are Zariski-dense in M_B(X,r).
As local systems of geometric origin have this quasi-unipotent local monodromy property, this was intended as evidence for <ref>. Esnault-de Jong define a notion of “weakly arithmetic” local system (a weakening of the notion discussed in <ref>) and prove the density of such local systems in M_B(X, r).
Here we propose some analogous questions:
Let X be a smooth projective variety over ℂ. Are the following loci Zariski-dense in M_B(X, r):
* The locus of ℂ-variations of Hodge structure?
* The locus of local systems defined over ℤ?
Does there exist a fixed number field K such that the 𝒪_K-points of ℳ_B(X,r) are Zariski-dense in its ℂ-points? It is also natural to ask about stronger forms of density than Zariski-density; for example <cit.> and <cit.> consider a form of strong approximation for the character varieties discussed in <ref>.
For experts, we also give a p-adic analogue of this question:
Let X be a smooth projective variety over W(k), the ring of Witt vectors of a finite field k of characteristic at least 3. There is a “Frobenius pullback" map
F^*: ℳ_dR(X,r)(W(k))→ℳ_dR(X,r)(W(k))
(see e.g. <cit.>). Are the periodic points of this map Zariski-dense?
The density of weakly arithmetic local systems, proven by <cit.>, is an ℓ-adic version of <ref>. For a stronger (still open) version, see <cit.>.
We now briefly discuss some evidence, aside from the case of (cohomologically) rigid local systems, that the answer <ref> might be positive.
§.§.§ Density of ℂ-VHS
One result that gives hope that <ref>(1) might have a positive answer is the following famous result of Simpson, a generalization of which (due to Mochizuki) was used in the proof of <ref>.
Let X be a smooth projective variety and 𝕍 an irreducible local system on X. Then 𝕍 can be deformed to a complex variation of Hodge structure.
The proof proceeds by considering the stable Higgs bundle (ℰ, θ) corresponding to 𝕍 under the non-abelian Hodge correspondence. Simpson shows (using the properness of the Hitchin map) that the limit as t→ 0 of (ℰ, tθ) in M_Dol(X, r) exists. The limit is evidently a ℂ^×-fixed point, and Simpson shows these are precisely the Higgs bundles corresponding to complex variations of Hodge structure. The result now follows from the fact that M_Dol(X, r) and M_B(X, r) are (real-analytically) isomorphic, and in particular homeomorphic.
In fact the proof shows that there is a ℂ-VHS in each irreducible component of M_B(X,r).
The following is part of ongoing joint work with Botong Wang and Ruijie Yang:
Suppose X is a compact Riemann surface of genus g. Then the ℂ-VHS locus is Zariski-dense in M_B(X, r). If (C_1, ⋯, C_n) are a collection of semisimple quasi-unipotent conjugacy classes in GL_2(ℂ), then the ℂ-VHS locus is Zariski-dense in the character variety parametrizing rank 2 local systems on ℙ^1∖{x_1, ⋯, x_n}, with local monodromy about x_i lying in C_i.
The first part of this theorem is almost trivial; the locus of unitary local systems is Zariski-dense. The second part is a bit more involved, and we say nothing more about it here, except to note that already the locus of supermaximal local systems, in the sense of <cit.>, is Zariski-dense. See <cit.> for a discussion of supermaximal local systems in a related context.
Some evidence against the most optimistic possible answer here is given by considering the case of local systems on ℙ^1∖{x_1, ⋯, x_4}, where the local monodromy around each puncture is conjugate to
[ 1 1; 0 1 ].
Suppose one has a ℂ-VHS on ℙ^1∖{x_1, ⋯, x_4} with these local monodromies—the corresponding Higgs bundle is necessarily stable of degree zero and non-unitary, hence corresponds to a direct sum of two line bundles 𝒪_ℙ^1(a)⊕𝒪_ℙ^1(-a), with a>0 and non-zero Higgs field
θ: 𝒪_ℙ^1(a)→𝒪_ℙ^1(-a)⊗Ω^1_ℙ^1(x_1+⋯+x_4)= 𝒪_ℙ^1(2-a).
The only possibility is that a=1 and θ is an isomorphism, whence this Higgs bundle is unique up to isomorphism—hence there is a unique ℂ-VHS with the given local monodromies (this is the so-called uniformizing local system). But the character variety parametrizing local systems on ℙ^1∖{x_1, ⋯, x_4} with the given local monodromies is positive-dimensional, whence the ℂ-VHS locus is not Zariski dense. (See <cit.> for some closely related examples.)
§.§.§ Density of integral points
We now briefly discuss <ref>(2) and its variants. We are quite far from even the weakest possible forms of a positive answer. For example, as far as I know the following is open:
Let X be a smooth projective variety over ℂ, and suppose there exists an irreducible complex local system of rank r on X. Does there exist a number field K and an irreducible 𝒪_K-local system on X of rank r?
Some evidence for a positive answer to <ref> is given by the following:
Let X be a smooth projective variety over ℂ, and suppose there exists an irreducible complex local system of rank r on X. Then for each prime ℓ there exists an irreducible ℤ_ℓ-local system on X.
The proof relies on both the arithmetic and geometric Langlands program.
A great deal of work has been done on related questions in the case of character varieties of surfaces. For example, it is not hard to show:
Let X be a compact Riemann surface of genus g≥ 2. Then ℤ-points are Zariski dense in M_B(X, 2).
As integral points are dense in ℂ^×, it suffices to prove density in the locus of local systems with trivial determinant.
From the main result of <cit.>, taking the boundary to be empty, and the Zariski-density of the unitary locus, it is enough to produce one π_1(X)-representation defined over ℤ with a unitary Galois conjugate—the orbit of this representation under the mapping class group of X will be Zariski dense. But now the tautological local system on any compact Shimura curve (or étale cover thereof) of genus g suffices (and such exist for all g≥ 2, as one may take étale covers of a compact Shimura curve of genus 2).
The key idea here was to use the mapping class group action on the character variety to produce an abundance of integral points. This action on integral points has been studied in the beautiful papers <cit.>.
Of course one may consider stronger forms of density, e.g. density of integral points in the analytic topology on M_B(X, r), or in p-adic topologies (i.e. strong approximation questions). In particular, the example of certain Markoff surfaces, mentioned in <ref>, has been studied by Bourgain-Gamburd-Sarnak <cit.> and <cit.>.
Aside from their intrinsic interest and the application to conjectures on integrality of rigid local systems, Zariski-density of integral points in character varieties would have a number of useful applications, for example to the conjecture of Ekedahl-Shephard–Barron-Taylor for the isomonodromy foliation, discussed in <ref>.
§.§ Motivic subvarieties
The conjectures discussed in <ref> attempt to intrinsically characterize those local systems (i.e. points of ℳ_B(X, r), ℳ_dR(X,r), and so on) of geometric origin. What about higher-dimensional subvarieties?
It is not clear to us what a precise definition of a motivic subvariety of e.g. ℳ_B(X, r) should be, but the class of such subvarieties should, for example, be stable under taking intersections; for each f: X→ Y, the image of a motivic subvariety under the induced map
ℳ_B(Y, r)f^*→ℳ_B(X,r)
should be motivic; for each smooth proper g: Z→ X and each i≥ 0, the image of a motivic subvariety of W_g,r under
W_g,rR^ig_*⟶ M_B(X,r),
where W_g,r⊂ M_B(Z,r') is the locus of local systems 𝕍 on Z with rk R^ig_* 𝕍=r, should be motivic, and the W_g,r themselves should be motivic; subloci of M_B(X,r) consisting of local systems with monodromy contained in a conjugate of a fixed subgroup G⊂GL_r(ℂ) should be motivic; and so on.
It would be of great interest to find a characterization of motivic subvarieties analogous to the conjectures in <ref>; a number of authors have proposed and studied such characterizations, notably <cit.>, <cit.>. For example, a motivic subvariety of the character variety M_B(X,r) should:
* Be defined over the ring of integers of a number field,
* Have its image in M_Dol(X,r) under the comparison map of non-abelian Hodge theory be stable under the natural ℂ^×-action on M_Dol(X,r),
* Have its ℤ_ℓ-points stable under the absolute Galois group of some finitely-generated field to which X descends,
and so on. We will discuss a variant of the question of characterizing motivic subvarieties in <ref>.
§ VECTOR BUNDLES AND MAPPING CLASS GROUPS
We finally return to our fundamental example, that of the universal curve
𝒞_g,n→ℳ_g,n
over the moduli space ℳ_g,n of curves of genus g with n marked points. The goal of this section is to record a number of conjectures on the (pure) mapping class group PMod_g,n=π_1(ℳ_g,n), its representations, and its action on Y(g,n,r)=M_B(Σ_g,n, r), motivated by some of the considerations in previous chapters. Some of these are well-known to surface topologists; others are aimed at drawing relationships between algebro-geometric questions about Riemann surfaces and questions in surface topology.
§.§ Superrigidity
We begin with a classical conjecture of Ivanov:
Let g≥ 3. Then any finite index subgroup Γ of Mod_g,n has finite abelianization, i.e. H^1(Γ, ℂ)=0.
This conjecture is motivated in part by a well-known analogy between Mod_g,n and lattices in simple Lie groups of higher rank (see e.g. <cit.> for further considerations along these lines). All representations of such groups are in fact cohomologically rigid, by e.g. Margulis super-rigidity <cit.>. It seems natural to conjecture the same is true for irreducible representations of Mod_g,n; in fact, we conjecture:
Let g≥ 3. Any irreducible representation ρ of Mod_g,n is cohomologically rigid, i.e. H^1(Mod_g,n, ad(ρ))=0.
A variant of this conjecture is considered by Simpson in <cit.>, who attributes it to Hain and Looijenga. Note that he suggests that there exist rigid but not cohomologically rigid irreducible representations of Mod_g,n, whose construction he attributes to Hain and Looijenga; according to Hain <cit.>, this was a miscommunication, and no such representations are known to exist. Indeed, Simpson suggests that one may construct such representations as sub-objects of tensor powers of the standard representation Mod_g,n→Sp_2g; but any irreducible such sub-object is in fact cohomologically rigid.
Note that <ref> implies <ref>. Indeed, suppose Γ⊂Mod_g,n is a finite index subgroup with H^1(Γ, ℂ)≠ 0. Then Γ admits a surjection onto ℤ, and hence a non-trivial family of rank one representations
ρ_t: Γ↠ℤ→ℂ^×,
sending 1∈ℤ to t∈ℂ^×. The induced representations Ind_Γ^Mod_g,nρ_t form a non-trivial family of semisimple representations of Mod_g,n, hence for generic t there is some summand of Ind_Γ^Mod_g,nρ_t which is not rigid, contradicting <ref>.
Note that the analogy between Mod_g,n and lattices in higher-rank Lie groups is imperfect. For the latter, all representations are rigid. On the other hand Mod_g,n admits non-rigid reducible representations. See e.g. <cit.> for examples.
This conjecture is motivated in part by the ubiquity of “hidden rigidity” in the arguments of <ref> and <ref>, which it would help to explain.
And a positive answer would have a pleasant consequence towards the conjectures discussed in <ref>.
Let X be a very general curve of genus g≥ 3. Let 𝕍 be an irreducible local system on X which underlies a ℤ-variation of Hodge structure. Assume <ref>. Then 𝕍⊗ℤ_ℓ is arithmetic, in the sense of <ref>.
The proof is closely related to that of <ref>. As in the proof of that theorem, the hypotheses of <ref> are satisfied (taking 𝒳→ S to be 𝒞_g→ℳ_g). Hence the isomorphism class of 𝕍 has finite orbit under Mod_g=π_1(ℳ_g). By <cit.>, there exists a subgroup Γ⊂π_1(𝒞_g) containing π_1(X), and a projective representation of Γ whose restriction to π_1(X) has monodromy that of ℙ𝕍. By <ref> this projective local system is rigid, hence arithmetic.
Thus ℙ𝕍 is arithmetic. As it necessarily has finite determinant (as it is a ℤ-VHS), 𝕍 is itself arithmetic.
Moreover, <ref> would imply a different sort of classification of finite orbits of Mod_g,n on the character varieties Y(g,n,r) to <ref>, conditional on <ref>, by an analogous argument:
Suppose g≥ 3. Let
ρ: π_1(Σ_g,n)→GL_r(ℂ)
be an irreducible representation whose conjugacy class has finite orbit under Mod_g,n. Then for any complex structure on Σ_g,n, the local system associated to ρ is of geometric origin.
While the above conjecture is likely out of reach—we have few methods to prove that some abstract local system is of geometric origin—the following prediction might be approachable:
Let g≥ 3, and Γ⊂Mod_g,n a finite index subgroup. Then Γ acts on Y(g,n,r)^irr with only finitely many fixed points. All such fixed points correspond to local systems defined over the ring of integers 𝒪_K of some number field K.
Of course both conjectures above follow immediately in the regime where <ref> applies.
It may be instructive to compare the statement of <ref> to the classification of representations of Sp_2g(ℤ), g≥ 2, which follows from e.g. superrigidity. Not only are such representations rigid—in fact, they all factor through a continuous representation of Sp_2g(ℂ)×Sp_2g(ℤ) via the natural (diagonal) embedding
Sp_2g(ℤ)↪Sp_2g(ℂ)×Sp_2g(ℤ).
Any continuous complex representation of the factor Sp_2g(ℤ) has finite image.
Now Sp_2g(ℤ) is the fundamental group of the moduli stack 𝒜_g of principally polarized Abelian varieties of dimension g. Thus superrigidity, along with the classification of representations of Sp_2g(ℂ) (in particular, they are all summands of tensor powers of the standard representation), tells us that if 𝕍 is any local system on 𝒜_g, there exists a finite étale cover π: 𝒜'→𝒜_g, and some n≥ 0, such that π^*𝕍⊂π^*𝕎_taut^⊗ n, where 𝕎_taut is the tautological local system on 𝒜_g.
How does this compare to <ref>? If one accepts in addition <ref>, <ref> tells us that irreducible local systems on ℳ_g,n should be of geometric origin—in particular, they should be pulled back from a period domain.
§.§ The Putman-Wieland conjecture
Most of our evidence for <ref> and <ref> comes from work towards a conjecture of Putman and Wieland, which is essentially equivalent to <ref>. We spend the next few sections on a digression about this conjecture and related “big monodromy” conjectures, and their relationships to questions about vector bundles on curves, before returning to our “non-abelian” study of the Mod_g,n-action on Y(g,n,r) and attempting to make sense of the notion of big monodromy there.
Let Σ_g,n be an orientable surface of genus g with n punctures. Fixing a base-point x_0 in Σ_g,n, there is a natural action of Mod_g,n+1 on π_1(Σ_g,n, x_0); hence if Σ_g'→Σ_g is a cover branched at n points, a finite index subgroup of Mod_g,n+1 naturally acts on H_1(Σ_g', ℤ). Indeed, let Σ_g',n' be the complement of the ramification points in Σ_g'; then π_1(Σ_g',n') is a subgroup of π_1(Σ_g,n), and hence admits a natural action by its stabilizer Γ in Mod_g,n+1, a finite index subgroup. Hence Γ acts naturally on H_1(Σ_g',n', ℤ), its abelianization, and one can check that this action descends to the quotient H_1(Σ_g', ℤ).
Suppose g≥ 3 and n≥ 0. Then there is no non-zero vector in H_1(Σ_g', ℤ) with finite Γ-orbit, under the action of Γ on H_1(Σ_g',ℤ) described above.
We refer to <ref> for fixed (g,n) as PW(g,n).
Putman and Wieland show <cit.> that the conjecture PW(g-1,n+1) implies <ref> for Mod_g,n.
Putman and Wieland originally made conjecture <ref> for all g≥ 2. However Marković observed <cit.> that there are counterexamples in genus 2, using a beautiful construction of Bogomolov and Tschinkel <cit.>.
A number of cases of the Putman-Wieland conjecture have been verified by topological means—see e.g. <cit.> and the references therein. But much recent progress has been algebro-geometric in nature, as we now explain.
§.§.§ Algebro-geometric evidence
This conjecture has a simple algebro-geometric description. Let φ: Σ_g'→Σ_g be a finite cover branched at n points, and let ℳ_φ be the moduli stack of complex structures on Σ_g', Σ_g compatible with φ, so that we have a diagram
𝒳 p→ 𝒞 → 𝒞_g,n
  π↘   q↓     ↓
       ℳ_φ → ℳ_g,n
where for each m∈ℳ_φ, the map induced by p on the fibers over m,
p_m: 𝒳_m→𝒞_m,
is a holomorphic map from a genus g' curve to a genus g curve which is topologically the same as φ. Here the map ℳ_φ→ℳ_g,n classifies the family of curves 𝒞/ℳ_φ, so the square on the right is Cartesian.
The Putman-Wieland conjecture may be rephrased as follows:
Suppose g≥ 3. Then for any étale map f: ℳ→ℳ_φ,
H^0(ℳ, f^*R^1π_*ℂ)=0.
Another way to say this is: the relative Jacobian of π has no isotrivial isogeny factor.
This reformulation may now be attacked via Hodge-theoretic methods, and in particular, via an analysis of the derivative of the period map associated to the local system R^1π_*ℂ. In fact, the following is more or less immediate from <ref>:
Let H be a finite group such that each irreducible representation of H has dimension less than g. Then the Putman-Wieland conjecture is true for any H-cover Σ_g'→Σ_g.
In fact, a more or less identical argument shows that the Putman-Wieland conjecture is true for covers of Σ_g of degree d<g. See <cit.> for a leisurely introduction to the algebro-geometric aspects of the Putman-Wieland conjecture, and <cit.> for some closely related results.
Related methods allow us to prove the Putman-Wieland conjecture in the large n, as opposed to large g, regime.
Let φ: Σ_g'→Σ_g be an H-cover branched over n points, with H a simple group. Let r be the maximal dimension of an irreducible representation of H, and suppose
n>3r^2/√(g+1)+8r.
Then the Putman-Wieland conjecture is true for φ.
Note that this result applies even if g=0!
See also <cit.> for some results towards the Putman-Wieland conjecture of an algebro/differential-geometric nature.
§.§.§ Big monodromy
It seems natural to conjecture that much more than the Putman-Wieland conjecture is true:
Let H be a finite group, g≥ 3, and φ: Σ_g'→Σ_g a Galois H-cover. With notation as in diagram (<ref>):
* the identity component of the Zariski closure of the monodromy group of the local system R^1π_*ℂ is the derived subgroup of the centralizer of H in Sp_2g'(ℂ), and
* the monodromy of R^1π_*ℂ is an arithmetic subgroup of its Zariski-closure.
The monodromy representations associated to the local systems R^1π_*ℂ are known as higher Prym representations and have been studied in special cases for a long time; for example, if φ is an unramified ℤ/2ℤ-cover, they correspond precisely to classical Prym varieties.
It is not hard to see that the algebraic group identified in <ref>(1) is the largest possible—it must preserve the symplectic form on the cohomology of our family of curves, commute with the H-action, and it must be semisimple (this last is always the case for local systems of geometric origin). Thus <ref> is an instance of a standard slogan in algebraic geometry:
Monodromy groups should be as big as possible.
There is a fair amount of evidence for <ref>. For example, the papers <cit.> all prove special cases under various topological hypotheses. The main purpose of the paper <cit.> is to prove <ref>(1) for g large:
Let φ: Σ_g'→Σ_g be a Galois H-cover branched at n points. Let r be the maximum dimension of an irreducible complex representation of H. If either
* n=0 and g≥ 2r+2, or
* n>0 and g>max(2r+1, r^2),
then <ref>(1) holds for φ.
§.§ Big monodromy and Riemann-Hilbert problems
We now explain some of the ideas that go into the proof of <ref> and <ref>, and some of the conjectures they inspire.
§.§.§ Vector bundles and monodromy
We consider the following situation. Let
q: 𝒞→ℳ
be a family of n-punctured curves of genus g over a smooth base ℳ, with the associated map ℳ→ℳ_g,n dominant étale. Let 𝕌 be a local system on 𝒞. We would like to understand the monodromy of the local system R^1q_*𝕌. For example, one might take 𝕌=p_*ℂ in the notation of (<ref>), in which case understanding this monodromy (and in particular, the monodromy on the weight one part W^1R^1q_*𝕌) amounts to <ref>.
If 𝕌 carries a complex variation of Hodge structure (say of weight zero)—and in particular if 𝕌 is unitary—we can study this monodromy via the derivative of the period map associated to the complex variation of Hodge structure W^1R^1q_*𝕌, as we now explain. Explicitly, if q factors as
𝒞 ↪^ι 𝒞̄ ⟶^q̄ ℳ,
with q̄: 𝒞̄→ℳ a smooth relative compactification of q, then
W^1R^1q_*𝕌=R^1q̄_*ι_*𝕌.
For the rest of this section we assume q is itself proper, so W^1R^1q_*𝕌=R^1q_*𝕌, for notational simplicity. We also assume that 𝕌 is unitary. Fix m∈ℳ a point, and set C=𝒞_m. Let (ℰ,∇)=(𝕌|_C⊗𝒪_C, id⊗ d) be the flat bundle on C associated to 𝕌|_C by the Riemann-Hilbert correspondence. The Hodge filtration on (R^1q_*𝕌)_m=H^1(C, 𝕌|_C) has terms given by
F^1H^1(C, 𝕌|_C)=H^0(C, ℰ⊗Ω^1_C), H^1(C, 𝕌|_C)/F^1=H^1(C, ℰ).
Thus the derivative of the period map associated to the variation of Hodge structure on R^1q_*𝕌 at m is a map
dP_m: T_mℳ→Hom(H^0(C, ℰ⊗ω_C), H^1(C, ℰ))
which can be made explicit as follows. As the classifying map ℳ→ℳ_g,n is assumed to be étale, we may identify T^*_mℳ with the space of quadratic differentials on C, namely H^0(C, ω_C^⊗ 2). By Serre duality, H^1(C, ℰ) is dual to H^0(C, ℰ^∨⊗ω_C). Hence dP_m is adjoint to a map
H^0(C, ℰ⊗ω_C)⊗ H^0(C, ℰ^∨⊗ω_C)→ H^0(C,ω_C^⊗ 2),
namely the natural pairing induced by taking sections s_1∈ H^0(C, ℰ⊗ω_C), s_2∈ H^0(C, ℰ^∨⊗ω_C) to
s_1⊗ s_2∈ H^0(ℰ⊗ℰ^∨⊗ω_C^⊗ 2),
and then composing with the map to H^0(C, ω_C^⊗ 2) induced by the natural (trace) pairing
ℰ⊗ℰ^∨⊗ω_C^⊗ 2→ω_C^⊗ 2.
(See <cit.> for a proof.)
Suppose that the monodromy representation of π_1(ℳ,m) on H^1(C, 𝕌|_C) admits a non-zero invariant vector, or in other words, that R^1q_*𝕌 admits a constant sub-variation of Hodge structure (here we are using the Theorem of the Fixed Part). Then, possibly after replacing 𝕌 with its complex conjugate 𝕌, the map
H^0(C, ℰ⊗ω_C)→Hom(H^0(C, ℰ^∨⊗ω_C), H^0(C, ω_C^⊗ 2))
will have non-zero kernel. Put another way, there exists a section η∈ H^0(C, ℰ⊗ω_C) such that the induced (non-zero!) map
-∪η: ℰ^∨⊗ω_C→ω_C^⊗ 2
induces the zero map on global sections. In particular ℰ^∨⊗ω_C is not generated by global sections at the generic point of C.
Thus we have shown:
Suppose that for a general fiber C of q the vector bundle (𝕌|_C⊗𝒪_C)^∨⊗ω_C is generically generated by global sections. Then for all f: ℳ'→ℳ dominant étale,
H^0(ℳ, f^*R^1q_*𝕌)=0.
In fact this is more or less the idea of the proof of <ref> and <ref>. It may be instructive to compare the statement to that of <ref>.
§.§.§ Generic global generation
We have just seen that the global generation properties of vector bundles on curves impact the monodromy of certain variations of Hodge structure on their moduli. We make this precise as follows:
Let g≥ 3, and let
ρ: π_1(Σ_g,n)→ U(r)
be a unitary representation. Fix a generic complex structure X on Σ_g,n, with X the corresponding compact Riemann surface, and D=X∖ X, and let ℰ_⋆ be the corresponding parabolic bundle. Then ℰ_0⊗ω_X(D) is generically generated by global sections.
We refer to this conjecture as the ggg conjecture (for generically globally generated).
Here ℰ_⋆ is the parabolic bundle associated to ρ under the Mehta-Seshadri correspondence <cit.>. See e.g. <cit.> for the notation on parabolic bundles we are using. The following is the special case where n=0:
Let g≥ 3, and let
ρ: π_1(Σ_g)→ U(r)
be a unitary representation. Fix a generic complex structure X on Σ_g, and let (ℰ, ∇) be the corresponding flat bundle. Then ℰ⊗ω_X is generically generated by global sections.
Taking ρ to have finite monodromy, these conjectures imply the Putman-Wieland conjecture (<ref>), by <ref>. More generally, they would give some evidence for <ref>:
Suppose g≥ 3 and assume <ref>. Let
ρ: PMod_g,n+1→ U(r)
be a representation whose restriction to the point-pushing subgroup π_1(Σ_g,n)⊂PMod_g,n+1 is irreducible. Then ρ is cohomologically rigid.
Let 𝕌 be the local system on ℳ_g,n+1 corresponding to ρ. It is not hard to see that it has finite determinant. Let π: ℳ_g,n+1→ℳ_g be the forgetful map; by assumption, the restriction of 𝕌 to a fiber of π is irreducible, hence π_*ad(𝕌)=0. Thus
H^1(Mod_g,1, ad(𝕌))=H^0(ℳ_g, R^1π_*ad(𝕌)),
which vanishes by <ref>.
It would be very interesting to formulate analogues of <ref> for non-unitary local systems. The unitary case is already of some interest, though; for example, the above rigidity result would provide some evidence for the famous conjecture that mapping class groups have Kazhdan's property T, which in particular implies that unitary representations are rigid.
§.§.§ Global generation and big monodromy
<ref> shows that global generation properties of the vector bundles under consideration are related to cohomological vanishing of the sort considered in the Putman-Wieland conjecture, <ref>. What about big monodromy, of the form considered in <ref>? We first explain an algebro-geometric variant of <ref>, which follows from the proof of that theorem (see <cit.>):
Notation as in <ref>, with q proper; let C be a general fiber of q, of genus g≥ 4. Suppose that 𝕌 has finite monodromy, the monodromy representation ρ of 𝕌|_C is irreducible, and 𝕌|_C⊗ω_C, 𝕌^∨|_C⊗ω_C are globally generated. Let G be the Zariski-closure of the image of the monodromy representation
π_1(ℳ)→ GL(H^1(C, 𝕌|_C)).
* If ρ is orthogonally self-dual, then G=Sp(H^1(C, 𝕌|_C)).
* If ρ is symplectically self-dual, then G=SO(H^1(C, 𝕌|_C)).
* If ρ is not self-dual, then G=SL(H^1(C, 𝕌|_C))× H for some finite group of scalars H.
A self-dual irreducible representation ρ of a group G has (ρ⊗ρ)^G=1. As ρ⊗ρ=Sym^2ρ⊕∧^2ρ, we thus have that either (Sym^2ρ)^G=1 or (∧^2ρ)^G=1. In the former case, we say ρ is orthogonally self-dual, and in the latter we say it is symplectically self-dual. As the cup product is alternating, the monodromy representations H^1(C, 𝕌|_C) appearing in <ref> are, by Poincaré duality, symplectically self-dual if ρ is orthogonally self-dual, and orthogonally self-dual if ρ is symplectically self-dual. The groups Sp, SO appearing in the statement of <ref> are the group of automorphisms of H^1(C, 𝕌|_C) with trivial determinant that preserve the Poincaré duality pairing.
The proofs of <ref>, and its generalization to non-proper curves (see <cit.>), rely on this result; they proceed by verifying the global generation hypothesis. We briefly sketch the approach, and then make some related conjectures on global generation.
The key idea is that the global generation assumption allows us to functorially recover ρ from the derivative of the period map associated to R^1q_*𝕌. Indeed, we have:
Let C be a smooth proper curve and 𝕌 a unitary local system on C, with ℰ=𝕌⊗𝒪_C the associated vector bundle. Suppose that ℰ⊗ω_C, ℰ^∨⊗ω_C are globally generated. Then the image of the composition
H^0(C, ℰ⊗ω_C)⊗𝒪_CdP⊗id⟶ H^1(C, ℰ)⊗ H^0(C, ω_C^⊗ 2)⊗𝒪_Cid⊗ev⟶ H^1(C, ℰ)⊗ω_C^⊗ 2
is canonically isomorphic to ℰ⊗ω_C, where here dP is adjoint to the derivative of the period map discussed in <ref>.
We view this as a “generic Torelli theorem with coefficients" — compare e.g. to the proof of the generic Torelli theorem in <cit.>, or the proof of generic Torelli for hypersurfaces <cit.>.
We now sketch the proof of <ref>, taking <ref> as input.
We begin by showing that the complex variation of Hodge structure R^1q_*𝕌 appearing in <ref> is irreducible.
By <cit.>, the vector bundle ℰ=𝕌|_C⊗𝒪_C satisfies ℰ⊗ω_C, ℰ^∨⊗ω_C are globally generated, for a fiber C of q over a general point m∈ℳ.[The proof is a somewhat involved deformation theory argument, about which we say nothing more here.] Hence by our twisted Torelli theorem <ref>, ℰ⊗ω_C, and hence ℰ can be functorially recovered from the infinitesimal variation of Hodge structure associated to R^1q_*𝕌 at m.
In particular, if the complex variation of Hodge structure R^1q_*𝕌 is a non-trivial direct sum, the same is true for ℰ. But the Narasimhan-Seshadri correspondence <cit.> implies that ℰ is stable, hence irreducible.
A slight enhancement of this argument shows that the identity component of the Zariski-closure of the monodromy group of R^1q_*𝕌 is in fact a simple group, acting irreducibly <cit.>. Now <cit.> (which Zarhin attributes to Deligne) implies that this group is either SO, Sp, or SL, acting via a minuscule representation. We rule out the non-standard minuscule representations using a further analysis of the infinitesimal variation of Hodge structure associated to R^1q_*𝕌, this time along so-called Schiffer variations <cit.>.
We make the following optimistic strengthening of <ref>:
Let g≥ 3, and let
ρ: π_1(Σ_g)→ U(r)
be a unitary representation. Fix a generic complex structure X on Σ_g, and let (ℰ, ∇) be the corresponding flat bundle. Then ℰ⊗ω_X is generated by global sections.
By the proof of <ref> (and its strengthening <ref>), we have:
Assume <ref>. Then <ref>(1) is true for unramified covers Σ_g'→Σ_g, with g≥ 4.
It would be of some interest to find (and prove) an analogue of <ref> (for parabolic bundles, along the lines of <ref>) which would imply <ref>(1) in full generality, even for ramified covers.
See also <cit.> for some further discussion of these and related questions.
§.§.§ Riemann-Hilbert problems
There is another point of view on <ref> and its variants, that has a long history—that of Riemann-Hilbert problems. That is, given a monodromy representation with corresponding flat bundle (ℰ,∇), what can one say about the properties of the vector bundle ℰ? For example, Hilbert's 21st problem <cit.> asked when a given representation ρ of π_1(ℂℙ^1∖{x_1, ⋯, x_n}) can be obtained as the monodromy of a Fuchsian ODE, or in modern terms, when there exists a connection on the trivial bundle 𝒪_ℂℙ^1^r with regular singularities at x_1, ⋯, x_n and monodromy ρ (see <ref>). The analogous question in higher genus (where instead of asking that ℰ be trivial, we ask that it be semistable) was answered by Esnault-Viehweg and Gabber <cit.>.
Here we consider a variant of this type of question, where one fixes a flat bundle with regular singularities (ℰ, ∇) on a marked Riemann surface (X,D), and then ask how ℰ behaves under isomonodromic deformation, i.e. when one perturbs the complex structure on (X,D). We have already seen a version of these questions in <ref>. The general expectation is that ℰ should behave “as generically as possible" after a general isomonodromic deformation, but this is in fact not always the case.
<cit.>
Let X be a smooth proper curve of genus at least 2. There exist flat vector bundles (ℰ,∇) on X with irreducible monodromy such that no isomonodromic deformation has semistable underlying vector bundle.
Note that this theorem contradicts some other results in the literature, e.g. the main results of <cit.>. See <cit.> for a discussion.
In fact the examples arise from the Kodaira-Parshin trick, discussed in <ref>; the point is that, as we have seen before, non-unitary variations of Hodge structure never have semistable underlying vector bundle. Nonetheless, the following seems plausible (as stability is a generic property):
Let (X,D) be a marked curve of genus g at least 2, and (ℰ,∇) a flat bundle on X with regular singularities along D, and irreducible unitary monodromy. Then after a general isomonodromic deformation, ℰ is (semi)stable.
<ref> is immediate when D is empty, by the Narasimhan-Seshadri correspondence <cit.>, and the semistable case follows when g is large compared to the rank of ℰ, from <ref>. In <cit.>, Landesman and I prove that general isomonodromic deformations of flat bundles are in general not “too far" from being semistable—that is, we bound their Harder-Narasimhan polygon.
It is also natural to expect that the vector bundles underlying isomonodromic deformations of a fixed non-trivial irreducible unitary connection behave generically in a cohomological sense. For example, when D is empty, such ℰ is stable of slope zero, and so for ℒ a line bundle of degree 1 on X, one expects from the Riemann-Roch theorem that
H^0(X, ℰ⊗ℒ^⊗ s)=0
for s≤ g-1. For example, it seems reasonable to conjecture:
Let g≥ 3, and let (X,D) be a marked smooth projective curve of genus g≥ 3. Let (ℰ, ∇) be a flat bundle on X with irreducible, non-trivial unitary monodromy, and regular singularities along D, whose residue matrices have eigenvalues with real parts in [0,1).[These are the bundles appearing in the Mehta-Seshadri correspondence <cit.>; they are also known as Deligne canonical extensions.] There exists some non-decreasing function f, with f(3)=1, such that after isomonodromic deformation to a general nearby curve X', we have:
* (weak form) H^0(X', ℰ(Z))=0 for a general effective divisor Z on X' of degree d≤ f(g).
* (strong form) H^0(X', ℰ(Z))=0 for all effective divisors Z on X' of degree d≤ f(g).
This conjecture is closely related to <ref>. For simplicity of notation we assume D=∅. By considering the short exact sequence
0→ℰ^∨(-p)⊗ω_X→ℰ^∨⊗ω_X→ℰ^∨⊗ω_X|_p→ 0,
and using that H^1(ℰ^∨⊗ω_X)=0 for ℰ as in <ref>, we see that ℰ^∨⊗ω_X is generated by global sections at p if and only if H^1(X, ℰ^∨(-p)⊗ω_X)=0, or equivalently, by Serre duality, if H^0(X, ℰ(p))=0. Thus when D=∅, <ref>(1) implies <ref>, and <ref>(2) implies <ref>. (In fact <ref> implies <ref> even for D non-empty, but we omit the proof to shield the reader from more parabolic bundle notation.)
Our primary evidence for <ref> comes from the proof of <cit.>, which one can use to prove <ref> when g is large compared to the rank of ℰ. It would be useful for someone to extract the precise function f(g,r) one can obtain from the proof of that result, and to write a careful proof of the implication. The statement given there shows that <ref>(2) is true when g≥ 2+2rk(ℰ) and f(g)=1.
We regard <ref> (and in particular the special case discussed in the previous paragraph) as an analogue of Green-Lazarsfeld's generic vanishing theorem <cit.> and its variants (see e.g. <cit.> for some analogues in higher rank). While those theorems analyze the cohomological behavior of generic flat bundles on a fixed variety, <ref> aims to understand the cohomological behavior of flat bundles with fixed monodromy on a variety (curve) with general complex structure.
§.§.§ Prill's problem
We briefly remark on the connection between <ref> and another question in classical algebraic geometry: Prill's problem <cit.>.
[Prill's problem]
Let X,Y be smooth projective curves of genus at least 2 over the complex numbers, and let f: Y→ X be a non-constant morphism. Can a general fiber of f move in a pencil? That is, can
dim H^0(Y, 𝒪_Y(f^-1(x)))≥ 2
for general x∈ X?
The answer, we believe, was expected to be “no." Indeed, by the Riemann-Hurwitz formula we have
# f^-1(x)≤ g(Y);
and a generic effective divisor on a curve of genus g, of degree at most g does not move in a pencil.
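Spelling out the first bound: since g(X)≥ 2 and the ramification divisor R_f of f is effective, the Riemann–Hurwitz formula gives
2g(Y)-2 = (deg f)(2g(X)-2) + deg R_f ≥ 2 deg f,
so deg f ≤ g(Y)-1, and hence # f^-1(x) ≤ deg f < g(Y) for every x∈ X.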
By the projection formula we have f_*𝒪_Y(f^-1(x))=(f_*𝒪_Y)(x). Hence setting ℰ=f_*𝒪_Y/𝒪_X, we have that dim H^0(Y, 𝒪_Y(f^-1(x)))≥ 2 if and only if
H^0(X, ℰ(x))≠ 0.
As ℰ carries a unitary flat connection with regular singularities at the branch points of f and residues with real parts in [0,1) (with monodromy the non-trivial summand of the permutation representation associated to the deck transformation group of f), we expect this group to be zero for generic X, x, by <ref>(1), when the genus of X is at least 3.
In fact this bound on the genus is necessary—Landesman and I observed in <cit.> that every genus 2 curve X admits a finite degree 36 étale cover f:Y→ X such that every fiber of f moves in a pencil—thus the answer to Prill's problem is “yes" when the genus of X is 2. In fact the construction is the same as that used by Marković in his disproof of the Putman-Wieland conjecture in genus 2 <cit.> (see <ref>), due to Bogomolov and Tschinkel <cit.>. In <cit.>, we observe that any counterexample φ: Σ_g'→Σ_g to the Putman-Wieland conjecture, <ref>, gives rise to an example of a cover of a general curve of genus g such that Prill's problem has a positive answer.
§.§ Non-abelian big monodromy
Having digressed somewhat into abelian questions (e.g. the monodromy of certain local systems), we finally return to where we started: the action of the mapping class group and its subgroups on the space of conjugacy classes of rank r representations of π_1(Σ_g,n), namely Y(g,n,r).
As in <ref>, we now view this action as the analogue of the monodromy representation associated to R^1π_*ℂ, where
π: 𝒞_g,n→ℳ_g,n
is the map from the universal n-punctured curve of genus g to the moduli space of genus g curves with n marked points. (Indeed, the fiber of R^1π_*ℂ^× is precisely Y(g,n,1), and the monodromy action on this fiber identifies with the natural mapping class group action on Y(g,n,1).) More generally, given a family of smooth n-punctured curves of genus g, p: 𝒞→ℳ, we obtain an action of π_1(ℳ) on Y(g,n,r). Explicitly, p induces a classifying map ℳ→ℳ_g,n, and the induced action of π_1(ℳ) on Y(g,n,r) is given by the composition
π_1(ℳ)→π_1(ℳ_g,n)≃PMod_g,n→Aut(Y(g,n,r)).
If we think of this action as a “non-abelian" monodromy representation, we are naturally led to try to understand the extent to which <ref> (that monodromy groups should be as big as possible) holds in this case. In this section, we make a modest attempt at making sense of this slogan in the non-abelian setting.
For g≥ 1, one sense in which the local system R^1π_*ℂ^× has big monodromy is that for any dominant map f: ℳ→ℳ_g,n, the group
H^0(ℳ, f^*R^1π_*ℂ^×)
is finite. In other words, there are only finitely many points of Y(g,n,1) fixed by π_1(ℳ). <ref> tells us that the same is true in general for the action of π_1(ℳ) on Y(g,n,r), when g≥ r^2, so <ref> may be thought of as a “non-abelian big monodromy" statement. And <ref> predicts the analogous statement holds true whenever g≥ 3.
There are a number of other ways one might make sense of non-abelian big monodromy. Goldman and others have studied the ergodicity of the Mod_g,n-action on the space of unitary representations of π_1(Σ_g,n) in a series of beautiful papers, for example <cit.>, and this action was studied from the view of topological density in a number of other papers, e.g. <cit.>. Closer to our point of view is the work of Katzarkov, Pantev, and Simpson <cit.>, who give two possible interpretations of big monodromy in this setting:
* No invariant function (NIF): there are no meromorphic functions on M_B(Σ_g,n, r) invariant under the action of π_1(ℳ).
* Big orbit (BO): There exists a point of M_B(Σ_g,n, r) whose orbit under π_1(ℳ) is dense in the Zariski topology on M_B(Σ_g,n, r).
They show that both notions hold true for the Mod_g-action on M_B(Σ_g, r) for g large and r odd <cit.> and (again for r odd) for the π_1(B)-action on M_B(Σ_g, r) when B is the base of a Lefschetz pencil of sufficiently high degree in a fixed algebraic surface <cit.>.
They conjecture the following:
Let 𝒞→ℳ be any non-isotrivial smooth proper family of genus g≥ 2 curves. Then the induced π_1(ℳ)-action on M_B(Σ_g, r) satisfies NIF and BO.
As far as we know, little progress has been made on this conjecture since <cit.>.
§.§.§ Invariant subvarieties
In <ref>, <ref>, and <ref>, we studied the finite orbits of π_1(ℳ) on Y(g,n,r). What about higher-dimensional invariant subvarieties? A natural (imprecise) expectation in the case ℳ=ℳ_g,n, analogous to <ref>, is that for g≥ 3, any such subvariety should all be motivic, in the sense of <ref>.
[Imprecise]
Let Z⊂ Y(g,n,r) be a maximal irreducible subvariety stable under the action of a finite index subgroup of Mod_g,n. Then is Z “of geometric origin" for any complex structure on Σ_g,n?
The primary evidence we have for a positive answer is the theorem of Corlette-Simpson <cit.> on rank 2 local systems, on which we heavily relied in <ref>. We recall it now:
Let X be a smooth quasi-projective complex variety, and
ρ: π_1(X)→SL_2(ℂ)
a Zariski-dense representation. Either ρ is rigid and of geometric origin, or there exists a map f: X→ C, for C some Deligne-Mumford curve, such that ρ is pulled back along f.
Combining this with the proof of <ref>, we have the following:
Let 𝒞→ S be a family of n-punctured curves of genus g, with S a smooth variety. Let s∈ S be a point, and set C=𝒞_s. Then letting M_B(C, SL_2)^non-deg be the subvariety of M_B(C, 2) consisting of Zariski-dense SL_2-local systems on C, every irreducible component Z of
(M_B(C, SL_2)^non-deg)^π_1(S,s)
is motivic in the following sense. Either
* Z is a point, and corresponds to a local system of geometric origin, or
* Z consists of local systems pulled back from a fixed Deligne-Mumford curve.
Note that the two conditions in the result above are not mutually exclusive.
Finally, we give a conjectural description of invariant subvarieties along the lines of our non-linear analogue of the p-curvature conjecture, <ref>.
Let 𝒳→ S be a smooth proper morphism over a finitely-generated integral ℤ-algebra R, s∈ S an R-point, and Z⊂ℳ_dR(𝒳/S, r)_s a closed substack. Then Z(ℂ) is invariant under a finite index subgroup of π_1(S, s) if its formal isomonodromic deformation has an integral model.
This is meant to be the higher-dimensional analogue of <ref>; it specializes to that statement if Z is a point. It is arguably the non-abelian analogue of <cit.>, which aims to characterize the identity component of the Zariski-closure of the monodromy group of an ODE in terms of its p-curvatures. This latter conjecture is in fact equivalent to the classical p-curvature conjecture, <ref>, by <cit.>. On the other hand, we do not know how to reduce <ref> to <ref>.
§.§.§ Geometric subgroups of the mapping class group
We conclude with a brief discussion of some questions that seem broadly relevant to the analysis of the “non-abelian monodromy" of π_1(ℳ) on Y(g,n,r) associated to an arbitrary family of n-punctured curves of genus g, q: 𝒞→ℳ, with ℳ smooth. As before, this action factors through the map of fundamental groups
π_1(ℳ)→π_1(ℳ_g,n)=Mod_g,n,
induced by the classifying map ℳ→ℳ_g,n. We call the image of such a map a geometric subgroup of the mapping class group Mod_g,n. Thus a natural question becomes:
What are the geometric subgroups of Mod_g,n?
There are some evident restrictions on such subgroups. For example, by the Torelli theorem, non-trivial geometric subgroups of Mod_g,n cannot be contained in the Torelli group. Indeed, if 𝕍 is any variation of Hodge structure on ℳ_g,n with quasi-finite period map, then 𝕍 yields an analogous restriction: the restriction of 𝕍 to any geometric subgroup of Mod_g,n must have infinite monodromy. Moreover any variation of Hodge structure whatsoever on ℳ_g,n must have semisimple monodromy when restricted to a geometric subgroup (as variations of Hodge structure are always semisimple).
It seems natural to ask for a geometric analogue of this last observation, which would be useful in approaching <ref>. Let γ⊂Σ_g,n be a simple closed curve, and let Γ_γ⊂Mod_g,n be the centralizer of the Dehn twist about γ.
Fix a simple closed curve γ⊂Σ_g,n. Can an infinite geometric subgroup of Mod_g,n be conjugate to a subgroup of Γ_γ?
Here we view the subgroups Γ_γ as analogues of parabolic subgroups of GL_r(ℂ); they are the fundamental groups of punctured neighborhoods of boundary divisors in the Deligne-Mumford compactification of ℳ_g,n.
Generalized implementation of invariant coordinate selection with positive semi-definite scatter matrices

Aurore Archimbaud
TBS Business School, 1 Place Alphonse Jourdain, 31000 Toulouse, France
Corresponding author. Email address: <[email protected]>
http://arxiv.org/abs/2409.02258v1
§ ABSTRACT
Invariant coordinate selection (ICS) is an unsupervised multivariate data transformation useful in many contexts such as outlier detection or clustering. It is based on the simultaneous diagonalization of two affine equivariant and positive definite scatter matrices. Its classical implementation relies on a non-symmetric eigenvalue problem (EVP) obtained by diagonalizing one scatter relative to the other. In the case of collinearity, at least one of the scatter matrices is singular and the problem cannot be solved. To address this limitation, three approaches are proposed, based on: a Moore-Penrose pseudo-inverse (GINV), a dimension reduction (DR), and a generalized singular value decomposition (GSVD). Their properties are investigated both theoretically and in different empirical applications. Overall, the extension based on the GSVD seems the most promising, even if it restricts the choice of scatter matrices to those that can be expressed as cross-products. In practice, some of the approaches also look suitable in the context of data in high dimension low sample size (HDLSS).
Keywords: Dimension reduction; Generalized Eigenvalue Problem; High-dimension; Pseudo-inverse; Singular scatters; Singular value decomposition
MSC (2020): Primary 62H99; Secondary 62-08; Tertiary 65F99
§ INTRODUCTION
Invariant Coordinate Selection (ICS) is a powerful unsupervised multivariate method designed to identify the structure of multivariate datasets on a subspace. It relies on the joint diagonalization of two affine equivariant and positive definite scatter matrices V_1 and V_2 and is particularly relevant as a dimension reduction tool prior to clustering <cit.> or outlier detection <cit.>. It goes beyond the well-known Principal Components Analysis (PCA) method by not maximizing the inertia but optimizing a generalized kurtosis. More precisely, some theoretical results <cit.> proved that under some elliptical mixture models, the subspace spanned by the first and/or last components carries the information regarding the multivariate structure and recovers the Fisher discriminant subspace, whatever the choice of scatter matrices is.
The goal of ICS is to find the p× p matrix B=( b_1,…, b_p)^⊤ of eigenvectors, which simultaneously diagonalizes two scatter matrices V_1 ∈𝒫_p and V_2 ∈𝒫_p, with 𝒫_p being the set of all symmetric positive definite matrices of order p and where ^⊤ denotes the transpose operation. To fix the order of the components and their normalization, we follow the standard definition of <cit.> by diagonalizing V_2 relative to V_1:
B V_1 B^⊤ = I_p and B V_2 B^⊤ = D,
where D is a diagonal matrix with decreasing diagonal elements ρ_1≥…≥ρ_p>0, which correspond to the eigenvalues of V_1^-1 V_2, and B contains the corresponding eigenvectors as its rows.
This problem can be re-written as:
V_2 b_i = ρ_i V_1 b_i
⇔ V_1^-1 V_2 b_i = ρ_i b_i, for i∈{1,…,p},
with the following normalization:
* b_i^⊤ V_1 b_j=0 for i≠ j and b_j^⊤ V_1 b_j=1 for i = j, with i, j ∈{1,…,p},
* b_i^⊤ V_2 b_j=0 for i≠ j and b_j^⊤ V_2 b_j=ρ_j for i = j, with i, j ∈{1,…,p}.
Equivalently, as stated in <cit.>, the eigenvalues ρ_i, for i ∈{1,…,p} and the eigenvectors b_1, …, b_p can also be sequentially defined by solving the successive maximization or minimization problems of the ratio:
𝒦( b) = b^⊤ V_2 b/ b^⊤ V_1 b,
where ρ_1 is the maximal possible value of 𝒦( b) over b ∈ℝ^p which is achieved in the direction of the eigenvector b_1.
The so-called invariant coordinates or components are then obtained as: Z_n =(X_n - 1_n T(X_n)^⊤) B(X_n)^⊤, where X_n =( x_1,…, x_n)^⊤∈ℝ^n × p is a p-variate sample with n observations, 1_n denotes an n-variate vector full of ones, and T(X_n) denotes a location estimator, usually the one that goes along with V_1.
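For later reference, a minimal R sketch of this classical implementation could look as follows (the names ics_classic and cov4 are ours; both scatter matrices are assumed to be well-defined and positive definite, i.e. the data is of full rank, and the sample mean is used as the location estimator associated with the covariance):

# Scatter matrix of fourth moments COV_4 (classical version, full-rank data)
cov4 <- function(X) {
  n <- nrow(X); p <- ncol(X)
  Xc <- sweep(X, 2, colMeans(X))                 # centered data
  r2 <- mahalanobis(X, colMeans(X), cov(X))      # squared Mahalanobis distances
  crossprod(Xc * sqrt(r2 / ((p + 2) * n)))       # 1/((p+2)n) * sum r_i^2 (x_i - xbar)(x_i - xbar)'
}
# Classical ICS: non-symmetric EVP of V1^{-1} V2, scores Z = (X - 1 xbar') B'
ics_classic <- function(X, V1 = cov(X), V2 = cov4(X)) {
  eig <- eigen(solve(V1) %*% V2)
  ord <- order(Re(eig$values), decreasing = TRUE)
  B   <- t(Re(eig$vectors[, ord]))                          # eigenvectors as rows of B
  B   <- sweep(B, 1, sqrt(diag(B %*% V1 %*% t(B))), "/")    # scale rows so that b_j' V1 b_j = 1
  Z   <- sweep(X, 2, colMeans(X)) %*% t(B)                  # invariant coordinates
  list(values = Re(eig$values)[ord], B = B, scores = Z)
}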
In practice, the requirement of positive definiteness for the two scatter matrices is limiting
and we focus on the case where at least one of these scatter matrices is singular. This is a common case since the variance-covariance matrix can be positive semi-definite as soon as some variables are collinear or in a high dimension low sample size context (HDLSS), i.e. if the number of variables exceeds the number of observations. Nowadays, more and more data are easily collected and collinearity issues arise more frequently, even when the number of observations is still higher than the number of variables. In this case, performing ICS is very challenging as we might not be able to compute: (i) one or the two scatter matrices and (ii) the inverse of V_1 as required to solve the GEP <ref>. A simple idea is to perform a variable selection, if many variables are known to be non-relevant. However, this procedure can induce the deletion of a substantial number of variables to obtain a convenient number of dimensions on which both scatter estimators can be defined. And so, it can also lead to a potential loss of information.
The collinearity is a long-standing issue in multivariate analysis since quite a lot of methods rely on the simultaneous diagonalization of two or more scatter matrices<cit.>, for which the non-singularity of a scatter matrix is required. One of the most well-known methods that solves a GEP of two scatter matrices is
the classical Fisher Linear Discriminant Analysis (LDA) which maximizes the separation ratio of the between and the within-group covariance matrices Σ_B and Σ_W.
In the case of collinearity, the rank of the between-class covariance matrix Σ_B is equal to the number of groups, leading to a singular matrix. So, the maximization problem cannot be performed by solving the Σ_W^-1Σ_B eigenvalue problem anymore. To overcome such an issue, <cit.> and <cit.> review some solutions even in case the within-group covariance matrix Σ_W is also singular. Among others, the proposed approaches exploit the Moore-Penrose pseudo-inverse or the Generalized Singular Value Decomposition (GSVD). Another well-used approach is the dimension reduction through a singular value decomposition like in <cit.> or <cit.>. This rank reduction is also used for the LDA method in the HDLSS context, as explained by <cit.> and as a pre-processing step of ICS in <cit.>.
The objective of this paper is to adapt some of those approaches to generalizing ICS to the singularity issue, to investigate theoretically and practically their properties and to provide R <cit.> implementations as well.
The structure of this paper is as follows. In Section <ref>, we introduce a definition of ICS with semi-definite positive scatter matrices and we present the challenges associated with such a context. In Section <ref> we propose three approaches to adapting ICS to the case of semi-definite positive scatter estimates based on the Moore-Penrose pseudo-inverse (GINV), the dimension reduction (DR) and the generalized singular value decomposition (GSVD). We also investigate theoretically their properties in terms of: (i) the criterion to optimize, (ii) the affine invariance of the scores and (iii) the symmetry of the roles of the two scatter estimates V_1 and V_2. Section <ref> focuses on different empirical applications to infirm or confirm the theoretical aspects. Finally, Section <ref> concludes the paper and discusses further perspectives.
§ ICS WITH SEMI-DEFINITE POSITIVE SCATTER MATRICES
In Subsection <ref>, some scatter matrices are detailed and a more general definition is given for the case where they are positive semi-definite. Subsection <ref> generalizes ICS to positive semi-definite scatter matrices, while Subsection <ref> presents the main challenges associated with it.
§.§ Scatter matrices
Let 𝒫_p be the set of all symmetric positive definite matrices of order p and 𝒮𝒫_p be the set of all symmetric positive semi-definite matrices of order p. Generally, a scatter matrix is defined as a p × p matrix V(X_n) ∈𝒫_p which is affine equivariant in the sense that:
V(X_n A + 1_n γ^⊤) = A^⊤ V(X_n) A,
where A is a full rank p × p matrix, γ a p-vector and 1_n an n-vector full of ones. Among the most common ones there are the regular covariance matrix:
COV(X_n)= 1/(n-1)∑_i=1^n ( x_i- x̅)( x_i- x̅)^⊤,
where x̅ denotes the empirical mean, and the so-called scatter matrix of fourth moments:
COV_4(X_n) = 1/((p+2)n)∑_i=1^n r_i^2 ( x_i- x̅)( x_i- x̅)^⊤,
where r_i^2 = ( x_i - x̅)^⊤ COV(X_n)^-1( x_i - x̅) is the classical squared Mahalanobis distance. It is well known that those two scatter matrices are not robust against non-normality and the presence of outliers. One of the most widely-used robust alternatives is the minimum covariance determinant estimator (MCD) <cit.>. For a tuning parameter α∈ [0.5, 1], the MCD selects out of the n observations those n_α = ⌈α n ⌉ observations  x_i_1, …, x_i_n_α for which the sample covariance matrix has the smallest determinant:
MCD_α(X_n) = c_α1/n_α∑_j=1^n_α ( x_i_j- x̅_α,n)( x_i_j- x̅_α,n)^⊤,
where x̅_α,n is the sample mean of the selected set of observations and c_α is a consistency factor. To increase efficiency, it is often combined with a reweighting step, see for example <cit.> for more details.
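In R, a (reweighted) MCD scatter and its accompanying location estimate can be obtained for instance with the covMcd() function of the robustbase package; the value of α below is purely illustrative:

library(robustbase)
mcd   <- covMcd(X, alpha = 0.75)   # X: an n x p numeric data matrix
V_mcd <- mcd$cov                   # robust scatter estimate
T_mcd <- mcd$center                # robust location estimate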
Theoretically, we can consider scatter matrices V(X_n) which are only symmetric positive semi-definite. If X_n is not of full rank, then V(X_n) ∈𝒮𝒫_p. For robust scatter matrices based on a subset of observations lying on a subspace of lower dimension than the entire space, V(X_n) belongs to 𝒮𝒫_p. This arises for example in the presence of outliers in the orthogonal complement subspace (OC outliers). In this context, we can extend the definition of a scatter matrix V(X_n) ∈𝒮𝒫_p which is said to be affine equivariant in the sense that:
V(X_n A + 1_n γ^⊤) = A^⊤ V(X_n) A,
for any A and γ as previously defined. <cit.> proves that if X_n lies in some r ≤ p-dimensional hyperplane, then
affine equivariant location and scatter statistics are essentially affine equivariant statistics defined on this hyperplane. For a p × (p-r) matrix M of rank p-r and m ∈ℝ^p-r, let us define the hyperplane ℋ(M,m)={ x∈ℝ^p | M^⊤ x= m}; then for any affine equivariant scatter matrix V(X_n): M^⊤ V(X_n) M = 0_(p-r) × (p-r), where 0_j × k denotes the j × k matrix of all zeroes. In addition the lemma states that if L is any p × r matrix such that A = [ L M ] is non-singular then:
V(X_n A ) = [ V^(r)(X_n L) 0_r × (p-r); 0_(p-r) × r 0_(p-r) × (p-r) ],
where V^(r)( Y) is a scatter matrix for n × r data matrices Y. <cit.> also note that if the data is in general position[Data is in general position if there is no subset of k observations lying on a subspace of dimension k-2, with k≤ p+1 and p denotes the number of variables.], then all affine equivariant scatter statistics, symmetric in the observations[i.e. V( Q X_n )= V(X_n) for any permutation matrix Q of order n and X_n ∈ℝ^n × p, the initial data containing n observations, characterized by p variables.], are proportional to the variance-covariance matrix. In practice, if the data is perfectly multicollinear but with n > p then the data is not in general position. If n ≤ p, the situation depends on the data themselves but there are examples, mainly in the automotive field <cit.>, where the data is not in general position.
Another challenge is that many of the well-known robust affine equivariant scatter statistics such as the M-estimators <cit.> or the MCD are not well-defined when the data contain collinear variables. Usually, we can only define the variance-covariance matrix or the projection-based estimators <cit.> such as the Stahel-Donoho estimator <cit.>. To overcome this issue, regularized estimators of scatter matrices have been proposed: the regularized M-estimators <cit.> or the minimum regularized covariance determinant estimator (MRCD) <cit.> among others.
The scatter matrix based on the fourth moments COV_4 can also be defined in a more general context if the inverse of COV is replaced by its pseudo-inverse COV^+:
COV_4(X_n) = 1/((p+2)n)∑_i=1^n r_i^2 ( x_i- x̅)( x_i- x̅)^⊤,
where r_i^2 = ( x_i - x̅)^⊤ COV^+(X_n)( x_i - x̅) is the classical squared Mahalanobis distance. All those p× p scatter matrices belong to 𝒮𝒫_p if X_n is not of full rank and are no longer necessarily affine equivariant. This is a well-known consequence and it is common to relax the affine invariance of the multivariate methods in case of singularity. For example, <cit.> propose to focus on some “weak invariance" based on the relative ranking of the outlierness measures of the observations instead of requiring the same score value after an affine transformation.
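A minimal R sketch of this more general COV_4, relying on MASS::ginv() for the Moore-Penrose pseudo-inverse (the name cov4_ginv is ours):

library(MASS)                                     # for ginv()
cov4_ginv <- function(X) {
  n <- nrow(X); p <- ncol(X)
  Xc <- sweep(X, 2, colMeans(X))                  # centered data
  r2 <- rowSums((Xc %*% ginv(cov(X))) * Xc)       # r_i^2 with COV^+ instead of COV^{-1}
  crossprod(Xc * sqrt(r2 / ((p + 2) * n)))
}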
For convenience, for the rest of the paper, the dependence on X_n is dropped from the different scatter matrices V(X_n) when the context is obvious.
§.§ ICS as a generalized eigenvalue problem
With the common definition of the ICS method, the two scatter matrices V_1 and V_2 should be positive definite to find a finite and nonzero eigenvalue ρ for the eigenproblem (<ref>): V_2 b = ρ V_1 b ⇔ V_1^-1 V_2 b = ρ b. The positive definiteness of V_1 is required to compute its inverse whereas the positive definiteness of V_2 ensures nonzero eigenvalues.
If V_1 is singular and not proportional to V_2, then the equivalence (<ref>) is not true anymore since V_1 is not invertible. The problem cannot be simplified to a non-symmetric eigenvalue problem (EVP) anymore and so, we have to solve the initial Generalized Eigenvalue Problem (GEP):
V_2 b_i = ρ_i V_1 b_i for i ∈{1,…,p},
with V_1 ∈𝒮𝒫_p and V_2 ∈𝒮𝒫_p which are not necessarily of full rank. If V_1 and/or V_2 is singular, that means that the null spaces of the two scatter matrices are not empty and they do not necessarily span the same subspace. Concretely, in this context, solving the GEP of V_1 and V_2 leads to considering the following cases:
* if b ∈ Im(V_1) ∩ Im(V_2) then ρ∈ℝ^+*,
* if b ∈ Ker(V_2) - Ker(V_1) then ρ=0,
* if b ∈ Ker(V_1) - Ker(V_2) then ρ=∞,
* if b ∈ Ker(V_1) ∩ Ker(V_2) then any ρ∈ℝ is a solution of the GEP. The corresponding eigenvectors are not well-defined and might cause stability issues for some algorithms. However, the structure of the data is not associated with those directions and so, we do not need to consider them further.
So, contrary to the classical ICS, the directions b associated with infinite or zero eigenvalues should also be analyzed as they might highlight some of the structure of the data. Let us illustrate the challenges of considering semi-definite scatter matrices for ICS on an artificial data set.
§.§ Challenges with singular scatter matrices: an illustrative example
Let X=(X_1,…,X_p)^⊤ be a p-multivariate real random vector and assume the distribution of X is a mixture of two Gaussian distributions with different covariance matrices:
X∼ (1-ϵ) N( 0_p, [ W_1 0; 0 0 ]) + ϵ N( 0_p, [ W_1 0; 0 W_2 ]),
with ϵ <1/2, W_1 ∈𝒫_r_1 and W_2 ∈𝒮𝒫_p-r_1 with rank( W_2) = r_2 ≤ p-r_1.
Such a distribution illustrates a model containing two clusters: the majority of the data and a group that can be identified as outlying observations. The first cluster follows a Gaussian distribution such that the majority of the data is contained in a r_1-dimensional subspace spanned by the range of W_1. The observations from the second cluster behave the same as previously on the r_1-dimensional subspace but they are also present in r_2 directions not spanned by the majority of the data. The goal of the ICS method is to find this r_2-dimensional subspace where the observations of the second cluster are outlying. Here this subspace is spanned by the range of W_2 which is the orthogonal complement of the range of W_1, i.e. the null space of W_1.
Let us try to recover this subspace using the ICS method with a theoretical “perfectly robust” scatter functional V_1 = [ W_1 0; 0 0 ] and a theoretical “non-robust” scatter functional, the covariance of X, V_2= [ W_1 0; 0 ϵ W_2 ]. We have V_1 ∈𝒮𝒫_p and V_2 ∈𝒮𝒫_p, with rank(V_1)=r_1<rank(V_2)≤ p. In addition, Im(V_1) = Im( W_1) and Im(V_2) = Im( W_1) ⊕ Im( W_2).
Several of the aforementioned cases arise on this example. First, the intersection of the spaces spanned by the two scatter functionals V_1 and V_2 corresponds to the r_1-dimensional subspace spanned by W_1, so r_1 nonzero eigenvalues should be found. Then, since Ker(V_1) - Ker(V_2) ≠{0}, a new direction associated to an ∞ eigenvalue should also be analyzed. In fact, this is the one that reveals the outliers. Finally, if rank(V_2) < p, then the two scatter functionals share a part of their null subspaces. This subspace is not important since it contains no structure. However, we consider this phenomenon in our analysis because it is common in practice that the data is not of full rank. In addition, this feature could also make some algorithms unstable.
Practically, to illustrate the model (<ref>), we generate 1000 observations with exactly 20 outliers, W_1= I_2 and W_2 = diag(2,0). On the left scatterplot matrix of Figure <ref>, we can see that the outliers, represented by blue triangles, behave differently than the majority of the data only on the third variable. The subspace spanned by this third variable is the only one of interest to identify these observations as outliers.
This example can be seen as tricky since the outliers are well-identified on the third variable and no observations lie on the fourth one. So, we apply an affine transformation based on a non-singular p × p particular Toeplitz matrix A:
A =
[ 1 p-1/p p-2/p 1/p; p-1/p 1 p-1/p p-2/p; p-2/p p-1/p 1 p-1/p; 1/p p-2/p p-1/p 1 ],
to transform the initial data to X_n^* = X_n A. We can notice on the right scatterplot matrix of Figure <ref>, that the outliers are no longer as well separated on the third transformed variable as they were initially. In addition, we are no longer able to see that the observations lie in a three-dimensional subspace. However, the structure of outlierness of the data is still contained in one dimension only. The challenge is to be able to recover the direction spanned by the outliers with ICS.
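The following R sketch generates data of this form (the seed and the exact code behind the figures are not given here, so this is only an illustration of the model with n = 1000, 20 outliers, W_1 = I_2 and W_2 = diag(2, 0), followed by the transformation by A):

set.seed(1)                                   # illustrative seed only
n <- 1000; n_out <- 20; p <- 4
X <- cbind(matrix(rnorm(2 * n), n, 2), 0, 0)  # majority group: N(0, diag(1, 1, 0, 0))
X[1:n_out, 3] <- rnorm(n_out, sd = sqrt(2))   # outliers also spread along the third variable
A <- toeplitz(c(1, (p - 1):1 / p))            # Toeplitz matrix with first row (1, 3/4, 2/4, 1/4)
X_star <- X %*% A                             # transformed data X* = X A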
§ IMPLEMENTATION OF ICS FOR SEMI-DEFINITE POSITIVE SCATTER MATRICES
Several methods exist to solve a Generalized Eigenvalue Problem. Among others, there are the well-known QZ-algorithm introduced by <cit.> or the procedure described by <cit.>. However, in practice, solving a GEP of two scatter matrices with a common null space is particularly challenging from a numerical point of view. Indeed, the presence of this null space makes procedures like the well-known QZ-algorithm very unstable. So far, the available algorithms are not satisfactory as they might lead to complex and negative eigenvalues. For that reason, the GEP is not directly solved in general and surrogate approaches are used in <cit.>. In this section, we investigate three theoretical and practical implementations of ICS based on the Moore-Penrose pseudo-inverse (GINV) in Subsection <ref>, dimension reduction (DR) in Subsection <ref> and
generalized singular value decomposition (GSVD) in Subsection <ref>. We focus on the collinearity issue with n>p.
§.§ ICS with a Moore-Penrose pseudo-inverse
If we assume that V_1 and V_2 can be defined and computed on X_n but that V_1 ∈𝒮𝒫_p with rank(V_1)=r_1<p, then it is not possible to solve V_1^-1 V_2 since V_1 is not invertible. Instead, we can replace the inverse of V_1 with its Moore-Penrose pseudo-inverse V_1^+ and solve:
V_1^+ V_2 b = ρ b,
where V_1^+ = P Λ^+ P^⊤ = P_1 Λ_r_1^-1 P_1^⊤, with Λ_r_1^-1 containing only the inverses of the r_1 nonzero eigenvalues of V_1, and
P is an orthogonal matrix containing the eigenvectors of V_1. P can be partitioned as P = [ P_1 P_2], with P_1, the p× r_1 matrix containing the first r_1 eigenvectors associated to the r_1 nonzero eigenvalues of V_1, which is an orthonormal basis for the range space of V_1. Similarly, the p× (p-r_1) matrix P_2 spans the null space of V_1. P_1 and P_2 are semi-orthogonal matrices such that: P_1^⊤ P_1 = I_r_1 and P_2^⊤ P_2 = I_p-r_1.
Solving V_1^+ V_2 restricts the direction b associated to the largest eigenvalue ρ_1, with b = v_1 + v_0, v_1 ∈ Im(V_1), v_0 ∈ Ker(V_1), only onto the subspace spanned by V_1 and expressed by v_1:
v_1= argmax_ b ∈ℝ^p, b ≠ 0 b^⊤Π_V_1 V_2 Π_V_1 b/ b^⊤ P_1 Λ_r_1 P_1^⊤ b,
where Π_V_1= P_1 P_1^⊤. The roles of V_1 and V_2 are not exchangeable anymore.
Instead of solving the non-symmetric EVP V_1^+ V_2, we transform it to V_1^+1/2 V_2 V_1^+1/2, which is symmetric:
V_1^+ V_2 b = ρ b ⇔ V_1^+1/2 V_2 V_1^+1/2 b^* = ρ b^*
⇔
(Λ_r_1^-1/2 P_1^⊤ V_2 P_1 Λ_r_1^-1/2-ρ I_r_1) b^*=0,
with b= P_1 Λ_r_1^-1/2 b^*. By multiplying by P_1Λ_r_1^1/2, and because P_1 is only semi-orthogonal, the equation (<ref>) can be rewritten as:
( P_1 P_1^⊤ V_2 P_1 P_1^⊤ -ρ P_1 Λ_r_1 P_1^⊤) b=0,
which leads to the following modified ICS criterion for the eigenvector associated with the largest eigenvalue:
max_ b ∈ℝ^p, b ≠ 0 b^⊤ P_1 P_1^⊤ V_2 P_1 P_1^⊤ b/ b^⊤ P_1 Λ_r_1 P_1^⊤ b,
with P_1 P_1^⊤= Π_V_1, an orthogonal projection matrix onto Im(V_1), as P_1 is an orthonormal basis for Im(V_1). So Im( P_1 P_1^⊤ V_2 P_1 P_1^⊤) ⊆ Im(V_1) and Im( P_1 Λ_r_1 P_1^⊤) = Im(V_1). In addition, ℝ^p can be decomposed such that: ℝ^p = Im(V_1) ⊕ Ker(V_1), thus the solution b of the criterion (<ref>) can be expressed as:
b = v_1 + v_0 with v_1 ∈ Im(V_1) and v_0 ∈ Ker(V_1),
with v_1= argmax_ b ∈ℝ^p, b ≠ 0 b^⊤Π_V_1 V_2 Π_V_1 b/ b^⊤ P_1 Λ_r_1 P_1^⊤ b.
As Ker(V_1) ⊆ Ker( Π_V_1 V_2 Π_V_1), optimizing the new criterion (<ref>) restricts the solutions to directions b only onto the subspace spanned by V_1 and expressed by v_1.
If the structure of the data is only visible on the subspace spanned by V_2, in the null space of V_1, then it is not possible to highlight it and recover the outlying observations: if b ∈ Ker(V_1) - Ker(V_2), then ρ=∞.
The roles of V_1 and V_2 are not exchangeable anymore. Indeed, the directions found only span the range of the inverted scatter matrix. So, the ranks of the null spaces of V_1 and V_2 are now important. The results remain whether V_2 is singular or not.
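A minimal R sketch of this GINV variant (ics_ginv is our name; it simply replaces solve(V1) by MASS::ginv(V1) and therefore inherits the limitations just discussed):

library(MASS)                                    # for ginv()
ics_ginv <- function(X, V1, V2) {
  eig <- eigen(ginv(V1) %*% V2)
  ord <- order(Re(eig$values), decreasing = TRUE)
  B   <- t(Re(eig$vectors[, ord]))               # eigenvectors as rows
  Z   <- sweep(X, 2, colMeans(X)) %*% t(B)       # scores, invariant only up to orthogonal transformations
  list(values = Re(eig$values)[ord], B = B, scores = Z)
}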
Equivalence with the classical ICS.
If V_1 ∈𝒫_p then solving the V_1^-1 V_2 eigenvalue problem or using the Moore-Penrose pseudo-inverse of V_1 is equivalent because V_1^+=V_1^-1.
If V_2(X_n) is not defined, the standard ICS algorithm is not applicable but computing V_2 on the whitened data might be possible: V_2(X_n V_1^+1/2).
Going back to our artificial example <ref>, using the Moore-Penrose pseudo-inverse of V_1 leads to optimizing the following criterion:
max_ b ∈ℝ^p, b ≠ 0 b^⊤Π_ W_1 V_2 Π_ W_1 b/ b^⊤ W_1 b = max_ b ∈ℝ^p, b ≠ 0 b^⊤ W_1 b/ b^⊤ W_1 b=1.
Clearly, in this case, any b ∈ℝ^p is a solution of the maximization, which implies that the structure of outlierness contained in W_2 cannot be highlighted. We obtain two eigenvalues equal to one since V_1 is two-dimensional and two others equal to zero. The projection of the data onto the eigenvectors space is illustrated in Figure <ref>. Definitely, the outliers cannot be identified because the eigenspace is restricted to the subspace spanned by V_1 which does not contain the structure of outlierness defined by W_2. So, the pseudo-inverse of V_1 does not always give the correct solution to the singularity issue of the scatter matrices.
If the roots ρ_1,…,ρ_p are all distinct, then for the orthogonal transformation X_n^* = X_n A + 1_n γ^⊤, with A being non-singular and γ∈ℝ^p, the coordinates
Z_n^* =(X_n^* - 1_n T(X_n^*)^⊤) B(X_n^*)^⊤ and
Z_n =(X_n - 1_n T(X_n)^⊤) B(X_n)^⊤,
then
Z_n^* = Z_n J,
where J is a p × p diagonal matrix with diagonal elements ± 1, which means the coordinates Z_n^* and Z_n are invariant up to their signs through an orthogonal transformation.
Let X_n^* = X_n A + 1_n γ^⊤, with A being non-singular and γ∈ℝ^p and V_1(X_n) ∈𝒮𝒫_p with rank(V_1)<p. By definition of a scatter matrix: V_1(X_n^*) = A^⊤ V_1(X_n) A and if A is an orthogonal matrix, then:
V_1^+(X_n^*) = A^-1 V_1^+(X_n) ( A^⊤)^-1 = A^⊤ V_1^+(X_n) A.
Following the computations detailed in the proof of Property <ref>:
V_1^+(X_n^*) V_2(X_n^*) b̃ = ρb̃⇔ ( A P_1 P_1^⊤ V_2(X_n) P_1 P_1^⊤ A^⊤ -ρ A P_1 Λ_r_1 P_1^⊤ A^⊤ ) b̃ =0,
with b= A P_1 Λ_r_1^-1/2b̃ and by multiplying by A P_1Λ_r_1^1/2.
This leads to the following modified ICS criterion:
max_ b ∈ℝ^p, b ≠ 0 b^⊤ A Π_V_1(X_n) V_2(X_n) Π_V_1(X_n) A^⊤ b/ b^⊤ A P_1 Λ_r_1 P_1^⊤ A^⊤ b.
Compared to the criterion (<ref>), the eigenvectors are rotated by A^⊤ and so projecting the transformed data X_n^* onto B A^⊤ or projecting X_n onto B leads to the same coordinates Z_n^* and Z_n up to their signs.
If V_1 ∈𝒮𝒫_p then the ICS coordinates are not necessarily invariant by an affine transformation since the assumption of orthogonality is required in the proof (see the next counter-example <ref>).
We consider the simulated data transformed by the non-singular matrix A (<ref>). In this case, the structure of outlierness of the data is still contained only in one dimension. So, if the two scatter matrices V_1 and V_2 are of full rank then doing ICS on the initial data X_n or on the transformed X_n^*=X_n A should lead to the same coordinates. However, if rank(V_1)<p and if we use the pseudo-inverse V_1^+ then we lose this affine invariance property of the Invariant Components (IC). Indeed, in the simulated example, we obtain two different eigenvalues, ρ_1=1.1237 and ρ_2=1 instead of the two equal to one, as illustrated on the right panel of Figure <ref>. Obviously, projecting the data onto the eigenvectors' space leads to new scores, and the affine invariance of the coordinates is lost.
To conclude, using the generalized inverse shows some differences. First, the initial ICS criterion (<ref>) may be modified to the criterion (<ref>), which leads to finding directions only on the subspace spanned by V_1 and so the structure contained in the space spanned by Ker(V_1) - Ker(V_2) cannot be highlighted. Second, if we use a generalized inverse, the coordinates are invariant up to an orthogonal transformation as for PCA but no longer to an affine transformation. This is unfortunate since an additional choice is required: standardize the data or not.
Finally, the two scatter functionals V_1 and V_2 are not exchangeable anymore. Indeed, the directions found only span the range of the inverted scatter matrix. The results remain whether V_2 is singular or not.
§.§ ICS with a dimension reduction as pre-processing
Another well-known approach consists of getting rid of the singularity issues by doing a reduction of dimension (DR) first, hoping that no information about the data structure will be lost. The idea is to perform a Singular Value Decomposition (SVD) of the initial data and to project it onto the right-singular vectors associated with the non-zero singular values. Among others, <cit.> or <cit.> use this pre-processing step before applying their outlier detection algorithms based on PCA or Mahalanobis distances. This rank reduction is also used for the LDA method in the HDLSS context, as explained by <cit.>. However, the performance of the preprocessing for the LDA method relies on the rank of the covariance matrix which has to fall into a specific range to ensure that the new within-covariance becomes non-singular. Another pitfall is noted by <cit.> who advised against using an SVD before a robust PCA in the presence of OC outliers.
Considering the reduced data X_n^* = X_n P_1 ∈ℝ^n × r_X_n of rank r_X_n, we have to solve the GEP of V_1(X_n^*) and V_2(X_n^*):
V_2(X_n^*) b = ρ V_1(X_n^*) b,
where X_n = U D P^⊤ with U and P two n × n and p × p orthogonal matrices, D = [ Δ^1/2_r_X_n× r_X_n 0_r_X_n× (p-r_X_n); 0_(n-r_X_n)× r_X_n 0_(n-r_X_n)× (p-r_X_n) ]. The elements of the diagonal matrix Δ^1/2 are the square roots of the positive eigenvalues of X_n X_n^⊤ and X_n^⊤ X_n. The columns of P are also the eigenvectors of X_n^⊤ X_n and the columns of U are the eigenvectors of X_n X_n^⊤. U and P can be partitioned as U=[ U_1 U_2] where U_1 is n × r_X_n, U_2 is n × (n-r_X_n) and P=[ P_1 P_2] where P_1 is p × r_X_n and P_2 is p × (p-r_X_n). U_1 and P_1 are both semi-orthogonal matrices and U_1 (resp. P_1) is an orthonormal basis for the column space (resp. the row space) of X_n.
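A minimal R sketch of this DR preprocessing (ics_dr is our name and reuses the ics_classic sketch above; the rank is estimated here by thresholding the singular values of the centered data, a choice which, as discussed below, is itself delicate):

ics_dr <- function(X, tol = 1e-8) {
  Xc  <- sweep(X, 2, colMeans(X))
  dec <- svd(Xc)                            # Xc = U D P', right singular vectors in dec$v
  r   <- sum(dec$d > tol * dec$d[1])        # estimated rank r_X
  P1  <- dec$v[, seq_len(r), drop = FALSE]
  ics_classic(X %*% P1)                     # classical ICS on the reduced data X* = X P_1
}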
If X_n is of full rank.
If X_n is of full rank then performing an SVD as a preprocessing step before ICS leads to the same components as if we directly compute the invariant coordinates (IC) from the initial data X_n. Indeed, in this case, rank(X_n)=r= p, the data is transformed by a non-singular orthogonal p × p matrix P and it is known that the invariant coordinates are invariant by an orthogonal transformation. However, from a computational point of view, some numerical discrepancies can arise due to the additional step.
If Ker(V_1(X_n^*))= Ker(X_n^*) but Ker(V_2(X_n^*)) - Ker(X_n^*) ≠{0}.
If rank(V_1(X_n^*))= rank(X_n), then the GEP (<ref>) of V_1(X_n^*) and V_2(X_n^*) can be simplified to the classical EVP: V_1(X_n^*)^-1 V_2(X_n^*) b = ρ b.
However, doing the first step of dimension reduction and transforming the data onto X_n^* does not ensure the non-singularity of V_2(X_n^*). So, it is possible to choose a V_2(X_n^*) of lower rank than X_n. In this case, the solution can lead to some directions associated to a zero eigenvalue of multiplicity potentially greater than one.
If Ker(V_1(X_n^*)) - Ker(X_n^*) ≠{0}.
If Ker(V_1(X_n^*)) - Ker(X_n^*) ≠{0}, it means that V_1(X_n^*) is still singular and doing the first step of dimension reduction does not solve the problem.
If V_1 = COV and V_2 is any scatter matrix as defined in (<ref>), then performing ICS with the Moore-Penrose pseudo-inverse of V_1 or running ICS on the reduced data leads to the same coordinates up to their signs.
Let us start with the initial GEP in <ref>: V_2(X_n) b = ρ V_1(X_n) b. By multiplying by P^⊤ and with b = P b̃ the equation can be rewritten as:
P^⊤ V_2(X_n) P b̃ = ρ P^⊤ V_1(X_n) P b̃⇔ V_2(X_n P)b̃ = ρ V_1(X_n P) b̃,
with
V(X_n P ) = [ V^(r_X_n)(X_n P_1) 0_r_X_n× (p-r_X_n); 0_(p-r_X_n) × r_X_n 0_(p-r_X_n) × (p-r_X_n) ] and V^(r_X_n)(X_n P_1) = V(X_n^*). Using the notations introduced in Subsection <ref> for the Moore-Penrose pseudo-inverse: V_1(X_n^*)=Λ_r_1. This is because the right eigenvectors from the singular value decomposition of X_n are the ones of X_n^⊤ X_n= V_1(X_n) = P_1 Λ_r_1 P_1^⊤ and V_1(X_n^*)= P_1^⊤ V_1(X_n) P_1 with P_1 being a p × r_X_n semi-orthogonal matrix and so r_X_n<p.
Finally, we solve:
[ V_2(X_n^*) 0_r_X_n× (p-r_X_n ); 0_(p-r_X_n) × r_X_n 0_(p-r_X_n ) × (p-r_X_n ) ]b̃ = ρ[ Λ_r_X_n 0_r_X_n× (p-r_X_n ); 0_(p-r_X_n) × r_X_n 0_(p-r_X_n ) × (p-r_X_n ) ]b̃,
with b̃ = v_1 + v_0 with v_1 ∈ Im(X_n^⊤) and v_0 ∈ Ker(X_n),
and so if we restrict the solutions only onto the subspace spanned by Im(X_n^*):
V_2(X_n^*) v_1 = ρ V_1(X_n^*) v_1
⇔ V_2(X_n^*) v_1 = ρΛ_r_X_n v_1
⇔max_ v_1 ∈ℝ^p, v_1 ≠ 0 v_1^⊤ V_2(X_n^*) v_1/ v_1^⊤Λ_r_1 v_1.
If we compare to the modified criterion <ref> obtained using the generalized inverse:
max_ a ∈ℝ^p, a ≠ 0 a^⊤ P_1 P_1^⊤ V_2(X_n) P_1 P_1^⊤ a/ a^⊤ P_1 Λ_r_1 P_1^⊤ a⇔max_ a ∈ℝ^p, a ≠ 0 a^⊤ P_1 V_2(X_n^*) P_1^⊤ a/ a^⊤ P_1 V_1(X_n^*) P_1^⊤ a,
with a= a_1 + a_0 with a_1 ∈ Im(V_1(X_n)) = Im(X_n^⊤) and a_0 ∈ Ker(V_1(X_n)). So, after projecting, the new coordinates are the same up to their signs.
So in this case, doing the pre-processing leads to an additional step to the method which is not needed and which implies the same drawbacks as doing ICS with a generalized inverse.
From a practical point of view, estimating the rank of X_n might be very challenging as illustrated in Subsection <ref> and can lead to a loss of information regarding the structure of the data.
To conclude, the preprocessing step of dimension reduction does not fulfill all its promises. First, it cannot guarantee that it solves the singularity issues of the scatter matrices. Then, even if it does, if we choose V_1 as the variance-covariance matrix, we recover exactly the same modified criterion to solve as when we use the generalized inverse and so the same drawbacks. Finally, if we choose V_2 as the variance-covariance matrix, we might be unable to recover the structure of the data if it is only contained on the subspace spanned by V_1.
§.§ ICS with a generalized singular value decomposition
In this section, we focus on an implementation based on a Generalized Singular Value Decomposition (GSVD) as proposed by
<cit.> and <cit.> in the LDA context. More specifically, they use a GSVD for computing eigenvectors to define the Fisher discriminant subspace, when the between-group Σ_B and the within-group Σ_W covariance matrices are susceptible to being singular. The only requirement with this method is to express Σ_B and Σ_W as cross-product matrices, which is easily obtained by their definition.
This procedure, which uses a GSVD to solve a GEP, can be applied to other scatter matrices which can be expressed as cross-products.
However, defining a stable algorithm for the GSVD is very challenging, and much research has been devoted to this topic, such as <cit.> among others. In this section, we present the GSVD procedure as it is given in <cit.>, restricted to the case of real matrices. We retain this definition since it is already implemented in LAPACK and can be used directly in R through the geigen <cit.> package.
Let us define X_V_1 ∈ ℝ^n × p s.t. X_V_1^⊤ X_V_1 = V_1 = V_1(X_n) and X_V_2 ∈ ℝ^n × p s.t. X_V_2^⊤ X_V_2 = V_2 = V_2(X_n). V_1, V_2 ∈ 𝒮𝒫_p with rank(X_V_1) = rank(V_1) = r_1 ≤ p and rank(X_V_2) = rank(V_2) = r_2 ≤ p.
The GSVD of X_V_1 and X_V_2 allows us to define the generalized eigenvalues and eigenvectors of the pencil X_V_2^⊤ X_V_2 − ρ X_V_1^⊤ X_V_1:
B X_V_1^⊤ X_V_1 B^⊤ = B V_1(X_n) B^⊤ = [ 0 0; 0 D_1^⊤ D_1 ] and B X_V_2^⊤ X_V_2 B^⊤ = B V_2(X_n) B^⊤ = [ 0 0; 0 D_2^⊤ D_2 ],
where X_V_1 = U D_1 [ 0 R] Q^⊤, X_V_2 = V D_2 [ 0 R] Q^⊤, U and V are n × n, Q is p × p, and U, V and Q are orthogonal. R is r × r, upper triangular and nonsingular with r = rank([X_V_1^⊤, X_V_2^⊤]^⊤). [ 0 R] is r × p (in other words, the 0 is an r × (p − r) zero matrix). D_1 and D_2 are n × r. Both are real, nonnegative, and diagonal, satisfying D_1^⊤ D_1 + D_2^⊤ D_2 = I_r. Write D_1^⊤ D_1 = diag(α_1^2,…, α_r^2) and D_2^⊤ D_2 = diag(β_1^2,…, β_r^2); the ratios α_j/β_j for j = 1, …, r are called the generalized singular values.
B^⊤ = Q [ I_p−r 0; 0 R^−1 ]. Note that the normalization is not the same as the one presented in the standard definition <ref>, but it can easily be adapted.
Vectorially, it is equivalent to solving a modified version of the GEP (<ref>): V_2 b_i = ρ_i V_1 b_i, for i=1,…,p:
β_i^2 V_2 b_i = α_i^2 V_1 b_i
⇔ V_2 b_i = ρ_i V_1 b_i, for i=1,…,p,
where ρ_i = α_i^2/β_i^2 is real, nonnegative and possibly infinite.
The rows of B are the eigenvectors of X_V_2^⊤ X_V_2 − ρ X_V_1^⊤ X_V_1 or equivalently of V_2 − ρ V_1, and the “nontrivial” eigenvalues are the squares of the generalized singular values: ρ_i = α_i^2/β_i^2, for i=p−r+1,…,p. The “trivial” eigenvalues are those corresponding to the leading p − r rows of B, which span the common null space of X_V_1^⊤ X_V_1 and X_V_2^⊤ X_V_2. These eigenvalues are not well defined and are not of interest. All the cases of interest are summarized in Table <ref>.
Re-writing the GEP (<ref>) as in (<ref>) presents some advantages compared to the other two methods. First, it allows finding all the directions which can reveal some structure of the data in the general case where V_1 ∈ 𝒮𝒫_p and V_2 ∈ 𝒮𝒫_p, as summarized in Table <ref>. Second, it is clear that V_1 and V_2 play a symmetric role. This is important since the other methods can miss the structure of the data if it is contained in the subspace spanned by V_2 and in the null space of
V_1 in particular. Third, this formulation is still equivalent to the classical EVP (<ref>) if the two scatter matrices are of full rank, with ρ_i = α_i^2/β_i^2. In addition to these nice characteristics, the invariant coordinates remain invariant under affine transformations.
Affine invariance property.
For two affine equivariant scatter matrices V_1 and V_2, and using the eigenvectors defined by (<ref>), the invariant coordinates are invariant under an affine transformation.
Adaptation of the proofs from <cit.>, Appendix A.1, for distinct and multiple roots; details are given in Appendix <ref>.
Let us take the same example as previously from Subsection <ref>. Using the GSVD of X_V_1 and X_V_2 to solve the GEP (<ref>) leads to investigating four different cases for the direction b, as illustrated in the left panel of Figure <ref>:
* if b ∈ Im(V_1) ∩ Im(V_2) = Im(W_1), then the direction b is restricted to the subspace spanned by W_1, as when we use the Moore-Penrose pseudo-inverse, and we obtain two eigenvalues equal to one,
* if b ∈ Ker(V_2) − Ker(V_1) = {0}, then no direction b exists,
* if b ∈ Ker(V_1) − Ker(V_2) = Im(W_2), then ρ = ∞ because β^2 = 0, and so the direction b can highlight the structure of outlierness contained in Im(W_2), which is not the case when we use the Moore-Penrose pseudo-inverse,
* if b ∈ Ker(V_1) ∩ Ker(V_2), then ρ is a “trivial” eigenvalue and any direction b ∈ ℝ^p is a solution.
However, only the “non-trivial” eigenvalues, corresponding to the first three cases, are interesting for highlighting the structure of the data. More precisely, in this example, only the eigenvector b ∈ Ker(V_1) − Ker(V_2) = Im(W_2), associated with the infinite eigenvalue, contains the structure of outlierness of the data. This is clearly visible in Figure <ref>, which illustrates the projection of our simulated data onto the eigenvector space. So, using the GSVD outperforms the use of a Moore-Penrose pseudo-inverse because it recovers the structure of outlierness of the data. In addition, in the right panel of Figure <ref>, we can see that we obtain the same coordinates and eigenvalues if we transform the initial data by the non-singular matrix A.
To conclude, solving the GEP of V_1 and V_2 through the GSVD of X_V_1 and X_V_2 presents three major advantages. First, it solves the possible singularity issues of V_1 and/or V_2 by searching in all directions, and it remains equivalent to the EVP of V_1^{-1} V_2 if the scatter matrices are of full rank. In addition, it ensures that V_1 and V_2 play a symmetric role since the symmetry of the problem is kept. The affine invariance property of the scores continues to be valid in the general case of positive semi-definite scatter matrices. Finally, it is interesting to note that this GSVD procedure is already implemented in the geigen R package. However, in practice, it is difficult to define an affine equivariant scatter estimator other than the variance-covariance matrix.
Overall, all three approaches present some advantages and limitations, as summarized in Table <ref>. The next section investigates whether those properties hold in practice.
§ EMPIRICAL APPLICATIONS
In this section, we illustrate the characteristics of the different approaches on several empirical applications. We restrict our analysis to ICS with a generalized inverse (GINV), ICS pre-processed by a dimension reduction (DR) through a singular value decomposition, and ICS using the generalized singular value decomposition (GSVD). We exclude the direct GEP approach since it might result in complex and negative eigenvalues. First, Subsection <ref> analyses the consequences of exchanging the roles of V_1 and V_2 on a simulated correlated mixture of Gaussian distributions. Subsection <ref> focuses on two examples for which estimating the rank of the data is challenging. Subsection <ref> evaluates the impact of transforming the data through an affine transformation on a collinear industrial data set. Finally, Subsection <ref> investigates whether those approaches are also applicable in the case of high dimension low sample size (HDLSS) with n<p.
§.§ Comparison of the three approaches: collinear clustering application
Let X = (X_1, …, X_d)^⊤ be a d-variate real random vector distributed according to a mixture of two Gaussian distributions such that:
X ∼ ϵ_1 N(μ_1, I_d) + ϵ_2 N(μ_2, I_d),
with ϵ_1 + ϵ_2 = 1, μ_1 = 0_d, μ_2 = (δ, 0, …, 0)^⊤ where δ = 10.
We generate n=1000 observations on d=3 variables for two balanced groups with ϵ_1 = ϵ_2 = 0.5. Two collinear variables are added: X_4 = X_2 − 3X_3 and X_5 = X_3 + 5X_4.
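For reproducibility, the data-generating step can be written in a few lines of base R; the seed and object names below are arbitrary, and the group labels are kept only to ease later plotting.

```r
## Two balanced Gaussian clusters (delta = 10 on the first coordinate),
## plus two exactly collinear variables X4 and X5.
set.seed(2024)                       # arbitrary seed
n <- 1000; d <- 3; delta <- 10
group <- rep(c(1, 2), each = n / 2)
X <- matrix(rnorm(n * d), n, d)
X[group == 2, 1] <- X[group == 2, 1] + delta
X4 <- X[, 2] - 3 * X[, 3]
X5 <- X[, 3] + 5 * X4
Xn <- cbind(X, X4, X5)               # n x 5 data matrix of rank 3
qr(Xn)$rank                          # should return 3
```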
§.§.§ Cov–Cov_4
First of all, we compare the different ICS methods for the scatter pair Cov–Cov_4. For GSVD, we consider Cov_4 since it is not possible to compute Cov_4(X_n) in the usual way: X_n is not full rank, so Cov(X_n) is singular and cannot be inverted. Figure <ref> shows the scatterplot matrix of the ICs resulting from ICS with the generalized inverse of Cov (GINV, 1^st column), after a dimension reduction (DR, 2^nd column) and with a generalized singular value decomposition (GSVD, 3^rd column). The second row illustrates the same results when we exchange V_1 and V_2.
It is interesting to note that, depending on the method, we do not obtain the same number of components: 5 with GINV and 3 for the other two. Indeed, with DR the rank of the data is estimated to be 3 and so only three dimensions are kept. For GSVD, three non-trivial eigenvalues are also detected. For GINV, 5 components are illustrated but the last two are associated with almost zero eigenvalues: 1.5e^-15 and -2.5e^-18. Clearly, this indicates numerical issues and the last two components should be disregarded. Now, if we focus on the first three, we can notice that all the methods allow us to easily identify the two clusters on IC_3. In addition, the components are the same between GINV and DR, as mentioned in Property <ref>. Finally, if we exchange V_1 and V_2, then the clustering structure is shown on the first component for the three methods. So here, on an example of simple collinearity, it appears that the three methods lead to similar results. The only point of attention is with GINV, where some trivial eigenvalues are estimated and should be discarded.
§.§.§ MCD_0.5–Cov
Focusing on a different scatter pair based on a more robust scatter matrix such as the MCD_0.5 raises several issues. Indeed, it is not possible to compute the MCD_0.5 on our data because “More than half of the observations lie on a hyperplane”, so GINV and GSVD are not applicable. Instead, we can perform ICS with the MCD_0.5 on the reduced data or use its regularized version, the MRCD_0.5, as shown in Figure <ref>. As previously, with the DR approach, only three dimensions are kept and the clusters are identifiable on the third IC. This is also true with the MRCD_0.5, but five eigenvalues are estimated for MRCD_0.5–Cov or Cov–MRCD_0.5. In addition, two of those eigenvalues are really small or high: 3.5e^-15, -5.3e^-15 and 6.7e^+14, 1.9e^+14. So in each case with the MRCD_0.5 some eigenvalues need to be disregarded, which is an additional step to take into account.
§.§ Challenging estimation of the rank
This section investigates the difficulty of correctly estimating the rank in two empirical applications: a nearly singular industrial data set in Subsection <ref> and some simulated data with OC outliers in Subsection <ref>.
§.§.§ HTP3: nearly singular industrial data
We consider the HTP3 data set, analyzed by <cit.> and available in the R package ICSOutlier <cit.>. It describes n=371 high-tech parts designed for consumer products and characterized by p=33 tests. Part 32 showed defects in use and is considered as an outlier. Here, the data set contains tests in different units, which leads to near singularity, and so the classical ICS algorithm returns an error.
Using the ICSQR implementation presented in <cit.> solves the issue for a scatter pair based on a one-step M-scatter matrix and Cov. In Figure <ref>, we compute the so-called squared ICS distances <cit.>, denoted ICSD^2, of the k selected components for the different approaches to performing ICS. On the first plot, we use ICSQR with only the first component, and the defective part (in orange) is clearly identified as having a high distance and so being an outlier compared to the other observations. We obtain similar results if we use the GSVD approach with Cov–Cov_4 or Cov_4–Cov, as illustrated in Appendix <ref>. However, GINV is not working in this context as we can never compute Cov_4. For the DR approach, a new challenge arises regarding the estimation of the rank in the SVD. As mentioned in <cit.>, it is common practice to use a relative rule to estimate the rank based on the eigenvalues λ_i, i=1,…,p, such as: (i) λ_i/λ_1 < √(ν) with the machine epsilon ν ≈ 2.2 × 10^{-16}, (ii) λ_i/λ_1 < max(n,p) ν, or (iii) the smallest l such that (∑_{i=1}^{l} λ_i^2)/(∑_{i=1}^{p} λ_i^2) ≥ 0.99, i.e., explaining at least 99% of the inertia as with PCA for example. Here, we obtain respectively a rank of 23, 33 or 3 based on the different criteria, meaning that we do not reduce the dimension in the second case. For the others, as we can see in the second column of Figure <ref>, the defective part is identified with r=23 only if we take two components, and it is not detectable in the case of r=3. Considering different scatter pairs such as the MCD_0.5 is tricky because the scatter matrix cannot be computed on the reduced data. In this case, it is necessary to consider Cov–MCD_0.5 and not MCD_0.5–Cov, but the results are not improved, as visible in Appendix <ref>. So on real data sets, the estimation of the rank for the DR approach can be very challenging, and DR may not be the best approach.
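The three rank rules can be written down directly from the singular values of the centred data; the sketch below (base R, with rule (iii) implemented as the smallest l explaining at least 99% of the inertia, and hypothetical object names) shows how such estimates would be obtained for a data matrix X.

```r
## Three relative rules for estimating the rank from the singular values of X.
rank_estimates <- function(X) {
  lambda <- svd(scale(X, scale = FALSE))$d            # singular values of centred X
  nu <- .Machine$double.eps                           # about 2.2e-16
  r1 <- sum(lambda / lambda[1] > sqrt(nu))            # rule (i)
  r2 <- sum(lambda / lambda[1] > max(dim(X)) * nu)    # rule (ii)
  r3 <- which(cumsum(lambda^2) / sum(lambda^2) >= 0.99)[1]  # rule (iii): 99% inertia
  c(rule1 = r1, rule2 = r2, rule3 = r3)
}
```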
§.§.§ OC outliers
This issue with estimating the rank is even more problematic in the presence of OC outliers. We generate n=100 observations following the projected mean-shift outlier model presented by <cit.>, without noise: X_n = U D V^⊤ + (1 μ^⊤ + S) V^⊤ with random orthogonal U and V, p=5, r=3, D = diag(1000, 400, 200), μ = 0, and the row outlier matrix S has its first O rows equal to L·[1,…,1] and 0 otherwise, with O=4 and L=3.5.
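A sketch of this data-generating process in base R (with U and V drawn as random matrices with orthonormal columns via QR decompositions of Gaussian matrices; seed and names are arbitrary) might read:

```r
## Projected mean-shift outlier model: Xn = U D V' + (1 mu' + S) V', no noise.
set.seed(123)                                 # arbitrary seed
n <- 100; p <- 5; r <- 3; O <- 4; L <- 3.5
U <- qr.Q(qr(matrix(rnorm(n * r), n, r)))     # n x r, orthonormal columns
V <- qr.Q(qr(matrix(rnorm(p * r), p, r)))     # p x r, orthonormal columns
D <- diag(c(1000, 400, 200))
mu <- rep(0, r)
S <- matrix(0, n, r); S[1:O, ] <- L           # first O rows shifted by L
Xn <- U %*% D %*% t(V) + (matrix(1, n, 1) %*% t(mu) + S) %*% t(V)
qr(Xn)$rank                                   # rank 4 in this configuration
```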
In this context, the true rank of the data set is equal to 4, and so the classical ICS returns an error. With a DR step first, if we estimate the rank to be 4, then ICS with Cov–Cov_4 identifies the outliers on IC_1, as illustrated in Figure <ref>. However, it is not possible to compute a more robust scatter matrix like the MCD_0.5. In this context, we can estimate the rank based on 95% of explained variance, leading to two dimensions, but the information about the OC outliers is lost no matter the scatter pair. However, using a GSVD or GINV approach directly works fine, as visible in the second and third plots of Figure <ref> in Appendix <ref>. With the MRCD_0.5-based pair the situation is a bit tricky because the outliers are found on IC_2, as illustrated in Figure <ref>.
§.§ Impact of affine transformation: HTP2 - collinear industrial data
We consider another industrial data set, also analyzed by <cit.> and available in the R package ICSOutlier <cit.>, called HTP2, to evaluate the impact of an affine transformation of the data such as the classical standardization. It contains n=149 high-tech parts characterized by p=457 tests, with a defective part at number 28. This data set is ill-conditioned, and so the classical ICS algorithm returns an error, as do the ICSQR implementation and the GINV approach.
The rank estimation for DR is quite challenging and unstable. It is estimated at 138 with the first criterion mentioned in Subsection <ref>, at 141 with the second, and even at 1 with the third one based on the inertia. In addition, if the data are standardized, then the first two criteria lead to a rank of 141, and the third one to 51. If we focus on the case of a rank of 138, then we cannot compute a robust scatter matrix like the MCD_0.5. In Figure <ref>, we compute the ICSD^2 based on IC_1 and Cov–Cov_4, and the defective part is oddly detectable as the observation having the smallest distance instead of the highest. However, doing it on the standardized data with 141 dimensions allows identifying it (see Figure <ref> in Appendix <ref>). This behavior shows that the DR approach is sensitive to the estimation of the rank and to the standardization of the data. On the contrary, if we perform ICS of Cov–Cov_4 with GSVD on the initial data (2^nd plot) or on the standardized one (3^rd plot), then the outlier is revealed and its ICSD^2 is stable between the two cases. It is noteworthy that the GSVD estimates 141 non-trivial eigenvalues. Finally, the regularized approach based on the MRCD_0.5 does not identify the outlier (see Figure <ref> in Appendix <ref>).
§.§ High Dimension Low Sample Size (HDLSS) case
To go further, we generate some data in an HDLSS context, with more variables p than observations n but not lying in general position. Following <cit.>, we generate n=50 observations on p=100 variables as in Subsection <ref>. In this context, we retrieve results similar to the n>p case: the classical ICS and GINV return an error, the results depend on the estimated rank, and GSVD works fine, as we can see in Figure <ref> for Cov–Cov_4 and Cov_4–Cov. For the robust scatter matrix MCD_0.5, it is not possible to compute it on reduced data with 4 components, and the outliers are not visible if we keep only two dimensions (see Figure <ref> in Appendix <ref>). The last plot shows that the MRCD_0.5-based pair identifies the outliers, but on IC_2, which is not what is expected.
§ CONCLUSION
Following previous ideas mainly used in the LDA context, we proposed three new ways of generalizing ICS to the case of positive semi-definite scatter matrices, based on a generalized inverse, dimension reduction, or the GSVD. We also investigated their theoretical properties (summarized in Table <ref>) and provided implementations in R.
Theoretically, the approach based on the GSVD looks the most appealing, as it keeps all the nice properties of classical ICS. In practice, this method can only deal with scatter matrices that can be expressed as crossproducts and are affine equivariant. When this affine invariance requirement of ICS is relaxed, the empirical results are interesting and stable across different situations for GSVD with Cov–Cov_4, even on standardized data. For DR, estimating the rank of the data appears to be the main challenge, with considerable sensitivity of the results. In addition, the method is only orthogonally invariant and does not ensure that some robust scatter matrices can be computed on the reduced data. Finally, GINV seems the least suitable as it requires that V_2 can be computed on the initial data.
Overall, those methods allow generalizing ICS to the context of positive semi-definite scatter matrices and might also be helpful in the HDLSS case as long as the data are not in general position. Depending on the approach, it is important to think about which scatter should be the first one and whether some OC outliers are present. In practice, it might be useful to perform ICS multiple times, exchanging V_1 and V_2 and trying the different implementations to compare the results, or to perform localized projection pursuit after ICS as suggested by <cit.>. In the future, another idea could be to penalize or regularize ICS, as <cit.> or <cit.> do for LDA for example.
§ COMPUTATIONAL DETAILS
All computations are performed with R version 4.3.3 <cit.> and use the packages ICS <cit.> for ICS, ICSClust <cit.>, ICSOutlier <cit.>, rrcov <cit.> for the MCD scatter matrix, and geigen <cit.> for computing the GSVD.
Replication files are available upon request.
§ ACKNOWLEDGEMENTS
This work is a generalization of the research conducted during my PhD under the supervision of Professor Anne Ruiz-Gazen whom I deeply thank for her guidance and insightful remarks on this topic.
§ APPENDIX A. CALCULATION DETAILS OF THE MOORE-PENROSE PSEUDO-INVERSE FOR SUBSECTION <REF>
V_1(X_n^*)^+ has to satisfy the four conditions to be a Moore-Penrose pseudo-inverse:
* Condition 1: V_1(X_n^*) V_1(X_n^*)^+ V_1(X_n^*) = V_1(X_n^*).
* Condition 2: V_1(X_n^*)^+ V_1(X_n^*) V_1(X_n^*)^+ = V_1(X_n^*)^+.
* Condition 3: (V_1(X_n^*) V_1(X_n^*)^+)^⊤ = V_1(X_n^*) V_1(X_n^*)^+.
* Condition 4: (V_1(X_n^*)^+ V_1(X_n^*))^⊤ = V_1(X_n^*)^+ V_1(X_n^*).
The proof of conditions 1 and 2 can be generalized to any matrix A but conditions 3 and 4 rely on the assumption of orthogonality of A.
Indeed, for condition 3, we have (V_1(X_n^*) V_1(X_n^*)^+)^⊤ = (A^⊤)^{-1} V_1(X_n) V_1(X_n)^+ A^⊤. Since A is orthogonal, (A^⊤)^{-1} = A and V_1(X_n^*)^+ = A V_1(X_n)^+ A^⊤, so we obtain the desired equality:
(A^⊤)^{-1} V_1(X_n) V_1(X_n)^+ A^⊤ = A V_1(X_n) A^⊤ A V_1(X_n)^+ A^⊤ = V_1(X_n^*) V_1(X_n^*)^+.
The proof is similar for the condition 4.
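These four conditions are also easy to check numerically; the base-R snippet below (using MASS::ginv on a deliberately singular covariance matrix) is shown only as a sanity check of the definitions, with arbitrary example data.

```r
## Numerical check of the four Moore-Penrose conditions for a singular scatter.
library(MASS)
set.seed(1)
X  <- matrix(rnorm(50 * 3), 50, 3)
X  <- cbind(X, X[, 1] + X[, 2])          # 4th column collinear: singular covariance
V  <- cov(X)
Vp <- ginv(V)                            # Moore-Penrose pseudo-inverse
c(cond1 = isTRUE(all.equal(V %*% Vp %*% V, V)),
  cond2 = isTRUE(all.equal(Vp %*% V %*% Vp, Vp)),
  cond3 = isTRUE(all.equal(t(V %*% Vp), V %*% Vp)),
  cond4 = isTRUE(all.equal(t(Vp %*% V), Vp %*% V)))
```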
§ APPENDIX B. AFFINE INVARIANCE FOR ICS WITH A GENERALIZED SINGULAR VALUE DECOMPOSITION FOR SUBSECTION <REF>
(i) Adaptation of the proof from <cit.>, appendix A.1, for distinct roots.
Let X_n^* = X_n A + 1_n γ^⊤, with γ ∈ ℝ^p. Then V_1(X_n^*) = A^⊤ V_1(X_n) A and V_2(X_n^*) = A^⊤ V_2(X_n) A.
By definition of ICS we have, for i = 1,…,p:
α_i^2(X_n^*) V_2(X_n^*) b_i(X_n^*) = β_i^2(X_n^*) V_1(X_n^*) b_i(X_n^*),
α_i^2(X_n^*) A^⊤ V_2(X_n) A b_i(X_n^*) = β_i^2(X_n^*) A^⊤ V_1(X_n) A b_i(X_n^*).
Multiplying by (A^⊤)^{-1}: α_i^2(X_n^*) V_2(X_n) A b_i(X_n^*) = β_i^2(X_n^*) V_1(X_n) A b_i(X_n^*).
If α_i^2(X_n)/β_i^2(X_n) is a distinct root, then α_i^2(X_n)/β_i^2(X_n) = α_i^2(X_n^*)/β_i^2(X_n^*) and b_i(X_n) ∝ A b_i(X_n^*), so b_i(X_n^*) ∝ A^{-1} b_i(X_n):
α_i^2(X_n) V_2(X_n) b_i(X_n) = β_i^2(X_n) V_1(X_n) b_i(X_n).
Projection onto b_i(X_n^*):
z_i^* = b_i(X_n^*)^⊤ x^* = (A^{-1} b_i(X_n))^⊤ A^⊤ x = b_i(X_n)^⊤ x = z_i.
(ii) Proof from <cit.>, appendix A.1, for multiple roots.
In the case of a multiple root of multiplicity p_l, the eigenvectors are not uniquely defined and can be chosen as any linearly independent vectors spanning the corresponding p_l-dimensional eigenspace. However, the roots are still the same, and so the corresponding p_l-dimensional eigenspace is still the same.
The case of multiple roots may appear when V_1 ∈ 𝒮𝒫_p and/or V_2 ∈ 𝒮𝒫_p, as it means that Ker(V_1) ≠ {0} and/or Ker(V_2) ≠ {0}. For example, if we only focus on the cases where b ∈ Ker(V_2) − Ker(V_1) or b ∈ Ker(V_1) − Ker(V_2), if dim(Ker(V_2) − Ker(V_1)) > 1 and/or dim(Ker(V_1) − Ker(V_2)) > 1, then 0 and/or ∞ are multiple roots.
§ APPENDIX C. ADDITIONNAL RESULTS FOR THE EMPIRICAL APPLICATIONS IN SECTION <REF>
We display additional results regarding Subsection <ref>
in Figure <ref>, Subsection <ref> in Figure <ref>,
Subsection <ref> in Figure <ref> and Subsection <ref> in Figure <ref>.
Modelling the TESS light curve of Ap Si star MX TrA
Yu. Pakhomov, I. Potravnov, A. Romanovskaya, T. Ryabchikova
arXiv:2409.02547 [astro-ph.SR]
§ INTRODUCTION
Chemically peculiar magnetic Ap/Bp stars are characterized by a strongly inhomogeneous distribution of chemical elements in their atmospheres, both over the surface and in depth. Driven by selective atomic diffusion <cit.>, these inhomogeneities follow the surface geometry of the magnetic field and form horizontal abundance gradients: chemical spots with increased or decreased abundances of certain elements by up to a few dex (on a logarithmic scale) relative to the Sun. Due to line blanketing, the opacity differs significantly between the spots and the quiet photosphere, which affects the emergent flux.
The axial rotation of a spotted star leads to periodic brightness variability.
This is the key ingredient of the “oblique rotator” model <cit.>, which successfully explains the variability of Ap/Bp stars.
Recent developments in the computation of model atmospheres with individual chemical composition <cit.>, as well as the observational Doppler imaging (DI) technique <cit.>, have led to major advances in the interpretation of observations of Ap/Bp stars. In their study of model atmospheres computed with a line-by-line opacity treatment, <cit.> showed that individual abundance patterns change the atmospheric structure (the temperature and pressure distributions) and modify the spectral energy distribution (SED). Elements such as silicon, iron, and chromium, which are often significantly overabundant in the line-forming region of Ap/Bp atmospheres, were found to be the principal contributors to the opacity. In particular, Si plays an exceptional role in both line and continuum opacities in the ultraviolet (UV) region <cit.>, which leads to flux redistribution between the far-UV (λ≲ 1600 Å) and the near-UV and visible spectral regions. In turn, this effect manifests itself in the distinct photometric behaviour of silicon Ap/Bp stars depending on the bandpass. The combination of these theoretical and numerical advances with the capability of surface mapping with DI offers a direct opportunity for robust modelling and interpretation of the spectral and photometric variability of Ap/Bp stars.
In a series of papers <cit.>, the light curves of Ap/Bp stars were modelled using the surface distributions of elements obtained by the DI method. As a result, sufficiently good agreement between the synthetic and observed light curves was obtained, proving that surface elemental inhomogeneity is the reason for the light changes of the investigated stars. Also, in agreement with theoretical predictions, the key contribution of spots with silicon, iron, and chromium overabundances to the rotational light modulation was confirmed.
Nevertheless, the problem is still relevant due to the growing number of precise high-cadence photometric observations from the Kepler/K2 <cit.>, BRITE <cit.>, and TESS <cit.> spacecraft, and an accurate representation of the light curves of Ap/Bp stars may shed light on new effects. Thus, the large diversity of surface element distributions in Ap/Bp stars makes it possible to disentangle the contribution of each individual element to the total photometric variation and to compare it with theoretical expectations. The impact of effects such as deviations from local thermodynamic equilibrium (LTE) and vertical abundance gradients (stratification) on the flux variations of Ap/Bp stars is not completely explored, and accounting for them will probably improve the accuracy of the fit to the observations.
The aim of the present work is a quantitative investigation of the effect of the inhomogeneous surface abundance distribution of the most peculiar elements in an Ap star on its photometric variability. MX TrA (HD 152564) is a bright southern Ap star which belongs to the silicon subgroup of this class and demonstrates pronounced photometric variability <cit.>. Based on high-resolution phase-resolved spectroscopy, <cit.> determined the fundamental and atmospheric parameters (the effective temperature T_eff = 11950 ± 200 K, the surface gravity log g = 3.6 ± 0.2, the microturbulent velocity ξ_t = 0.0 km/s, the macroturbulent velocity ζ_macro = 0.0 km/s, the projected rotational velocity v sin i = 69 ± 2 km/s, the mass M/M_⊙ = 2.1, the radius R/R_⊙ = 3.8), abundances of 12 ions of 9 elements, and also mapped the surface abundance distributions for Si, He, Fe, Mg, and O using DI. Later, <cit.> obtained Cr maps with the same DI technique. Analysis of high-precision photometric data from the TESS spacecraft revealed a period of P=2.1639 days, which, together with the values of the projected rotational velocity and the radius of the star R, gives an inclination of the rotation axis i=51^∘. The phased light curve has an amplitude of about 0.03^m and a quasi-sinusoidal shape, which reflects the modulation by the rotation of the spotted star. The pronounced rotationally modulated variability due to a highly inhomogeneous surface distribution of elements, together with the expected presence of vertical stratification of silicon and iron, makes MX TrA an exceptionally suitable object for investigation in the context of the problem above.
The present work is based on the Doppler Imaging results obtained in <cit.> and focuses on the modelling of the light curve of MX TrA.
The paper is organized as follows: Section <ref> describes the photometric observations of MX TrA and the surface abundance maps we used. Section <ref> describes the modelling technique and the calculations of the surface intensity map and synthetic light curve. Section <ref> presents the results and the comparison with observations. In Section <ref> we present our conclusions.
§ OBSERVATIONAL DATA
For the construction of the light curve of MX TrA we used the photometric observations obtained by the TESS mission during its Cycle 3 in Sector 39 (from 05/27/2021 to 06/24/2021), comprising about 20 000 measurements with 120-s cadence in total. We used the photometry obtained during this cycle as being closest to the main period of the spectroscopic observations used for Doppler Imaging in <cit.>. Given the good repeatability of the light curve from season to season, these observations are reliable for our main purpose of assessing the general shape and amplitude of the light curve. The data were retrieved through the Mikulski Archive for Space Telescopes [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html] (MAST) portal and automatically processed with the Science Processing Operations Center (SPOC) software package <cit.>. The processed SPOC light curve provides two types of fluxes: the SAP (Simple Aperture Photometry) flux and the PDCSAP (Pre-search Data Conditioning SAP) flux with long-term trends removed. In the particular case of the MX TrA photometry in Sector 39, the PDCSAP fluxes, after removing an offset, appeared to be almost identical to the SAP fluxes but noisier; hence we used the SAP fluxes for the magnitude conversion. The SAP fluxes were converted to the TESS magnitude scale, which is close to the Cousins I_C filter, using the formula m_TESS = -2.5 log(SAP_FLUX) + 20.44, taken from the project documentation [https://tess.mit.edu/public/tesstransients/pages/readme.html#flux-calibration].
The phased light curve of MX TrA was built using the ephemeris JD(max. light) = 2458647.7774 + 2^d.1639 E from <cit.>. It should be noted that the light curve of MX TrA is slightly asymmetric, so the initial epoch in the ephemeris was determined from the centre of gravity of the maximum. This explains a small negative shift relative to zero of the points with maximal brightness in the phased light curve.
We also used the surface distribution maps of four elements, He, Si, and Fe from <cit.> and Cr from <cit.>, for the calculations of the synthetic light curves. These maps, with a resolution of about 11^∘ at the equator, were obtained with the DI technique based on a spectroscopic time series from the 10-m South African Large Telescope (SALT). Details of the spectroscopic observations and the Doppler Imaging procedure are given in the papers cited above. For chromium, <cit.> provide two versions of the maps with slightly different abundance scales, depending on the set of lines they used. We explore both of them in an attempt to obtain the best representation of the observed light curve (see below).
§ MODELLING OF LIGHT CURVE
Our approach to light curve modelling is based on the following fundamental steps:
* Construction of a surface intensity map. For this purpose, the specific intensities in the elements of the surface grid are calculated with the individual abundances from the Doppler maps.
* Disk integration of the specific intensities for all rotational phases. Convolution of the integrated flux at each phase with the bandpass of the chosen filter allows the calculated synthetic light curve to be compared with the observed one.
Details are given in the subsections below.
§.§ Construction of intensity map
At the first step, a grid of stellar atmosphere models was calculated with the LLmodels code <cit.>, taking into account the individual chemical composition. The stellar parameters of MX TrA (T_eff = 11950 ± 200 K, log g = 3.6 ± 0.2) and the mean elemental abundances were adopted from <cit.>. The abundances of silicon, iron, helium, and chromium were varied within the limits inferred from the Doppler maps (Fig. <ref>). The maps are 544×272 pixels in size, which corresponds to an equidistant step of about 0.66^∘ in latitude and longitude. The final grid consists of 256 atmosphere models calculated for all possible combinations of abundances in the ranges log A_Si=[-4.50 ... -2.30], log A_Fe=[-4.70 ... -3.70], log A_He=[-2.11 ... -1.61], and log A_Cr=[-6.50 ... -4.10], where the abundances log A_X = log(N_X/N_tot) are expressed through the ratio of the number density N_X of element X to the total number density N_tot. The abundances of the other elements remained unchanged. The synthetic SEDs were computed simultaneously with the atmosphere models. Using the response curve of the TESS imaging receiver <cit.>, we calculated the flux and intensity of radiation from 1 cm^2 of the stellar surface. The calculated intensities in the TESS magnitude scale were combined into a grid, with the gradients significantly smoothed out on the logarithmic scale. For each point in the surface map the specific intensity was calculated using grid interpolation, taking into account the local abundances of Si, Fe, He, and Cr. Thus an intensity map I_l,b was constructed in the bandpass of the TESS image receiver. The ratio of the minimum and maximum intensities was about 0.93. This map, scaled to the maximum value, is shown in Fig. <ref>. In appearance, the intensity map most closely resembles the silicon distribution map. This was expected, since silicon makes the most significant contribution to the absorption coefficient, especially in the UV range. Due to the energy redistribution in the stellar spectrum, strong absorption in the UV leads to an increase in flux in the visible range. Therefore, dark spots with an overabundance of silicon in the Doppler map appear bright in the intensity map.
§.§ Synthetic magnitudes and light curve
The intensity map in rectangular coordinates was further transformed into a spherical one in orthographic projection, taking into account the inclination of the rotation axis i=51^∘ <cit.>.
Apparent intensity I'_l,b of an arbitrary surface element ("point" in the map) with coordinates (l,b) and intensity I_l,b towards the observer is
I'_l,b = I_l,b [1-c_1 (1-μ)-c_2 (1-μ)^2] μ |cos(b)| δ b δ l
where μ=cos(ϑ), ϑ is the vertex angle between the direction from the centre of the star toward the observer and a point on the stellar surface with coordinates (l,b); c_1=0.1816 and c_2=0.1651 are the limb darkening coefficients; |cos(b)| δ b δ l is the area of the surface element on the sphere.
The limb darkening coefficients for the quadratic law I_μ = I_0 [1-c_1 (1-μ)-c_2 (1-μ)^2]
were calculated from the emergent radiation intensities I_μ,λ convolved with the TESS bandpass T_λ for seven values of μ, using the Levenberg-Marquardt method <cit.> to approximate the function
I_μ/I_0 = ∫ I_μ,λ T_λ dλ / (I_0 ∫ T_λ dλ) = 1-c_1 (1-μ)-c_2 (1-μ)^2.
The calculations were made for a stellar atmosphere model with the average abundances of Si, Fe, He, and Cr. The impact of the individual chemical composition on the limb darkening coefficients is small and does not exceed 0.1–0.2%.
The radiation flux in the bandpass of the TESS image receiver, F_TESS=∑ I'_l,b, is obtained by integrating the contributions of all points over the visible hemisphere of the star (ϑ < π/2). The synthetic magnitude is
m ^syn = -2.5 log F_TESS
The magnitude zero point here is equal to zero, because it was already taken into account when calculating the specific intensities I_l,b in the magnitude scale to create the grid.
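A schematic implementation of this disk integration (in R, on a uniform longitude-latitude grid with the quadratic limb darkening law quoted above) is sketched below. The function name, the expression for μ (the standard spherical-geometry relation between inclination, latitude, longitude, and sub-observer longitude), and the constant placeholder intensity map are our own assumptions; in practice I_lb would come from the model-atmosphere grid described above.

```r
## Disk integration of a surface intensity map for one rotational phase.
synthetic_mag <- function(I_lb, lon, lat, incl = 51 * pi / 180, phase = 0,
                          c1 = 0.1816, c2 = 0.1651) {
  dl <- diff(lon)[1]; db <- diff(lat)[1]
  grid <- expand.grid(l = lon, b = lat)
  ## cosine of angle between local normal and line of sight;
  ## 2*pi*phase is taken as the sub-observer longitude
  mu <- sin(incl) * cos(grid$b) * cos(grid$l - 2 * pi * phase) +
        cos(incl) * sin(grid$b)
  vis <- mu > 0                                   # visible hemisphere only
  ld  <- 1 - c1 * (1 - mu) - c2 * (1 - mu)^2      # quadratic limb darkening
  w   <- ld * mu * abs(cos(grid$b)) * db * dl     # projected, darkened area
  F_TESS <- sum(as.vector(I_lb)[vis] * w[vis])
  -2.5 * log10(F_TESS)                            # synthetic magnitude m_syn
}

## Example with a featureless (constant-intensity) map:
lon <- seq(-pi, pi, length.out = 180)
lat <- seq(-pi / 2, pi / 2, length.out = 90)
I_lb <- matrix(1, length(lon), length(lat))
synthetic_mag(I_lb, lon, lat, phase = 0.25)
```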
The magnitude m^syn refers to the average radiation flux from an area of 1 cm^2 on the stellar surface. The apparent magnitude m^TESS was calculated as
m^TESS = m^syn - 5 log[θ/(2 × 2.06265 × 10^8)],
Here, θ is the angular diameter of the star in mas (the constant 2.06265 × 10^8 is the number of mas per radian). We neglect interstellar extinction due to its smallness.
The angular diameter θ=0.18297 mas was calculated from the difference between the observed average magnitude over the rotation period, m^obs, and the synthetic one, m^syn:
θ = 2 × 2.06265 × 10^8 × 10^{-(m^obs-m^syn)/5}
Combining this angular diameter with the distance to the star of 191±9 pc, obtained from the inversion of the Gaia DR3 parallax (π=5.2209 ± 0.2382 mas) <cit.>, a radius of 3.8 ± 0.2 R_⊙ was obtained, in very good agreement with the spectroscopic determination by <cit.>.
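The consistency of these numbers is easy to verify; the short R calculation below interprets the angular diameter and parallax in mas (the units required for the quoted values to give d ≈ 191 pc and R ≈ 3.8 R_⊙), with the physical constants rounded.

```r
## Angular diameter (mas) + Gaia parallax (mas) -> linear radius in solar units.
theta_mas <- 0.18297                   # angular diameter
plx_mas   <- 5.2209                    # Gaia DR3 parallax
d_pc      <- 1000 / plx_mas            # distance, ~191 pc
pc_km     <- 3.0857e13                 # kilometres per parsec
Rsun_km   <- 6.957e5                   # solar radius in km
theta_rad <- theta_mas / 1000 / 206265          # mas -> radians
R_km      <- 0.5 * theta_rad * d_pc * pc_km     # R = (theta/2) * d
R_km / Rsun_km                                   # ~3.8
```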
The synthetic light curve in m^TESS units was computed with Eq.<ref> for the full set of rotational phases and is presented in Fig. <ref> together with the TESS observations.
§ RESULTS AND DISCUSSION
§.§ Synthetic light curve
Fig. <ref> shows the theoretical light curves accounting for the individual contribution of each considered element, together with the total curve including the cumulative impact of all elements, in comparison with the observed TESS light curve. The observed light curve has a quasi-sinusoidal shape, with a more gradual descending branch. One can see from the figure that the total synthetic light curve perfectly matches the observations in terms of both amplitude and shape. Considering the individual contributions of the elements, silicon makes the largest contribution, about 64%, to the amplitude of the light variation. The contributions at the phases of maximum and minimum are different due to the inhomogeneous distribution of silicon spots over the stellar surface. That is why the light curve due to the silicon surface abundance variations is asymmetric relative to zero phase. This asymmetry also manifests itself in the shape of the observed TESS light curve as the feature near phase φ≈0.1. A systematic error in the Si abundance of the order of ±0.2 dex, which is typical in abundance analysis, results in an amplitude difference of about 0.005 mag. The amplitude decreases when the silicon abundance is reduced and increases when silicon is in excess. The next largest contributor to the brightness variations is chromium, with a relative amplitude of about 22%. Iron has an amplitude of about 20% of the total, but its light curve is shifted in phase, which is consistent with the longitudinal position of the most contrasting Fe spot from DI. Therefore, Fe provides a somewhat lower contribution to the total, about 17%. Helium is responsible for the smallest changes in magnitude. The feature in the TESS light curve at phase 0.1 is fitted well by Si, Fe, and He, but with a somewhat reduced amplitude. Accounting for the contribution of chromium matches the amplitude, but the representation of the light curve shape near maximum worsens due to the ambiguity in the chromium abundance scale. Exploring the chromium maps with slightly different abundance scales from <cit.> reveals that the second map (their Fig. 2), with the lower abundance gradient, provides the better fit to the light curve.
We also considered the possible contribution to the brightness variability of the light elements magnesium and oxygen. These elements also possess highly inhomogeneous surface distributions in <cit.>. The mean oxygen abundance is sub-solar (from the NLTE analysis, log A_O≈-4.0), but the element is concentrated in three large equatorial spots with near-solar abundance, which occupy a significant fraction of the stellar surface. Magnesium is also depleted in the atmosphere, with a mean abundance log A_Mg≈-5.0, but the region of its maximum (slightly sub-solar) abundance coincides with the circumpolar ring in the Fe distribution. We estimated the variability of the TESS magnitudes due to the inhomogeneous distributions of Mg and O by computing intensity maps as described in Sect. <ref>. The differences between the brightest and dimmest regions on the intensity maps are only 0.0001 mag and 0.0002 mag for Mg and O, respectively. Integration over the stellar disc will significantly reduce these values. Therefore, the impact of these elements on the brightness variations is negligible.
In summary, the inhomogeneous surface distribution of four elements, Si, Cr, Fe, and He, completely explains the observed photometric variations of MX TrA. This is in agreement both with theoretical expectations <cit.> and with the modelling of light variations in other Ap/Bp stars <cit.>.
§.§ Estimation of impact of abundance stratification
Vertical abundance gradients of elements (abundance stratification) affect the opacity distribution with depth in the atmosphere, resulting in differences in the emergent fluxes compared to a chemically homogeneous atmosphere. Therefore, stratification should be considered as one of the effects potentially affecting the light curves of Ap/Bp stars. However, straightforward accounting for stratification in light curve modelling is complicated, because the most suitable objects for stratification analysis are stars with narrow spectral lines (low projected rotational velocities ≲10-15 km/s), but they are inconvenient for DI, and vice versa. MX TrA is no exception. Although its spectrum shows a large difference in the abundances derived from spectral lines of different ionization stages of Si and Fe, which is considered as evidence of stratification, an accurate reconstruction of the stratification profile is almost impossible due to the rapid axial rotation and severe line blending.
Fortunately, in the <cit.> list we found the star BD+00^∘1659, which is a slowly rotating (v sin i = 7 km/s) twin of MX TrA in its atmospheric parameters and chemical composition. A detailed analysis of this star, including the stratification in its atmosphere, will be presented in a forthcoming paper by <cit.>. In the present work we employ the stratification profiles for Si and Fe in BD+00^∘1659 in application to MX TrA. These profiles are presented in Fig. <ref>, where the contribution function of the various atmospheric layers to the radiation in the TESS bandpass is also shown on the scale of the Rosseland optical depth.
Two atmospheric models and the corresponding synthetic SEDs were used to evaluate the effect: a chemically homogeneous model calculated for the mean abundances log A_He = -1.60 dex, log A_Si = -3.46 dex, log A_Fe = -3.85 dex, and a stratified one calculated taking into account the abundance gradients of Si and Fe shown in Fig. <ref>. The inhomogeneous horizontal distribution of elements was ignored at this step, and these models were adopted for the entire atmosphere. This is justified by the fact that the current stratification analysis is not spatially resolved but is based on the radiation integrated over the visible hemisphere of the star. The difference in radiation fluxes between the vertically stratified (F_S) and chemically homogeneous (F_0) models is Δ m = -2.5 log(F_S/F_0), and its wavelength dependence is shown in Fig. <ref>, which refers to the disk-integrated flux with a homogeneous horizontal distribution of chemical elements. One can see that the maximum amplitude in the visible region is reached longward of the Balmer jump and the sign of the effect abruptly changes below λ≲ 2000 Å. In the TESS bandpass the flux difference reaches -0.01^m, i.e., in this case stratification enhances the light amplitude. However, this is an upper limit. In reality, stratified spots occupy a small fraction of the surface. We need to multiply the flux of the stratified model by the filling factor f∼0.2 corresponding to the fractional area of the Si spots. Consequently, the flux ratio will be reduced by an order of magnitude. Also, the effect of stratification on the emergent flux is very sensitive to the depth of the stratification step in the atmosphere, as follows from the comparison with the contribution function in Fig. <ref>. If the step is shifted into the upper atmosphere, it appears in a region (e.g., logτ_Ross≈-2) where the contribution of the layers to the continuum flux is small, thus reducing the difference in flux by up to two orders of magnitude. Depending on the position of the stratification step, the amplitude could either increase or decrease. A sophisticated 3D analysis with simultaneous accounting for both vertical and horizontal abundance gradients requires knowledge of the stratification profile in each surface element, which is unavailable for rapidly rotating Ap stars like MX TrA. Generally, we estimate the contribution of vertical stratification to the light variations of MX TrA in the visual region to be negligible, which is consistent with the good representation of the observations with horizontal abundance inhomogeneities only.
§.§ UV variations
One of the principal effects of the silicon overabundance in the atmosphere is the redistribution of flux between the far-UV and visible regions. Indeed, observations of some Ap Si stars clearly demonstrate the effect of a phase shift or a complete reversal of the light curve depending on the wavelength range <cit.>. Although photometric observations of MX TrA in the far-UV are not yet available, it is instructive to calculate the synthetic light curves in this region (Fig. <ref>). We used the bandpasses of two GALEX filters for the far-UV (FUV, centred at λ∼1530 Å) and near-UV (NUV, at λ∼2350 Å)[https://asd.gsfc.nasa.gov/archive/galex/tools/Resolution_Response/index.html]. Comparison of the two curves in Fig. <ref> reveals antiphase brightness changes in the NUV and FUV filters, while the NUV light curve is in phase with the visual TESS one. The amplitudes are also significantly different, with the largest light variations in the FUV. The physical basis for this difference is that the bandpass of the FUV filter is centred shortward of 1527 Å, the photoionization threshold of Si I, and contains numerous resonance lines and autoionization features of Si II. Shortward of λ≲1600 Å the energy is blocked by absorption and redistributed to longer wavelengths, which leads to an increase of the flux in the near-UV and visual regions. This feature can be used for the photometric identification of Ap Si stars <cit.>.
Generalizing the approach, we calculated the light curves of MX TrA in the 1100–10000 Å range with a 100 Å wide filter and plot in Fig. <ref> the wavelength dependence of the photometric amplitude and of the phase of the maximum. The figure clearly illustrates the amplitude increase toward shorter wavelengths and the existence of a dip near 2000 Å, the “null region” where the flux is almost constant over the rotational cycle. The existence of such “null regions”, pointing to the mechanism of flux redistribution, was previously detected in spectrophotometric observations of Ap stars <cit.>. The phases of the maximum also differ on either side of the “null region”. While longward of it a gradual phase shift to negative values is expected, at short wavelengths, where the flux is effectively blocked by silicon absorption, there is a sharp increase up to a phase difference of Δφ=0.5 (antiphase variability) relative to the visual region.
§ CONCLUSIONS
In the present paper we report the results of modelling the high-precision TESS light curve of the Ap Si star MX TrA, based on the oblique rotator model and the maps of the surface elemental distribution previously obtained with the DI technique. We were able to successfully reproduce the observed shape of the light curve and its amplitude with an accuracy better than 0.001 mag by accounting for the inhomogeneous surface distribution of four elements: Si, Fe, Cr, and He. This list is sufficient for a good fit to the observations. The diversity of the surface distributions of the elements leads to a phase shift and a different contribution of each individual element to the light minimum and maximum. The total synthetic light curve perfectly reproduces the shape of the observed one. The contribution of the light elements O and Mg to the light variations appears to be negligible.
We also estimated the effect of the vertical stratification of Si and Fe in the atmosphere on the emergent flux. We show that, in principle, stratification can contribute to the light variations, increasing the emergent flux near the Balmer jump and reducing it in the far-UV. However, in the TESS bandpass the total effect does not exceed ∼0.01 mag and is reduced by an order of magnitude when the horizontal chemical inhomogeneities are taken into account. Hence, it does not contribute significantly to the TESS light curve amplitude.
Empirically, we conclude that taking into account only the inhomogeneous horizontal abundance distribution of Si, Fe, Cr, and He is enough for a good representation of the observed light curve of MX TrA in the TESS bandpass.
The wavelength dependence of the amplitude of the light variations and of the phase of the maximum was calculated from the synthetic light curves. It shows the effect, well known for other Ap Si stars, of increasing amplitude and antiphase variability between the far-UV and visible regions. This result clearly demonstrates the possibility of identifying new Ap Si stars, e.g., using photometric observations in the far-UV with the upcoming Spektr-UF (WSO-UV) space mission <cit.> together with phase-correlated optical observations.
Conceptualization, I.P. and T.R.; methodology, Yu.P.; software, Yu.P.; validation, Yu.P., T.R. and I.P.; formal analysis, Yu.P., I.P., and T.R.; investigation, Yu.P.; resources, A.R.; data curation, A.R. and Yu.P.; writing—original draft preparation, Yu.P. and I.P.; writing—review and editing, Yu.P., I.P. and T.R.; visualization, Yu.P.; supervision, I.P.; project administration, Yu.P.; funding acquisition, I.P. All authors have read and agreed to the published version of the manuscript.
This research was funded by the grant of the Russian Science Foundation №24-22-00237, https://rscf.ru/en/project/24-22-00237/.
Informed consent was obtained from all subjects involved in the study.
Dataset available on request from the authors.
The observational data of the TESS space mission were processed using the SPOC (Science Processing Operations Center) automatic software package and obtained through the MAST (Mikulski Archive for Space Telescopes) portal.
We thank Denis Shulyak for his program LLmodels and useful tips.
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
GALEX GALaxy evolution EXplorer - NASA orbiting space telescope
DI Doppler imaging
LTE Local thermodynamic equilibrium
NLTE Non local thermodynamic equilibrium
SED Spectral energy distribution
TESS Transiting Exoplanet Survey Satellite
UV Ultraviolet
Jäger: Automated Telephone Call Traceback
David Adei, Varun Madathil, Sathvik Prasad, Bradley Reaves, Alessandra Scafuro
arXiv:2409.02839 [cs.CR]
§ ABSTRACT
Unsolicited telephone calls that facilitate fraud or unlawful telemarketing continue to overwhelm network users and the regulators who prosecute them.
The first step in prosecuting phone abuse is traceback — identifying the call originator. This fundamental investigative task currently requires hours of manual effort per call.
In this paper, we introduce Jäger, a distributed secure call traceback system.
Jäger can trace a call in a few seconds, even with partial deployment, while cryptographically preserving the privacy of call parties, carrier trade secrets like peers and call volume, and limiting the threat of bulk analysis.
We establish definitions and requirements of secure traceback, then develop a suite of protocols that meet these requirements using witness encryption, oblivious pseudorandom functions, and group signatures. We prove these protocols secure in the universal composibility framework.
We then demonstrate that Jäger has low compute and bandwidth costs per call, and these costs scale linearly with call volume.
Jäger provides an efficient, secure, privacy-preserving system to revolutionize telephone abuse investigation with minimal costs to operators.
§ INTRODUCTION
Telephone networks are inundated with unsolicited “robocalls” for telemarketing
or outright fraud.
While individuals are bothered by the seemingly constant ringing of the phone,
government agencies, enterprises, and non-profits now have greater
difficulties reaching stakeholders for legitimate, desirable purposes.
Public outcry has motivated policy makers, public
officials, and phone providers to all take actions <cit.> meant to
address the problem.
Prior to the recent spate of incessant robocalls, the Federal Trade Commission (FTC) established the Telemarketing Sales Rule <cit.>
to set the bounds on what forms of automated calls are considered permissible.
A general summary is that callers
must have affirmative, opt-in consent from the called party to dial them for
commercial purposes. The FTC also maintains a “do-not-call” list that
should prevent unsolicited calls to registered individuals. These measures are
legal, not technical, so violations are pursued through regulatory
action. These measures have clearly not been effective.
In late 2019, a sudden outbreak of bipartisanship struck the United States
Congress, who passed the TRACED Act to combat illegal calling. Among other
measures, the law further expanded penalties for illegal calling and empowered
regulators to make substantial changes to network policy to reduce robocalls.
These changes to date have included requiring providers to register and submit
Robocall Mitigation Plans to the FCC, the mandatory blocking of calls claiming
to originate from invalid numbers, encouraging the labeling of suspect calls
by providers, setting deadlines on participation in robocall investigations,
and mandating that all providers implement a call authentication mechanism
known as STIR/SHAKEN (). requires originating providers to sign
outbound call requests to indicate their actual source, similar to DKIM, expecting that it would allow regulators to identify the source of the call,
prevent caller ID spoofing, and give robocallers “no place to hide.”
In practice, all of these efforts have failed to significantly change the
state of affairs. Call labelling is unreliable, robocallers have moved to
using legitimate numbers for a very short period, and the majority of calls in
the network arrive without a signature <cit.> because pre-VoIP networks cannot be
modified to support STIR/SHAKEN.
Robocallers continue to operate for a simple reason: it is profitable and low
risk. While regulators and law enforcement have been successful in winning
judgements against accused robocall operators, with fines into many millions
of dollars, the defendants often return to their schemes under assumed
identities or are replaced by other “entrepreneuers” using similar
techniques. Because the robocalling problem is so vast, it is reasonable to
assume that they will not face penalties given how painstaking it is to bring
a case against a single robocaller and how small the relevant agency staffs are.
One of the biggest hurdles is identifying the source of a call. The
telephone network is a network-of-networks, like the Internet, and a given
call often passes through many networks before reaching its
destination. Routes change rapidly and unpredictably, and
providers routing the call only know the previous “hop” and the next
“hop.” Fortunately, providers keep meticulous records on calls they
route for billing or paying peers. To identify the
source of a call, an investigator must start at the destination with
knowledge of the call time, call destination, and claimed call source, and go
hop-by-hop until they reach the originator. This process is termed
traceback.
Prior to 2019, a traceback could take multiple subpoenas and months to complete for a
single call <cit.>. The 2019 TRACED Act mandated the creation of a
clearinghouse to handle tracebacks, and it also mandated timely
responses to traceback requests by providers. The result was the creation of
the Industry Traceback Group (ITG) <cit.>, and the FCC reported to Congress that
currently a traceback can be completed in under 24 hours with the help of ITG.
Unfortunately, tracebacks are
still largely manual, and the ITG has a skeleton staff of a few employees.
They are (rightfully) proud that they manage to complete around 300
tracebacks per month, though this is far from sufficient
to deal with millions of robocalls each month. It will certainly become
a bottleneck if we want law enforcement to target perpetrators of high-touch
fraud schemes like digital kidnapping, where a fraudster provides a convincing
story and fabricated voice of the loved one of a target to extort a ransom.
Automation is needed to scale traceback to a significant fraction
of the current abuse, but simple approaches will be unacceptable to some
portion of current stakeholders. If each carrier were required to implement an
API for traceback, it runs the risk of a malicious carrier fabricating
incorrect responses. Note that some smaller carriers are known to skirt or
outright violate laws for profit, including facilitating illegal robocalls. If
each provider were required to submit their call routing records to a central
source, subscribers would justifiably worry that their social networks and
telephone activity could be leaked. Providers would balk at revealing
peering arrangements and call volumes, and the central database would be a
magnet for curious intelligence agencies and law enforcement dragnets.
In this paper, we present Jäger[Jäger, pronounced “YAYger,” is German for “hunter.” It can also refer to Jägermeister, a popular liqueur, making it an appropriate name for a protocol to supplement STIR/SHAKEN.], a distributed system and protocol suite to
provide rapid, automated traceback for phone calls. To trace an illegal call, an investigator will obtain the caller's and recipient's telephone numbers along with precise timing details about the call. This information is typically sourced from public complaints, industry honeypots, the ITG's data collection, consumer voicemails, or other commercial channels. Jäger enables the investigator with these call details to identify the call's originating network. The originating network can then be held liable or identify the customer responsible for the illegal call.
The key insight is to
allow traceback over encrypted call records, with the caveat that only a party
with detailed knowledge of the call can identify or decrypt the routing records for a given call. Our solution
requires no modifications to the existing telephone network. Instead, we
assume access to the billing systems that already maintain the records we
need. We do not require interaction between providers at any point. The
compute, storage, and bandwidth costs for providers are modest and scale
linearly with call volume, so small carriers need few resources.
We provide
cryptographic mechanisms to ensure that authorized traceback users can be
appropriately rate-limited to prevent bulk abuse.
Our system is robust against
a single provider on a call who fabricates or does not submit a routing record
for a call. Encrypted call records do not reveal the provider who submitted
them, but a provider who submits an invalid or incorrect record can be
identified.
Because the purpose of traceback is to identify the source of a call, we do not actually need every routing record for a call to be present in Jäger. Ideally, at least the first (originating) record will be present, but even if it is not, any other record still improves traceback performance.
Our scheme will provide benefits even if some providers do not
participate, so it can be deployed incrementally.
We make the following contributions:
* We specify and define the key properties and requirements for secure
telephone call traceback.
* We design protocols that meet the requirements for secure
traceback, and implement them in a prototype distributed system dubbed
Jäger.
* We provide formal guarantees by proving the security of these protocols in the Universal Composability
(UC) framework.
* We demonstrate that Jäger has low compute and bandwidth costs per call, and that these costs scale linearly with call volume. In the process, we
also develop a performant witness encryption library in C++. Code for that library and our full implementation is available <cit.>.
Robocall enforcement is ultimately a legal problem, albeit one with technical challenges. Jäger fills a technical need for effective investigative tools, though it cannot independently determine whether a call was illegal.
Nevertheless, because the telephone ecosystem is heavily regulated, honest participation can be incentivized through the risk of civil or criminal prosecution.
§ BACKGROUND
This section provides background on telephone network abuse and the prosecution of violators. In doing so, we describe the challenges of locating abuse actors.
§.§ Telephone Network Abuse
Phone network abuse is a global problem, with the United States being one of the most severely
affected countries.
One of the most common forms of abuse is pre-recorded automated bulk phone calls, or robocalls.
Many robocalls violate US law, including sales calls made without affirmative opt-in.
Fraudulent calls often impersonate government officials and steal millions of dollars from victims.
Federal statutes prescribe eye-watering financial penalties for each and every illegal call,
but penalties require enforcement to deter abuse. Currently,
phone abusers avoid prosecution by using spoofed or short-term telephone numbers, regularly
changing service providers, and altogether vastly exceeding available enforcement resources.
§.§ Routing Phone Calls through the Network
Determining the source of a single call currently requires significant
investigative effort, and part of the reason is that the phone network
is a network-of-networks with no global vantage point and no single end-to-end
authentication of identity.
A given telephone provider will connect with one or more other providers to
send and receive call traffic.
When a subscriber places a call, her provider “originates” the call and then
uses a signaling protocol to communicate with its peer networks to find a route
to the called party's network. When the call is finally set up, over potentially many intermediate carriers, the call is considered
“terminated” and the call audio will begin.
Call routes are selected considering carrier[In this work, we will use carrier and provider interchangeably.] charges, network maintenance, and agreements with other carriers, and these factors change moment-to-moment.
Each provider that carries the call will bill the provider who sent it, and Call Detail Records (CDRs) are kept to support this. However, only the originating provider knows any details about the call originator beyond the phone number the subscriber claimed when they set up the call.[Most businesses expect to be able to specify the “from” field shown for caller ID. A common case is to allow a desk line to appear to be coming from the corporate switchboard number, but this feature is abused by illegal callers.] No provider ever knows more about the call than the previous and next provider in the route. To identify a party responsible for an illegal call, an investigator must first find the originating provider, which is unknown to the terminating provider who delivered the call to its recipient.
This section provides an overview of a call between Alice and Bob, as shown in Figure <ref>.
In our example, the call path between Alice and Bob is illustrated with a red line from carrier P_1 (“originating carrier”), through P_2 and P_3 (“transit carriers”) to P_4 (“terminating carrier”) in Figure <ref>.
When Alice dials Bob's number, her phone sends a call initiation request along with the dialed number to her Voice Service Provider.
This provider routes the call to Bob through the phone network.
The originating carrier often knows her true identity, but this information is not propagated within the call initiation signal being shared between the carriers involved.
Telecom operators have one or more connections with other operators, expanding the route options available to connect a call from the origin to its destination.
Each route may have a different cost based on carrier charges, network maintenance, and agreements with other carriers.
If Bob were on the same carrier's network as Alice, the call would be established on the same network (the process of connecting a call is called call termination).
Otherwise, the carrier interacts with neighboring networks (“transit carriers”) to find the optimal path to Bob based on the least-cost routing (LCR) policy.
The LCR aims to minimize the cost of routing outbound traffic by considering factors like call rates, reliability, and inter-carrier agreements.
Signal messages are exchanged between the participating carriers to establish and disconnect a phone call.
The messages are based on a call signaling protocol.
Common fields in these messages include the calling party's “claimed” number, the called party's number, the call timestamp, and other details.
Each carrier in the call path generates and stores Call Detail Records (CDRs) for billing purposes and as evidence of the call for regulatory compliance.
Each carrier, operating as an independent call processing entity, only knows the previous carrier it received the call from and the subsequent carrier it routes to.
This setting encourages competition, reduces complexity management, and prevents bad actors from mapping the entire network.
However, it complicates the process of identifying the originating point of a phone call (a process called traceback).
To pinpoint a call's source, entities must iterate through each carrier, starting from the terminating carrier until they reach the originating carrier.
The following section describes the process of locating the source of illegal robocalls.
§.§ Locating Abuse Actors
Authorities must find and prosecute perpetrators responsible for generating illegal calls.
In the United States, the TRACED Act of 2019 <cit.> requires the FCC to mandate the caller authentication framework.
Although in principle S/S can be used to traceback illegal robocalls from their destination to their origin, there are several limitations.
Industry reports from October 2023 estimate <cit.> more than half of all voice traffic in the US is still not signed using S/S, making it impossible to track the origin of such calls using S/S information alone.
Industry insiders attribute this large portion of unauthenticated voice traffic to legacy infrastructure that does not support S/S.
Furthermore, phone calls originating outside the US often do not contain S/S information since the framework is not mandated in other countries.
Therefore, regulators, enforcement agencies, and other entities rely exclusively on manual traceback processes to identify the source of illegal robocalls <cit.>.
§.§ Manual Traceback Process
As the TRACED Act requires, the FCC has designated the Industry Traceback Group (ITG) <cit.> to serve as the central entity to coordinate the traceback process.
The ITG manages the labor-intensive and time-consuming tasks of identifying the source of suspected illegal robocalls.
The ITG constantly monitors active robocall campaigns using data from honeypots <cit.>, consumer reports, and other sources.
After assessing the legality of the robocall, the ITG initiates a traceback request and manually coordinates across numerous carriers to pinpoint the source of suspected illegal robocalls.
The traceback process involves tracing the call path from the terminating carrier through transit carriers and ultimately to the originating carrier to identify the source of the call.
Traceback has proven to be a crucial tool for regulators and enforcement agencies to combat illegal robocalls. It has been used in almost every enforcement action filed by regulators against robocalling operations.
However, successfully completing a traceback often takes several hours or days, with substantial effort from the ITG and the participating carriers.
The manual and time-consuming nature of the traceback process significantly limits its effectiveness.
Although the volume of illegal robocalls targeting US subscribers is estimated to be in the hundreds of millions per year, less than 3,000 tracebacks were completed over eleven months in 2022 <cit.>.
By developing an automated, secure, and scalable traceback system, we can swiftly uncover the source of such calls, deter bad actors, and empower stakeholders to protect phone users from illegal robocalls.
§.§ Cryptographic Primitives
This section introduces the cryptographic primitives that serve as the building blocks for our protocol.
Witness Encryption Based on Signatures A Witness Encryption scheme based on Signatures (WES) was recently proposed in <cit.> and <cit.>. These are encryption schemes where the encryption key is a tuple of a signature verification key (denoted vk) and a string chosen by the encryptor (denoted ℓ). The decryption key is a valid signature (denoted σ) on the string, such that the signature can be verified by the verification key.
More specifically, a witness encryption scheme based on signatures has two algorithms, WE.Enc and WE.Dec, where WE.Enc((vk, ℓ), m) → ct and WE.Dec(σ, ct) → m. Here m denotes the plaintext, (vk, ℓ) corresponds to the encryption key, and σ = Sign(sk, ℓ) corresponds to the decryption key if Verify(vk, ℓ, σ) = 1, where Sign(·) and Verify(·) are the sign and verify procedures of the signature scheme.
We acknowledge that using signature verification keys to encrypt messages and using signatures to decrypt ciphertexts is not intuitive, and the notation can be confusing. Observe that we denote witness encryption and decryption keys as tuples containing signing and verification keys, while the signature keys are written as single variables.
<cit.> and <cit.> show that it is possible to construct such WES schemes efficiently based on BLS signatures <cit.>.
Group Signatures Group signatures <cit.> are a cryptographic primitive that allows group members to anonymously sign messages on behalf of the group.
A designated authority, the group manager, generates a common group public key gpk and issues a unique group member signing key gsk_i to each group member i. Any signature produced with any gsk_j in the group verifies under gpk. The group manager can also deanonymize signatures and identify the signer. Group signatures allow for anonymity while maintaining accountability.
Oblivious PRF A pseudorandom function (PRF) F_k is a keyed function whose outputs look random to anyone without the secret key k. An oblivious pseudorandom function (OPRF) <cit.> is a two-party protocol where a server holds a secret key k for the PRF, and a client holds a secret input x to be evaluated. At the end of the protocol, the client learns F_k(x) while the server learns nothing.
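To make the OPRF flow concrete, the sketch below implements a textbook blinded-exponentiation (2HashDH-style) OPRF over a toy safe-prime group. The group parameters, sizes, and function names are illustrative assumptions only and are not the curve or library used by our prototype.

```python
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1. These parameters are far too small to be
# secure and exist only to illustrate the protocol flow; a real deployment
# would use a standard elliptic-curve group.
p = 1019
q = (p - 1) // 2

def hash_to_group(x: bytes) -> int:
    """Map an input into the order-q subgroup of quadratic residues mod p."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)  # squaring lands in the subgroup of order q

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1        # blinding factor
    a = pow(hash_to_group(x), r, p)          # blinded element sent to the server
    return r, a

def server_evaluate(k: int, a: int) -> int:
    return pow(a, k, p)                      # server applies its PRF key; learns nothing about x

def client_finalize(x: bytes, r: int, b: int) -> bytes:
    y = pow(b, pow(r, -1, q), p)             # unblind (needs Python 3.8+): H1(x)^k
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()  # F_k(x)

# Example run: the client learns F_k(x); the server never sees x.
k = secrets.randbelow(q - 1) + 1             # server's PRF key
x = b"555-0100|555-0199|epoch-1700000000"
r, a = client_blind(x)
b = server_evaluate(k, a)
print(client_finalize(x, r, b).hex())
```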
§ PROBLEM STATEMENT
We begin this section by identifying the major stakeholders and adversaries impacting . We also specify the functional and security requirements which we aim to achieve.
§.§ Stakeholders and Adversaries
The ecosystem involves various stakeholders with distinct roles and interests, including service providers, the ITG, subscribers, and law enforcement agencies. Each group's goals and actions follow.
Providers They route phone calls through the telephone network. By law <cit.>, they are required to maintain the CDRs of each call and actively participate in the traceback process. They prioritize efficient record insertion and complete and correct traceback responses. Providers also desire the confidentiality of their customers, peering partners, and traffic volumes.
Subscribers Subscribers initiate and receive phone calls. They also report fraudulent calls to authorities or their service provider. Subscribers seek to minimize receiving illegal robocalls and expect confidentiality for their call records.
Industry Traceback Group The ITG oversees the tracing of illegal calls to their source by working with providers.
They prioritize swift and accurate responses to traceback requests.
Regulatory and Law Enforcement Agencies (LEAs) These entities work in conjunction with industry stakeholders to maintain secure and lawful communication networks. Their responsibilities include investigating suspected illegal calls, enforcing compliance, public education, and policy development. LEAs often submit traceback requests to the ITG, emphasizing the need for a timely response.
Adversaries
Adversaries may seek partial or full call records of one, many, or all subscribers. They may also seek privileged information about providers or the network structure. They may also aim to violate the integrity or availability of to prevent detection or investigation of illegal calls.
Adversaries can include outside parties like private investigators <cit.>, identity thieves, or even foreign intelligence agencies <cit.>. Insiders, including subscribers, providers, regulators and LEAs, and operators of Jäger entities may also
behave dishonestly at any point. We design Jäger such that no single compromised entity alone can violate its security properties, and in many cases it is resilient against collusion by more than one malicious entity.
§.§ Requirements
The main objective of Jäger is to enable secure and efficient traceback given a valid request containing the source and destination telephone numbers along with the call timestamp.
Functional Requirements To achieve this objective, the system is designed with the following key requirements:
* Resilience: A valid traceback request returns all available records for a call.
* Precision: A valid traceback request only returns relevant records. Malicious or incorrect records are still “relevant” if they match the traceback request.
* Scalability: Jäger must handle effectively arbitrary call volumes. For all entities, cost should scale linearly in the number of calls and/or participants (as appropriate).
* Efficiency: All operations should perform comparably to similar non-secure approaches. The financial costs should be a minor fraction of the total network revenue.
* Information Gain: Every traceback request should provide information to an investigator. A traceback request will result in one or more of the following:
* Identify the originating provider for the call.
* Reveal at least one claimed non-originating provider and shorten manual traceback.
* Provide direct evidence that one or more providers act in bad faith (e.g., submitting false or contradictory records or no records for a call).
In settings where Jäger is mandatory, all of these properties are immediate: either one obtains a complete and consistent traceback, or at least one provider is violating the mandate. In partial deployment, these properties still hold if at least one on-path provider participates.
Security Requirements
The guiding principle of secure traceback should be that no entity gains information about subscribers or providers in the absence of an authorized traceback request, even in the presence of a compromised entity.
Additionally, no entity can provide false information without risk of detection and accountability. More formally, this mandates the following principles:
* Trace authorization: An entity can only trace a call
they have definite knowledge of, and they must also have
explicit authorization from a third party.
* Call confidentiality: No entity should determine source, destination, time, or route details about a call they do not already have without authorization for a traceback.
* Trade secret protection: No party should learn aggregate information about a provider's call volumes and peering relations except those revealed by an authorized, valid traceback or an authorized accountability request.
* Record integrity: Only authorized parties may contribute records.
* Record accountability: It must be possible to identify the contributor of a traceback record.
CDR metadata are inherently sensitive. They can disclose significant details about the communicators' relationships, potentially making them targets of focused observation <cit.> or widespread surveillance by intelligence agencies <cit.>. It is imperative to protect the privacy of individual callers. In the phone network, call paths constantly change, and no single entity has complete knowledge of a given call's path. This opaque visibility protects the network from abuse as call paths reveal information about inter-carrier peering relationships. CDRs could be used to construct call graphs. A direct connection between two network nodes trivially signifies a inter-carrier peering agreement. The phone network is a competitive environment; even partial knowledge of inter-carrier relationship or traffic patterns can be exploited for unfair practices such as undercutting prices, manipulating market dynamics, or degrading services on certain routes to impact competitors. An automated traceback solution must ensure confidentiality of inter-carrier relationship and network trends. Additionally, any collaborative computer system has other security concerns, including compromised parties, data breaches, insider threats and collusion. We therefore formulate that a secure traceback system should meet the following security requirements:
* Record confidentiality: The route (the “hops”) of a call can only be viewed by authorized parties.
* Privacy of individual callers: Information about a call (sender and receiver, time) can only be learned by the carriers involved in routing the call and no one else.
* Blind network trends and provider relationships:
Confidentiality of inter-carrier relationship information and network volume information should be maintained.
* Carrier Anonymity and Accountability: No single entity can link a provider to their contributions to a traceback request from their records alone. However, if tracing reveals that contributions are malformed, the responsible carriers should be identified. This requirement is necessary for blinding network trends.
The insight to these requirements is that no new information other than what the parties already know should be revealed.
§ OUR APPROACH
In the previous section, we specified requirements for secure traceback.
To show how satisfies those requirements, in this section
we will describe a functional but insecure strawman solution and iteratively improve it until it meets all of the security requirements.
§.§ Overview
An Insecure Strawman Approach
To enable traceback, we first introduce a central Record Store (RS) that collects and stores Call Detail Records (CDRs) from carriers in a database D. Any P_i in a call path, for instance, P_1 → P_2 → P_3 → P_4, already keeps a CDR for each call they originate, transmit, or terminate. We model a CDR as a tuple (src, dst, ts_i, P_i-1, P_i, P_i+1) and further divide it into two parts: CallDetails = (src, dst, ts_i) and hop_i = (P_i-1, P_i, P_i+1),
where src and dst are the source and destination telephone numbers common to all providers in the call path, ts_i (unique to P_i) is the time at which P_i receives the call, and (P_i-1, P_i, P_i+1) are the previous hop, current hop, and next hop, respectively. Phone call setup takes time to traverse the network, and we assume an upper-bound setup time Δ. Any CDR pertinent to the same call will have a ts^* in the range [ts_i - Δ, ts_i + Δ].
To enable traceback, each P_i contributes by sending a contribute command (CallDetails, P_i, hop_i) to the RS.
The RS will then add the record to its database D.
Later, if a party wishes to trace a certain call with CallDetails = (src, dst, ts_i), they can send
a trace command to the RS, who will fetch all hops that have CallDetails = (src, dst, ts^*), ∀ ts^* ∈ [ts_i - Δ, ts_i + Δ].
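As a concrete illustration of the strawman's data model, the Python sketch below splits a CDR into its call-details and hop parts and applies the timestamp-window matching used during a trace; the field names and the Δ value are our own choices for readability, not identifiers from the system.

```python
from dataclasses import dataclass

DELTA = 10.0  # assumed upper bound (seconds) on end-to-end call setup time

@dataclass(frozen=True)
class CDR:
    src: str        # calling party number
    dst: str        # called party number
    ts: float       # time this provider saw the call (unique per provider)
    prev_hop: str   # previous provider, or "" for the originator
    cur_hop: str
    next_hop: str   # next provider, or "" for the terminating provider

def split(cdr: CDR):
    call_details = (cdr.src, cdr.dst, cdr.ts)
    hop = (cdr.prev_hop, cdr.cur_hop, cdr.next_hop)
    return call_details, hop

def matches(trace_req, cdr: CDR) -> bool:
    """A record is relevant if src/dst match and its timestamp lies in [ts - DELTA, ts + DELTA]."""
    src, dst, ts = trace_req
    return cdr.src == src and cdr.dst == dst and abs(cdr.ts - ts) <= DELTA

# The strawman record store is just a list scanned with `matches`.
store = [
    CDR("555-0100", "555-0199", 100.0, "", "P1", "P2"),
    CDR("555-0100", "555-0199", 100.4, "P1", "P2", "P3"),
]
print([split(c)[1] for c in store if matches(("555-0100", "555-0199", 100.2), c)])
```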
Modeling a hop_i for a P_i as (P_i-1, P_i, P_i+1) allows P_i to attest to its upstream and downstream providers' involvement in the call. This means that given a hop_2 from only P_2, we know the path P_1 → P_2 → P_3, so a traceback does not necessarily require records from P_1 and P_3. This design decision helps in partial deployment.
In the event of conflicting hops, suppose P_2 submits hop_2 = (P_1, P_2, P_3), indicating P_1 and P_3 as its previous and next hops, while P_1 submits hop_1 = (P_4, P_1, P_3) and P_3 submits hop_3 = (P_1, P_3, P_6). In this case, an investigator cannot tell whether P_2 is misbehaving or P_1 and P_3 are. The investigator must therefore go to each of these providers and have them show their corresponding call records to identify and punish the misbehaving one(s).
This strawman solution trivially meets the functional requirements of the system, but none of the security requirements we described in Sec. <ref>. Indeed, since records are stored in the clear for , no confidentiality is guaranteed to subscribers or . Furthermore, traceback could be done by any party with access to the records.
Toward Record Confidentiality
To achieve record confidentiality, the first natural step is to encrypt the and the . Assume all use a shared public key () to encrypt the and using an IND-CPA secure encryption scheme.
This means that will store a set of ciphertexts, and hence cannot learn anything about the CDR content.
This guarantees the confidentiality of the records but unfortunately prevents the tracing process. Suppose an authorized party P_j wants to trace call = (_j), they must send an encryption of under to . However, the cannot find a matching record in the database, since the encryption scheme is not deterministic.
Alternatively, the sends all the ciphertexts in the database to P_j, and the latter decrypts each ciphertext until it finds the call records pertinent to their call. This approach is inefficient and loses call confidentiality for other calls.
To solve this problem, we introduce a deterministic index to identify ciphertexts related to a given . Now upon receiving this index, the can return exactly one ciphertext to the . We elaborate on this below.
Adding Pseudorandom Labels to the Database
To identify the correct ciphertexts, we index each entry with a label, , that can be computed only with the knowledge of . When a P_i sends their contribution, they will send a pair (_i, ctx_i) to the where ctx_i is an encryption of _i by P_i. Later, when a party P_j wants to trace a call =(_j), they can use this information to compute , and will be able to identify the ctx that is indexed with . Note that P_j will compute all ^* for = (^*) ∀^* ∈ [ - , + ] to retrieve all possible ciphertexts that belong to the call as specified earlier.
What function should we use to compute ?
Perhaps the most natural candidate would be a hash function, i.e., = H().
However, this approach jeopardizes the confidentiality of the records once again. Indeed, anyone who gets access to the database maintained by can “check” if a certain call () took place by simply computing the hash of the call details and checking for that label in .
Adding a large nonce as input to the hash function i.e. = H(nonce) is not helpful since during trace the will have to guess the nonce and this is infeasible in polynomial time.
This attack suggests that the label should not be computed using a public function that anyone can compute. Pseudorandom Functions (PRF) are the perfect candidate. They are deterministic, just like hash functions, but they can be evaluated only with the knowledge of a key.
A label can be computed as = F_k(), where k is a key known only by the and F is a PRF.
Hence, no one else, except carriers, can compute labels.
However, this solution is not robust in our threat model, where could collude with .
Indeed, it would be sufficient for only one to leak the PRF secret key k to expose records.
We solve this problem using a cryptographic tool called an Oblivious PRF <cit.> (OPRF).
In an OPRF, the PRF is evaluated through a protocol between two parties: a server, who knows the key k, and a client, who knows the input x. At the end of the protocol, the client learns only the output of the PRF, while the server learns nothing.
In our system, we introduce a new party called the Traceback Authority (TA), which holds the secret key of the PRF and allows carriers to evaluate the PRF to compute labels. The TA, however, does not learn anything about the CallDetails.
To contribute a record, P_i will interact with the to compute _i from , and then compute _i=H(_i).
Next, P_i will encrypt the _i under into ctx_i (as specified earlier) and submit (_i, ctx_i) to . The lookup index _i is a hash of the _i so that if a record and/or the OPRF key is ever compromised, the _i is not directly exposed.
Traceback would work as follows: An authorized party P_j who wants to trace first obtains the label _j from , then sends _j = H(_j) to . will use _j to identify and return the corresponding ciphertext.
However, there is still a problem:
Recall that all ciphertexts are encrypted under the same key. Thus any (potentially unauthorized) colluding with the can potentially decrypt all ciphertexts trivially.
Conversely, if each encrypts its record using a unique key, the party attempting to perform a traceback will obtain ciphertexts under different keys, requiring all to help with decryption. This defeats the purpose of the system.
Towards Encrypting Records
One potential solution is to have encrypt using the 's public key. Thereafter, during the traceback, the retrieves the ciphertexts from the and interacts with the to decrypt the ciphertexts.
While this is a viable solution, we want to formally enforce the following properties:
* Knowledge of call: A party can trace a call only if they were part of the call i.e they already know the . The party must be a in the call path.
* Trace Authorization: A party can trace a call only if they were authorized by the to trace that particular call.
To this end, we use an asymmetric encryption scheme called “witness encryption” that allows carriers to encrypt the hop such that only with knowledge of the CallDetails and an authorization from the TA can one decrypt the ciphertext.
In witness encryption, the encryption key is a verification key for a signature scheme and an arbitrary string ℓ (chosen by the encryptor). A ciphertext can be decrypted only with the knowledge of the string ℓ and a signature on ℓ that verifies under .
In our case, we replace ℓ with to enforce property (1). We enforce property (2) by requiring a signature on signed by the .
Now to contribute records, P_i will compute _i, _i = H(_i) and encrypt _i using (, _i) as the encryption key. Then P_i will send (_i, ctx_i) to . Using hash digests as indices instead of s further enforces property (1) above. Since the is not part of the call path, it should not know the .
Once the ciphertexts are retrieved from the , P_i must request authorization from the , in our case, a signature on . This construction ensures that a ciphertext related to a call can be decrypted only by someone who knows a valid signature on the corresponding .
Adding Carrier Anonymity and Accountability Recall that to contribute, each sends authenticated encrypted records signed under their unique public key to . The can map their contributions to their identity, potentially learning trends about their activities. On the other hand, we cannot simply have the submit their records anonymously since we still need to hold them accountable for malformed or falsified contributions.
To protect carriers' business privacy but hold them accountable, we replace the regular signature scheme (used for authentication) with an anonymous group signature scheme.
Group signatures are anonymous signatures that can be validated on behalf of a group – in our case, the group of all carriers. More importantly, we choose group signatures instead of primitives like ring signatures because group signatures are efficient and allow us to trace traitors. Here, we use group signatures only for authenticating contribution requests and not for the witness encryption scheme.
In our system, the plays the role of the group manager and adds carriers to the group by assigning them group secret keys. Moreover, the is also responsible for the deanonymization of the group signatures in case any of the carriers submit bad requests.
Network Layer Anonymity
The group signature scheme for authenticating contribution requests guarantees anonymity at the application layer. Unfortunately, network features like IP addresses may still identify . There are a number of solutions to this problem that are orthogonal to , including proxy services like commercial VPNs. A may still be concerned that IP traffic volume might leak information about call volumes to the proxy. Providers can address this issue by splitting their traffic across multiple proxy services and/or transmit redundant or invalid records as cover traffic.
§.§ Threat Model and Resiliency
Jäger mandates the security requirements outlined in Sec. <ref>.
The system includes two entities besides providers: the and the .
We assume that the RS and the TA do not collude and that only one of the two entities may be malicious. We also allow collusion between providers and the corrupt entity. We show that even when the RS is malicious and colluding with providers, none of the security requirements are violated. On the other hand, when the TA is corrupt, Jäger cannot guarantee record accountability.
We can improve the trustworthiness of the with several orthogonal techniques, described below. All of these options are feasible, but they add complexity and cost.
Splitting responsibilities In the architecture of , the is responsible for group management, label generation, and authorizing traceback.
Assigning these jobs to different entities will limit the damage should one be compromised, and our prototype actually already implements them independently.
Distributing Trust
Each of the operations can also be split among multiple entities using existing multiparty computation schemes, including threshold constructions of OPRFs<cit.>, group signatures<cit.>, and BLS signatures
for use in Witness Encryption.
An Additional Layer of Protection In our threat model, we model the record store and the traceback authority as non-colluding entities. Consequently, we prove security of our scheme only when either one of the two entities is corrupt. Consider the case when the is corrupt, and the database stored by the is leaked. This is equivalent to the case that both the and the are corrupt and colluding, and is clearly outside of our threat model. In this case the has access to all the records within . Since we store H() instead of in , the can only decrypt records for which it correctly guesses their s. Thus, we ensure that even if the database is leaked, there is an additional layer of protection that does not allow the to decrypt all the records without brute-forcing.
§.§ Frequently Asked Questions
In this section, we address some of the frequently asked questions that we have encountered.
If S/S automates traceback, why go through all this trouble?
The Public Switched Telephone Network (PSTN) is heterogeneous. Legacy infrastructure drops signatures along the call path, making it ineffective for tracing call origins <cit.>.
Call requests are signed by providers using a JSON Web Token in the SIP INVITE message with an 𝚡5𝚞 field pointing to the signing certificate. Malicious providers can exploit this by setting 𝚡5𝚞 to a timing-out link, increasing latency and forcing call transmission, posing a challenge for providers. Traceback attempts using such signatures reach dead ends, so manual processes are still needed. Additionally, the deployment of S/S remains limited. According to the Robocall Mitigation Database in the US (Feb 7, 2024) <cit.>, among 7,109 providers, only 39.94% have fully implemented it, 23.96% are partially implemented, and 36.11% have no or unknown implementation status. We clarify that Jäger does not authenticate caller ID or block robocalls in real time. Instead, it is a central repository of encrypted CDRs for traceback purposes.
Why can't we just put traceback info in headers?
Implementing traceback information in headers faces the same hurdles as S/S.
Why develop a new protocol instead of automating the manual tasks done by groups in ITG? The ITG currently maintains a semi-automated traceback system that sends notifications to who are mandated to respond within 24 hours. Automating the current traceback tasks will require all providers to implement a traceback API that integrates with the ITG systems. While this automates the process, the gain on “traceback throughput”, the number of computable traceback requests per month, remains low as the process is still serial and involves active participation for every traceback request. Tracebacks would run into dead-ends if a single does not cooperate, their portal goes down, responds with misleading information, or partial deployment.
For , this alternative adds an additional cost of maintaining an inbound traceback system. An adversary could exploit vulnerable API implementations of this mandate to access affected ' sensitive data. Note that compromising a carrier's API server exposes its peers and network trends as well as subscriber call history, and well-funded companies who use industry best-practices are regularly breached.
We designed Jäger as a centralized distributed system to alleviate serial traceback lookups, enabling providers to be passive rather than active participants in traceback computation. This design choice not only enhances traceback throughput but also centralizes security management from several thousand providers to the two organizations responsible for overseeing Jäger. Moreover, if any single entity is compromised, no plaintext data is leaked.
Why require participation from all providers if only the originating provider's records are needed? If only originating providers submit records, traceback fails because malicious carriers likely won't comply. Limiting submissions to the originator and second hop is infeasible because providers cannot determine their sequence in the call path. Thus, requiring all providers to submit records becomes essential to trace back and identify people facilitating bad calls. Having the ability to construct the full call path has added advantages such as trace-forwards to debug call routing or blocking errors.
Who will operate the RS and the TA?
The telephone network is the ideal environment for Jäger because regulators like the FCC already designate trusted third parties that provide singleton functions. Examples include toll-free numbering, local number portability (LNP) databases, certificate authority governance, and the current traceback clearinghouse, the ITG. The FCC periodically solicits applications to serve as these entities, and a fair and competitive process among several for-profit enterprises follows. The selected entities can then charge reasonable fees for the service they provide.
For Jäger, there are two entities to recruit. The RS roughly corresponds to an entity like the LNP databases, and many vendors have the technical ability to serve in this role.
The TA performs functions similar to the current ITG, like registering providers to their system and determining if a traceback query is appropriate, so modifying that existing role would provide a straightforward on-ramp to deployment.
What if providers refuse to participate? Jäger must be mandated to be effective. Regulation can compel providers to participate or have their network access revoked. Unlike S/S, Jäger is compatible with all network technologies in use.
§ SYSTEM AND PROTOCOL DESIGN
In this section, we discuss 's system architecture and detailed protocol.
§.§ Architecture
Figure <ref> shows a high-level diagram of 's system architecture.
There are three kinds of entities in our system:
Carriers 𝐏_𝐢
A carrier P_i receives calls from either the source (src) or a previous carrier (P_i-1) and forwards the call to either the destination (dst) or the next carrier P_i+1. A call is identified by its CallDetails = (src, dst, ts), where ts is the time at which the call reaches the carrier.
Record Store (RS)
The RS maintains a database D that stores encryptions of the hops associated with a call. Recall that a hop is a tuple hop_i := (P_i-1, P_i, P_i+1).
Traceback Authority (TA)
The TA has the following functions:
* Authorizing Trace Requests: The TA provides the signatures that enable a carrier to decrypt records retrieved from the RS. These signatures are computed on the call labels.
* Managing Carriers' Anonymous Authentication:
The TA manages the group signatures for the carriers. This consists of adding legitimate carriers to the group and providing them with credentials to sign on behalf of the group. The TA is responsible for accountability: it can deanonymize signatures in case of misbehavior and hold the corresponding entity responsible.
* Generating Pseudorandom Labels: The TA interacts with carriers to compute labels.
§.§ Protocol Overview
consists of four protocols: , , , and , which we describe in detail below.
Cryptographic Primitives
As discussed in Sec. <ref>,
uses a witness encryption scheme for signatures (WES)<cit.>, a group signature scheme<cit.>, an oblivious PRF protocol<cit.>, signature schemes, and a hash function. The notations used in our protocols are described in Table <ref>. We present the protocol in detail in Appendix <ref>. Below we present an overview of the protocol.
Protocol
The protocol is described in Fig. <ref>.
This protocol is run by the and carriers to set up their keys.
TA Setup:
The TA sets up (1) the group signature group with a group master key and secret key; (2) the key k for the oblivious PRF, announcing a public key (denoted pk_OPRF) corresponding to the PRF key; and (3) two signature key pairs (sk_T, vk_T) and (sk_R, vk_R), announcing vk_T and vk_R to all entities. Here, signatures under sk_T will be used to decrypt the witness encryption ciphertexts, and signatures under sk_R will be used to authorize trace requests.
Carrier 𝐏_𝐢 Setup:
Each carrier (denoted P_i) joins the system by first interacting with the TA to get a distinct group signing key gsk_i, which they can use to sign anonymously on behalf of the group. They also generate a regular signing key pair (sk_i, vk_i) for authenticated communication with the TA and the RS (during trace).
Finally, the RS initializes a database D, sets up a signature key pair (sk_RS, vk_RS), and announces vk_RS.
Protocol
record the s with the using the protocol. We consider the case of submitting a single to the in Fig. <ref>. Each in the call path parses the CDR into =() and = P_i-1P_iP_i+1 as defined in Sec <ref>. To contribute call records, the P_i anonymously submits a witness encryption of the message = (P_i-1P_iP_i+1) to , with a pseudorandom label associated with it and a group signature for authentication.
The ciphertext and the label leak no information about the call thus providing confidentiality of the call, and the group signature leaks no information about the sending this information, thus providing anonymity to the carrier.
We elaborate on how the label, the encryption, and the signature are computed below.
The protocol consists of two phases: the label generation phase and the submission phase. In the label generation phase, the label is computed with the help of the TA using an oblivious PRF protocol. The TA acts as the server and holds the PRF key, and the carrier acts as the client holding the input.
We assume that each timestamp is associated with an epoch, truncated to the nearest centisecond, denoted ep.
The carrier uses CallDetails = (src, dst, ep) as input, and the output of the protocol (denoted lbl) is learned only by the carrier. We note that, with the help of pk_OPRF, the carrier can compute an efficient pairing check to verify that the output received from the TA is indeed correct and that the PRF key k was used to compute the lbl, as detailed in Fig. <ref>.
The carrier then computes ld = H(lbl). Recall that ld is used instead of lbl to index the ciphertexts, to prevent the TA from trivially decrypting all ciphertexts in case the database of ciphertexts and indices is leaked.
In the submission phase, the carrier prepares the encryption as follows. P_i samples a λ-bit uniform key k from {0,1}^λ and encrypts it with the WES scheme using (vk_T, lbl) as the encryption key to get a ciphertext ct_1. The WES scheme ensures that this ciphertext can only be decrypted using a signature on lbl produced with sk_T by the TA. P_i further computes ct_2 = hop_i ⊕ H(k, lbl), where H is modeled as a random oracle.
Note that we require lbl as input to the hash function to extract the hop in the proof of security.
Finally, the carrier signs the message ((ct_1, ct_2), ld) using the group signature scheme, obtains σ, and sends the resulting tuple ((ct_1, ct_2), ld, σ) to the RS. Upon receiving a submission request, the RS validates the group signature σ and stores the tuple in the database; if verification fails, the request is dropped.
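The sketch below assembles a contribution along these lines: it hashes the label into a lookup index, samples the per-record key k, and masks the hop with a hash of k and the label. The witness encryption of k and the group signature are stubbed out (marked as placeholders), since they rely on the pairing-based and group-signature primitives described earlier; all helper names are ours, not the prototype's API.

```python
import hashlib
import secrets

LAMBDA_BYTES = 32  # a 256-bit per-record key

def lookup_index(lbl: bytes) -> bytes:
    """ld = H(lbl): the index stored at the Record Store instead of the raw label."""
    return hashlib.sha256(b"index" + lbl).digest()

def mask_hop(hop: bytes, k: bytes, lbl: bytes) -> bytes:
    """ct2 = hop XOR H(k, lbl); H is modeled as a random oracle in the paper."""
    pad = hashlib.shake_256(k + lbl).digest(len(hop))
    return bytes(a ^ b for a, b in zip(hop, pad))

def wes_encrypt_stub(vk_T, lbl: bytes, k: bytes) -> bytes:
    # Placeholder: the real system witness-encrypts k under (vk_T, lbl)
    # so that only a TA signature on lbl can recover k.
    raise NotImplementedError

def build_contribution(vk_T, lbl: bytes, hop: bytes, group_sign) -> dict:
    k = secrets.token_bytes(LAMBDA_BYTES)        # fresh per-record key
    ct1 = wes_encrypt_stub(vk_T, lbl, k)         # opened only by an authorized trace
    ct2 = mask_hop(hop, k, lbl)
    ld = lookup_index(lbl)
    sigma = group_sign((ct1, ct2, ld))           # anonymous group signature (placeholder callable)
    return {"ld": ld, "ct1": ct1, "ct2": ct2, "sigma": sigma}

def unmask_hop(ct2: bytes, k: bytes, lbl: bytes) -> bytes:
    return mask_hop(ct2, k, lbl)                 # XOR with the same pad recovers the hop

# Round-trip check of the masking step alone:
k = secrets.token_bytes(LAMBDA_BYTES)
lbl = b"example-label"
hop = b"P1|P2|P3"
assert unmask_hop(mask_hop(hop, k, lbl), k, lbl) == hop
```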
Protocol
To trace a call, the investigating carrier must retrieve all the hops corresponding to the call from the RS. Fig. <ref> illustrates the sequence of events in the trace protocol. Initially, the carrier needs to obtain the labels corresponding to the call. For this purpose, the carrier computes the labels with the assistance of the TA using the OPRF protocol described above. We assume an upper limit Δ on the duration of call setup from the source to the destination. The carrier computes epochs corresponding to the timestamps ts^* ∈ [ts - Δ, ts + Δ] and computes the labels associated with the CallDetails for each of these epochs.
Before we describe how the carrier gets the full trace, we note that a malicious carrier colluding with the TA could potentially compute labels on arbitrary CallDetails and check with the RS whether such labels exist in the store, revealing information about the existence of a call between a certain initiator and recipient. If the adversary attempts this attack for a specific initiator and recipient, we call it a “targeted attack”. If the adversary's goal is to map the network by trying to compute labels on arbitrary CallDetails, we refer to such attacks as “grinding attacks”.
To mitigate the grinding attack, we implement rate-limiting on the number of requests a carrier can make. For this purpose, when a carrier interacts with the TA to compute a label, the TA maintains a count of the requests made by that carrier and will not authorize further requests once a certain limit is exceeded. To grant authorization for a request, the TA signs ld = H(lbl) using sk_R, and this signature σ_R serves as an authorization for traceback.
Moreover, consider the case where the TA is malicious (and the RS is honest) and colludes with a carrier: together they can easily mount the grinding attack described above by computing arbitrary labels and requesting the corresponding records from the RS. To mitigate this, we also require the RS to implement rate-limiting on trace requests. This ensures that a carrier colluding with the TA cannot mount grinding attacks.
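A minimal sketch of the TA-side rate limiting and authorization step follows; it uses an Ed25519 signature from the `cryptography` package as a stand-in for the BLS authorization signature σ_R (the real scheme must use a signature compatible with the witness encryption), and the per-carrier quota is an assumed value.

```python
from collections import defaultdict
from cryptography.hazmat.primitives.asymmetric import ed25519

MAX_TRACES_PER_PERIOD = 100  # assumed per-carrier quota

class TraceAuthorizer:
    def __init__(self):
        self._sk = ed25519.Ed25519PrivateKey.generate()  # stands in for sk_R
        self._counts = defaultdict(int)                  # requests per carrier, reset each period

    @property
    def vk(self):
        return self._sk.public_key()

    def authorize(self, carrier_id: str, ld: bytes) -> bytes:
        """Sign the lookup index ld only if the carrier is under its quota."""
        if self._counts[carrier_id] >= MAX_TRACES_PER_PERIOD:
            raise PermissionError("trace quota exceeded")
        self._counts[carrier_id] += 1
        return self._sk.sign(ld)                         # sigma_R: authorization for this trace

# The Record Store checks sigma_R before returning ciphertexts:
ta = TraceAuthorizer()
ld = b"\x01" * 32
sigma_r = ta.authorize("carrier-42", ld)
ta.vk.verify(sigma_r, ld)   # raises InvalidSignature if the authorization is not genuine
```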
The carrier requests from the RS the ciphertexts corresponding to the computed indices. The RS first checks that the signature σ_R verifies under vk_R and rejects the request if it does not.
The RS identifies the ciphertexts corresponding to the indices in D and returns (ct_1, ct_2, ld, σ_RS), where σ_RS = Sign(sk_RS, (ct_1, ct_2, ld)).
The carrier then asks the TA for a signature on each lbl and uses these signatures to decrypt the ciphertexts and recover the hops.
Protocol
Some providers may provide wrong hops to frame others. Since the encrypted hops are anonymously submitted to the RS, we need a mechanism to catch malicious providers. Our group signature protocol allows the TA to open any group signature and reveal the carrier that signed a message. Thus, if a trace seems malformed, the investigating carrier submits all ciphertexts, hops, and signatures for the call retrieved from the RS and sends them to the TA. The TA runs a function that outputs the set of faulty hops. The TA then identifies the signatures that correspond to these hops and deanonymizes them to return the set of providers that submitted malformed or wrong hops.
§.§ Traceback Validation
Once a carrier has the decrypted hops, it must assemble them into the complete path. We call this step “Traceback Validation” because it will identify the correct path or detect inconsistencies or missing records that should be manually investigated.
We validate a traceback by inserting decrypted records into a directed multi-graph.
In a multi-graph, two nodes can be connected by multiple edges.
Each record is a graph with three nodes, P_i-1, P_i, and P_i+1, and two edges, P_i-1 → P_i and P_i → P_i+1.
A traceback is the multi-graph union of such individual sub-graphs (hops).
Ideal Scenario Each P_i contributes a valid record for a given call. Figure <ref> (Full Path) visualizes this ideal scenario. Let deg_in and deg_out denote in- and out-degrees, respectively. The originating P_1 has deg_in=0 and deg_out=2. Here, one of its outgoing edges (denoted by a solid line) is its own assertion about P_2, while the other (denoted by a dashed line) is P_2's assertion about P_1. The terminating P_4 has deg_in=2 and deg_out=0 for reasons symmetric to those for the originating provider. Finally, each transit provider (P_2 and P_3) has deg_in=2 and deg_out=2.
Any deviation from the ideal case helps us determine “faulty hops”—hops that are conflicting.
Determining Faulty Hops We detect faulty hops by checking four properties formulated from the ideal scenario:
Origin invariant: A call can have only one originator. This property holds if there is exactly one node in the directed multi-graph with deg_in=0 and deg_out∈{1, 2}. Otherwise, the originating record is missing, or some _i submitted malformed records. Note that deg_out = 1 does not necessarily imply a malicious originator since true originators will still have deg_out = 1 if there is a missing record from their downstream . If there is more than one node with deg_out = 2, then there are conflicting originators; in this case, we construct different call paths for each originator.
Terminating invariant: A call can have only one terminating provider. This property holds if there is exactly one node with deg_in∈{1, 2} and deg_out = 0. Otherwise, the terminating record is missing, or a _i submitted malformed records. This is symmetric to the origin invariant except that values for deg_in and deg_out are swapped.
Transit invariant: This property identifies all transit providers and validates for nodes having deg_in∈{1, 2} and deg_out∈{1, 2}.
Connectivity invariant: This property determines if the full call path can be recovered. It holds if there is a path between every pair of vertices in the traceback graph.
Figure <ref> presents the detailed algorithm for determining the faulty hops from the decrypted records and the ability to reconstruct the call path.
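The sketch below is a simplified rendering of these checks on a directed multigraph using `networkx`; it covers the origin, terminating, and connectivity invariants, and the function and variable names are ours rather than those of the full algorithm.

```python
import networkx as nx

def build_trace_graph(hops):
    """Each hop is (prev, cur, nxt); None marks the start or end of the path."""
    g = nx.MultiDiGraph()
    for prev, cur, nxt in hops:
        g.add_node(cur)
        if prev is not None:
            g.add_edge(prev, cur)
        if nxt is not None:
            g.add_edge(cur, nxt)
    return g

def validate(g):
    origins = [n for n in g if g.in_degree(n) == 0 and g.out_degree(n) in (1, 2)]
    terminators = [n for n in g if g.out_degree(n) == 0 and g.in_degree(n) in (1, 2)]
    return {
        "origin_ok": len(origins) == 1,
        "terminating_ok": len(terminators) == 1,
        "connected": g.number_of_nodes() > 0 and nx.is_weakly_connected(g),
        "origins": origins,
    }

# Ideal four-carrier path: every provider submitted a consistent record.
hops = [(None, "P1", "P2"), ("P1", "P2", "P3"), ("P2", "P3", "P4"), ("P3", "P4", None)]
print(validate(build_trace_graph(hops)))
# {'origin_ok': True, 'terminating_ok': True, 'connected': True, 'origins': ['P1']}
```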
Traceback Robustness and Partial Deployment
In a scenario where all parties are honest, we derive all the benefits of the scheme. Importantly, even in partial deployment scenarios where only certain providers submit their records, our scheme can identify the call originator under certain conditions. For example, if the originating provider is the sole contributor for a call, the system can still successfully identify the call origin without the contributions from any other provider in the call path.
If the second hop provider participates, our system can identify the origin even if no other provider participates. If there are ever conflicting origin claims, we initiate a manual investigation to identify the source and punish the dishonest party. If any other intermediate party participates honestly, we can still reduce manual traceback time. With the call path recovery algorithm, there may be cases where we can precisely identify the bad actor. However, this is an “added bonus” and not the main goal of the scheme.
The tracing algorithm relies on a best-effort strategy, and we believe that societal incentives, including civil or criminal liability, will motivate the entities to behave honestly. In Sec. <ref>, we present an evaluation of Jäger in partial deployment.
§.§ Security of
In this section we provide informal arguments that Jäger achieves the security properties outlined in Sec. <ref>. The formal proof in the UC framework is in Appendix <ref> of our extended technical report <cit.>.
[Informal] Assuming the CPA security of the witness-encryption scheme, the unforgeability of the signature scheme, the security of the group signature scheme, the security of the OPRF protocol, and secure hash functions, Jäger achieves record confidentiality and the privacy of individual callers, and blinds network trends and associations. Moreover, if the TA is honest, Jäger additionally achieves accountability.
Trace Authorization Since a requires a signature from the to decrypt the ciphertexts, and a needs to know the to request this signature, only authorized parties can perform a trace successfully.
Call Confidentiality Recall that the , information of the call is used only to compute the label. Since the computation of the label is through an OPRF, we guarantee that the does not learn the and of the call. Since the OPRF output is pseudorandom, the label by itself will also not reveal any information about the caller and the callee of the call.
Trade Secret Protection Recall that each is encrypted and stored at the . Decrypting the encrypted requires knowledge of the labels and a signature on the , information accessible only to entities involved in the call or those accurately guessing the source, destination, and call time. The unforgeability property of the signature scheme ensures that an entity cannot forge a signature on behalf of the , preventing unauthorized decryption of the s. Additionally, the CPA security of the WES scheme safeguards the contents of these ciphertexts.
As previously described, a malicious might attempt to guess arbitrary call details, create corresponding labels, and decrypt records stored at the , i.e. they try to mount a grinding attack. To mitigate this risk, we implement rate-limiting on such requests by having the restrict the number of authorizations granted to each .
Record integrity Since all contributed records are signed using a group signature, and the verifies this signature before adding the record to the database, we ensure that only authorized users can contribute records.
Record Accountability We achieve anonymity since each submission to the does not include any identifier of the . They are instead signed using the group signature scheme, ensuring that the is anonymous within the group. When a is misbehaving (e.g. by sending a malformed ) the group signature can be opened by thus revealing the that signed the submitted .
We note that if the is malicious they may just not reveal any identity and accountability may not be guaranteed. But even if the is malicious they cannot frame an honest as the sender of the record.
§ IMPLEMENTATION
This section describes the prototype implementation of and how we obtained CDR data to evaluate it.
§.§ Prototype Implementation
We describe a prototype implementation for each component of , which enables us to evaluate its performance.
Traceback Authority We implement the as an HTTP server that uses BLS Signatures <cit.> to compute authorization signatures. For this function, we exposed an endpoint for authorizing trace requests. We set up our group signature scheme using short group signatures <cit.> implemented by IBM's <cit.> and exposed an endpoint for opening signatures. Finally, we used the elliptic curve ( Python package) for our OPRF protocol <cit.>. The exposes an API endpoint for label generation.
Record Store The RS is an HTTP server with a database for storing records. We use a columnar database. The RS exposes endpoints for contribution and traceback queries.
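As an illustration of the RS interface, the following Flask sketch exposes contribution and traceback endpoints backed by an in-memory index; Flask and the endpoint and field names are our choices for illustration, and the group-signature and authorization checks that the real RS performs are only noted in comments.

```python
from collections import defaultdict
from flask import Flask, request, jsonify

app = Flask(__name__)
records = defaultdict(list)   # lookup index (hex string) -> list of encrypted records

@app.post("/contribute")
def contribute():
    rec = request.get_json()
    # The full system first verifies the anonymous group signature here.
    records[rec["ld"]].append({"ct1": rec["ct1"], "ct2": rec["ct2"], "sigma": rec["sigma"]})
    return jsonify(status="stored")

@app.post("/trace")
def trace():
    req = request.get_json()
    # The full system first checks the TA's authorization signature sigma_R here.
    return jsonify(records=records.get(req["ld"], []))

if __name__ == "__main__":
    app.run(port=8080)
```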
Carrier We implement a carrier as a process that runs the protocol and interacts with other components in the system. For performance, we implemented the witness encryption scheme in C++ using an elliptic-curve library <cit.> and wrote Python bindings for it.
§.§ Data Generation
Because CDRs are data protected by US laws, they are unobtainable. We are unaware of work that models contemporary PSTN call records so we develop a PSTN model to algorithmically generate data. We first generate a graph to represent the 's peering relationships. We then model a social graph of telephone users and assign users to carriers. We then generate CDRs for calls between users. While we believe our model is reasonably accurate, in Section <ref>, we show that even if call volumes are significantly higher, will be practical.
Telephone Network We use an iterative graph generation algorithm to construct a network consistent with real-world telephone topology. We have identified three properties that a reasonable model generator must consider:
Preferential Attachment: New carriers prioritize connecting with larger carriers that handle a significant traffic volume, so providers with wider coverage generally acquire more new customers. In our model, the number of carrier connections is proportional to its current degree.
Market fitness: Smaller providers can attract new customers but rarely surpass larger providers' market share. This feature enables us to mirror real-world scenarios, such as AT&T maintaining a higher market share even as the network evolves.
Inter-carrier agreements: Represents financial agreements, such as mutual compensation for handling each other's traffic, rates for different types of traffic, and billing arrangements.
We use the Bianconi-Barabasi model <cit.> to achieve these properties. Our network consists of N nodes, each labeled P_i representing a unique carrier node. The weight of an edge between any two carrier nodes signifies the inter-carrier agreement amount, which we use in the shortest path computation. We assume each carrier node seeks to minimize the cost of transmitting call connections, a notion that aligns with real-world practices.
Subscribers Network We model subscribers' social interactions using a scale-free network. We create a total of S subscribers, each represented by a phone number in the NPA-NXX-XXXX format. We allocate phone numbers to subscribers based on each carrier's market share. We constructed a Barabási-Albert <cit.> graph denoted as G_s=(V_s, E_s) for the subscribers. Since is primarily interested in scenarios where the caller and the called party belong to different carrier networks, we minimized the probability that neighboring nodes of a given subscriber s_i are on the same network as s_i.
CDR generation Each edge in the social network G_s represents a call between two subscribers s_i and s_j.
For each call s_is_j, we represent the call path as the shortest path between their respective providers in the topology. Each hop within the call path represents a CDR record.
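A simplified version of this data-generation pipeline is sketched below with `networkx`; it substitutes a plain Barabási–Albert graph for the Bianconi–Barabási carrier topology and uses arbitrary sizes, weights, and carrier assignments, so it illustrates the approach rather than reproducing our generator.

```python
import random
import networkx as nx

random.seed(7)

# Carrier topology: preferential attachment; edge weights model inter-carrier agreement costs.
N_CARRIERS = 50
carriers = nx.barabasi_albert_graph(N_CARRIERS, m=2, seed=7)
for u, v in carriers.edges:
    carriers[u][v]["weight"] = random.uniform(0.1, 1.0)

# Subscribers: a scale-free social graph; each subscriber is assigned to a home carrier.
N_SUBS = 500
social = nx.barabasi_albert_graph(N_SUBS, m=1, seed=7)
home_carrier = {s: random.randrange(N_CARRIERS) for s in social.nodes}

# CDR generation: each social edge is a call; the call path is the least-cost
# route between the two subscribers' carriers, and every hop yields one CDR.
def cdrs_for_call(caller, callee, ts):
    path = nx.shortest_path(carriers, home_carrier[caller], home_carrier[callee], weight="weight")
    out = []
    for i, cur in enumerate(path):
        prev = path[i - 1] if i > 0 else None
        nxt = path[i + 1] if i < len(path) - 1 else None
        out.append((caller, callee, ts + 0.1 * i, prev, cur, nxt))
    return out

example = next(iter(social.edges))
print(cdrs_for_call(*example, ts=0.0))
```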
§ EVALUATION
For Jäger to succeed, runtime performance, queries, and insertions of records need to be fast enough to handle the volume of call traffic processed daily by the network. Our prototype implementation allows us to test its performance for each protocol phase. We consider the following metrics: storage growth rate, minimum vCPUs required, time for each protocol, and minimum bandwidth required, as shown in Table <ref>.
Experiment Setup
Our experiments were run on a Linux virtual machine with 32 vCPU and 64GB of memory. The host was a Super Micro Server with an Intel Xeon Gold 6130, ECC DDR RAM, and 12Gbps SAS drives. In Experiment 1, we benchmark individual tasks such as label generation, record encryption and decryption, opening signatures, signing, and verifying signatures. We executed each process in a single thread 1,000 times to obtain the average, minimum, maximum, and standard deviation (SD) of the runtime.
In Experiment 2, we generated a network graph with 7,000<cit.> carriers and simulated R_C = 10,000 calls per second. No entity currently has visibility into the call volumes of all carriers in the United States. As a result, we did not find well-supported statistics on the overall call volume in the United States. We are especially concerned with the number of calls that transit multiple carriers, and, of course, this figure is even less attested. Of the statistics we found, many had no citations, did not describe their methodology, or were otherwise suspect in accuracy. We therefore settled on the round number of 10,000 calls per second for the North American phone network. This corresponds to roughly 800,000,000 calls per day. We admit that this choice is arbitrary, but as we see later, our system has substantial headroom and is also horizontally scalable. Our evaluation in the following sections considers the RS and TA as singletons.
§.§ Protocol Evaluation
Setup Protocol The setup time is less than 10ms for all entities. Storage for the identities of group members (providers) at the TA grows linearly in the number of providers.
Contribution Protocol
We measure the time and minimum system requirements to complete the protocol for the label generation and submission phases.
Label generation:
Providers request a PRF evaluation from the TA. Table <ref> shows that a single label generation takes 0.073 ms on average (SD = 0.007 ms). Hence, the TA can evaluate roughly 13,700 labels per second on a single vCPU.
Bandwidth for label generation: We estimate the minimum bandwidth required for label generation between providers and the TA over an HTTP connection as:
B_w = R_rec· (S_req + S_res) · (1 + O_http)
Here, R_rec is the rate at which records are generated across the network, S_req and S_res are the request and response sizes, and O_http is the fractional overhead introduced by HTTP. On average, calls generated by our network have 5 hops[ITG reports that tracebacks usually go through 4 or more hops<cit.>]; thus R_rec = 5 · R_C. Each request payload is 32 bytes, so S_req = 256 bits; likewise, S_res = S_req since the PRF is length-preserving. We compute the overhead as:
O_http = Overhead per Request / Batch Size
We measured an average overhead of about 652 bytes per request for the minimal headers when using HTTP/1.1. The group signature forms a significant fraction of this overhead. The TA can handle label-generation requests for the entire network even with substantial HTTP overhead, since the required roughly 25 Mbps is vastly below nominal internet throughput.
Record Submission:
We measure the time it takes to contribute 1 CDR record. From Table <ref>, contributing 1 CDR takes 4.143 ms on average with a SD of 0.173 ms, so a provider can process roughly 240 records per second on a single vCPU. At a rate of 50,000 records per second[For an average of 5 hops per call, 10,000 calls per second correspond to 50,000 CDRs per second], a minimum of 208 vCPUs is required to encrypt all records. This may seem high, but recall that we estimated 7,000 providers, so the per-provider CPU requirements are small in practice.
Bandwidth for record submission: The record submission request payload is 1,900 bytes in size (15,200 bits), comprising a label, ciphertext, and signature. Using Equation <ref>, we estimate the minimum bandwidth for submitting records as 800 Mbps. Therefore, with 800 Mbps the RS can handle submission requests for the entire network.
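The bandwidth figures above follow from Equation <ref> by straightforward arithmetic; the short script below reproduces the roughly 25 Mbps label-generation and roughly 800 Mbps record-submission estimates. The batch size and the treatment of O_http as a fractional per-request overhead are assumptions made for this illustration.

```python
# Back-of-the-envelope reproduction of the bandwidth estimates above.
CALLS_PER_SEC = 10_000
HOPS_PER_CALL = 5
R_REC = CALLS_PER_SEC * HOPS_PER_CALL                 # records (and label requests) per second

def min_bandwidth_mbps(payload_bits, overhead_bytes=652, batch_size=1_000):
    o_http = (overhead_bytes * 8) / payload_bits       # fractional HTTP overhead per request
    o_http /= batch_size                               # amortized across a batch (assumption)
    return R_REC * payload_bits * (1 + o_http) / 1e6

# Label generation: 32-byte request plus a length-preserving 32-byte PRF response.
print(min_bandwidth_mbps(256 + 256))                   # ~25.8 Mbps
# Record submission: 1,900-byte payload (label, ciphertexts, group signature).
print(min_bandwidth_mbps(1_900 * 8))                   # ~760 Mbps, quoted as ~800 Mbps
```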
Once the RS receives a submission request, it verifies the signature and inserts the record into the database. Table <ref> shows that verifying a group signature takes 2.310 ms on average with a SD of 0.098 ms. Hence, the RS can verify roughly 430 submissions per core per second, requiring a minimum of 24 vCPUs to validate contribution requests.
Record Store
Our second experiment evaluates the growth of storage and how that affects querying and inserting records. As shown in Figure <ref>, we observed that the time it takes to insert records into the database is independent of the database size, averaging about 24.28ms with a SD of 1.681ms. Note that 23.28ms represents the average time to process a single INSERT statement with 1 VALUE row; insertion overhead could improve significantly if multiple value rows are appended to a single INSERT statement. The RS can process 43 insert statements per second on a single vCPU, so covering the rate of 10,000 calls per second requires a minimum of 233 vCPUs. The database grows at a rate of 1.5 TB per day, costing roughly $100 per day. Deployment considerations may require that records are only retained for a given period, after which they are expunged.
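The batching optimization suggested above is standard; the snippet below illustrates it with Python's DB-API, using sqlite3 purely so the example is self-contained (the production deployment uses ClickHouse, where the same multi-row INSERT pattern applies).

```python
# Illustrative only: batching many record rows into a single INSERT statement
# amortizes per-statement overhead. sqlite3 stands in for the production database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (label BLOB, ct1 BLOB, ct2 BLOB, sig BLOB)")

def insert_batch(rows):
    # One prepared statement executed over many value rows.
    conn.executemany("INSERT INTO records VALUES (?, ?, ?, ?)", rows)
    conn.commit()

insert_batch([(b"label-%d" % i, b"ct1", b"ct2", b"sig") for i in range(10_000)])
```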
Trace Protocol
We consider the protocol for a single traceback.
Label generation is the same as before. Whenever authorization for a traceback is requested, the TA records the request and the associated label in its database. Storage grows linearly with the number of traceback requests. To decrypt a record, we need a signature on the label. Signing a label takes a mean time of 0.419ms with an SD of 0.023ms. We measured that decrypting a single record takes 0.847ms on average with a SD of 0.039ms. A single vCPU can decrypt 1,180 ciphertexts per second.
Accountability Protocol
In this protocol, we determine the faulty hops from the decrypted records as described in Section <ref>. We measured the runtime for analyzing the records and determining the faulty hops to be 0.052ms. Opening a group signature takes 0.147 ms on average with a SD of 0.009 ms, as shown in Table <ref>.
§.§ Traceback Evaluation
Throughput We estimate the compute time for one traceback as the sum of the following components: 1) label generation, 2) trace authorization, 3) retrieving records from the RS, and 4) decrypting records. As described in Section <ref>, we generate a label for every epoch in the window [ts - t_max, ts + t_max]. Assuming t_max = 10 seconds and one-second epochs, we retrieve records for 21 labels (2 · t_max + 1). The estimated average compute time for one traceback is 0.75 seconds. Without communication latency, we can complete close to 3.5 million tracebacks in a month, increasing the current throughput by a multiplicative factor of 11,520. Communication latency would reduce this number, but the result remains a significant fraction of the current abuse volume.
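The monthly-throughput figure follows directly from the per-traceback estimate; the arithmetic below makes the derivation explicit.

```python
# Monthly traceback capacity implied by the ~0.75 s per-traceback compute estimate.
T_MAX = 10
N_LABELS = 2 * T_MAX + 1                 # 21 labels cover the [ts - t_max, ts + t_max] window
PER_TRACEBACK_SEC = 0.75                 # label generation + authorization + retrieval + decryption

per_month = 30 * 24 * 3600 / PER_TRACEBACK_SEC
print(N_LABELS, int(per_month))          # 21 labels; ~3.46 million tracebacks per month
print(round(per_month / 300))            # ~11,520x the ~300 manual tracebacks per month today
```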
Partial Deployment
Jäger was explicitly designed to support partial deployment because of lessons learned from STIR/SHAKEN (S/S) and proposals like RPKI to secure Internet communications. Given that both of those systems have failed to meet their goals at current levels of deployment, it is worth considering how Jäger might perform.
First, though, we will need to establish definitions and deployment models. We say that a traceback is successful if the record store has at least one record that identifies the originating hop in the call path.
For this analysis, we call providers that deploy “adopters,” and define “adoption rate” as the fraction of all providers who are adopters.
In S/S, the largest networks tended to be the earliest adopters, and only small networks continue operating without S/S (excluding, of course, legacy networks that are incompatible).
This trend held because regulatory agencies gave smaller networks more time to comply with deployment mandates than larger ones.
Published tracebacks and successful enforcement actions report that the vast majority of illegal calls come from small networks.
We used our network model from Section <ref> to estimate traceback success in partial deployment. We simulate randomly dialed robocalls originating from the smallest r% of carriers, with adopters modeled as the largest a% of carriers. We find that if robocalls originate from the bottom 10% of networks, with adoption by only the largest 2% of carriers, Jäger can still successfully trace back 27% of all robocalls. When adoption increases to 10%, which is still lower than the current adoption rate of S/S, traceback success leaps to 55% of robocalls.
Given that robocallers place millions or billions of calls, even low adoption of Jäger would produce mountains of evidence that could lead to the takedown of illegal robocalling operations. Of course, these results are likely sensitive to our model choices. Still, even if this estimate is wrong by an order of magnitude, tracing even a small fraction of the thousands or millions of robocalls is a vast improvement over the roughly 300 tracebacks per month we see today.
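A simplified version of this partial-deployment simulation is sketched below. It proxies carrier size by node degree, uses a plain Barabási-Albert graph in place of the fitness-based model described earlier, and treats a traceback as successful when an adopter sits on the originating hop of the call path; all three are simplifying assumptions.

```python
# Simplified partial-deployment Monte Carlo: robocalls originate from the smallest
# origin_frac of carriers, the largest adopt_frac of carriers are adopters, and a
# traceback succeeds if an adopter carries the record identifying the originating hop.
import random
import networkx as nx

def traceback_success_rate(carriers, adopt_frac, origin_frac, n_calls=10_000, seed=1):
    rng = random.Random(seed)
    by_size = sorted(carriers.nodes, key=carriers.degree)         # ascending "size"
    n = len(by_size)
    origins = by_size[: max(1, int(origin_frac * n))]             # smallest carriers originate
    adopters = set(by_size[n - max(1, int(adopt_frac * n)):])     # largest carriers adopt
    successes = 0
    for _ in range(n_calls):
        src = rng.choice(origins)
        dst = rng.choice(by_size)                                  # randomly dialed victim
        path = nx.shortest_path(carriers, src, dst)
        successes += bool(set(path[:2]) & adopters)                # adopter on the first hop?
    return successes / n_calls

g = nx.barabasi_albert_graph(1_000, 3, seed=1)                     # stand-in carrier topology
print(traceback_success_rate(g, adopt_frac=0.02, origin_frac=0.10))
```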
§ DEPLOYMENT CONSIDERATIONS
In this section, we discuss practical deployment concerns relating to incentives, engineering concerns, and the trace authorization process.
Deployment Incentives When integrating a new security mandate into an existing system, it is essential to consider incentives. How will a provider benefit from deploying Jäger? Eliminating bulk illegal calls aligns with providers' interests since these calls are often unanswered and do not generate revenue. However, this motivation alone may not be enough. Fortunately, the S/S framework has shown that regulatory mandates can drive widespread network changes.
Engineering Concerns In our discussions with practitioners, we identified engineering concerns that, while challenging, are manageable. We anticipate that carriers will face hurdles integrating with billing systems and managing new cryptographic infrastructure. However, we anticipate that deployment will still be easier than S/S because it involves only latency-insensitive backends. Jäger itself will face typical site reliability engineering challenges, such as replication and public key infrastructure issues like governance. The industry has successfully navigated similar challenges with S/S, so we can have confidence that these engineering concerns are surmountable.
Rate Limiting and Authorization Our performance evaluation assumed that traceback authorization would be instantaneous. In reality, there will be some processing time. Human intervention may be required to review requests
before or after they are system-signed. In some cases, the trace authorizer could
automatically approve requests, such as requests from honeypots for calls to their own numbers. Given that honeypots receive thousands of calls daily <cit.>, scalable traceback will significantly enhance investigative processes.
§ RELATED WORK
Telecom fraud <cit.> is a long-standing problem <cit.> that continues to impact carriers <cit.> and subscribers <cit.>.
Most fraud schemes stem from inadequate authentication mechanisms <cit.> and security flaws in legacy telecom signaling protocols <cit.>.
Attempts to address these problems through protocol enhancements <cit.> and defenses <cit.> have had limited success. Illegal robocalling, a form of telecom fraud, frustrates phone users, carriers, and regulators.
Over the past decade, widespread adoption of VoIP technology has led to a surge of scams <cit.> perpetrated using robocalls.
To study the operational characteristics of robocallers, researchers have employed a wide range of techniques such as CDR data mining <cit.>, machine learning <cit.>, audio processing <cit.>, carrier collaboration <cit.>, and reputation scoring <cit.>.
Mitigation techniques based on caller ID authentication <cit.>, spam filtering <cit.>, call-blocking apps <cit.>, and increased penalties <cit.> have been proposed.
However, they have failed to significantly deter bad actors from originating illegal robocalls.
In Dec 2019, the US Congress passed the TRACED Act <cit.> to protect consumers from illegal robocalls. Consequently, the FCC designated the Industry Traceback Group (ITG) <cit.> to track down entities responsible for originating illegal robocall traffic using traceback.
Tracebacks remain invaluable in uncovering and prosecuting numerous illegal robocalling operations <cit.>.
However, its effectiveness is limited since it is a manual, iterative, and time-consuming process. Each traceback requires cooperation among carriers spanning multiple days to pinpoint the source of illegal robocall traffic.
Network traceback methods like packet marking and router logging <cit.> are ineffective for tracing phone calls <cit.> due to general IP traceback limitations.
Our automated traceback technique addresses these challenges and encompasses all transit carriers.
Notably, Jäger does not require modifications to existing infrastructure, making it compatible with other protocols.
§ CONCLUSION
In this paper, we described the design of Jäger, a distributed system that facilitates automatic call traceback. Jäger enables the anonymous-but-traceable submission of encrypted call records to a central source, which, after vetting by an authorizer, allows traceback only by parties with information about the call to be traced. We demonstrate that despite the expensive cryptographic primitives and coordination costs, the system is practical today with modest hardware and low latency.
In so doing, we show that Jäger represents a powerful new tool to combat telephone abuse.
We thank our anonymous shepherd and reviewers for their support of the paper.
This material is based upon work supported by the National Science Foundation under Award No. CNS-2142930. Funds from the 2020 Internet Defense Prize also supported portions of this work.
§ IDEAL FUNCTIONALITY FOR JÄGER
In this section, we formalize the security properties of Jäger in the UC framework <cit.>. We define an ideal functionality (Figure <ref>) that captures the correctness and the security properties of the system.
The functionality maintains a database
and provides the following interface:
* : Enables carriers to register with the system. Since this is public information, the identity of the carrier is leaked to the adversary.
* : Allows carriers to submit records to the system. Recall that in the real world, an adversary can always learn when a record is submitted but does not learn the contents of the record, nor the identity of the carrier that submits the record. Therefore, the only information that is leaked to the adversary is the which gives an indication of what time a call record was submitted. This captures the anonymity and the confidentiality guarantees.
* : Allows the adversary to delete or add records to the database.
* : Enables carriers to retrieve the s relevant to a specific call. All the s that are currently in the database along with any that the adversary wants to append are returned to the carrier. Before sending these s to the carrier, the functionality sends command to the , and only if the responds with (, ) are these s sent to the carrier. This captures the trace authorization requirement and ensures that a carrier cannot request a trace too many times and hence rate-limits their requests.
* : Allows a carrier to deanonymize the sender of s that are malformed or conflicting. If the is honest, the functionality runs a predicate to determine the malicious/conflicting hops and returns the identities of the corresponding carriers. On the other hand if the is malicious, the adversary is allowed to return the identities of any of the carriers. This captures the property that accountability is guaranteed as long as the is honest.
§ THE JÄGER PROTOCOL
We need the following ingredients:
* Group signatures as defined in <cit.>
* An OPRF scheme
* Witness Encryption scheme for signatures.
In the presentation of Jäger below, we instantiate the OPRF scheme using the 2HashDH OPRF scheme of Jarecki et al. <cit.> and the witness encryption scheme presented in <cit.><cit.>.
Jäger consists of four protocols: the setup, contribution, trace, and accountability protocols, which we describe in detail below.
Before we describe the protocol we present details of the OPRF scheme of <cit.> in Figure <ref> and the Witness Encryption scheme of <cit.> in Figure <ref>.
§.§ Setup
The Traceback Authority does:
* Generate BLS signature keys (_T, _T) .(1^λ) and (_R, _R) .(1^λ)
* Generate OPRF key k _q and announce the corresponding public key _ = g^k
* Run the algorithm of the group signature scheme and announce (, _0) which are the group manager public key and the initial group information.
* The initializes a counter _i corresponding to each P_i.
Each provider P_i does:
* Run the interactive joining protocol with and receive _i
* Generate signing keys (_i, _i) .(1^λ) and announce _i.
The Record Store does:
* Initialize the database
* Generate signing keys (_, _) .(1^λ) and announce _
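A minimal sketch of this setup phase is shown below, using py_ecc as a stand-in implementation. The group-signature setup and the provider join protocol are omitted, the bn128 curve replaces the production pairing group purely for convenience, and the fixed seeds exist only to make the example reproducible.

```python
# Minimal setup sketch: the TA generates two BLS signing key pairs and an OPRF key.
# Group-signature setup and the provider join protocol are omitted; fixed seeds are
# for reproducibility of the example only and must be random in practice.
import hashlib
from py_ecc.bls import G2ProofOfPossession as bls
from py_ecc.bn128 import G2, multiply, curve_order

sk_T = bls.KeyGen(b"\x01" * 32)            # trace-authorization signing key
pk_T = bls.SkToPk(sk_T)
sk_R = bls.KeyGen(b"\x02" * 32)            # rate-limiting signing key
pk_R = bls.SkToPk(sk_R)

k_oprf = int.from_bytes(hashlib.sha256(b"\x03" * 32).digest(), "big") % curve_order
pk_oprf = multiply(G2, k_oprf)             # published so providers can verify OPRF evaluations

print(len(pk_T), len(pk_R))                # 48-byte compressed public keys
```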
§.§ Contribution
Each provider P_i with input () and = (P_i-1P_iP_i+1) does:
* Compute = () and run the OPRF protocol with using inputs = () as follows:
* Pick r_q and compute a = H_1()^r and send a to
* The computes b = a^k and sends it back to P_i
* P_i checks the following pairing equation to verify the OPRF was evaluated correctly: e(_, H_1()) = e(g, b^1/r)[The groups are pairing friendly, and hence we can verify the correctness of the OPRF via pairing equations <cit.>].
* Output = H_2(_, , b^1/r )
* The provider then encrypts the as follows:
* Sample a random key {0,1}^λ
* Encrypt as as _1 = .((_T, ), )
* Compute _2 = H_3() ⊕
* The provider signs the ciphertexts and the H() using the group signature scheme: σ = (_i, (_1, _2, H())) and sends (H(), (_1, ), σ) to
The Record Store upon receiving (H(), (_1, ), σ) does the following:
* Check (, (H(), (_1, )), σ) = 1.
* If yes, write (H(), (_1, ), σ) to the database. Else ignore the message.
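The blinded label evaluation above can be illustrated in a few lines of code. The sketch below uses py_ecc's bn128 group as a stand-in pairing group, derives H_1 by exponentiation (which is insecure and used only to keep the example short; a proper hash-to-curve is required in practice), and abstracts away the witness encryption and the group signature.

```python
# Insecure-by-construction sketch of the blinded OPRF label derivation above.
import hashlib
import secrets
from py_ecc.bn128 import G1, G2, multiply, curve_order, pairing

def h1(data: bytes):
    """Toy hash-to-G1: exponent-based placeholder for a real hash-to-curve."""
    return multiply(G1, int.from_bytes(hashlib.sha512(data).digest(), "big") % curve_order)

def h2(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# Setup (TA): OPRF key k and public key g^k in G2.
k = secrets.randbelow(curve_order - 1) + 1
pk_oprf = multiply(G2, k)

# Provider: blind the call identifier and send `a` to the TA.
call_id = b"+15551230001|+15559870002|epoch:1717171717"
r = secrets.randbelow(curve_order - 1) + 1
a = multiply(h1(call_id), r)

# TA: evaluate blindly.
b = multiply(a, k)

# Provider: unblind, verify the pairing equation, and derive the label.
y = multiply(b, pow(r, -1, curve_order))
assert pairing(pk_oprf, h1(call_id)) == pairing(G2, y)
label = h2(b"label", repr(pk_oprf).encode(), call_id, repr(y).encode())
print(label.hex())
```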
§.§ Trace
Each provider P_i with input () does:
* For * ∈ [ - , + ] compute ^* = (^*)
* For each * compute ^* as above with inputs ^* = (^*) by running the OPRF protocol with the
* Request a signature on = H(^*) from .
* The checks if _i > T, if yes, reject the request, else the computes σ_R = (_R, ) and sends it back to P_i.
* Send (^*, σ_R) to and receive the corresponding records back ((^*, _1^*, _2^*, σ^*), σ_) if they exist.
* Verify (_,((^*, _1^*, _2^*, σ^*), σ_)) = 1. If not, reject.
* Request a signature on each ^* from by sending σ_i = (_i, ^*) to the . The compute σ_T = (_T, ^*) and sends it to P_i.
* Decrypt the ciphertexts as follows:
* Compute ^* = .(σ_T, _1^*)
* Compute ^* = H_3(^*^*) ⊕_2
* Append ^* to _
§.§ Accountability
If a ^* is malformed or conflicts with another record, the provider P_i can request the to open the corresponding signature to deanonymize that provider and hold them accountable.
* P_i sends ,^*, _1^*, _2^*, σ^*), the list of {} to the
* The runs algorithm to determine if the provider is misbehaving and needs to be deanonymized.
* The computes P_j^* = (, (^*, _1^*, _2^*, σ^*)) and returns P_j^*
§ FORMAL PROOFS OF SECURITY OF JÄGER
In this section we will present the formal proofs of security. We will consider three cases of corruption: (1) Only a subset of the providers are corrupt (2) The record store is corrupt and can collude with any of the providers and (3) the traceback authority is corrupt and can collude with any of the providers.
For completeness we first show a simulator where no parties are corrupt.
Case 0: No entities are corrupt
Simulate the Traceback Authority:
* Generate BLS signature keys (_T, _T) .(1^λ) and (_R, _R) .(1^λ)
* Generate OPRF key k _q and announce the corresponding public key _ = g^k
* Run the algorithm of the group signature scheme and announce (, _0) which are the group manager public key and the initial group information.
Simulate honest providers: Upon receiving (, P_i) from : Generate signing keys (_i, _i) .(1^λ) and announce _i.
Simulating honest contributions: Upon receiving (,) from , just store (, (·)) in a database and return (, ) to .
Honest trace request:
* Upon receiving (,) from , send ∅ back to
* Upon receiving (, P_i) from , send (, P_i, ) back to .
§.§ Case 1: Only a subset of the providers are corrupt and the and are honest
To prove UC security we need to show that there exists a simulator that produces a transcript in the ideal world that is indistinguishable from the real world. Below we present the simulator for the case when only a subset of the providers are malicious and all other entities are honest.
Simulate the Traceback Authority:
* Generate BLS signature keys (_T, _T) .(1^λ) and (_R, _R) .(1^λ)
* Generate OPRF key k _q and announce the corresponding public key _ = g^k
* Run the algorithm of the group signature scheme and announce (, _0) which are the group manager public key and the initial group information.
Simulate honest providers: Upon receiving (, P_i) from : Generate signing keys (_i, _i) .(1^λ) and announce _i.
Malicious provider P_j: Simulate the interactive joining protocol with P_j, and send (, P_i) to .
Simulating honest contributions: The simulator does not need to simulate interactions between honest providers and and , since this is not in the view of the malicious providers. Therefore, upon receiving (, ) from , just store (, (·)) in a database and return (, ) to .
Simulating random oracle invocations:
* Upon receiving input x for random oracle H_1, check if (x, y) exists in _1. If yes return y, else sample a random y, store (x,y) ∈_1 and return y.
* Upon receiving input x for random oracle H_2, check if (x, y) exists in _2. If yes return y, else sample a random y, store (x,y) ∈_2 and return y.
* Upon receiving input x for random oracle H_3, check if (x, y) exists in _3. If yes return y, else sample a random y, store (x,y) ∈_3 and return y.
Simulating malicious contributions:
* Upon receiving a on behalf of from the adversary, compute b = a^k and send it back to the .
* Upon receiving (, _1, _2, σ) from :
* Check (x, ) exists in Q_2. If not, abort with _2.
* Else parse x as (_, , b^*)
* Check that (, y) exists in Q_1. If not, abort with _1.
* Check that e(_, y) = e(g, b^*). If not, abort with
* Check ((^*), z) exists in _3. If not, abort with _3.
* Else compute ^* = z ⊕_2
* If all checks pass, compute P_j^* = (_i, (, _1, _2, σ)). If P_j^* corresponds to that of an honest party, abort with .
* Send (, , ) on behalf of P_j^* to .
* Receive (, ) from and store (, (P_j, , )) in .
We consider two cases: 1) an honest trace request 2) a malicious trace request
* Honest trace request:
* Upon receiving (,) from , send ∅ back to
* Upon receiving (, P_i) from , send (, P_i, ) back to .
* Malicious trace request:
* Upon receiving (^*, σ^*) from (on behalf of P_j^*):
* if σ^* verifies under a honest party P_i's abort with
* If ^*, (·) exists in the list of ciphertexts, send (^*, (_1, _2, σ)) to . Else:
* Check (x, ) exists in Q_2. If not, abort with _2.
* Else parse x as (_, , b^*)
* Check that (, y) exists in Q_1. If not, abort with _1.
* Check that e(_, y) = e(g, b^*). If not, abort with
* Send (, , P_j^*) on behalf of P_j^* to
* Receive ({}) from . If any of the correspond to that of a malicious contribution, send {, P_j, , } to .
* Upon receiving from , check that requesting provider has not requested too many records and send (, ) to .
* Receive _ from .
* Encrypt each of the using the corresponding and _T in the following way:
* Sample a random key {0,1}^λ
* Compute _1 = .((, _T), )
* Sample a random _2. Set H_3() = _2⊕
* Using the group signature scheme, compute σ on behalf of P_i, where P_i is the honest sender of the record.
* Compute σ_RS = ((^*, _1^*, _2^*, σ^*))
* Send the (^*, _1^*, _2^*, σ^*), σ_RS that correspond to ^* to the adversary.
Upon receiving request from for a particular record, (^*, _1^*, _2^*, σ^*), σ_RS, check that σ_RS is a signature that was computed by the simulator, if not abort with _2. Else send , , _ to and output whatever returns.
Proof By Hybrids Now to prove that the simulated world and the real world are indistinguishable we proceed via a sequence of hybrids, starting from the real world until we reach the ideal world. We show that each of these hybrids are indistinguishable and therefore the real world and the simulated world are indistinguishable.
_0 This is the real world protocol
_1 This hybrid is identical to the previous hybrid except that the simulator may abort with . By the non-frameability property of the group signature scheme, the simulator aborts with negligible probability and therefore this hybrid is indistinguishable from the previous one.
_2 This hybrid is identical to the previous hybrid except that the simulator may abort with _1. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_3 This hybrid is identical to the previous hybrid except that the simulator may abort with _2.Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_4 This hybrid is identical to the previous hybrid except that the simulator may abort with _3. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_5 This hybrid is identical to the previous hybrid except that the simulator may abort with . Since we use unforgeable signatures, this event occurs with negligible probability and therefore the two hybrids are indistinguishable.
_6 This hybrid is identical to the previous hybrid except that the simulator may abort with _2. Since we use unforgeable signatures, this event occurs with negligible probability and therefore the two hybrids are indistinguishable.
Since this hybrid is identical to the simulated world, we have shown that the real world and ideal world are indistinguishable, and that concludes the proof of security for the case when only a subset of carriers are corrupt.
§.§ Case 2: and subset of providers are corrupt, and is honest
We present the simulator for this case below:
The adversary runs the algorithms of the . Receive _T, _, (, _0) from
Simulate honest providers: Upon receiving (, P_i) from :
* Generate signing keys (_i, _i) .(1^λ) and announce _i.
* Interact with to run the protocol and learn _i.
Malicious provider P_j: Upon receiving (, P_j^*) from on behalf of , send (, P_j^*) to .
Simulating honest contributions: Note that the simulator only simulates the computation of the label, since that is the only interaction with the adversary. Since the is honest the simulator does not need to compute the ciphertexts or interact with . Therefore, upon receiving (, ) from :
* Compute a label as follows:
* Sample random r _q and compute a = H_1(0)^r and send a to on behalf of .
* Receive b from .
* Check that e(_, H_1(0)) = e(g, b^1/r)
* Output = H_2(_, 0, b^1/r) and store (, ).
Simulating random oracle invocations:
* Upon receiving input x for random oracle H_1, check if (x, y) exists in _1. If yes return y, else sample a random y, store (x,y) ∈_1 and return y.
* Upon receiving input x for random oracle H_2, check if (x, y) exists in _2. If yes return y, else sample a random y, store (x,y) ∈_2 and return y.
* Upon receiving input x for random oracle H_3, check if (x, y) exists in _3. If yes return y, else sample a random y, store (x,y) ∈_3 and return y.
Simulating malicious contributions:
* Upon receiving (, _1, _2, σ) from :
* Check (x, ) exists in Q_2. If not, abort with _2.
* Else parse x as (_, , b^*)
* Check that (, y) exists in Q_1. If not, abort with _1.
* Check that e(_, y) = e(g, b^*). If not, abort with
* Check ((^*), z) exists in _3. If not, abort with _3.
* Else compute ^* = z ⊕_2
* Send (, , ) on behalf of to .
* Receive (, ) from and store (, (, , )) in .
We consider two cases: 1) an honest trace request 2) a malicious trace request
* Honest trace request: Upon receiving (,) from
* If (, (, , )) ∈
* Sample random r _q and compute a = H_1()^r and send a to
* Receive b from .
* Check that e(_, H_1()) = e(g, b^1/r).
* Output ^* = H_2(_, , b^1/r)
* Request for a signature on ^*.
* If no signature received send ∅ to . And upon receiving (, P_i) from , return (, )
* Else Decrypt _1, _2 corresponding to ^* if it exists and send , , , to . And upon receiving (, P_i) from , return (, )
* If no ciphertexts exist corresponding to this ^* abort with failure .
* If (, (, , )) ∉,
* simulate the computation of a label as in honest contribution
* Sample a random string ^* .
* Request for a signature on ^*. If a signature is received, send (, ) to , else send (, ) upon receiving (, P_i) from .
* Malicious trace request:
* Upon receiving (^*, σ_i, σ_R) from (on behalf of P_j^*):
* If σ_i corresponds to that of an honest party, abort with .
* If ^*, (·) exists in the list of ciphertexts, send (^*, (_1, _2, σ)) to . Else:
* Check (x, ) exists in Q_2. If not, abort with _2.
* Else parse x as (_, , b^*)
* Check that (, y) exists in Q_1. If not, abort with _1.
* Check that e(_, y) = e(g, b^*). If not, abort with
* Send (, , P_j^*) on behalf of P_j^* to
* Receive ({}) from . If any of the correspond to that of a malicious contribution, send {, P_j, , } to .
* Upon receiving from , send
(, ) to .
* Receive _ from .
* Encrypt each of the using the corresponding and _T in the following way:
* Sample a random key {0,1}^λ
* Compute _1 = .((, _T), )
* Sample a random _2. Set H_3() = _2⊕
* Using the group signature scheme, compute σ on behalf of P_i, where P_i is the honest sender of the record.
* Compute σ_RS = ((^*, _1^*, _2^*, σ^*))
* Send the (^*, _1^*, _2^*, σ^*), σ_RS that correspond to ^* to the adversary.
Upon receiving (, , _) request from send request to and output whatever returns.
* If the adversary opens a signature submitted by a malicious party as an honest party's identity abort with .
* If the adversary opens a record (^*, _1^*, _2^*, σ^*), σ_RS where σ_RS was not computed by the simulator abort with _2.
Proof By Hybrids Now to prove that the simulated world and the real world are indistinguishable we proceed via a sequence of hybrids, starting from the real world until we reach the ideal world. We show that each of these hybrids are indistinguishable and therefore the real world and the simulated world are indistinguishable.
_0 This is the real world protocol
_1 This hybrid is identical to the previous hybrid except that the simulator may abort with . By the non-frameability property of the group signature scheme, the simulator aborts with negligible probability and therefore this hybrid is indistinguishable from the previous one.
_2 This hybrid is identical to the previous hybrid except that the simulator may abort with _1. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_3 This hybrid is identical to the previous hybrid except that the simulator may abort with _2.Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_4 This hybrid is identical to the previous hybrid except that the simulator may abort with _3. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_5 This hybrid is identical to the previous hybrid except that the simulator may abort with . Since we use unforgeable signatures, this event occurs with negligible probability and therefore the two hybrids are indistinguishable.
_6 This hybrid is identical to the previous hybrid except that the simulator may abort with . Since we use a verifiable OPRF scheme this occurs with negligible probability.
_7 This hybrid is identical to the previous hybrid except that the simulator simulates the OPRF calls using 0 as input instead of the . By the obliviousness property of the underlying OPRF scheme, these two hybrids are indistinguishable.
_8 This hybrid is identical to the previous hybrid except that the simulator may abort with _2. Since we use unforgeable signatures, this event occurs with negligible probability and therefore the two hybrids are indistinguishable.
Since this hybrid is identical to the simulated world, we have shown that the real world and ideal world are indistinguishable, and that concludes the proof of security for the case when only a subset of carriers are corrupt.
§.§ Case 3: and a subset of the providers are corrupt, and is honest
Simulate the Traceback Authority:
* Generate BLS signature keys (_T, _T) .(1^λ) and (_R, _R) .(1^λ)
* Generate OPRF key k _q and announce the corresponding public key _ = g^k
* Run the algorithm of the group signature scheme and announce (, _0) which are the group manager public key and the initial group information.
Simulate honest providers: Upon receiving (, P_i) from : Generate signing keys (_i, _i) .(1^λ) and announce _i.
Malicious provider P_j: Simulate the interactive joining protocol with P_j, and send (, P_i) to .
Simulating honest contributions: Upon receiving (, ) from ,
* Sample a random
* Send a random and compute _1 = .((_T, ), )
* Sample a random string _2
* Compute a group signature σ on behalf of some honest party P_i
* Send (_1, _2, , σ) to and send (, ) to . Store (, , _1, _2).
Simulating random oracle invocations:
* Upon receiving input x for random oracle H_1, check if (x, y) exists in _1. If yes return y, else sample a random y, store (x,y) ∈_1 and return y.
* Upon receiving input x for random oracle H_2, check if (x, y) exists in _2. If yes return y, else sample a random y, store (x,y) ∈_2 and return y.
* Upon receiving input x for random oracle H_3, check if (x, y) exists in _3. If yes return y, else sample a random y, store (x,y) ∈_3 and return y.
Simulating malicious contributions: Note that since the is corrupt, the simulator only needs to simulate the label generation.
* Upon receiving a on behalf of from the adversary, compute b = a^k and send it back to the .
We consider two cases: 1) an honest trace request 2) a malicious trace request
* Honest trace request:
* Upon receiving (,, ) from ,
* Retrieve that corresponds to if it exists.
* Send (, σ_R) to and receive (, _1, _2, σ). If no such record was received, send (, (, ) to )
* If no exists, compute = H_2(_,, H_1()^k) and send to . If any _1, _2, ,σ received, first check if σ corresponds to that of an honest party, if this is the case abort the simulation with else decrypt the ciphertexts to retrieve the and send (, (, (P_j^*, , ))) to .
Upon receiving , send , P_j^*, , to .
* Upon receiving (, P_i) from , send (, P_i, ) back to .
* Malicious trace request:
* Upon receiving a query for H_3 on , send (, , P_i) to and receive back _ = {, }.
* Retrieve (_1, _2, , σ) that correspond to and check that this is the same that was encrypted in _1. If yes, compute z = _2 ⊕ and send z in response and store ((), z) in _3. If not, sample a random z {0,1}^λ and send z in response.
Upon receiving request from for a particular record, send , , _ to and output whatever returns.
Proof By Hybrids Now to prove that the simulated world and the real world are indistinguishable we proceed via a sequence of hybrids, starting from the real world until we reach the ideal world. We show that each of these hybrids are indistinguishable and therefore the real world and the simulated world are indistinguishable.
_0 This is the real world protocol
_1 This hybrid is identical to the previous hybrid except that the simulator may abort with . By the non-frameability property of the group signature scheme, the simulator aborts with negligible probability and therefore this hybrid is indistinguishable from the previous one.
_2 This hybrid is identical to the previous hybrid except that the simulator may abort with _1. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_3 This hybrid is identical to the previous hybrid except that the simulator may abort with _2.Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_4 This hybrid is identical to the previous hybrid except that the simulator may abort with _3. Since we use a random oracle and require the adversary to use the RO, the probability of this event occurring is negligible, and therefore this hybrid is indistinguishable from the previous one.
_5 This hybrid is identical to the previous hybrid except that the simulator may abort with . Since we use unforgeable signatures, this event occurs with negligible probability and therefore the two hybrids are indistinguishable.
_6 This hybrid is identical to the previous hybrid except that the simulator simulates the honest contributions by sampling a random . By the pseudorandomness property of the underlying OPRF scheme, these two hybrids are indistinguishable.
_7 This hybrid is identical to the previous hybrid except that the ciphertext _2 is a randomly sampled string. By the perfect security of the stream cipher (OTP), these two hybrids are indistinguishable.
_8 This hybrid is identical to the previous hybrid except that the group signature corresponds to that of a random honest party. By the anonymity guarantees of the underlying group signature scheme, these two hybrids are indistinguishable.
Since this hybrid is identical to the simulated world, we have shown that the real world and ideal world are indistinguishable, and that concludes the proof of security for the case when only a subset of carriers are corrupt.
§ ARTIFACT APPENDIX
§.§ Abstract
We developed a prototype of the system and conducted a thorough performance evaluation. The artifact is composed of four key components: Group Membership Management, Label Generation, Trace Authorization, and Record Storage. Each of these components was containerized using Docker, and we orchestrated them together with Docker Compose. Additionally, we integrated auxiliary services, including a web-based GUI, to facilitate interaction with the database.
We generated a dataset of Call Detail Records and evaluated 's performance. Our experimental results indicate that incurs minimal computational and bandwidth overhead per call, with these costs scaling linearly with the increase in call volume.
§.§ Description & Requirements
The prototype comprises four integral components:
Membership Management
The Group Manager (GM) oversees membership management. This component enables the GM to issue new group membership keys or revoke existing ones, and it also facilitates the tracing of traitors. Within our implementation, the Traceback Authority (TA) assumes the role of the GM.
Label Generation
Label generation is controlled by the Label Manager (LM). The LM collaborates with providers to evaluate pseudorandom functions using the Oblivious Pseudorandom Function protocol. In our system, the TA also fulfills the role of the LM.
Trace Authorization
This component is responsible for generating authorization signatures required to decrypt ciphertexts. The TA acts as the Trace Authorizer in our implementation.
Record Storage
The Record Storage component stores ciphertexts and provides the results of match trace queries.
All the above components are implemented in Python. However, for performance optimization, we implemented the witness encryption in C++ and created Python bindings to integrate the library into our prototype.
§.§.§ Security, privacy, and ethical concerns
None
§.§.§ How to access
We have archived the witness encryption and the prototype into a zip file, which is publicly accessible on Zenodo at the following link: https://zenodo.org/doi/10.5281/zenodo.12733869https://zenodo.org/doi/10.5281/zenodo.12733869. Additionally, we maintain an active version of the artifact in our GitHub repositories. The source code for Witness Encryption can be found at https://github.com/wspr-ncsu/BLS-Witness-Encryptionhttps://github.com/wspr-ncsu/BLS-Witness-Encryption, while the prototype is available at https://github.com/wspr-ncsu/jaegerhttps://github.com/wspr-ncsu/jaeger.
§.§.§ Hardware dependencies
Running does not necessitate any specific hardware requirements. However, to achieve results comparable to those presented in the paper, our experiments were conducted on a Linux virtual machine equipped with 32 vCPUs and 64 GB of memory. The underlying host was a Super Micro Server featuring an Intel Xeon Gold 6130 processor, ECC DDR RAM, and 12 Gbps SAS drives.
§.§.§ Software dependencies
For ease of setup, we recommend configuring the project using Docker. If you prefer not to use Docker, our repositories provide detailed instructions on how to set up the project without it. For the remainder of this artifact appendix, we will focus exclusively on the setup and execution of experiments using Docker.
§.§.§ Benchmarks
None
§.§ Set Up
§.§.§ Installation
You need to install the appropriate version of Docker based on your operating system. If your Docker installation does not include the Docker Compose plugin, be sure to install Docker Compose separately. Additionally, download the prototype source code from either the GitHub repository or Zenodo.
We have published the Docker image on Docker Hub as . If this image is not available, navigate to the root directory and build the Docker image using the following command:
Note that the option does not necessarily need to be . We use to align with the image on dockerhub.
§.§.§ Basic test
Generate secret keys for label generation, group master and public keys for group management, as well as private and public keys for BLS signatures and witness encryption by running the following command:
The option instructs the script to generate all necessary keys. If you only need to generate keys for specific components, use the following options: for label generation, for group management, and for trace authorization/witness encryption. After running the command, verify that the and files have been created and that the variables within are populated with the appropriate keys.
§.§ Evaluation Workflow
§.§.§ Major claims
(C1):
The average runtime for the following operations are as follows: Label generation takes 0.073 ms, the contribution protocol takes 4.143 ms, trace authorization takes 0.419 ms, decryption takes 0.847 ms, opening a signature takes 0.147 ms, and verifying a group signature takes 2.310 ms. Experiment (E1) substantiates these performance metrics.
§.§.§ Experiments
(E1): Benchmark operations in Table <ref>.
Preparation: Run the Docker Compose command to start the services, and then log in to the Docker container by executing:
Execution: To benchmark the operations, run the following command:
This will display the results on the console and create a folder inside the project root.
Results:
The file contains a summary of the benchmarks, while records the individual runs. We used to determine the mean, min, max, and standard deviations. To aggregate the benchmark results from , as shown in Table <ref> (in the paper), run . This script generates a CSV file with the aggregated results.
(E2): Determine Bandwidth, Storage Growth, and Query Performance as illustrated in Fig. <ref>.
Preparation: Data generation
* Run Docker Compose to start the services, and log in to the Docker container as demonstrated in E1.
* Generate telephone and social network data by running the command: . The option specifies the number of carriers, specifies the number of subscribers, determines whether CDRs should be generated, and skips all prompts. Note that in the paper, is set to 7000 and is set to 300M.
* View the Generated Dataset: We will connect to the ClickHouse database using a web browser. Ensure that your browser has network access to both ports 5521 and 8123. We have added a UI service that allows you to connect to the database. Visit in your browser.
* Enter as the ClickHouse URL, as the Username, and as the Password, then click the button. Once successful, click the link.
* On the home page, select in the database field, which will load the tables. You can then click on any table to view its Details, Schema, or preview rows.
* If you prefer to run your own SQL queries, click on the new file icon/button with the orange background. Type in the query field and click the button.
Execution: To run the contributions protocol, execute the command: . The option specifies the number of batches, which is the number of times you wish to run the contribution experiment (it defaults to ). After each batch, database stats are measured and stored in the folder. The option is required and specifies the number of records per batch.
Results: The results are saved in and . To generate the fetch and insert query performance graph shown in Fig. <ref> from the results in , run . This creates a PNG file at .
(E3): Run a Traceback (Optional)
Preparation: Start the Docker services and log in to the Docker container as described in E1. Visit in your browser, and execute the SQL command . Refer to the E2 preparation section for instructions on executing the SQL query. The results from this query are the calls whose ciphertexts have been submitted to the RS.
Execution: Run the following command: . Replace , , and with the values from any row in the query results obtained in the preparation step above.
Results: This command will:
* Generate call labels within the [ts - t_max, ts + t_max] range
* Request authorization signatures from the TA
* Retrieve ciphertexts from the RS
* Decrypt and analyze the records to determine the origin and call path.
* Display the results to your console.
§.§ Notes on Reusability
Our implementation includes Docker services defined in the file. Once the compose-up command is running, the following services are exposed via :
* The Group Management server runs on . The implementation is defined in .
* The Label Generation server runs on . The implementation is defined in .
* The Trace Authorization server runs on . The implementation is defined in .
* The Record Store server runs on . The implementation is defined in .
For more information on the commands available for customizing our implementation, please refer to our GitHub repository.
§.§ Version
Based on the LaTeX template for Artifact Evaluation V20220926.
|
http://arxiv.org/abs/2409.03504v1 | 20240905131801 | HGAMN: Heterogeneous Graph Attention Matching Network for Multilingual POI Retrieval at Baidu Maps | [
"Jizhou Huang",
"Haifeng Wang",
"Yibo Sun",
"Miao Fan",
"Zhengjie Huang",
"Chunyuan Yuan",
"Yawen Li"
] | cs.IR | [
"cs.IR"
] |
Baidu Inc., Beijing, China
^†Beijing University of Posts and Telecommunications
huangjizhou01, wanghaifeng, sunyibo, fanmiao, huangzhengjie, [email protected];[email protected]
§ ABSTRACT
The increasing interest in international travel has raised the demand for retrieving points of interest (POIs) in multiple languages. This is especially true when finding local venues such as restaurants and scenic spots in unfamiliar languages while traveling abroad. Multilingual POI retrieval, enabling users to find desired POIs in a demanded language using queries in numerous languages, has become an indispensable feature of today's global map applications such as Baidu Maps. This task is non-trivial because of two key challenges: (1) visiting sparsity and (2) multilingual query-POI matching. To this end, we propose a Heterogeneous Graph Attention Matching Network (HGAMN) to concurrently address both challenges. Specifically, we construct a heterogeneous graph that contains two types of nodes: POI nodes and query nodes, using the search logs of Baidu Maps. First, to alleviate challenge #1, we construct edges between different POI nodes to link the low-frequency POIs with the high-frequency ones, which enables the transfer of knowledge from the latter to the former. Second, to mitigate challenge #2, we construct edges between POI and query nodes based on the co-occurrences between queries and POIs, where queries in different languages and formulations can be aggregated for individual POIs. Moreover, we develop an attention-based network to jointly learn node representations of the heterogeneous graph and further design a cross-attention module to fuse the representations of both types of nodes for query-POI relevance scoring. In this way, the relevance ranking between multilingual queries and POIs with different popularity can be better handled. Extensive experiments conducted on large-scale real-world datasets from Baidu Maps demonstrate the superiority and effectiveness of HGAMN. In addition, HGAMN has already been deployed in production at Baidu Maps, and it successfully keeps serving hundreds of millions of requests every day. Compared with the previously deployed model, HGAMN achieves significant performance improvements, which confirms that HGAMN is a practical and robust solution for large-scale, real-world multilingual POI retrieval services.
[500]Information systems Mobile information processing systems
[500]Information systems Information retrieval query processing
HGAMN: Heterogeneous Graph Attention Matching Network for Multilingual POI Retrieval at Baidu Maps
Jizhou Huang,
Haifeng Wang,
Yibo Sun,
Miao Fan,
Zhengjie Huang,
Chunyuan Yuan,
Yawen Li
===========================================================================================================
§ INTRODUCTION
As one of the key components of the search engines in almost all global map applications, such as Baidu Maps, multilingual POI retrieval plays a significant role in providing on-demand map services, as the retrieved results directly influence the success or failure of routing and navigation and hence impact the long-term user experience. For the 169 million Chinese tourists who traveled abroad in 2019 <cit.>, Baidu Maps, which covers more than 150 million POIs in 200 countries and territories worldwide, is their first choice to find specific locations and navigate to desired destinations. Figure <ref> shows an example of the multilingual POI retrieval feature at Baidu Maps, where the query “Tokyo塔” consists of an English word “Tokyo” and a Chinese character “塔”, while the name of the retrieved POI is composed of the English words “Tokyo Tower” or the Japanese words “東京タワー”. To enable users who are traveling abroad to obtain their desired POIs effectively when finding local venues in unfamiliar languages and areas, it is crucial for a multilingual POI retriever to fill the gaps between queries and POIs in different languages and formulations.
To build an effective multilingual POI retriever in both the academic and industrial fields, we must address two key challenges:
* Visiting Sparsity.
To the best of our knowledge, existing approaches on multilingual POI retrieval for industrial use mainly leverage large-scale user click logs for query-POI relevance scoring.
However, visits are highly sparse across the 150 million POIs at Baidu Maps.
We empirically study a large-scale search log of Baidu Maps, which contains billions of search records. Statistics show that only 6.4% of the POIs have been clicked by one or more users.
The effectiveness of a POI retrieval model would significantly decline when handling the majority of POIs that have sparse click logs.
* Multilingual Query-POI Matching. In real applications, most of the users search the overseas POIs by their native languages, which are more likely to be inconsistent with the languages of the target POIs. For example, a Chinese user may search the “Tokyo Tower” located in Japan using queries composed of Chinese words, meanwhile, the information of this POI is probably maintained in Japanese or English. As a result, a simple literal matching method cannot meet the demands of such cross-language retrieval. Moreover, queries are sometimes mixed keyboard inputs of multi-languages (e.g., English and Japanese, Chinese and Pinyin Alphabets), which further necessitates multilingual POI retrieval.
In this paper, we present our recent efforts in designing and implementing an effective multilingual POI retrieval framework, which has already been deployed in production at Baidu Maps and has achieved great success in addressing both problems, as illustrated by Figure <ref>.
The framework can provide a data sparsity-tolerant multilingual POI retrieval function, which facilitates tens of millions of users to find their desired POIs every day.
This new framework is powered by a Heterogeneous Graph Attention Matching Network (HGAMN).
Specifically, we first construct a heterogeneous graph that contains two types of nodes: POI node and query node using the search logs of Baidu Maps.
In this graph, to alleviate the visiting sparsity problem, we construct edges between different POI nodes to
link the low-frequency POIs with the high-frequency ones, which enables the transfer of knowledge from the latter to the former.
To address the multilingual query-POI matching challenge, we construct edges between POI and query nodes based on the co-occurrences between queries and POIs, where queries in different languages and formulations can be aggregated for individual POIs.
Upon the constructed graph, we design an attention-based network to learn the representations of POI and query nodes.
Then, we use a multi-source information learning module to learn the location and multilingual text representations of the queries and POIs.
Finally, we fuse the node representations of a POI and its linked queries via a cross-attention module and use the fused representation to calculate the relevance score between the user's query and a candidate POI.
To facilitate the model training, we apply an in-batch negative sampling strategy <cit.> to produce more sample pairs and increase the number of training examples.
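The in-batch strategy reuses, for each query, the clicked POIs of the other queries in the same mini-batch as negatives. A minimal sketch is given below; PyTorch is used as an illustrative stand-in for the production implementation, and the temperature value is an assumption.

```python
# Minimal sketch of in-batch negative sampling for query-POI relevance training.
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_emb, poi_emb, temperature=0.05):
    """query_emb, poi_emb: [batch, dim]; row i of poi_emb is the clicked POI for query i."""
    query_emb = F.normalize(query_emb, dim=-1)
    poi_emb = F.normalize(poi_emb, dim=-1)
    scores = query_emb @ poi_emb.t() / temperature    # [batch, batch] similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)            # diagonal = positives, rest = negatives

loss = in_batch_negative_loss(torch.randn(32, 128), torch.randn(32, 128))
```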
We evaluate HGAMN both offline and online using large-scale real-world datasets. For offline evaluation, the training and test sets consist of tens of millions of search records for several months, covering hundreds of cities and tens of millions of POIs worldwide. Experimental results show that HGAMN achieves substantial (absolute) improvements compared with several mainstream methods. For online evaluation, we launch our framework online to serve a portion of the search traffic at Baidu Maps. A/B testing was conducted between HGAMN and the previously deployed models. Experimental results show that the improvements are consistent with those obtained by the offline evaluation.
The main contributions can be summarized as follows:
* Potential impact: We propose an end-to-end neural framework, named HGAMN, as an industrial solution to the multilingual POI retrieval task in global map applications. In addition, this framework has already been deployed in production at Baidu Maps, and it successfully keeps serving hundreds of millions of POI search requests every day.
* Novelty: The design of HGAMN is driven by the novel idea that addresses the data sparsity problem and the multilingual matching problem by enhancing the representations of POIs via a heterogeneous graph.
* Technical quality: We evaluate HGAMN both offline and online using large-scale real-world datasets. Extensive experimental results show that our framework achieves significant improvements on multiple evaluation metrics compared with several mainstream methods.
* Reproducibility:
We have made the source code publicly available at <https://github.com/PaddlePaddle/Research/tree/master/ST_DM/KDD2021-HGAMN/>.
§ HGAMN
HGAMN consists of three modules: multi-source information learning module, heterogeneous graph learning module, and POI ranker module. First, we feed a query, the candidate POIs, and the historical queries to the multi-source information learning module to learn the text and location representations of them. Then, we construct the heterogeneous graph of different POIs and historical queries. The constructed graph enables us to learn the POI representations from it by the heterogeneous graph learning module. Finally, we calculate the relevance score between the representations of the query and the candidate POIs by the POI ranker module. Figure <ref> shows the architecture of HGAMN. Subsequently, we introduce them in detail.
§.§ Multi-Source Information Learning
Unlike traditional text retrieval, POI retrieval in map services mainly measures the relevance between a query and POIs rather than plain text. Besides its name, a typical POI also contains other multi-sourced and heterogeneous information such as the address, category, and GPS coordinates. Utilizing such information can facilitate retrieving more relevant POIs. Here we mainly introduce the location and text representations of a query q ∈𝒬 and a POI P_i ∈𝒫, where 𝒬 and 𝒫 denote a set of query and POI, respectively.
§.§.§ GPS Encoding
POI's GPS coordinates are numerical pairs consisting of longitude and latitude. However, in the online system, the coordinates are usually stored as a Geohash string because of its favorable properties: (1) it can be used directly to index POIs, and (2) it makes computing the distance between two POIs convenient.
Instead of directly taking this numerical pair as a 2-dimensional feature vector, we use the Geohash algorithm <cit.> to encode the geographic coordinates into a short string of letters and digits. Specifically, given the latitude x_v and longitude y_v of a POI, the Geohash algorithm is performed as follows:
s_GPS = 𝐆𝐞𝐨𝐡𝐚𝐬𝐡((x_v, y_v)) ,
where the length |s_GPS| ∈ [1, 12].
Given the Geohash string s_GPS=“wx4g09np9p”, we split the string into a character sequence and add `[PAD]' at the beginning of the sequence if its length is less than 12, i.e., X = [`[PAD]', `[PAD]', `w', `x', `4', `g', `0', `9', `n', `p', `9', `p' ]. Then, we transform the characters into character embeddings 𝐗∈ℝ^12 × d_c, where d_c = 64 is the dimension of the character embedding.
An essential property of a Geohash string is that POIs with a longer common prefix are closer to each other in geographic distance. Thus, the Geohash string is order-sensitive. To encode this property, we utilize a bidirectional gated recurrent unit (Bi-GRU) to encode the character embeddings, which is formulated by:
𝐡_t = [𝐆𝐑𝐔^→(𝐗_t); 𝐆𝐑𝐔^←(𝐗_t)] ,
where the two terms denote the forward and backward GRU hidden states, respectively.
The last state 𝐡_12 is used as the representation of the POI's GPS. We use this module to transform each POI's GPS coordinates into an embedding, and obtain an embedding matrix 𝐆∈ℝ^|𝒫| × d, where |𝒫| denotes the size of 𝒫.
We regard a query's location as the place where the user is typing in the query. Similarly, we can obtain the query's location representation 𝐆_u according to the user's GPS coordinates.
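As a concrete illustration, below is a minimal PyTorch sketch of the GPS encoder described above: Geohash characters are left-padded to length 12, embedded, and passed through a Bi-GRU whose last state serves as the location representation. The class name, hyper-parameter defaults, and the assumption that Geohash strings are pre-computed are ours, not the production implementation.

```python
import torch
import torch.nn as nn

GEOHASH_ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz"          # Geohash base-32 characters
CHAR2ID = {c: i + 1 for i, c in enumerate(GEOHASH_ALPHABET)}   # index 0 is [PAD]

class GPSEncoder(nn.Module):
    def __init__(self, d_c: int = 64, d: int = 128, max_len: int = 12):
        super().__init__()
        self.max_len = max_len
        self.char_emb = nn.Embedding(len(GEOHASH_ALPHABET) + 1, d_c, padding_idx=0)
        # Bidirectional GRU; the concatenated last states give a d-dimensional vector.
        self.gru = nn.GRU(d_c, d // 2, batch_first=True, bidirectional=True)

    def forward(self, geohash_strings):
        ids = []
        for s in geohash_strings:                      # left-pad with [PAD] to length 12
            chars = [CHAR2ID[c] for c in s[: self.max_len]]
            ids.append([0] * (self.max_len - len(chars)) + chars)
        x = self.char_emb(torch.tensor(ids))           # (B, 12, d_c)
        out, _ = self.gru(x)                           # (B, 12, d)
        return out[:, -1, :]                           # last state h_12 as the GPS embedding

enc = GPSEncoder()
G = enc(["wx4g09np9p", "wx4g0b1xyz"])                  # -> (2, 128) location embeddings
```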
§.§.§ Text Encoding
For multilingual POI retrieval in map services, the text data such as queries, POI names, and POI addresses are critical for improving the retrieval performance.
To better handle the multilingual matching problem, we take a sequence (such as a query or POI name) consisting of multilingual characters and alphabets as input and adopt a pre-trained language model to obtain its representation.
Specifically, we use the pre-trained language model ERNIE <cit.> as the basic component, which shows better performance on extracting multilingual features and semantic information.
We directly utilize ERNIE to obtain q's text representation 𝐪 by:
𝐪 = 𝐄𝐑𝐍𝐈𝐄([c_1, c_2, …, c_L]) ,
where [c_1, c_2, …, c_L] is the character sequence of the query.
After analyzing the query logs, we found that a query's location helps retrieve the desired POI because users usually demand the nearest target. To utilize such location features, we combine a query's location representation with its text representation. Thus, the final representation of a query is represented as: 𝐪 = 𝐪 + 𝐆_u.
For each POI P_i, we use 𝐐_P_i to denote the matrix of the representations of the top-4 queries associated with it, i.e., 𝐐_P_i = [𝐪_1, 𝐪_2, 𝐪_3, 𝐪_4].
For each POI P_i, we concatenate its name and address as a long character sequence and apply ERNIE to extract its text representation. Similarly, we combine P_i's location and text representations to obtain its final representation 𝐏_i ∈ℝ^1 × d by:
𝐏_i = 𝐄𝐑𝐍𝐈𝐄([x_1, x_2, …, x_L, a_1, a_2, …, a_L]) + 𝐆_i ,
where [x_1, x_2, …, x_L] is the character sequence of P_i's name and [a_1, a_2, …, a_L] is the character sequence of P_i's address. 𝐆_i is the GPS encoding of P_i. We stack each POI's embedding together as a POI embedding matrix 𝐏∈ℝ^|𝒫| × d.
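The text-encoding step can be sketched as follows. Since the production ERNIE model is not assumed to be available here, we substitute a generic multilingual encoder from HuggingFace Transformers as a stand-in; the model name, the [CLS]-vector pooling, and the commented-out projection that aligns text and GPS dimensions are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Stand-in multilingual encoder (the paper uses ERNIE).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def encode_text(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=30,
                      return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0, :]            # [CLS] vector as the text representation

# Query: text embedding plus the user's GPS embedding (from the previous snippet);
# in practice a linear projection would align the text and GPS dimensions first.
q_text = encode_text(["carrefour near me"])           # (1, 768)
# q = project(q_text) + G_u

# POI: name and address concatenated into one character sequence, plus the POI's GPS embedding.
p_text = encode_text(["Carrefour Market 12 Rue de Rivoli Paris"])
# P_i = project(p_text) + G_i
```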
§.§ Heterogeneous Graph Learning
Here, we introduce how to construct the heterogeneous graph from search logs and how to learn POI representations from the heterogeneous graph.
§.§.§ Graph Construction
Multilingual POI retrieval faces the visiting sparsity and multilingual matching problems. To alleviate the visiting sparsity problem, we construct edges between different POI nodes to link the low-frequency POIs with the high-frequency ones, which enables the transfer of knowledge from the latter to the former.
Furthermore, to mitigate the multilingual matching problem, we construct edges between POI and query nodes based on the co-occurrences between queries and POIs. Thus, the graph can aggregate queries in different languages and formulations for each POI.
As shown by Figure <ref>, both types of nodes and edges constitute a heterogeneous graph 𝒢(𝒱, ℰ).
The construction of the heterogeneous graph is as follows. A user's search behaviors produce a visited POI sequence in the search logs. We extract POI-POI relations from these historical search sequences. A search sequence is a period of time consisting of “a sequence of interactions” for a similar information need <cit.>, and can therefore reflect the similarity of successively visited POIs. To capture the similarity between two POIs, we define their co-occurrence frequency in the search sequences as the weight of the graph's edge.
To extract this kind of relation, a 2-gram sliding window is performed over the search sequences. We apply the pointwise mutual information (PMI) to calculate the weight of the edges by:
𝐀^pp_ij = PMI(P_i, P_j) = log Pr(P_i, P_j)/( Pr(P_i) · Pr(P_j) ) ,
Pr(P_i, P_j) = #W(P_i, P_j)/#W ,
Pr(P_i) = #W(P_i)/#W ,
where #W(P_i, P_j) is the number of sliding windows that contain both P_i and P_j. #W denotes the number of sliding windows, and #W(P_i) is the number of sliding windows that contain P_i.
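A small self-contained sketch of this POI-POI edge construction is given below: a 2-gram sliding window over the search sequences yields window counts, from which the PMI weights defined above are computed. The input format (lists of POI ids per session) is an assumption.

```python
import math
from collections import Counter

def poi_poi_pmi(search_sequences):
    """search_sequences: list of lists of POI ids visited within one search session."""
    n_windows = 0
    single = Counter()     # W(P_i): number of windows containing P_i
    pair = Counter()       # W(P_i, P_j): number of windows containing both
    for seq in search_sequences:
        for a, b in zip(seq, seq[1:]):           # 2-gram sliding window
            if a == b:
                continue
            n_windows += 1
            single[a] += 1
            single[b] += 1
            pair[frozenset((a, b))] += 1
    edges = {}
    for key, c_ab in pair.items():
        a, b = tuple(key)
        pmi = math.log((c_ab / n_windows) /
                       ((single[a] / n_windows) * (single[b] / n_windows)))
        edges[(a, b)] = pmi                      # A^pp weight of the (P_i, P_j) edge
    return edges

edges = poi_poi_pmi([["p1", "p2", "p3"], ["p1", "p2"], ["p2", "p4"]])
```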
After typing in a query, a user would click on the desired POI from a list of ranked POIs that the POI search engine suggested.
This process produces a large set of query-POI pairs in which the multilingual expressions of each POI not only effectively mitigate the multilingual matching problem, but also bridge the semantic gap between queries and POIs. For example, users usually make spelling errors or use abbreviations, which would lead to poor results when directly matching query and POI text information. Motivated by this observation, we try to model the relations between historical queries and POIs.
Specifically, we select the top-4 searched queries for each POI and connect an edge for every POI and its historical query nodes for POI-Query relations. In this way, we can build connections between POIs and Queries. Formally, the adjacency matrix can be formulated as follows:
𝐀^pq_ij = c_i,j/∑_k=1^|𝒬_P_i| c_i,k ,
where c_i,j is the frequency of query-POI pair (q_j, P_i), q_j ∈𝒬_P_i.
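The POI-Query edges can be built analogously; the sketch below keeps the top-4 historical queries per POI and row-normalizes their click counts as in the equation above. The input format is again an assumption.

```python
from collections import Counter, defaultdict

def poi_query_weights(click_pairs, top_k=4):
    """click_pairs: iterable of (query, poi_id) pairs extracted from search logs."""
    counts = defaultdict(Counter)
    for q, p in click_pairs:
        counts[p][q] += 1
    weights = {}
    for p, ctr in counts.items():
        top = ctr.most_common(top_k)                  # top-4 historical queries per POI
        total = sum(c for _, c in top)
        weights[p] = {q: c / total for q, c in top}   # A^pq row for POI p
    return weights

w = poi_query_weights([("kfc", "p1"), ("kentucky fried chicken", "p1"), ("kfc", "p1")])
# -> {"p1": {"kfc": 0.667, "kentucky fried chicken": 0.333}}
```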
§.§.§ Heterogeneous Graph Learning
To learn representations of POIs and queries from the heterogeneous graph, we use an attention-based graph neural network that aggregates neighbors to generate a distributed representation of each node, which enables us to learn a high-level hidden representation for each vertex.
As shown by Figure <ref>, there are two types of nodes (POI node and query node) and two types of edges (POI-POI correlation and POI-Query semantic relation) in the graph. For a certain node n_i, its initial embedding: 𝐧_i ∈ℝ^d_n, is randomly initialized by a uniform distribution.
First, we introduce how to produce edge embeddings for the heterogeneous graph. We apply an aggregator proposed in GraphSAGE <cit.> to integrate neighbor node embeddings for edge embedding 𝐞_j:
𝐞^(k)_j = σ( 𝐦𝐚𝐱({𝐖^(k)𝐧_t, n_t∈𝒩_j } ) ) ,
where 𝐖^(k) is the trainable weight of k-th layer, σ(·) is the sigmoid activation function. 𝒩_j is the neighbor set of edge e_j.
Suppose the node n_i has m edges connected with it, we concatenate edge embeddings for the node n_i as 𝐄_i ∈ℝ^m × d_e:
𝐄_i = (𝐞_i,1, 𝐞_i,2, …, 𝐞_i,m) .
Next, we apply the cross attention mechanism to fuse 𝐄_i into a vector ẽ_i ∈ℝ^d_e by:
α_i = 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(𝐧_i tanh(𝐖_r 𝐄^T_i) ⊙𝐀^(r)_i ) ,
ẽ_i = α_i^T𝐄_i ,
where ⊙ denotes the element-wise multiplication operation. α_i∈ℝ^m is the coefficients. 𝐖_r ∈ℝ^d_n × d_e is a trainable weight for edge type r. 𝐀^(r)_i∈{𝐀_i,1:m^pp, 𝐀_i,1:m^pq} denotes the weights of m edges which belong to the type r and connect with n_i.
Then, the overall node representation of n_i can be computed by:
𝐧_i = 𝐧_i + 𝐖_1ẽ_i + 𝐖_2 𝐧^'_i ,
where 𝐖_1 ∈ℝ^d_n × d_e, 𝐖_2 ∈ℝ^d_n × d are two trainable parameters and 𝐧^'_i ∈{𝐐_P_i,𝐏_i }.
𝐐_P_i and 𝐏_i denote P_i's associated query representations and its POI embedding, respectively.
Finally, the graph learning module produces the representations of all POIs: 𝐏∈ℝ^|𝒫| × d_n, and the representations of all queries: 𝐐∈ℝ^|𝒬| × d_n.
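To make the update rule concrete, the following PyTorch sketch performs one heterogeneous-graph update for a single node: neighbor embeddings are max-aggregated into edge embeddings, fused by cross attention weighted with the edge-type adjacency entries, and added back to the node together with its multi-source feature embedding. Shapes, initialization, and the toy inputs are assumptions; a production implementation would batch this over the full graph.

```python
import torch
import torch.nn as nn

d_n, d_e, d = 128, 128, 128

W_k = nn.Linear(d_n, d_e, bias=False)                 # GraphSAGE-style aggregator W^(k)
W_r = nn.Parameter(torch.randn(d_n, d_e) * 0.01)      # attention weight for one edge type
W_1 = nn.Linear(d_e, d_n, bias=False)
W_2 = nn.Linear(d, d_n, bias=False)

def edge_embedding(neighbor_nodes):
    # e_j = sigmoid( max over neighbors of W^(k) n_t )
    return torch.sigmoid(W_k(neighbor_nodes).max(dim=0).values)

def update_node(n_i, edge_neighbors, A_i, n_prime_i):
    """n_i: (d_n,) node embedding; edge_neighbors: one (m_j, d_n) neighbor set per edge;
    A_i: (m,) adjacency weights of the m edges; n_prime_i: (d,) multi-source feature."""
    E_i = torch.stack([edge_embedding(nb) for nb in edge_neighbors])   # (m, d_e)
    scores = n_i @ torch.tanh(W_r @ E_i.T)                             # (m,)
    alpha = torch.softmax(scores * A_i, dim=0)                         # cross attention
    e_tilde = alpha @ E_i                                              # (d_e,)
    return n_i + W_1(e_tilde) + W_2(n_prime_i)

# Toy usage: a POI node with two edges, each edge aggregating two neighbor nodes.
n_i = torch.randn(d_n)
neighbors = [torch.randn(2, d_n), torch.randn(2, d_n)]
new_n_i = update_node(n_i, neighbors, torch.tensor([0.7, 0.3]), torch.randn(d))
```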
§.§ POI Ranker
The POI ranker module calculates the relevance between a query q and a candidate POI P_i based on the learned representations.
This module also considers P_i's historical queries 𝒬_P_i when predicting the relevance since 𝒬_P_i conveys substantial evidence to bridge the semantic gap between P_i and q.
Both P_i and 𝒬_P_i contain essential information for calculating the relevance, but their importance is different. How to automatically determine the importance of them for measuring the relevance is still a challenge.
In this paper, we apply an attention module to automatically determine their importance and fuse them as a feature vector. Specifically, we regard the representation of q as the key and the representations of P_i and 𝒬_P_i as the values. We stack the representations of P_i and 𝒬_P_i as a new matrix 𝐌 = [𝐏_i, 𝐐_P_i ]. Each attention weight ϕ_k is defined as follows:
s_k = 𝐖_4tanh ([𝐪; 𝐌_k] 𝐖_3 + b) ,
ϕ_k = exp(s_k)/∑_j=1^|𝐌| exp(s_j) ,
where 𝐖_3 ∈ℝ^2d_n × d_n and 𝐖_4 ∈ℝ^1 × d_n are trainable matrices.
We use the attention weight to fuse the representations of P_i and 𝒬_P_i by:
𝐦 = ∑_k=1^|𝐌|ϕ_k 𝐌_k ,
where 𝐦∈ℝ^d_n is the fused POI representation.
Finally, we concatenate 𝐪 with 𝐦, and feed them into the output softmax layer for relevance calculation by:
Pr(c_i|q, P_i, 𝒢) = 𝐬𝐨𝐟𝐭𝐦𝐚𝐱([𝐪; 𝐦] 𝐖_v) ,
where 𝐖_v ∈ℝ^2d_n × 2 is the trainable parameter, and Pr(c_i|q, P_i, 𝒢) is the probability vector of a category c_i ∈{0, 1}. The category 1 (0) indicates that P_i is relevant (irrelevant) to q. We use the output probability of category 1 as the score for ranking.
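The ranker can be sketched as below: the query embedding attends over the stack [𝐏_i; 𝐐_P_i], the fused vector is concatenated with the query, and a two-way softmax yields the relevance probability. Dimensions and the toy inputs are assumptions.

```python
import torch
import torch.nn as nn

d_n = 128
W_3 = nn.Linear(2 * d_n, d_n)            # includes the bias term b
W_4 = nn.Linear(d_n, 1, bias=False)
W_v = nn.Linear(2 * d_n, 2, bias=False)

def relevance(q, M):
    """q: (d_n,) query embedding; M: (5, d_n) stack of P_i and its top-4 query vectors."""
    keys = torch.cat([q.unsqueeze(0).expand(M.size(0), -1), M], dim=1)   # [q; M_k] rows
    s = W_4(torch.tanh(W_3(keys))).squeeze(-1)                            # (5,)
    phi = torch.softmax(s, dim=0)
    m = phi @ M                                                           # fused POI vector
    logits = W_v(torch.cat([q, m]))                                       # (2,)
    return torch.softmax(logits, dim=0)[1]                                # Pr(relevant)

score = relevance(torch.randn(d_n), torch.randn(5, d_n))
```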
§.§ Model Training
We train the model in a supervised manner by minimizing the cross-entropy loss of relevance classification described above, whose loss function is defined as follows:
ℒ = - ∑_i=1^|𝒫| y_i𝐥𝐨𝐠 Pr(c_i|q, P_i, 𝒢) ,
where |𝒫| denotes the total number of training POIs, and y_i is the label of the training instance for POI P_i.
To increase the number of training instances inside each batch and improve the computing efficiency, we apply an in-batch negative sampling strategy <cit.>. Specifically, assuming that we have B queries in a mini-batch, each one is associated with a relevant POI. Let 𝐐̂ and 𝐏̂ be the (B × d) matrix of query and POI embeddings in a batch of size B. 𝐒 = 𝐐̂𝐏̂^T is a (B × B) matrix of similarity scores, where each row corresponds to a query, paired with B POIs. In this way, we reuse computation and effectively train on B^2 (q_m, P_n) query-POI pairs in each batch. Any (q_m, P_n) pair is a positive example when m = n, and negative otherwise. This procedure creates B training instances in each batch, where there are B - 1 negative POIs for each query.
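A minimal sketch of this in-batch loss, assuming dot-product similarity, is:

```python
import torch
import torch.nn.functional as F

def in_batch_loss(Q_hat, P_hat):
    """Q_hat, P_hat: (B, d) query and POI embeddings of one mini-batch."""
    S = Q_hat @ P_hat.T                    # (B, B) similarity scores
    labels = torch.arange(S.size(0))       # the positive POI of query m sits in column m
    return F.cross_entropy(S, labels)      # B-1 in-batch negatives per query

loss = in_batch_loss(torch.randn(64, 128), torch.randn(64, 128))
```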
§ EXPERIMENTS
To thoroughly test HGAMN, we conduct extensive experiments in both offline and online settings.
§.§ Comparison Models
We evaluate HGAMN against the following four groups of methods. Furthermore, to understand the relative importance of several facets of HGAMN, variations of this model with different settings are implemented for comparison.
§.§.§ Text Matching Group
* DSSM <cit.> is a widely-used text matching model in which a deep neural network is employed to predict the relevance between keywords and documents. In our experiments, for all DSSM-based models, we treat queries as keywords and POI names and addresses as documents.
* ARC-I <cit.> uses pre-trained word embeddings to represent the text. It then uses a convolutional network to learn the semantic features and feeds the feature vectors to a multi-layer perceptron for prediction.
* Conv-DSSM <cit.> extends DSSM by adding extra convolutional layers to extract sentence-level features from n-gram word representations.
§.§.§ Query-POI Matching Group
* DPSM <cit.> is a POI latent semantic model based on neural networks, which extracts query and POI semantic features for the similarity calculation.
* PALM <cit.> is an attention-based neural network. It uses semantic similarity and geographic correlation to quantify the query-POI relevance.
§.§.§ Our Model and Its Variants
* HGAMN is the complete model defined in Section <ref>. In this setting, we use it independently as a POI retriever to return the desired POIs.
* HGAMN w/o POI-POI Graph. In this setting, we remove the edges between different POIs in the graph learning module described in Section <ref>. The removed part is designed to mitigate the visiting sparsity problem.
* HGAMN w/o POI-Query Graph. In this setting, we remove the edges between different POIs and queries in the graph learning module described in Section <ref>. The removed part is designed to mitigate the multilingual matching problem.
* HGAMN w/o Heterogeneous Graph. In this setting, we remove the entire graph learning module described in Section <ref> and directly use the query and POI's representations described in section <ref> for calculation.
§.§.§ Online Model Group
* LTR is the basic model for online multilingual POI retrieval system at Baidu Maps <cit.>. It adopts GBRank <cit.> as the specific learning-to-rank model. This model mainly uses heuristic features, including the popularity of POIs, the demographic information on users, and the spatial-temporal features of each POI, such as the frequency of search on specific types of POIs at different times and locations.
* LTR + HGAMN is trained with all the features employed by LTR and the similarity feature computed by HGAMN. It is expensive to directly deploy HGAMN online to serve hundreds of millions of requests every day. For this reason, we instead use the feature generated by HGAMN offline as one of the features fed to the LTR model.
§.§ Offline Evaluation
§.§.§ Dataset
The services of Baidu Maps cover over 200 countries and territories worldwide, where POI search sessions account for about 80% of the search traffic. A POI search session refers to a sequence of interactions between a user and the POI search engine.
We collect a large number of POI search sessions from the search logs of international services at Baidu Maps for offline evaluation. Each example of the dataset consists of the query typed by the user, the POI list that the POI search engine suggested, and the exact POI that the user clicked. Table <ref> shows the statistics of the large-scale dataset sampled from one-month search logs for model training (abbr. Train), hyper-parameter tuning (abbr. Valid), and performance testing (abbr. Test).
§.§.§ Evaluation Metrics
We use several widely-used metrics in information retrieval for offline performance evaluation.
The first group of metrics, Success Rate (SR) at Top-K (SR@K), is a coarse metric that denotes the average percentage of ground-truth POIs ranked at or above position K in the ranked list provided by a POI retriever. Because of the limited space for display on mobile phones, Baidu Maps can display at most 3 POIs on the first screen when the input keyboard is launched and at most 10 POIs when the input keyboard is closed. Therefore, we consider SR@1, SR@3, and SR@10 for offline evaluation.
Another group of fine-grained metrics, including Mean Reciprocal Rank (MRR) and normalized Discounted Cumulative Gain at Top-K (nDCG@K), concerns more about the exact position where a POI retriever arranges the ground-truth POI in the returned list. We consider nDCG@1, nDCG@3, and nDCG@10 for offline evaluation due to the display limitations on mobile phones.
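For reference, straightforward implementations of these metrics under the single-relevant-POI setting might look as follows (the list-based inputs are an assumption):

```python
import math

def sr_at_k(rankings, truths, k):
    """rankings: list of ranked POI-id lists; truths: list of clicked POI ids."""
    return sum(t in r[:k] for r, t in zip(rankings, truths)) / len(truths)

def mrr(rankings, truths):
    total = sum(1.0 / (r.index(t) + 1) for r, t in zip(rankings, truths) if t in r)
    return total / len(truths)

def ndcg_at_k(rankings, truths, k):
    # With a single relevant POI per query the ideal DCG is 1, so nDCG@k reduces
    # to 1/log2(rank + 1) whenever the ground truth appears in the top k.
    total = sum(1.0 / math.log2(r[:k].index(t) + 2)
                for r, t in zip(rankings, truths) if t in r[:k])
    return total / len(truths)
```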
§.§.§ Model Configuration
The dimensionality of POI and query embeddings d is set to 128. The sequence length of query text, POI name, and POI address is set to 30. The POI graph learning module consists of two graph attention layers with output dimensionality of d=128 and d'=256, respectively. The number of heads in the multi-head attention K is chosen from {1, 2, …, 10 }, and finally, set to 4.
During training, we use Adam optimizer <cit.>, with the learning rate initialized to 0.001 and gradually decreased during the process of training.
To prevent overfitting, we use the dropout strategy with a dropout rate of 0.5. The maximum training epoch is set to 40, and the batch size of the training set is set to 64.
§.§.§ Experimental Results
In this section, we evaluate the effectiveness of HGAMN for the multilingual POI retrieval task. Table <ref> shows the performance of offline assessments on the models mentioned in Section <ref>. From the results, we can see that the proposed model HGAMN significantly outperforms all baseline methods on the large-scale real-world dataset. Specifically, we have the following observations.
(1) HGAMN significantly outperforms all conventional text retrieval methods (i.e., DSSM, ARC-I, and Conv-DSSM). Furthermore, the “HGAMN w/o POI-POI Graph” model also achieves better performance compared with these methods. The main reason is that the POI-Query graph is able to model multilingual features between a POI and its historical queries, which enables us to mitigate the gap between a query and the candidate POIs.
(2) Compared with recently proposed neural-based POI retrieval methods (i.e., DPSM and PALM), HGAMN achieves better performance. Although these methods combine geographic or spatial-temporal features with text representations for POI retrieval, they do not take the POI visiting sparsity problem into account, which is a critical problem in industrial map services. The POI-POI graph builds connections between low-frequency POIs and their similar high-frequency ones, which is able to transfer the abundant supervisory signals from high-frequency POIs to facilitate learning better representations of the low-frequency POIs. The results verify that HGAMN is able to effectively relieve this problem.
(3) After removing the POI-POI graph and POI-Query graph separately (“HGAMN w/o POI-POI Graph” and “HGAMN w/o POI-Query Graph”), the performance of HGAMN decays considerably compared with the complete model. This indicates that both components in HGAMN are essential for multilingual POI retrieval, and they are complementary to each other.
(4) In the last section of Table <ref>, we observe that the LTR model outperforms the single HGAMN model. The reason is that the LTR model is one of the typical industrial ranking models based on a large set of time-proven high-quality features <cit.>.
However, after adding the feature computed by HGAMN into the LTR model, the “LTR + HGAMN” model achieves significant improvements. Since it is challenging to create a new feature that is able to significantly improve the overall performance of industrial ranking models, the improvements made by “LTR + HGAMN” further confirm the effectiveness of HGAMN. This shows that HGAMN can not only be used as an individual ranking model, but also be used to obtain a single strong feature that is robust to an industrial ranking framework.
§.§ Online A/B Testing
§.§.§ Traffic of Data
Before being launched in production, we would routinely deploy the new model online and make it randomly serve 5% of the POI search traffic. During the A/B testing period, we monitor the performance of the new model and compare it with the previously deployed models. This period conventionally lasts for at least one week.
§.§.§ Experimental Results
We use SR@1, SR@3, and SR@10 as the metrics for online evaluation, which are also adopted by offline evaluation. Table <ref> shows the experimental results of the online A/B testing on different models mentioned in Section <ref>. All models were selected by the test set for offline evaluations, and we launched the best-performed ones. They are tested by 5% search traffic of Baidu Maps.
Compared with the offline evaluation results, we obtain lower results on SR@1, SR@3, and SR@10 in the online A/B testing. The main reason is that the POI lists returned by our POI search engine might be ignored entirely by a small proportion of users, mainly because they prefer directly typing in the full names of their desired POIs and then clicking the search button. In this interactive mode, Baidu Maps will directly provide users with the relevant POIs. This results in sessions in which none of the returned POIs is clicked, which may lead to much lower performance on Success Rate (SR). However, the relative improvements of these models are consistent with those obtained by the offline evaluation.
§ DISCUSSION
Here we explore the reason why HGAMN is able to boost both the offline and online performance of the POI search engine. Generally speaking, as an end-to-end framework for multilingual POI retrieval, HGAMN can produce an intermediate feature vector, i.e., the graph-based representation of a POI (denoted by “HGAMN”). The probability from the classifier, taking the vector as input, is a reliable indicator to decide the rank order of candidate POIs in the model. The significance of this indicator has already been proved by the experimental results of both offline and online evaluations, which are reported by Table <ref> and Table <ref>, respectively.
Moreover, we are curious about how much this probability from the intermediate vector, as a feature, can contribute to the GBRank-based multilingual POI retrieval model LTR, which has kept serving online in the search engine of international service at Baidu Maps. From the perspective of industrial practice, we need to figure out the relative importance of a proposed feature among all features leveraged by the GBRank model for multilingual POI retrieval.
GBRank can provide a score that indicates how useful a feature was in constructing the boosted decision trees within the model, which can help investigate the impact of different features (e.g., <cit.>). The more a feature is used to make critical decisions for decision trees, the higher its relative importance is allocated.
Figure <ref> illustrates the weights of the top-10 most important features in “LTR + HGAMN”, and the total weight of them is 71.67%.
Among all features, the importance of the graph-based representation (i.e., f1, colored in green) ranks 1^st with the weight of 16.89%.
This further demonstrates that HGAMN is able to significantly improve the effectiveness of multilingual POI retrieval.
§ RELATED WORK
Here we briefly review the closely related work in the fields of text retrieval, POI retrieval, and heterogeneous graph neural network.
§.§ Text Retrieval
Text retrieval aims to provide the most relevant documents for a query <cit.>. There are three conventional categories of methods for text retrieval: pointwise (such as logistic regression <cit.>), pairwise (such as RankSVM <cit.> and RankBoost <cit.>), and listwise (such as ListNet <cit.> and AdaRank <cit.>). The major difference between them lies in the input document space, output space, and loss function. These methods require manually designed features. However, such features may be sparse and insufficient to effectively encode the semantic information of queries and documents. Moreover, designing effective features is usually time-consuming and heavily relies on expert knowledge in particular areas <cit.>.
With the rapid development of deep learning, researchers adopt neural networks to automatically learn representations for text retrieval. For example, Huang et al. <cit.> propose the DSSM model to map the query and the document into a semantic space and treat the similarity between the two embeddings as the relevance score. Subsequently, Conv-DSSM <cit.> and LSTM-DSSM <cit.> are proposed to improve the semantic feature extraction ability of DSSM. Pang et al. <cit.> propose to model text matching as an image recognition problem and employ a convolutional neural network to extract matching features. DeepRank <cit.> further simulates the human judgment process to capture important features.
Multilingual POI retrieval task is different from text retrieval task in that it requires not only capturing semantic similarities between text data but also addressing the cross-language matching problem and textual-geographic matching problem (i.e., computing the relevance between a query and a POI by taking both text data and geolocations into consideration), which are generally not required in text retrieval task.
§.§ POI Retrieval
There is a growing body of work that explores and assesses POI retrieval <cit.>. Here, we briefly review recent attempts at applying neural networks to this task. To address the mistyping and alias inquiry problems, Zhao et al. <cit.> propose a POI latent semantic model based on deep learning, which can effectively extract query and POI features for similarity calculation. A follow-up study <cit.> proposes a personalized POI retrieval model that can also provide time- and geography-aware results. Furthermore, geographic information <cit.> and spatial-temporal factors <cit.> have been considered in recent work.
However, little work has considered the problems of visiting sparsity and multilingual query-POI matching, which are two main challenges that must be tackled for POI retrieval in global map applications such as Baidu Maps. To address both problems, we first encode a POI's multi-source information to enrich its representation. Then, we establish the relations among different POIs and queries by constructing a heterogeneous graph. Finally, we produce enhanced representations of queries and POIs via the heterogeneous graph, which has a significant effect on POI retrieval performance.
§.§ Heterogeneous Graph Neural Network
The heterogeneous graph, which consists of multiple types of nodes or edges, is ubiquitous in real-world applications. Previous studies <cit.> focus on different aspects of the heterogeneous graph to learn node representations. For example, Sun et al. <cit.> propose a meta-graph-based network embedding model, which simultaneously considers the hidden relations of all meta information of a meta-graph. Wang et al. <cit.> propose a heterogeneous graph neural network, which utilizes hierarchical attention, including node-level and semantic-level attentions, to learn node representations from meta-path based neighbors. Cen et al. <cit.> propose a unified attributed multiplex heterogeneous network to solve the multiplex heterogeneous graph embedding problem with both transductive and inductive settings. Zhu et al. <cit.> propose a heterogeneous graph convolution network to directly learn the complex relational hierarchy, potentially incompatible semantics, and node-context relational semantics.
Inspired by the recent success in heterogeneous graph representation learning, we build a heterogeneous graph from search logs, which links the low-frequency POIs with the high-frequency ones and aggregates queries in different languages and formulations for individual POIs. As a result, the visiting sparsity and multilingual matching problem can be effectively alleviated by enhancing the representations of queries and POIs via the heterogeneous graph.
§ CONCLUSIONS AND FUTURE WORK
This paper presents an industrial solution to the multilingual POI search for international services at Baidu Maps. We propose a heterogeneous graph attention matching network (HGAMN) to address the visiting sparsity and multilingual query-POI matching problems. HGAMN is composed of three modules: (1) a multi-source information learning module, which learns the text and location representations of the multilingual query, POI name, and POI address; (2) a heterogeneous graph learning module, which constructs the connections of different POIs and historical queries, and learns the node representations from the heterogeneous graph; and (3) a POI ranker module, which calculates the relevance between a query and candidate POIs.
We conduct both offline and online evaluations using large-scale real-world datasets. The experimental results show that HGAMN achieves significant improvements over several mainstream approaches, which demonstrates the effectiveness of enhancing the representations of queries and POIs via the heterogeneous graph to improve multilingual POI retrieval.
The user input habits and preferences are not taken into account in this paper. In the future, we intend to utilize these kinds of vital information for personalized POI searches. In addition, previous studies have shown that context <cit.> and explanation <cit.> can bring significant improvements in recommendation effectiveness and increase user satisfaction. As future work, we plan to investigate whether multilingual POI retrieval could benefit from the adoption of such factors.
| http://arxiv.org/abs/2409.02157v1 | 20240903180000 | An Earth-Mass Planet and a Brown Dwarf in Orbit Around a White Dwarf | ["Keming Zhang", "Weicheng Zang", "Kareem El-Badry", "Jessica R. Lu", "Joshua S. Bloom", "Eric Agol", "B. Scott Gaudi", "Quinn Konopacky", "Natalie LeBaron", "Shude Mao", "Sean Terry"] | astro-ph.EP | ["astro-ph.EP", "astro-ph.SR"] |
Terrestrial planets born beyond 1–3 AU have been theorized to avoid being engulfed during the red-giant phases of their host stars. Nevertheless, only a few gas-giant planets have been observed around white dwarfs (WDs) — the end product left behind by a red giant.
Here we report on evidence that the lens system that produced the microlensing event KMT-2020-BLG-0414 is composed of a WD orbited by an Earth-mass planet and a brown dwarf (BD) companion, as shown by the non-detection of the lens flux using Keck Adaptive Optics (AO).
From microlensing orbital motion constraints, we determine the planet to be a 1.9±0.2 Earth-mass (M_⊕) planet at a physical separation of 2.1±0.2 au from the WD during the event.
By considering the system evolutionary history, we determine the BD companion to have a projected separation of 22 au from the WD, and reject an alternative model that places the BD at 0.2 au.
Given planetary orbital expansion during the final evolutionary stages of the host star, this Earth-mass planet may have existed in an initial orbit close to 1 au, thereby offering
a glimpse into the possible survival of planet Earth in the distant future.
The ultra-high-magnification nature of the microlensing event KMT-2020-BLG-0414
(KB200414 hereafter) has previously prompted intensive photometric follow-up observations around the peak of the event on July 11, 2020.
Modeling of the densely-sampled light curve subsequently revealed a three-body lens system consisting of a low-mass-ratio planet (q∼10^-5) and a brown dwarf companion orbiting a sub-solar-mass host star <cit.>.
Owing to intrinsic microlensing degeneracies <cit.>, there exist four distinct models that explain the light-curve data equally well. Among the four models, the projected separation for the brown dwarf companion can be very close (∼0.2 au) or very wide (∼20 au), and the lens-source relative proper motion can be either in the north-east or south-east (NE/SE) directions, which is associated with distinct microlensing parallax constraints.
On the other hand, the planet properties are consistent across the four models, all of which indicate an approximately Earth-mass planet at a projected separation of around 1–2 au.
For KB200414, the mass of the primary lens star (Table 1) as inferred from the finite-source and microlensing-parallax effects indicates that it is either a main-sequence (MS) star or a WD stellar remnant.
A MS lens star is expected to have a similar apparent brightness to the microlensing source star, whose apparent brightness is known from the magnification profile.
On the other hand, a WD lens is expected to be fainter by 6–8 magnitudes, making it practically undetectable under the glare of the source star.
Therefore, the two scenarios could be distinguished by measuring the total brightness at the event location prior to, or long after the event.
OGLE-III pre-event imaging (Figure 1a) measured the total brightness at the event location to be I_ base = 18.46 ± 0.09, which implies a total blended flux of I∼19.3 on top of the unmagnified source star brightness of I∼19.1.
This blended light was originally reported by ref.<cit.> as consistent with the expected MS lens brightness (Table 1),
but may also be attributed to nearby field stars that cannot be resolved with seeing-limited imaging.
To further constrain the lens brightness, we observed the location of KB200414 in the K-short infrared pass-band (K_s; 2.146 μm) with laser-guide-star AO <cit.> on the Keck-II telescope on May 25, 2023 (UT), approximately three years after the peak of the event.
In our Keck images (Figure 1c), we measure a total brightness of K_s=16.99±0.03 at the event location within a circular aperture of radius 0.2^'', which closely matches the infrared source brightness ranging from K_s=16.95±0.06 to K_s=17.08±0.06 for the four degenerate solutions (see methods).
Our high-angular-resolution imaging reveals that the blended light in OGLE-III pre-event imaging arose primarily from field stars within 0.5^'' to the west and north-west directions (Figure 1b/c).
As shown in Table 1, our aperture photometry constrains any excess flux above the source flux to be at least around two magnitudes fainter (at the 3-sigma level) than the expected brightness of the lens star if it were on the main sequence.
Therefore, we reject the MS hypothesis and conclude that the primary lens star, i.e. the planet host, must be a WD.
The conclusion that the primary lens is a WD calls for a re-examination of the four degenerate light-curve models. We find that the two south-east (SE) solutions are unlikely as both of them would require an extremely-low-mass (ELM) WD below 0.3 M_⊙.
ELM WDs (e.g. <cit.>) are a rare class of WDs formed exclusively through binary interactions, where the companion star strips away the stellar envelope from the ELM WD progenitor via either common envelope evolution or stable mass transfer, before the progenitor star could initiate helium burning (e.g. <cit.>).
We can immediately rule out the existence of such massive companions to the lens star, as the light-curve models constrain the total lens mass as opposed to the primary lens mass in the case of close-in binaries.
It is also difficult to attribute the formation of an ELM WD to the close-in brown dwarf companion under the close-SE model, as binary evolutionary models <cit.> predict that a brown dwarf companion could only eject the envelope of the WD progenitor if it spiraled in to a much closer (≲ 0.01 au) orbit or first interacted with the progenitor when it was an AGB star, when the core mass had grown to more than ∼0.5M_⊙.
On the other hand, the two NE models do not require a compact binary formation (ELM WD) interpretation.
As the finite age of the universe sets a lower limit on the mass of WDs that can form via single-star evolution,
we impose a host-mass lower limit of M>0.45M_⊙ based on WD population statistics (e.g. <cit.>), which serves as a Bayesian prior that refines the lens system properties.
Under this additional constraint, both the close-NE and wide-NE solutions indicate an approximately 1.7–1.9M_⊕ planet at a projected separation around 2.1 au, with a host mass near 0.5M_⊙ (Table 2).
The planet mass is consistent with a rocky composition, and the corresponding planet size would be merely 20% greater than Earth's radius from mass-radius relationships (e.g. <cit.>). Furthermore, we infer from WD initial-final mass relations <cit.> that the progenitor (MS) mass is likely around 1–2 M_⊙.
We then infer the planet's physical separation from its projected separation using orbital motion effects <cit.> in the light-curve models (Extended Data Table 1; see Methods). We adopt a log-uniform prior on the physical separation, and model the planet orbit for different assumed eccentricities.
As illustrated in Figure 2, the posterior distribution for the physical separation is bimodal, which reflects two distinctly allowed orbital configurations (Extended Data Figure 1).
The planet is most likely near greatest elongation in a significantly inclined orbit, which implies that the physical separation is near the projected separation.
Alternatively, the planet is near conjunction on a nearly edge-on orbit, which implies a physical separation of ≳ 10 au.
The former scenario is substantially favored for eccentricities up to e<0.2, for which we may place an upper limit to the physical separation at 2.3 au with 80–90% confidence.
Due to tidal circularization during the host-star red-giant phases (e.g. <cit.>), we consider it reasonable to assume that the current planet orbit indeed has low eccentricity.
For e<0.2, the close-orbit case (d∼2.1 au) is formally favored by a Bayes Factor of around 5–10, which only constitutes substantial but not strong evidence <cit.>.
Therefore, the extent to which the wide-orbit case (d≳ 10 au) may be ruled out is sensitive to the adopted physical separation prior, which is complicated by the fact that the population of terrestrial planets at such separations remains largely unexplored.
Canonical planet formation theory expects terrestrial planets to form predominantly within the water ice line at around 3 au for a sun-like star (e.g. <cit.>).
However, processes such as planet-planet gravitational interactions during early stages of planet formation could scatter low-mass planets to very wide separations or outright eject them <cit.>.
Statistics from short time-scale (t_ E≲0.5 day) microlensing events indicate that wide-orbit (≳10 au) and free-floating low-mass planets (FFP) combined are at least as abundant as the known population of close-orbit planets <cit.>,
but current follow-up observations are insufficient to distinguish between the two scenarios <cit.>.
Therefore, if a considerable fraction of such microlensing FFP candidates are confirmed to be bound planets in the future (via direct detection of host star), then it becomes more likely for KB200414Lb to have a wide orbit than currently inferred.
Similarly (but for a different reason), the brown dwarf companion takes on either a very close or very wide projected separation, which would indicate distinct evolutionary histories (Figure 3).
To end up in a close-in orbit of ≳0.2 au under the close-NE model, the BD companion would likely have gone through a period of common envelope evolution with the WD progenitor and successfully ejected the stellar envelope.
However, most known post-common-envelope binaries (PCEBs) have orbits smaller than 0.01 au <cit.>. Several WD binaries with MS companions are known with separations of order 0.2 au that are suspected to be PCEBs <cit.>.
Models are only able to explain these wider PCEBs if mass transfer was first initiated during the AGB phase of the progenitor of the WD, when its envelope is expected to be loosely bound and little gravitational energy is required to unbind it <cit.>.
Under this scenario, the BD initial orbit around the MS host is expected at 3–6 au <cit.>.
Nevertheless, even if this CEE pathway remains valid for a substantially less massive BD, long-term orbital stability for the system (e.g. <cit.>) would require the planet to be on an initially wide orbit (d≳10 au), which is already disfavored by the planet orbital model.
Given the combination of evidences against the close-NE model, we conclude that the wide-NE model (Scenario 2; Figure 3) is the most favored scenario, where both the planet and the brown dwarf avoided interacting with the WD progenitor.
In this case, this system may provide a possible glimpse into the distant future of our solar system.
While Venus will eventually be engulfed and Mars will most certainly survive, the final fate of the Earth is rather uncertain and critically depends on the stellar-mass-loss rate during the solar RGB phase <cit.>, which remains poorly constrained <cit.>.
Certain models predict that the Earth may be engulfed during the solar tip-RGB phase due to tidal interactions and dynamical drag <cit.>.
Nevertheless, if Earth had indeed survived, then its orbit is expected to expand to around twice its current size, comparable to the current orbit for KB200414Lb.
Therefore, the Earth-mass planet KB200414Lb likely represents a similar yet more fortuitous future compared to our own planet Earth.
§ METHODS
§.§ Observations
We observed the location of the planetary microlensing <cit.> event KB200414 <cit.> using the wide mode of the NIRC2 camera on the Keck-II telescope on May 25, 2023 (UT) under program U152 (PI: J.S. Bloom; Science-PI: K. Zhang). The pixel scale is 0.04^''/pixel with a 40^'' by 40^'' field of view. Five deep images are taken with 30 seconds of exposure per image for relative photometry on the target. Two shallow images are taken each with 15 seconds of total integration time, which consist of fifteen co-adds of 0.5-second exposures. The shallow image has a brighter saturation limit and is used for calibration to the VVV photometric system. The shallow and deep images are non-linearity corrected <cit.>, sky-subtracted, flat-fielded, and averaged into two master images.
We identify the target in the Keck image by transforming the magnified source location in the CFHT image (Figure 1b) into the Keck frame. A linear transformation between the two frames is derived using ten reference stars listed in Supplementary Table 1, resulting in a residual standard error of 22.6 mas. We unequivocally identify the Keck star located at (502.43, 559.02) as the event location, which has a nominal offset from the CFHT source location of 22.0 ± 22.6 mas, i.e., within one pixel in the Keck image.
We then perform aperture photometry with a radius of five pixels (0.2^'') on the two stacked images using the photutils package <cit.>. Eleven relatively isolated stars in the shallow image with 12.5<K_s, VVV<15.5 are calibrated to VVV DR4 aperture photometry <cit.>, which results in a zero-point uncertainty of 0.03 mag. We then calibrated the deep image to the shallow image, which results in a calibrated target brightness of K_s=16.99±0.03.
Given the lens-source relative proper motion of ∼8 mas/year (Table 2), we may expect the lens-source separation to be ∼24 mas at the time of the Keck observations, much smaller than the ∼80 mas Keck PSF. Therefore, the target flux includes the combined flux from the lens and source stars. We note that the OGLE blended light may be attributed to four stars within 0.5^'' to the west and north-west directions, which together have a total brightness of K_s≃16.8. This is comparable to the source star brightness (see section below), which is also the case for the OGLE I-band blend.
§.§ Flux Constraints
The source-star brightness was only measured in the V and I bands and slightly differs across models.
Since the follow-up observations were performed in the K_s band, we first convert the I-band source brightness to the K_s band from its intrinsic (I-K_s) color and reddening E(I-K_s).
To derive the extinction and reddening, we construct an (I-K_s) vs. K_s color-magnitude diagram (CMD) by cross matching OGLE-III and VVV catalog stars located within 2^' of the location of KB200414 (Supplementary Figure 1). The VVV photometry is calibrated to 2MASS.
We measure the centroid of the red giant clump as (I-K_s, K_s)_ cl = (2.49 ± 0.01, 13.06 ± 0.02).
For the intrinsic centroid of the red giant clump, we adopt
(I-K_s, K_s)_ cl,0 = (1.46±0.04, 12.89±0.04)
<cit.>, which implies
E(I-K_s)=1.03±0.04 and
A_Ks = 0.17±0.04.
We also cross check the extinction in color space. Using the OGLE extinction calculator, we derive reddening E(V-I)=0.972 and E(J-K_s)=0.316 <cit.> towards the sight-line of KB200414. Adopting the extinction law of ref. <cit.>, we have A_Ks=0.528· E(J-K_s)=0.17, which is in agreement with the CMD analysis.
We then derive the intrinsic (I-K_s) source color from its intrinsic (V-I) color, which was reported as (V-I)_ S,0=0.84±0.03 in ref. <cit.>. Using color-color relations <cit.> and the zero-point offset from K_s to standard K of 0.04 mag <cit.>, we derive (I-K_s)_ S,0=1.06 ± 0.04 and thus (I-K_s)_ S=(I-K_s)_ S,0+E(I-K_s)=2.09 ± 0.06, which is used to convert the I-band source brightness (see Table 4 of ref. <cit.>) into the K_s source brightness listed in Table 1.
We derive the expected brightness for hypothetical main-sequence lenses using the MESA <cit.> Isochrones and Stellar Tracks (MIST; <cit.>). The apparent brightness depends on the mass, distance, age, metallicity, and extinction experienced by the lens star. To rule out all possible main-sequence lenses, we must consider the stellar properties that lead to the faintest brightness.
Therefore, we adopt metal-rich ([Fe/H]=0.5) isochrones and consider the faintest possible brightness over ages between 100 Myr and 10 Gyr.
The mass and distance of the primary lens star is derived from the angular Einstein radius and the microlensing parallax as constrained by the light-curve models and source star properties. We directly adopt the published light-curve models of <cit.> in the form of raw MCMC chains.
We searched for additional degenerate models using a machine-learning algorithm <cit.>, which did not yield new solutions but recovered the existing ones.
Note that the lens properties originally reported in Table 5 of <cit.> applied both a Galactic model and rejected parameter samples that would result in the MS lens being brighter than the blend flux of I=18.9.
Since we have rejected the hypothesis that the primary lens is a MS star, we simply adopt a uniform prior, which results in slightly different reported values.
The angular Einstein radius is defined as
θ_E=√(κ M_ Lπ_ rel),
where π_ rel=π_ L-π_ S is the lens-source relative parallax and
κ=4G/(c^2 au)≃8.144 mas/M_⊙.
The microlensing parallax is defined as the lens-source relative parallax in units of the angular Einstein radius
π_ E=π_ rel/θ_ E=√(π_ rel/(κ M_ L)).
Therefore, the lens mass is derived as M_ L=θ_ E/(κπ_ E), whereas the lens parallax is π_ L=π_ rel+π_ S=π_ Eθ_ E+π_ S.
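As a worked numerical illustration of these relations (the θ_E and π_E values below are placeholders rather than the measured posteriors of KB200414):

```python
kappa = 8.144                    # mas / M_sun
theta_E = 0.70                   # angular Einstein radius in mas (placeholder)
pi_E = 0.18                      # microlensing parallax (placeholder)
pi_S = 1.0 / 8.0                 # source parallax in mas for D_S = 8 kpc

M_L = theta_E / (kappa * pi_E)   # lens mass in M_sun
pi_rel = pi_E * theta_E          # lens-source relative parallax in mas
pi_L = pi_rel + pi_S             # lens parallax in mas
D_L = 1.0 / pi_L                 # lens distance in kpc

print(f"M_L = {M_L:.2f} M_sun, D_L = {D_L:.2f} kpc")
```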
For the source parallax, we adopt a source distance of D_ S=8.0±0.8 kpc, which is derived using the triaxial G2 Galactic Bulge model originally adapted in <cit.> for microlensing population studies.
Following ref. <cit.>, we derive the extinction experienced by the lens star (regardless of MS/WD) as
A_Ks(D_L)=∫_0^D_L a_Ks× n_d(D) dD,
where n_d(D) is the dust density at D, and a_Ks is the extinction in units of mag kpc^-3 dust. We adopt an exponential Galactic dust distribution model where, in cylindrical coordinates,
n_d(D)∝exp(-|z(D)|/z_d-R(D)/R_d),
where
z(D)=z_⊙+D sin b≃ z_⊙+Db,
R(D)=√((R_⊙-D cos b cos l)^2+(D cos b sin l)^2)≃ |R_⊙-D|.
The dust length scales are adopted as (R_d, z_d) = (3.2, 0.1) kpc <cit.> and the location of the Sun is adopted as (R_⊙, z_⊙) = (8.3, 0.023) kpc <cit.>. The extinction constant is derived as a_Ks = 0.67 by considering A_Ks(D_ S) = 0.17 and D_ S = 8 kpc. The minimum expected brightness for MS lenses consistent with the light-curve models is reported in Table 1, with lens extinctions in the range of 0.03–0.06 mag.
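A numerical sketch of this line-of-sight integral is given below; the Galactic coordinates of the sight line and the integration grid are assumptions, and the normalization is fixed by requiring A_Ks(D_S)=0.17 at 8 kpc as in the text.

```python
import numpy as np

R_sun, z_sun = 8.3, 0.023        # kpc
R_d, z_d = 3.2, 0.1              # dust scale length and scale height in kpc

def dust_column(D_L, l, b):
    """Integral of the exponential dust density along the line of sight (kpc units)."""
    D = np.linspace(0.0, D_L, 2000)
    z = z_sun + D * np.sin(b)
    R = np.sqrt((R_sun - D * np.cos(b) * np.cos(l))**2 + (D * np.cos(b) * np.sin(l))**2)
    return np.trapz(np.exp(-np.abs(z) / z_d - R / R_d), D)

l, b = np.radians(0.9), np.radians(-2.0)       # approximate bulge sight line (assumption)
a_Ks = 0.17 / dust_column(8.0, l, b)           # normalise so that A_Ks(D_S) = 0.17
A_Ks_lens = a_Ks * dust_column(4.0, l, b)      # extinction to a lens at, e.g., 4 kpc
```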
§.§ White Dwarf Properties
The age of the universe limits the lowest mass white dwarf that could be formed via single star evolution.
CO white dwarfs are known to have a mass distribution sharply centered around 0.59 M_⊙, which drops off quickly for lower masses with essentially no WD found with M<0.45M_⊙ except for ELM/Helium WDs <cit.>. We therefore impose a host-mass lower limit of M>0.45M_⊙ as a Bayesian prior to further refine properties of the planetary system.
As low-mass (∼0.5M_⊙) WDs are already strongly favored by the light-curve models, the inferred host mass is relatively insensitive to the specific WD mass prior adopted, so long as some form of prior is applied to reject the regime of M<0.45M_⊙ where single WDs are extremely uncommon.
We derive the expected CO WD brightness for the two NE solutions using the isochrone for 0.54M_⊙ DA WDs under the BaSTI stellar evolution model <cit.>. We consider the possible WD brightness under a uniform cooling age distribution over 0.1 Gyr to 10 Gyr. We apply the same extinction scheme as for main-sequence lenses, which results in an expected WD lens brightness of K_s∼24. As such, it would be favorable to directly observe the WD lens at the first light of the thirty-meter-class telescopes (est. 2030), at which point it will be separated from the glare of the source star by around 80 mas. It may also be possible to detect the WD lens with JWST.
§.§ Orbital Model
Here, we infer the planet's physical separation (d) and semi-major axis (a) from its projected separation (s) by leveraging the microlensing orbital motion effect, which was included in the light-curve models originally published by <cit.>.
The microlensing orbital motion effect considers the projected separation (s) and the relative angle (α) as changing linearly in time and is parametrized as (ṡ, α̇).
Since the planetary light-curve feature occurred during a short 7-day window, this linear parameterization is likely sufficient, which we later validate by examining how much (ṡ, α̇) is actually predicted to change during this time frame.
We convert the planet orbital motion parameters (ṡ, α̇) for the North-East models to physical units under the host-mass lower limit, which are approximately α̇=0.3±0.1 rad yr^-1 and ṡ=0.0±0.1 au yr^-1 (see Extended Data Table 1).
Note that <cit.> only considered orbital motion for the planet, but not for the BD. They estimated that doing so would require an additional 𝒪(10^6) CPU hours for each degenerate model. Moreover, they suggested that BD orbital motion is not expected to make a pronounced impact on the light curve, as the light-curve anomaly associated with the BD is less than half a day in duration.
We consider an orbital model with six parameters: host mass (M), semi-major axis (a), eccentricity (e), inclination (i), argument of periapsis (ω), and the reference phase (ϕ_0), which is defined as the difference between the reference time (t_0) in the light-curve model and the time of periastron (t_ peri), and normalized to the orbital period (P): ϕ_0=(t_0-t_ peri)/P. This parametrization allows the orbital model to become invariant to the orbital period and host mass, which we use to scale the orbital model as a separate step.
The physical separation and semi-major axis are deterministically related to the projected separation and orbital elements (e, i, ϕ_0, ω) via d=s/f(θ) and a=s/g(θ), where θ is a shorthand for the aforementioned orbital elements.
We first transform samples from the projected separation posterior (Table 2) into the physical separation posterior without the orbital motion constraints.
To this end, we sample a dense grid of orbital elements from a uniform prior for ω and ϕ_0, and a sine prior for i, which facilitates an isotropic prior on the orbital plane.
We sample distinct eccentricities over [0, 0.5] at a step size of 0.1.
We then evaluate a grid of transforming factors f(θ) and g(θ) from the grid of orbital elements using the exoplanet package <cit.>. Finally, we acquire (M,s) samples from the light-curve posterior (Table 2) and apply the grid of transforming factors to derive samples of the physical separation. We then apply the same procedure for the semi-major axis.
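A minimal sketch of the transforming factor f(θ) (and of r/a, which yields g(θ)) for a given orbital configuration is shown below; it solves Kepler's equation by Newton iteration and fixes the longitude of the ascending node to zero, which does not affect the separation. This is an illustration of the calculation, not the exoplanet-package implementation used in the analysis.

```python
import numpy as np

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation E - e sin E = M by Newton iteration."""
    E = np.atleast_1d(M).astype(float).copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return E

def projection_factor(e, inc, omega, phi0):
    """Returns f = s/d and r/a at the reference epoch (Omega fixed to zero)."""
    M = 2 * np.pi * np.atleast_1d(phi0)          # mean anomaly since periastron
    E = kepler_E(M, e)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    r_over_a = 1 - e * np.cos(E)
    f = np.sqrt(np.cos(omega + nu)**2 + np.sin(omega + nu)**2 * np.cos(inc)**2)
    return f, r_over_a

# d = s / f and a = s / (f * r_over_a) for each sampled orbital configuration.
f, r_over_a = projection_factor(e=0.1, inc=np.radians(60.0), omega=np.pi / 2,
                                phi0=np.array([0.25]))
```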
Formally, we have applied a change of variables, where the physical separation posterior is related to the projected separation posterior (from the light-curve model) via
p(d,θ)=p(s,θ)·|∂ s/∂ d|=p(s,θ)· f(θ)=p(s,θ)· s/d ,
where p(s,θ) is a shorthand for p(s=d· f(θ),θ).
From the above equation, we may interpret p(d,θ) as a posterior distribution, where p(s,θ) is the (partial) likelihood of the projected separation and the physical separation prior is given by p(d)≃1/d, namely a log-uniform distribution. We have verified the log-uniform prior numerically given its importance in interpreting the final results.
We may write the above intermediate posterior as p(d,θ|s), since it only accounts for the projected separation measurement, not the orbital motion measurements. Observe that the full posterior (taking into account all of s, ṡ, and α̇) and the intermediate posterior follow the same joint distribution p(d,θ,s,ṡ,α̇),
p(d,θ|s,ṡ,α̇)∝ p(d,θ,s,ṡ,α̇)∝ p(d,θ|s)· p(ṡ,α̇|M,s,θ).
Therefore, we may convert samples from the intermediate posterior to the full posterior with an importance weight of p(ṡ,α̇|M,s,θ), namely the partial likelihood of the orbital motion constraints.
The predicted orbital motion is derived using finite difference on the aforementioned orbital element grid, which requires knowledge of the orbital period.
The host-mass associated with the projected separation (the M,s samples from Table 2) underlying each parameter combination is used to derive the orbital period via Kepler's third law.
Therefore, our approach natively accounts for the covariance between M and s, which circumvents the difficulty that s is an observable whereas M is a model parameter.
Therefore, we expect this novel approach to be useful for microlensing follow-up analysis in the future.
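The importance re-weighting step can be sketched as follows: predicted (ṡ, α̇) are obtained by finite differences of the projected position over a small time step and compared with the measured values through a Gaussian likelihood. It reuses kepler_E from the previous snippet; the default measurement values are the approximate numbers quoted above, and the uncorrelated-Gaussian and sign-convention assumptions are ours.

```python
import numpy as np

def sky_position(a_au, e, inc, omega, phi0, P_yr, t_yr):
    """Projected (x, y) of the planet in au at time t_yr after the reference epoch."""
    M = 2 * np.pi * (phi0 + t_yr / P_yr)
    E = kepler_E(np.atleast_1d(M), e)                  # from the previous snippet
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    r = a_au * (1 - e * np.cos(E))
    return r * np.cos(omega + nu), r * np.sin(omega + nu) * np.cos(inc)

def orbital_motion_weight(a_au, e, inc, omega, phi0, M_host, dt=1e-3,
                          sdot_obs=0.0, sdot_err=0.1, adot_obs=0.3, adot_err=0.1):
    P_yr = np.sqrt(a_au**3 / M_host)                   # Kepler's third law (au, yr, M_sun)
    x0, y0 = sky_position(a_au, e, inc, omega, phi0, P_yr, 0.0)
    x1, y1 = sky_position(a_au, e, inc, omega, phi0, P_yr, dt)
    sdot = (np.hypot(x1, y1) - np.hypot(x0, y0)) / dt          # au / yr
    dang = np.arctan2(y1, x1) - np.arctan2(y0, x0)
    dang = (dang + np.pi) % (2 * np.pi) - np.pi                # wrap to (-pi, pi]
    adot = dang / dt                                           # rad / yr (sign convention assumed)
    return np.exp(-0.5 * (((sdot - sdot_obs) / sdot_err)**2 +
                          ((adot - adot_obs) / adot_err)**2))

weight = orbital_motion_weight(2.1, 0.0, np.radians(60.0), np.pi / 2, 0.25, 0.5)
```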
We first validate the linear orbital motion assumption by examining the extent to which (ṡ, α̇) is predicted to change during the planetary light-curve feature. We found that they change by merely 𝒪(10^-3) au yr^-1 and 𝒪(10^-5) rad yr^-1, which implies that linear orbital motion is a sufficient parametrization.
As we discuss in the main text, the bi-modality of the physical separation represents two distinct regions of orbital space that are allowed under the orbital model.
In Extended Data Figure 1, we visualize the marginal likelihood p(ṡ,α̇|i,ϕ_0) for the inclination and reference phase, under different eccentricities.
To ease interpretation and without substantial loss of generality for mildly eccentric orbits, we fix the argument of periapsis to ω=π/2 such that periastron and apastron occur at conjunction.
We may see that the planet is either near greatest elongation in a significantly inclined orbit, or near conjunction on a nearly edge-on orbit, with the former substantially favored.
To interpret the origins of this degeneracy (bi-modality), let us first observe that if the planet were on a circular, face-on orbit, then given a≃2.1 au and M≃0.5M_⊙, we may expect a constant α̇≃1.5 rad yr^-1 from Kepler's third law, which is much greater than the measured α̇≃0.3 rad yr^-1. Therefore, the orbit must be substantially inclined. Furthermore, the measured ṡ is close to zero, which indicates that the planet is either near conjunction or greatest elongation, which are the two locations where the projected separation remains stationary. If the planet were near conjunction, then its physical separation would greatly exceed the projected separation, which leads to a much longer orbital period that serves to reduce α̇.
This also explains why the conjunction scenario is favored at apastron (Extended Data Figure 1), where the planet's angular velocity is also intrinsically smaller.
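The face-on circular estimate quoted above can be checked in a few lines:

```python
import numpy as np
a, M = 2.1, 0.5                  # semi-major axis in au, host mass in M_sun
P = np.sqrt(a**3 / M)            # orbital period in years (Kepler's third law)
print(2 * np.pi / P)             # ~1.5 rad/yr, versus the measured ~0.3 rad/yr
```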
§ DATA AVAILABILITY STATEMENT
The reduced Keck images are available at https://zenodo.org/records/13128167. The raw data will be available on the Keck Observatory Archive (https://koa.ipac.caltech.edu/) after the 18-month proprietary period.
§ ACKNOWLEDGMENTS
K.Z. is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program.
K.Z. and J.S.B. were partially supported by the Gordon and Betty Moore Foundation and a grant from the National Science Foundation (award #2206744).
W.Z. acknowledges the support from the Harvard-Smithsonian Center for Astrophysics through the CfA Fellowship.
W.Z. and S.M. acknowledge support by the National Natural Science Foundation of China (Grant No. 12133005).
J.R.L. acknowledges support from the National Science Foundation under grant No. 1909641 and the Heising-Simons Foundation under grant No. 2022-3542.
W.Z. thanks Hanyue Wang for fruitful discussions on the Keck proposal.
This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia.
Data transfer from the host site to KASI was supported by the Korea Research Environment Open NETwork (KREONET).
Some of the data presented herein were obtained at Keck Observatory, which is a private 501(c)3 non-profit organization operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the Native Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the TAP member institutes.
Partly based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea, which is a significant cultural and historic site.
§ AUTHOR CONTRIBUTIONS
K.Z. reduced the Keck data, developed the probabilistic framework for inferring the planet's physical separation, led the overall analysis and interpretation, and wrote the manuscript. K.Z., K.E.B, and E.A. developed the interpretation of the system evolutionary history. K.Z. and W.Z. conceived of the observations and led the writing of the Keck proposal. W.Z. contributed to the extinction and lens light analysis. K.Z. and J.S.B. obtained the observing time as the Science-PI and PI of Keck program U152. J.R.L., S.T., J.S.B, and N.L. contributed to observing. All co-authors participated in discussions and contributed to the revision of the manuscript.
§ COMPETING INTERESTS
We declare no competing interests.
§ CORRESPONDING AUTHOR
Correspondence and requests for materials should be addressed to Keming Zhang
([email protected]) or Weicheng Zang ([email protected]).
|
http://arxiv.org/abs/2409.02109v1 | 20240903175931 | The Atacama Cosmology Telescope: Multi-probe cosmology with unWISE galaxies and ACT DR6 CMB lensing | ["Gerrit S. Farren", "Alex Krolewski", "Frank J. Qu", "Simone Ferraro", "Erminia Calabrese", "Jo Dunkley", "Carmen Embil Villagra", "J. Colin Hill", "Joshua Kim", "Mathew S. Madhavacheril", "Kavilan Moodley", "Lyman A. Page", "Bruce Partridge", "Neelima Sehgal", "Blake D. Sherwin", "Cristóbal Sifón", "Suzanne T. Staggs", "Alexander Van Engelen", "Edward J. Wollack"] | astro-ph.CO | ["astro-ph.CO"] |
unWISE x ACT DR6 3×2pt cosmology — Farren, Krolewski, Qu, Ferraro et al.

Gerrit S. Farren (DAMTP, Centre for Mathematical Sciences, University of Cambridge; Kavli Institute for Cosmology Cambridge), Alex Krolewski (Perimeter Institute for Theoretical Physics), Frank J. Qu (DAMTP, University of Cambridge; Kavli Institute for Cosmology Cambridge), Simone Ferraro (Lawrence Berkeley National Laboratory; Berkeley Center for Cosmological Physics, UC Berkeley), Erminia Calabrese (School of Physics and Astronomy, Cardiff University), Jo Dunkley (Joseph Henry Laboratories of Physics and Department of Astrophysical Sciences, Princeton University), Carmen Embil Villagra (DAMTP, University of Cambridge), J. Colin Hill (Department of Physics, Columbia University), Joshua Kim (Department of Physics and Astronomy, University of Pennsylvania), Mathew S. Madhavacheril (Department of Physics and Astronomy, University of Pennsylvania), Kavilan Moodley (Astrophysics Research Centre, University of KwaZulu-Natal), Lyman A. Page (Joseph Henry Laboratories of Physics, Princeton University), Bruce Partridge (Department of Physics and Astronomy, Haverford College), Neelima Sehgal (Physics and Astronomy Department, Stony Brook University), Blake D. Sherwin (DAMTP, University of Cambridge; Kavli Institute for Cosmology Cambridge), Cristóbal Sifón (Instituto de Física, Pontificia Universidad Católica de Valparaíso), Suzanne T. Staggs (Joseph Henry Laboratories of Physics, Princeton University), Alexander Van Engelen (School of Earth and Space Exploration, Arizona State University), Edward J. Wollack (NASA/Goddard Space Flight Center)

Corresponding author: Gerrit S. Farren ([email protected])

§ ABSTRACT
We present a joint analysis of the CMB lensing power spectra measured from the Data Release 6 of the Atacama Cosmology Telescope and Planck PR4, cross-correlations between the ACT and Planck lensing reconstruction and galaxy clustering from unWISE, and the unWISE clustering auto-spectrum. We obtain 1.5% constraints on the matter density fluctuations at late times parametrised by the best constrained parameter combination S_8^ 3x2pt≡σ_8 (Ω_m/0.3)^0.4 = 0.815 ± 0.012. The commonly used S_8 ≡σ_8 (Ω_m/0.3)^0.5 parameter is constrained to S_8 = 0.816±0.015. In combination with baryon acoustic oscillation (BAO) measurements we find σ_8=0.815± 0.012. We also present sound-horizon-independent estimates of the present day Hubble rate of H_0=66.4^+3.2_-3.7 from our large scale structure data alone and H_0=64.3^+2.1_-2.4 in combination with uncalibrated supernovae from . Using parametric estimates of the evolution of matter density fluctuations, we place constraints on cosmic structure in a range of high redshifts typically inaccessible with cross-correlation analyses. Combining lensing cross- and auto-correlations, we derive a 3.3% constraint on the integrated matter density fluctuations above z=2.4, one of the tightest constraints in this redshift range and fully consistent with a ΛCDM model fit to the primary CMB from Planck. Finally, combining with primary CMB observations and using the extended low redshift coverage of these combined data sets we derive constraints on a variety of extensions to the ΛCDM model including massive neutrinos, spatial curvature, and dark energy. We find in flat ΛCDM ∑ m_ν<0.12 eV at 95% confidence using the LSS data, BAO measurements from SDSS and primary CMB observations.
§ INTRODUCTION
Measurements of the matter density fluctuations at low redshifts inform our understanding of the formation of cosmic structure, probe the nature of dark matter and dark energy, and constrain the masses of neutrinos. They also provide an important test of the predictions of general relativity. Gravitational lensing observations that are sensitive to the total matter distribution, including the invisible dark matter, have become an indispensable tool for studying cosmic structure. Several lensing related techniques have been developed to study both the weak gravitational lensing of galaxies as well as of the cosmic microwave background (CMB).
Over the past two decades a standard model of cosmology has emerged primarily based on high precision observations of the CMB. Measurements by WMAP first established the now prevailing six parameter ΛCDM model <cit.>. It posits that the Universe is dominated by phenomenological cold dark matter (CDM), is spatially flat, and its expansion is driven by a cosmological constant Λ. These results were sharpened by measurements made by the Planck satellite <cit.>. The model also makes predictions for other cosmological observables, which in recent years have reached increasing precision enabling new tests of this model. Despite the ΛCDM model's overall success, some discrepancies have been observed and several extensions have been put forward. Here we weigh in on some of these discrepancies and constrain extensions beyond the standard model.
One such discrepancy, and a primary focus of this work, is the amplitude of matter density fluctuations, typically parametrised in terms of σ_8, the RMS (root-mean-square) of the linear matter density contrast smoothed on scales of 8 h^-1 Mpc, which provides the normalisation of the matter power spectrum. The shape and redshift evolution of the matter power spectrum are predicted from the ΛCDM model.
In recent work <cit.> and <cit.> reconstructed the gravitational lensing field over 9400 deg^2 from new high resolution CMB observations by the Atacama Cosmology Telescope (ACT). They showed percent level constraints on the integrated matter density fluctuations over a wide range of redshifts (z≲ 5) from the CMB lensing power spectrum. The σ_8 constraints are in excellent agreement with model extrapolations from a ΛCDM model fit to observations of the primary fluctuations in the CMB as observed by Planck<cit.>. Their results are also in excellent agreement with CMB lensing measurements from Planck<cit.> and the combination of both measurements yields improved constraints on cosmic structure.
<cit.> used the ACT DR6 and Planck CMB lensing reconstruction together with galaxies detected in imaging data from the Wide-Field Infrared Survey Explorer <cit.> to focus on a lower redshift range, approximately 0.2≲ z ≲ 1.8. Using the correlation between the galaxy distribution in two redshift bins, which acts as a biased tracer of the underlying matter density, and the CMB lensing reconstruction, <cit.> similarly found good agreement with the ΛCDM prediction for σ_8 and CMB lensing auto-correlation results.
Meanwhile, several galaxy weak lensing surveys like the Dark Energy Survey <cit.>, the Kilo-Degree Survey <cit.>, and the Hyper Suprime-Cam <cit.>, among others, have found a 2-3σ lower amplitude of matter density fluctuations compared to the prediction from Planck primary CMB assuming a standard ΛCDM model. Such surveys typically probe lower redshifts, z ≲ 1, than the CMB lensing work discussed above. Similar results have also been obtained by some other studies of CMB lensing cross-correlations with galaxy surveys, albeit at varying levels of significance <cit.>. While the modelling assumptions and the range of scales probed in these works vary, they are mostly sensitive to lower redshifts than the analysis presented in <cit.>.
This motivates a further investigation of the formation of structure at low redshifts. We note that the σ_8 constraints from <cit.> and <cit.> differ from the galaxy weak lensing result not only in terms of the redshift range, but also in the scales probed. As pointed out for example by <cit.> and <cit.> galaxy weak lensing draws significant information from highly non-linear scales which the CMB lensing auto- and cross-correlations are insensitive to. The observed discrepancy (∼2-3σ) with the ΛCDM prediction for σ_8 from Planck may therefore also be explained by a suppression of the matter power spectrum on small scales that exceeds expectations of the baryon feedback induced suppression based on the most recent hydrodynamical simulations <cit.>.
Motivated in part by further investigation of the formation of structure at low redshifts and on linear to mildly non-linear scales we combine the lensing auto-spectrum analysis from <cit.> and <cit.> with the cross-correlation analysis from <cit.>. Throughout the paper we will often refer to this combination as `3x2pt' given that it contains three two-point correlation functions (or rather their harmonic space equivalent, the power spectrum). These are the auto-spectrum of the CMB lensing convergence, C_ℓ^κκ, the cross-correlation between galaxies and CMB lensing, C_ℓ^κ g, and the galaxy auto-correlation, C_ℓ^gg. By contrast we will often refer to the cross-correlation analysis as `2x2pt' (C_ℓ^κ g&C_ℓ^gg). In Sec. <ref> we show constraints on cosmic structure formation from our `3x2pt' data.
Another discrepancy, which has reached ∼5σ with recent data, is between the present day expansion rate, parameterised by the Hubble constant H_0, inferred within the ΛCDM model from the CMB <cit.> and a local measurement based on Cepheid-calibrated supernovae from <cit.>. Meanwhile results from baryon acoustic oscillation (BAO) observations have generally found results consistent with the CMB-derived values <cit.>. CMB and BAO constraints are predominantly informed by the angular size of the sound horizon scale[Here, we do not make a careful distinction between the sound horizon scale relevant for BAO (r_ d) and CMB (r_ s) observations, although to be precise these are defined at the baryon drag epoch and at photon decoupling, respectively.]. This fact has motivated theoretical work to explain the tension by invoking new physics that decreases the physical size of the sound horizon at recombination by approximately 10% (e.g., ). It also motivates new measurements of the Hubble constant that are derived from a different physical scale present in the large-scale structure, namely, the matter-radiation equality scale (with comoving wave-number k_ eq) which sets the turn-over in the matter power spectrum. As pointed out in <cit.> such measurements can be obtained from the CMB lensing power spectrum and other large scale structure (LSS) tracers and we investigate the implications of our data for the H_0-tension in Sec. <ref>.
Beyond modifications to the flat ΛCDM model motivated by these observed discrepancies other extensions are motivated by physical considerations. Given the phenomenological nature of dark energy it is natural to consider departures from the cosmological constant model which allow an equation of state w≠ -1 or evolution in the dark energy equation of state. We consider such models in Sec. <ref>; our data unfortunately only marginally improves on existing constraints on such models derived from the primary CMB, BAO and supernovae. Furthermore, observations of neutrino oscillations require neutrinos to be massive, and while the mass splitting between the neutrino mass eigenstates is well determined the absolute mass scale is poorly constrained <cit.>. In Sec. <ref> we derive constraints on the neutrino mass from the characteristic suppression of the formation of structure on scales smaller than the neutrino free streaming scale which can be probed with our `3x2pt' data. It has been pointed out that those neutrino mass constraints are model dependent, as the effect of massive neutrinos can be mimicked also by some beyond-ΛCDM models <cit.>. Thus we consider massive neutrinos in the context of extended dark energy models in Sec. <ref>. In addition to dark energy and neutrinos, we also revisit the assumption of spatial flatness in Sec. <ref> where our data provides competitive cross-checks on constraints from the primary CMB and BAO.
Before presenting these results in Sec. <ref> we discuss the data used in this analysis in Sec. <ref> including the external data sets we employ in our analysis (Sec. <ref>). In Sec. <ref> we briefly describe how we obtain the covariance for the likelihood analysis described in Sec. <ref>. Finally, in Sec. <ref> we summarise our findings and conclude with an outlook to future work. For the convenience of the reader we also provide an overview of the key results from this work in the following section (Sec. <ref>).
§ KEY RESULTS
The key results from this work are constraints on the amplitude of matter density fluctuations at low redshift parameterised by σ_8. As in CMB lensing autocorrelation analyses <cit.>, CMB lensing cross-correlations <cit.>, or galaxy weak lensing <cit.>, when using the projected large scale structure tracers only, σ_8 is significantly degenerate with the matter density, Ω_m (as can be seen in the left panel of Fig. <ref>). In our case the parameter combination best constrained by the combination of CMB lensing auto-spectrum, galaxy-galaxy clustering auto-power spectra, and the cross-correlation between CMB lensing and galaxies is approximately σ_8 Ω_m^0.4. In analogy with the commonly used parameter S_8 ≡σ_8 (Ω_m/0.3)^0.5, corresponding to the parameter combination best constrained by galaxy weak lensing surveys, we therefore define the parameter S_8^ 3x2pt≡σ_8 (Ω_m/0.3)^0.4.
Using the lensing auto-spectra from ACT and Planck, the respective cross-correlations with the unWISE galaxies, and the galaxy auto-correlation (3x2pt) we constrain this parameter combination to ∼1.5%,
S_8^ 3x2pt≡ σ_8 (Ω_m/0.3)^0.4 = 0.815± 0.012 ( 3x2pt).
Within the ΛCDM model this parameter is similarly well constrained by primary CMB data (S_8^ 3x2pt = 0.826± 0.012 from the primary CMB data sets discussed in Sec. <ref>) and the two constraints are in good agreement as can be seen in the left panel of Fig. <ref>. Even though S_8 differs slightly from the best constrained combination of σ_8 and Ω_m in our work we nevertheless also obtain highly competitive constraints on this parameter combination which can be compared, for example, to results from DES, KiDS and HSC <cit.>. Using again both ACT and Planck CMB lensing auto-spectra, the cross-correlations with unWISE, and the unWISE auto-spectrum we find S_8=0.816± 0.015 ( 3x2pt). While our value of S_8 falls between the typical values preferred by galaxy weak lensing analyses and those predicted from ΛCDM fits to the CMB, it is not in statistically significant tension with either of those data sets (∼1σ and ∼1.5-2.5σ relative to primary CMB and various galaxy weak lensing measurements respectively).
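For readers working directly with posterior samples, the different S_8-like parameters quoted here differ only in the exponent α of σ_8 (Ω_m/0.3)^α. The following is a minimal sketch (hypothetical array names, not the analysis code) for locating the best-constrained exponent and evaluating any such combination:

```python
import numpy as np

def best_exponent(sigma8, Om, alphas=np.linspace(0.1, 0.8, 71)):
    """Exponent alpha minimising the fractional width of sigma8*(Om/0.3)**alpha
    over MCMC samples `sigma8` and `Om` (1-D arrays of equal length)."""
    frac_width = [np.std(sigma8 * (Om / 0.3)**a) / np.mean(sigma8 * (Om / 0.3)**a)
                  for a in alphas]
    return alphas[np.argmin(frac_width)]

def S8_like(sigma8, Om, alpha=0.4):
    """alpha=0.5 gives the usual S_8; alpha=0.4 gives S_8^3x2pt as defined above."""
    return sigma8 * (Om / 0.3)**alpha
```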
BAO data provides constraints primarily in the Ω_m-H_0 parameter space and can therefore be used to break the degeneracy between σ_8 and Ω_m. By adding BAO data, primarily from the Baryon Oscillation Spectroscopic Survey (BOSS), to our analysis we obtain
σ_8=0.815 ± 0.012 ( 3x2pt + BAO), also a ∼1.5% measurement. This value is again in good agreement with model predictions derived from ΛCDM fits to primary CMB data (σ_8 = 0.8107± 0.0064; the posteriors are shown in the σ_8-Ω_m plane in the right panel of Fig. <ref>).
Following the approach suggested in <cit.> we use our CMB lensing and cross-correlation data to place constraints on the Hubble constant which arise exclusively from the measurement of the angular size of the matter-radiation equality scale imprinted in the turnover of the matter power spectrum. These constraints are independent of the sound horizon scale, knowledge of which is crucial for inferring the Hubble constant from the baryon oscillation feature either in the primary CMB or through BAO observations. We find H_0=66.5^+3.2_-3.7 ( 3x2pt) from CMB lensing, cross-correlation, and galaxy auto-correlation data alone (using both ACT and Planck). When additionally including uncalibrated supernovae from the data set <cit.> to further break the degeneracy between H_0 and Ω_m we obtain H_0=64.3^+2.1_-2.4 ( 3x2pt + SN). This represents a ∼20% improvement over the constraint presented in <cit.> from the lensing auto-spectrum alone. Our results are consistent with the value of H_0 preferred by BAO and primary CMB data (at the ∼1σ level), but in tension (∼3.6σ) with local measurements of H_0 from <cit.>. This is consistent with the results found using similar methods on three-dimensional galaxy clustering data <cit.>.
Posteriors for the key ΛCDM parameters (H_0, Ω_m, σ_8, and S_8^ 3x2pt) probed in our analysis are shown in Fig. <ref>. We show posteriors for the 3x2pt analysis jointly using ACT and Planck lensing data, as well as each lensing data set separately.
Furthermore, we explore a model-agnostic parametrisation of the growth of perturbations. Using the combination of lensing auto- and cross-correlations together with BAO we find tight (≲ 4%) constraints on σ_8(z) in three redshift bins z=0-1.15, z=1.15-2.4, and z>2.4. Our reconstruction of σ_8(z) is in good agreement with predictions within a ΛCDM model fit to the primary CMB from Planck.
In going beyond the standard ΛCDM model we explore various extensions, including non-minimum mass neutrinos, evolving dark energy and curvature. We find constraints on the neutrino mass sum from the combination of our 3x2pt data with BAO and primary CMB data from Planck consistent with results from analyses using only the lensing auto-spectrum. When including both ACT and Planck lensing auto- and cross-spectra we constrain the sum of the neutrinos to
∑ m_ν < 0.124 eV at 95% confidence
( 3x2pt + CMB + BAO).
These constraints are degraded when also considering a time varying dark energy equation of state but by additionally including uncalibrated supernovae from we still obtain a constraint of ∑ m_ν < 0.231 eV (95% C.I.).
We constrain the curvature of the Universe to
-0.011 < Ω_k < 0.004 at 95% confidence
( 3x2pt + CMB)
from the primary CMB and our data alone (without BAO), about 20% tighter than previous results from the primary CMB and the CMB lensing auto-spectrum only <cit.>.
We make our data and likelihood publicly available enabling the community to perform further investigation into models not explored in this work (see Appendix <ref> for details).
§ DATA
In this section we briefly describe the data sets we use in our analysis as well as several external likelihoods we include to break degeneracies and probe both the ΛCDM model as well as potential extensions with all available data.
§.§ CMB lensing power spectrum
Throughout we adopt the CMB lensing power spectrum measurements from ACT DR6 <cit.> and Planck PR4 <cit.>.
The ACT DR6 lensing reconstruction covers 9400 deg^2 of the sky and is signal-dominated on lensing scales of L<150. This reconstruction is based on CMB measurements made between 2017 and 2021 (relying only on the night-time data) at ∼90 and ∼150 GHz and uses CMB scales 600<ℓ<3000. The use of cross-correlation based estimators reduces the sensitivity to the modelling of the instrumental noise <cit.>, and profile hardening is employed to mitigate against extragalactic foregrounds <cit.>. Since the CMB lensing signal is reconstructed using quadratic estimators the power spectrum of the reconstruction is a four-point function containing several biases which need to be subtracted using simulations. The largest of these biases is the Gaussian disconnected bias, which depends on the two-point power spectrum of the observed CMB maps and is thus non-zero even in the absence of lensing. The debiasing is discussed in detail in <cit.>. Since some of the bias corrections as well as the normalisation of the lensing estimator depend weakly on cosmology we implement corrections capturing the dependency of the lensing normalisation and bias subtraction on cosmological parameters (see Appendix <ref> for details). The CMB lensing power spectrum from <cit.> is determined at 2.3% precision, corresponding to a measurement signal-to-noise ratio of 43σ.
Potential sources of systematic biases have been investigated in detail in <cit.> and <cit.> and were found to be comfortably subdominant to statistical uncertainties.
The Planck PR4 lensing analysis <cit.> reconstructs lensing with CMB angular scales from 100≤ℓ≤2048 using the quadratic estimator. This analysis is based on the reprocessed PR4 maps that incorporated around 8% more data compared to the 2018 Planck PR3 release. It also includes pipeline improvements such as optimal (anisotropic) filtering of the input CMB fields resulting in an increase of the overall signal-to-noise ratio by around 20% compared to Planck PR3 <cit.> and a detection of the lensing power spectrum at 42σ.
§.§ CMB lensing-galaxy cross-correlations and galaxy-galaxy auto-correlation
We include measurements of the cross-correlation between CMB lensing (from ACT DR6 and Planck PR4) and galaxies from the unWISE catalog, along with the auto-correlation of the unWISE galaxies <cit.>. The unWISE galaxy catalogue is constructed from three band imaging by the Wide-Field Infrared Survey Explorer (WISE) survey <cit.>, including four years of the post-hibernation NEOWISE phase <cit.>. We employ two colour selected galaxy samples which we refer to as Blue and Green, with broad redshift distributions centred at z ∼ 0.6 and 1.1. These samples are extensively described in <cit.> and <cit.>. To obtain redshift distributions for these samples of galaxies we employ the method of clustering redshifts estimated from cross-correlations with spectroscopic samples from the Sloan Digital Sky Survey (SDSS) (see and for further details).
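Schematically, the clustering-redshift technique estimates the redshift distribution from the amplitude of the angular cross-correlation with narrow spectroscopic slices. The sketch below is illustrative only (hypothetical inputs; the actual calibration described in the references above additionally treats bias evolution, magnification, and smoothness of the recovered distribution):

```python
import numpy as np

def dndz_clustering_redshifts(w_cross, b_spec, b_phot, w_matter, dz):
    """Schematic estimator: dN/dz(z_i) proportional to
    w_cross(z_i) / (b_spec(z_i) * b_phot(z_i) * w_matter(z_i)),
    where w_cross is the measured angular cross-correlation amplitude with the
    spectroscopic slice at z_i and w_matter the expected matter clustering there."""
    dndz = w_cross / (b_spec * b_phot * w_matter)
    return dndz / np.sum(dndz * dz)   # normalise to unit integral
```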
As with the lensing power spectrum measurements, <cit.> undertook an extensive evaluation of potential systematic biases using different versions of the lensing reconstructions and analysis masks, as well as realistic simulations of the contamination due to extragalactic foregrounds. They did not find any evidence for statistically significant biases.
§.§ External Likelihoods
We also combine our results with external data sets from observations of the primary CMB temperature and polarisation anisotropies, Baryon Acoustic Oscillations, and uncalibrated supernovae.
§.§.§ Primary CMB
We jointly analyse our data with measurements of the temperature and polarisation power spectra of the primary CMB as observed by the Planck satellite. In keeping with <cit.> we adopt the analysis of the Planck PR4 maps based on the likelihood for the small scale (ℓ>30) temperature and polarisation power spectra <cit.>. Additionally, we include the Planck PR3 likelihood for the large scale temperature power spectrum <cit.>. Finally, to include information from Planck's large-scale polarisation data that constrains the optical depth to reionisation we use the likelihood estimated in <cit.> from the maps.
§.§.§ Baryon Acoustic Oscillations
Furthermore, we add observations of the BAO feature. We use a combination of BAO measurements based on the clustering of galaxies with samples spanning redshifts up to z≃ 1, including 6dFGS <cit.>, SDSS DR7 Main Galaxy Sample <cit.>, BOSS DR12 LRGs <cit.>, and eBOSS DR16 LRGs <cit.>. In contrast to earlier work on the ACT DR6 CMB lensing auto-spectrum <cit.>, we additionally include the higher-redshift ELGs <cit.>, Lyman-α forest <cit.>, and quasar samples <cit.> from eBOSS DR16.
As this work was nearing completion the Year-1 BAO results from the Dark Energy Spectroscopic Instrument <cit.> were released <cit.>. These provide improved BAO measurements tightening constraints on the cosmic expansion history. We do not reanalyse all our results with the DESI BAO, but because <cit.> showed that this data favours tight limits on the neutrino mass, we address our neutrino mass constraints also with the recent DESI BAO likelihood (see Sec. <ref>). Given the consistency and similar constraining power on Ω_m from DESI Y1 and earlier BAO data sets in the context of flat ΛCDM we do not expect the choice of BAO likelihoods to impact our constraints on structure formation.
§.§.§ Uncalibrated Supernovae
Finally, we present some constraints which additionally employ `uncalibrated' measurements of the relationship between the apparent brightness of Type IA supernovae and their redshifts from the data set <cit.>. Here, `uncalibrated' refers to the fact that the absolute magnitudes of the supernovae have not been calibrated, e.g., with Cepheid variables or the tip-of-the-red-giant-branch (TRGB) technique, such that only information from the relative (not absolute) distance-redshift relation is included. This data set provides matter density information independently of the primary CMB and BAO, which enables estimates of the Hubble constant independent of the sound horizon scale as proposed in <cit.>. Uncalibrated supernovae also constrain potential dark energy evolution and break degeneracies between evolving dark energy and neutrino mass when considering multiple extensions to the baseline ΛCDM model jointly.
§ 3×2PT COVARIANCE
As in <cit.>, we use a simulation derived covariance for the CMB lensing and lensing cross-correlation measurements. <cit.> described how to obtain Gaussian realisations of the galaxy field that exhibit the correct correlations with lensing reconstruction simulations from <cit.> for ACT and <cit.> for Planck. They also detailed the analytic estimates for the cross-covariance between different galaxy samples and the simulations used to estimate the covariance between ACT and Planck cross-correlations, which are obtained by running the ACT lensing reconstruction pipeline on Planck simulations. For this work we additionally adopt the bias subtracted CMB lensing auto-power spectra measured on the same simulations to estimate the cross-covariance between the cross-correlation and auto-correlation measurements.
We find significant correlations of up to 50-60% between the ACT lensing auto-spectrum and the cross-correlations with the ACT lensing map and slightly smaller (≲40%) between the respective Planck spectra. Correlations between C_ℓ^gg and lensing auto-spectra and between cross- and auto-spectra from ACT and Planck respectively (or vice versa) are small (≲20%).
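For orientation, correlation coefficients such as those quoted above follow directly from the simulation-derived covariance. A minimal sketch (assuming a hypothetical array of already-debiased simulated bandpowers) is:

```python
import numpy as np

def covariance_and_correlation(sims):
    """sims: (N_sim, N_band) array, each row a concatenated simulated data vector
    [C_L^gg, C_L^kappa-g, C_L^kappa-kappa]. Returns covariance and correlation matrices."""
    cov = np.cov(sims, rowvar=False)
    sigma = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sigma, sigma)
    return cov, corr
```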
Furthermore, when analysing the low redshift data only (i.e., not in combination with primary CMB data) we propagate uncertainties in the lensing estimator normalisation due to the uncertainty in the CMB two-point functions into additional contributions to the covariance as described in <cit.> and <cit.> for the cross- and auto-spectra respectively. When combining with primary CMB data, we explicitly correct for errors in the normalisation and reconstruction bias subtraction as described in Appendix <ref>.
§ COSMOLOGICAL ANALYSIS
We obtain cosmological constraints by constructing a Gaussian likelihood
-2 lnℒ ∝ ∑_b b' (ΔĈ_b^gg, ΔĈ_b^κ g, ΔĈ_b^κκ) ℂ^-1_b b' (ΔĈ_b'^gg, ΔĈ_b'^κ g, ΔĈ_b'^κκ)^T
where the ΔĈ_b^gg, ΔĈ_b^κ g, and ΔĈ_b^κκ are the residuals between our observed galaxy-galaxy, galaxy-CMB lensing, and CMB lensing-CMB lensing spectra, Ĉ_b^gg, Ĉ_b^κ g, and Ĉ_b^κκ, and the respective binned and band window convolved theory spectra, C_b^gg, C_b^κ g, and C_b^κκ. The covariance ℂ has the form
ℂ_b b' =
( ℂ_b b'^gg-gg          ℂ_b b'^gg-κ g          ℂ_b b'^gg-κκ
  (ℂ_b b'^gg-κ g)^T     ℂ_b b'^κ g-κ g         ℂ_b b'^κ g-κκ
  (ℂ_b b'^gg-κκ)^T      (ℂ_b b'^κ g-κκ)^T      ℂ_b b'^κκ-κκ )
where ℂ_b b'^gg-gg, ℂ_b b'^κ g-κ g, and ℂ_b b'^gg-κ g are the galaxy auto-spectrum covariance, the galaxy-CMB lensing cross-spectrum covariance, and the cross-covariance between them. ℂ_b b'^gg-κκ and ℂ_b b'^κ g-κκ are the cross-covariance between the galaxy-galaxy and galaxy-CMB lensing spectra on one hand and the lensing auto spectrum on the other hand. Finally, ℂ_b b'^κκ-κκ is the CMB lensing power spectrum covariance. These are estimated from simulations as described above in Sec. <ref>. When combining the lensing power spectrum and cross-spectrum likelihood with that for the CMB anisotropy power spectra, we ignore the covariance between the measured lensing and anisotropy spectra, as these are negligible for DR6 noise sensitivities <cit.>.
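A minimal sketch of evaluating this Gaussian likelihood is given below (illustrative names only; it omits the band-window convolution and the cosmology-dependent normalisation and bias corrections described above):

```python
import numpy as np

def loglike_3x2pt(data_vec, theory_vec, cov_inv):
    """data_vec, theory_vec: concatenated [C^gg, C^kappa-g, C^kappa-kappa] bandpowers;
    cov_inv: inverse of the joint covariance assembled above."""
    delta = data_vec - theory_vec
    return -0.5 * delta @ cov_inv @ delta
```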
We infer cosmological parameters via Markov Chain Monte Carlo (MCMC) methods, performing Metropolis-Hastings sampling using the code [<https://github.com/CobayaSampler/cobaya>] <cit.>. We consider MCMC chains to be converged if the Gelman-Rubin statistic <cit.> satisfies R-1 ≤ 0.01 for the cosmological parameters of interest.
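For reference, a simplified single-parameter version of the Gelman-Rubin criterion quoted above is sketched below (the sampler's own implementation differs in detail, e.g. it is evaluated on parameter covariances over chain segments):

```python
import numpy as np

def gelman_rubin_minus_one(chains):
    """chains: (m_chains, n_samples) array of samples of one parameter.
    Returns R-1; chains are considered converged here if R-1 <= 0.01."""
    chains = np.asarray(chains)
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance of the means
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W) - 1.0
```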
We use a hybrid perturbation theory expansion to second order to model the galaxy auto and cross-spectra up to k∼0.3 (see for details). We impose conservative priors on lensing magnification, shot noise, and the parameters of the bias expansion (second order and shear bias) based on simulations. <cit.> showed that these results are insensitive to these prior choices, higher order corrections contribute at most at the few percent level to the signal, and even neglecting all higher order terms leads to only minor shifts in inferred parameters (≪1σ). We also marginalise over uncertainties in the redshift distribution of unWISE galaxies (see for details). The CMB lensing power spectrum is modeled using the non-linear matter power spectrum from (see for details).
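For orientation, the leading (linear-bias, Limber-approximation) contribution to the modelled cross-spectrum has the structure sketched below; this is illustrative only, whereas the actual model includes the second-order bias, shear bias, lensing magnification, and shot-noise terms described above:

```python
import numpy as np

def limber_cl_kappa_g(ells, chi, W_gal, W_kappa, P_mm):
    """Limber integral C_ell ~ int dchi W_gal(chi) W_kappa(chi) / chi^2 * P_mm(k=(ell+1/2)/chi, chi).
    chi: comoving distance grid; W_gal: linear-bias-weighted galaxy kernel b(z) n(z) dz/dchi;
    W_kappa: CMB lensing kernel; P_mm(k, chi): matter power spectrum callable (all illustrative)."""
    dchi = np.gradient(chi)
    cl = np.zeros(len(ells))
    for i, ell in enumerate(ells):
        k = (ell + 0.5) / chi
        cl[i] = np.sum(W_gal * W_kappa / chi**2 * P_mm(k, chi) * dchi)
    return cl
```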
§.§ Priors
In our baseline analysis we consider a spatially flat ΛCDM universe with massive neutrinos of the minimum mass allowed in the normal hierarchy (∑ m_ν = 0.06 eV). When analysing only low redshift data (i.e., when not including observations of the primary CMB) our analysis is insensitive to the optical depth to reionisation, τ, and we thus fix it to the mean value obtained in <cit.>. For the remaining five cosmological parameters we adopt the priors from <cit.>, sampling the logarithm of the scalar perturbation amplitude, log(10^10 A_s), the primordial spectral tilt, n_s, the physical density in baryons and cold dark matter, Ω_b h^2 and Ω_c h^2, and the angular size of the sound horizon at recombination, θ_ MC. We place flat priors on all parameters except the baryon density and the primordial spectral tilt, which are only weakly constrained by our observations. We choose a prior motivated by Big Bang Nucleosynthesis (BBN) measurements of deuterium abundance from <cit.> for Ω_b h^2 (see Table <ref>). As pointed out in <cit.> the primordial spectral tilt and the amplitude of fluctuations are somewhat degenerate given only a measurement of the projected lensing auto- or cross-spectra. We conservatively adopt a prior centred on but also about five times broader than the n_s constraint obtained from Planck measurements of the CMB anisotropy power spectra in the ΛCDM model <cit.>, and two times broader than constraints obtained there from various extensions of ΛCDM. To avoid exploring unphysical parts of parameter space we furthermore limit the range over which H_0 may vary to between 40 and 100 km s^-1 Mpc^-1. This limitation is relevant for some of the CMB lensing auto-spectrum-only runs we perform for comparison; all other analyses do not allow for such a wide range of H_0 values.
We note that these prior choices differ somewhat from those adopted in <cit.>, where the baryon density and the spectral tilt were fixed to the Planck best-fit values. Furthermore, this previous work fixed the parameter combination Ω_m h^3, which within ΛCDM is closely related to θ_ MC <cit.>, to the Planck best-fit value.
When combining our low redshift data sets with primary CMB observations discussed in Sec. <ref> we remove the informative priors on Ω_b h^2 and n_s and additionally also sample the optical depth to reionsiation with a uniform prior. When exploring beyond-ΛCDM extensions we adopt the same priors as in <cit.>. All priors are summarised in Table <ref>.
§ CONSTRAINTS ON COSMOLOGY
In the following section we present our constraints on cosmological parameters obtained from jointly analysing the ACT and Planck lensing auto- and cross-spectra together with the auto-correlation of the unWISE galaxies and in some cases external data as discussed in Sec. <ref>. We begin by outlining our findings in the context of a flat ΛCDM model before discussing potential extensions beyond this model including non-minimum mass neutrinos, dark energy with equation of state w ≠ -1, and spatial curvature.
§.§ Constraints on flat Λ CDM
In the context of a flat ΛCDM cosmology the weak lensing and galaxy clustering is primarily sensitive to the amplitude of matter density fluctuations in the late universe. Different probes derive their information from different scales and redshifts, providing complementary information on the evolution of matter density perturbations. As discussed in <cit.> the lensing cross-correlations primarily derive their information from linear and moderately non-linear scales 0.05 ≲ k ≲ 0.3 and redshifts z ≃ 0.2 - 1.6. By contrast the lensing auto-spectrum is sensitive to a large range of redshifts between approximately 0.5 and 5 and is dominated by linear scales <cit.>. With the combination of these two data sets we can thus probe a wide range of redshifts from z ≃ 0.2 to 5 while focusing mostly on linear and mildly non-linear scales.
These results can be compared for example to recent results from galaxy weak lensing surveys <cit.>. These are largely sensitive to smaller scales, but partially overlap with the lower end of the redshift range probed here <cit.>. As described above, such surveys have consistently found a somewhat smaller amplitude of matter density fluctuations than expected within the flat ΛCDM model fit to the primary CMB data from Planck.
In addition to constraints on matter density fluctuations, our data are also sensitive to the Universe's expansion which sets the distances to the observed structures. Features in the distribution of matter of known physical size can be used to constrain the distance to the observed structures independently of redshift estimates and therefore provide the opportunity to constrain the present day expansion rate of the Universe, H_0. Two such scales are imprinted onto the matter density field in the early universe: The sound horizon scale, the distance a sound wave may travel in the early universe prior to recombination (up to z ≃ 1100), and the matter-radiation equality scale, related to the size of the horizon when the matter and radiation densities become equal (z ≃ 3500).
Our data are not directly sensitive to the former, as it manifests through oscillations in the power spectrum which are largely smoothed out due to the projection over a wide range of redshifts. However, the expansion rate can be estimated from the sound horizon feature using BAO observation from spectroscopic galaxy surveys (see e.g., those discussed in Sec. <ref>). Such measurements suffer from significant degeneracies between H_0 and the global matter density Ω_m. Our data, in which Ω_m and H_0 have a different degeneracy direction, can therefore serve to improve BAO derived H_0 estimates.
On the other hand, our data are directly sensitive to the latter scale, which is related to the turnover of the matter power spectrum. It was pointed out in <cit.> that CMB lensing data could therefore be used to obtain constraints on the expansion rate independent of the sound horizon scale. This scale has been the target of several modifications to the ΛCDM model which aim to resolve the tension between CMB-derived H_0 estimates (which also depend on the sound horizon) and local measurements using the distance ladder <cit.>. This measurement equally suffers from the H_0-Ω_m degeneracy, but the direction of the degeneracy varies with redshift and with the scales probed. Combining lensing auto- and cross-spectrum information thus partially alleviates this degeneracy, allowing us to derive a constraint on H_0 from our CMB lensing and galaxy clustering data alone. However, the degeneracy is more effectively broken by the addition of uncalibrated supernovae data.
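As a rough illustration of the physical scale involved, the comoving equality scale k_eq = a_eq H(a_eq) can be evaluated from Ω_m and h alone. The sketch below uses a standard fitting formula for the equality redshift and is purely illustrative; the analysis itself relies on full Boltzmann-code predictions:

```python
import numpy as np

def k_eq(Om=0.315, h=0.674, Tcmb=2.7255):
    """Comoving matter-radiation equality scale in 1/Mpc, k_eq = a_eq * H(a_eq)."""
    c = 299792.458                                      # speed of light [km/s]
    one_plus_zeq = 2.5e4 * Om * h**2 * (Tcmb / 2.7)**-4 # equality redshift (fitting formula)
    H0 = 100.0 * h                                      # [km/s/Mpc]
    return np.sqrt(2.0 * Om * one_plus_zeq) * H0 / c    # ~0.010 / Mpc for Planck-like parameters

print(f"k_eq ~ {k_eq():.4f} Mpc^-1")
```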
§.§.§ Constraints on structure growth
Within the ΛCDM model the parameter combination best constrained by the combination of CMB lensing auto-spectrum data and our cross-correlation measurements is what we have defined as S_8^ 3x2pt≡σ_8 (Ω_m/0.3)^0.4. With lensing data from ACT alone (ACT DR6 C_ℓ^κκ + ACT DR6 × unWISE C_ℓ^κ g + unWISE C_ℓ^gg) we constrain this parameter combination to ∼1.8%,[Here and for the remainder of the section, we will label the 3x2pt analyses using the ACT DR6 or Planck PR4 CMB lensing data sets (or their combination) as ACT+ unWISE or Planck+ unWISE (ACT+Planck+ unWISE) respectively. In the subsequent sections where we only consider the joint 3x2pt data set using ACT and Planck CMB lensing data we will simply refer to these as `3x2pt'. Meanwhile, we label the primary CMB from Planck PR4 simply as `CMB'.]
S_8^ 3x2pt = 0.819 ± 0.015 ( ACT+ unWISE).
This can be compared to results using Planck PR4 lensing data only
S_8^ 3x2pt = 0.803 ± 0.015 (Planck+ unWISE),
a 1.9% constraint. The combination of ACT and Planck data tightens these constraints by a factor of ∼1.25 to
S_8^ 3x2pt = 0.815 ± 0.012 ( ACT+Planck+ unWISE),
For comparison, the best constrained parameter in the lensing-only analysis is S_8^ CMBL≡σ_8 (Ω_m/0.3)^0.25, while the cross-correlation analysis with unWISE using C_ℓ^κ g and C_ℓ^gg best constrains S_8^×≡σ_8 (Ω_m/0.3)^0.45. These parameters are constrained to 2.2% and 1.7% from those analyses respectively (using ACT and Planck data jointly in both cases) and improvements on these parameters by moving to the 3x2pt analysis are small.
For better comparability with galaxy weak lensing surveys, which commonly report the parameter combination S_8≡σ_8 (Ω_m/0.3)^0.5, we also present constraints on this parameter. We find the following constraints
S_8 = 0.820 ± 0.021 ( ACT+ unWISE),
S_8 = 0.806 ± 0.018 (Planck+ unWISE), and
S_8 = 0.816 ± 0.015 ( ACT+Planck+ unWISE).
We show the constraints from our 3x2pt analysis using ACT, Planck, or ACT+Planck in comparison with primary CMB constraints from Planck in Fig. <ref>.
To directly constrain σ_8 we need to break the degeneracy with Ω_m. We have two ways of doing this: either by adding BAO data which provide constraints primarily in the Ω_m-H_0 parameter space or by adding uncalibrated supernovae which constrain Ω_m. With BAO we find
σ_8 = 0.815 ± 0.013
( ACT+ unWISE+ BAO),
σ_8 = 0.806 ± 0.013
(Planck+ unWISE + BAO), and
σ_8 = 0.815 ± 0.012
( ACT+Planck+ unWISE + BAO)
compared to
σ_8 = 0.796 ± 0.020
( ACT + unWISE + SN),
σ_8 = 0.780 ± 0.019
(Planck + unWISE + SN), and
σ_8 = 0.794 ± 0.016
( ACT+Planck + unWISE + SN)
with supernovae data.
Our constraints in combination with BAO are primarily limited by lack of knowledge of n_s. As noted above we adopted a conservative prior of σ(n_s)=0.02 which is about five times wider than the constraint from Planck<cit.>. We investigate instead adopting a more aggressive prior of σ(n_s)=0.01 corresponding approximately to the largest 1σ-posterior found for various beyond-ΛCDM extensions examined by Planck<cit.>. This tightens the constraints on σ_8
σ_8 = 0.815 ± 0.012
( ACT + unWISE + BAO),
σ_8 = 0.805 ± 0.012
(Planck + unWISE + BAO), and
σ_8 = 0.814 ± 0.010
( ACT+Planck + unWISE + BAO).
As we can see, the constraints from ACT or Planck data alone, while affected by the n_s prior, are less sensitive to this choice than the combination of both, which is increasingly limited by our conservative prior and improves by ∼20% when the prior on n_s is tightened.
Given the agreement with primary CMB predictions reported in <cit.> and <cit.> we do not expect strong disagreement with Planck constraints from the primary CMB. Indeed, our lensing constraints are consistent with the primary CMB data sets discussed in Sec. <ref>. From the primary CMB we have S_8^ 3x2pt = 0.826 ± 0.012 (S_8=0.830 ± 0.014, σ_8=0.8107 ± 0.0064) about 0.6σ (0.7σ, 0.3σ or 1σ comparing to the analyses with BAO or SN respectively) away from the joint result reported above.
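A rough version of such comparisons can be obtained with a simple Gaussian difference metric, as sketched below (illustrative; it ignores correlations between the data sets and non-Gaussianity of the posteriors):

```python
import numpy as np

def tension_sigma(x1, err1, x2, err2):
    """Gaussian estimate of the difference between two independent measurements, in sigma."""
    return abs(x1 - x2) / np.sqrt(err1**2 + err2**2)

# e.g. S_8^3x2pt from this work vs. the primary-CMB value quoted above:
print(f"{tension_sigma(0.815, 0.012, 0.826, 0.012):.1f} sigma")   # ~0.6 sigma
```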
We also combine our data with the primary CMB to further break degeneracies. We find even tighter constraints on σ_8 of
σ_8 = 0.8124 ± 0.0048
( ACT + unWISE + CMB),
σ_8 = 0.8105 ± 0.0047
(Planck + unWISE + CMB), and
σ_8 = 0.8127 ± 0.0044
( ACT+Planck + unWISE + CMB).
This constraint represents a ∼30% improvement over the constraint from Planck primary CMB data alone, demonstrating that with precise knowledge of the scalar spectral index, the matter density, and other cosmological parameters our data provide a powerful probe of matter density fluctuations. We also note, however, that this result only represents a marginal, ∼2.5% improvement over the combination of Planck primary CMB data with the ACT and Planck lensing auto-spectra. This is not entirely unexpected given that within the ΛCDM model low redshift structure formation is constrained tightly by the lensing auto-spectrum alone.
§.§.§ Constraints on Hubble expansion
As discussed above our data can be used in two ways to shed light on the expansion rate of the Universe, H_0. Firstly, when combining our data with BAO to break the degeneracy between H_0 and Ω_m we find
H_0 = 67.53± 0.80
( ACT + unWISE + BAO + BBN),
H_0 = 67.24± 0.78
(Planck + unWISE+ BAO + BBN), and
H_0 = 67.35± 0.78
( ACT+Planck + unWISE + BAO + BBN).
We note that these constraints depend sensitively on the BBN prior on the baryon density discussed in Sec. <ref>, which is crucial in determining the sound horizon size. These results are consistent with results from the primary CMB (H_0=67.32 ± 0.51 from the Planck primary CMB), approximately 20% tighter than results from BOSS BAO alone (H_0=67.33±0.98 when including the BBN prior used in this work as well; ), and comparable to the constraint from the newer DESI BAO measurements (H_0=68.53±0.80 when calibrating the sound horizon ruler with BBN information; ).

Secondly, we can use the fact that, as discussed above, our data are not directly sensitive to the sound horizon to place independent constraints on the Hubble rate through measurements of the matter-radiation equality scale.
From the 3x2pt data alone we find
H_0 = 68.4^+5.2_-6.5
( ACT + unWISE),
H_0 = 65.4^+3.3_-4.0
(Planck + unWISE), and
H_0 = 66.5^+3.2_-3.7
( ACT+Planck + unWISE).
When combining our data with uncalibrated supernovae to further break the H_0-Ω_m degeneracy we obtain
H_0 = 64.8^+2.6_-3.0
( ACT+ unWISE + SN),
H_0 = 63.5^+2.2_-2.4
(Planck+ unWISE + SN), and
H_0 = 64.3^+2.1_-2.4
( ACT+Planck+ unWISE + SN).
The posteriors on H_0 and degeneracies with Ω_m are shown in Fig. <ref>.
§.§.§ Comparison of ΛCDM constraints with external analyses
As discussed in Sec. <ref> our constraints on the amplitude of matter density fluctuations are in excellent agreement with model predictions from a flat ΛCDM model (with minimum neutrino mass, ∑ m_ν = 0.06eV) fit to the primary CMB from Planck (see Sec. <ref> for a detailed description of the data sets used). In Figs. <ref> and <ref> we show these comparisons graphically and additionally show a comparison with independent primary CMB observations from the combination of WMAP and ACT <cit.> (shown in magenta in Figs. <ref> and <ref>).
We also provide comparisons to a wider set of other large scale structure probes. We include results from CMB lensing auto-spectrum analyses, galaxy weak lensing surveys (DES, KiDS and HSC), other cross-correlations with CMB lensing from ACT, SPT, and Planck, and from the three dimensional clustering of galaxies from BOSS and eBOSS. We subsequently briefly introduce the datasets we employ:
* CMBL: These are constraints from the analysis of the auto-spectrum of CMB lensing reconstructions (shown in green in Figs. <ref> and <ref>). These results are mostly sensitive to linear scales at z=1-2, but with a broad tail to higher redshifts. CMB lensing primarily constrains the parameter combination σ_8 Ω_m^0.25. In addition to the two CMB lensing auto-spectra which also enter our analysis, Planck PR4 <cit.> and ACT DR6 <cit.>, we also compare to the independent analysis from SPT-3G<cit.>. To make fair comparisons we combine the CMB lensing measurements with BAO information which breaks the degeneracy between σ_8 and Ω_m.
* CMBLX: We also compare our results with cross-correlations between CMB lensing and galaxy surveys (using only the galaxy-CMB lensing cross-correlation and the galaxy-galaxy auto-correlation; 2x2pt). These are shown in red in Figs. <ref> and <ref>. We use results from <cit.> and <cit.> which analysed the cross-correlation between DESI LRG targets and the ACT DR6 lensing reconstruction. <cit.> also present a reanalysis of the DESI LRGs' cross-correlation with Planck PR4 updating the results from <cit.>.
We also include work cross-correlating DES-Y3 galaxy shear (γ) and galaxy densities (δ_g) with a joint SPT and Planck lensing reconstruction based on a much smaller survey footprint than considered here<cit.>. The joint lensing reconstruction from SPT-SZ and Planck is presented in <cit.>. A cross-correlation between DES-Y3 clustering (δ_g) and ACT DR4 lensing <cit.> based on the lensing reconstruction from <cit.> is also included. While all aforementioned analyses use photometric galaxy samples in their cross-correlations, the final cross-correlation study included in our comparisons, <cit.>, jointly models the three dimensional clustering of the spectroscopic BOSS galaxies and their cross-correlation with Planck.
* GL: From galaxy weak lensing surveys we include constraints from `3x2pt'-analyses, combining measurements of galaxy clustering, galaxy shear, and their cross-correlation (shown in blue in Figs. <ref> and <ref>). For DES we adopt the results obtained in <cit.> when fixing the neutrino mass. For KiDS we show results presented in <cit.>. We note that in contrast to the DES analysis the KiDS results are obtained from the combination of projected galaxy shear and galaxy-galaxy lensing measurements with a measurement of the three dimensional clustering of galaxies in the spectroscopic BOSS and 2dFLenS surveys. Therefore, the KiDS analysis already contains the BAO information. For HSC we adopt a set of results based on measurements of the galaxy shear, galaxy-galaxy lensing, and galaxy clustering from <cit.>. We show results for an analysis using a linear bias model <cit.> and from an analysis using a halo model based emulator which includes non-linear scales <cit.>.
* GC: Finally, we compare our results with constraints from the three dimensional clustering of galaxies as measured by BOSS and eBOSS (shown in gold in Figs. <ref> and <ref>). We include the analysis of BAO and Redshift Space Distortions (RSD) from <cit.>. Furthermore, we include an independent analysis based on the effective theory of large scale structure (EFTofLSS) that fits the `full shape' of the power spectrum and bispectrum measured in redshift-space <cit.>. Similar, previous analyses found compatible results <cit.>.
Comparisons for the S_8 and σ_8 parameters are shown in Figs. <ref> and <ref>. We also show comparisons in the σ_8-Ω_m plane for a few selected external data sets in Fig. <ref>. Generally we find good agreement with the CMB lensing results, although we note only the estimate from SPT-3G is not already included in our 3x2pt analysis. When comparing to the four different galaxy weak lensing 3x2pt analyses we find that these generally favour a lower value of S_8 at moderate significance between ∼1σ and 2.3σ. The same is true for the other CMB lensing cross-correlations we compare our results to. From the DESI LRG targets we find a ∼1.6σ lower value of S_8 using CMB lensing data from ACT DR6 and Planck PR4 (0.9σ and 1.9σ for ACT DR6 and Planck PR4 alone respectively).
Using DES galaxies and cosmic shear together with the SPT+Planck lensing reconstruction also yields a ∼2.4σ lower value for S_8, while the discrepancy with the cross-correlation between DES and ACT DR4 is slightly less significant (1.4σ). The results from the cross-correlation of spectroscopic galaxies from BOSS with Planck PR3 lensing also yields a value of S_8 that is 2.7σ lower than our inference. Meanwhile the disagreement with galaxy clustering is less significant (0.6-1.0σ).
Where available we also consider results that directly constrain σ_8 either by combining with BAO or because they include the BAO information as part of a three dimensional clustering analysis and compare these to our analysis with BAO. We broadly find similar levels of agreement/discrepancy as for S_8 except in the case of the `full-shape' galaxy clustering analysis which finds a value of σ_8 about 2.3 σ lower than our results.
We conclude that while our results are in good agreement with primary CMB and CMB lensing results, there are some moderate discrepancies with other large scale structure tracers from galaxy weak lensing surveys and (mostly lower redshift) CMB lensing cross-correlations. However, as we can see in Fig. <ref>, the posteriors for several of these data sets have significant overlap with our results in the σ_8-Ω_m plane. The projection onto S_8 slightly exaggerates the level of disagreement. At current levels of precision the various data sets are in broad agreement and we are unable to conclusively rule out statistical fluctuations as the source of the observed discrepancies.
Furthermore, we also compare our constraints on the Hubble constant to a range of external measurements. This includes model dependent measurements based on the apparent size of the sound horizon such as those from the primary CMB <cit.> and BAO <cit.>, as well as measurements based on the matter-radiation equality scale from <cit.> and <cit.>. Finally, we compare to several local measurements of H_0, including the TDCOSMO strong-lensing time-delay measurement with marginalisation over lens profiles <cit.>, an alternative TDCOSMO measurement with different lens-mass assumptions <cit.>, the Cepheid-calibrated supernovae measurement <cit.>, the TRGB-calibrated supernovae measurement <cit.>[We note that a more recent TRGB-calibrated measurement using data from the James Web Space Telescope finds similar values of H_0 but with slightly larger uncertainties due to a smaller sample size <cit.>.], and recent results employing calibration based on observation of asymptotic giant branch stars in the J-band <cit.>.
We find good agreement with other equality scale based measurements. The constraint from the 3x2pt data alone is in excellent agreement with sound horizon based measurements while the measurements including the supernovae data set yield values about 1.3σ lower than the value inferred from the CMB, but in no statistically significant tension. At the same time the combination of 3x2pt data and supernovae is in ∼3.6σ tension with local measurements of H_0 from the Cepheid-calibrated supernovae (). We find no significant tension with TRGB- or JAGB-calibrated supernovae measurements (∼1.7σ and ∼0.8σ respectively); the change is driven in part by larger uncertainties but also due to a lower central value of H_0 inferred with these methods.
§.§ Reconstructing the growth of perturbations
In addition to measuring the growth of structure within the ΛCDM model through constraints on a single parameter, S_8^ 3x2pt or σ_8, we are able to derive information on the evolution of structures over time by leveraging the different redshift sensitivity of the cross-correlations with the two unWISE samples and the CMB lensing auto-correlation. In Fig. <ref> we show the constraints from the two cross-correlations and the CMB lensing auto-correlation, each analysed jointly with BAO. The redshift kernels shown at the bottom of the top panel give an indication of the redshift sensitivity of the samples, given by the fractional contribution to the signal-to-noise, ∂log SNR/∂ z. The computation of ∂log SNR/∂ z includes an approximate marginalisation over galaxy nuisance parameters, achieved by linearising the model for C_ℓ^gg and C_ℓ^κ g in small fluctuations around the best-fit linear bias and shot noise and propagating the uncertainty in these parameters to the covariance matrix. We adopt the median of the redshift sensitivity kernel to represent the effective redshift of each of the three measurements and compute σ_8(z) as σ_8(z=0)D(z), where D(z) is the linear growth function, which is primarily dependent on Ω_m. These results are also summarised in Table <ref>. We find excellent agreement with the growth of structures predicted by the ΛCDM model fit to the primary CMB from Planck (grey band in Fig. <ref>). However, as can be easily seen from Fig. <ref>, the three samples have significant redshift overlap. In particular, while the median redshift of the measurement from the lensing auto-spectrum is z_ Med≃ 3.5, it receives significant contributions from lower redshifts where we also have information from the cross-correlation measurements. To optimally combine the available information we explore a reconstruction of the growth of (linear) perturbations with redshift through a parametric form of σ_8(z) which we constrain jointly with all three samples, taking into account their overlap. With this method we are able to use the cross-correlation measurements to constrain the low redshift contribution to the lensing auto-spectrum and extract information on the integrated growth of structure at high redshifts, above the two galaxy samples (z≳ 2.4).
Due to the broad redshift kernels of our data sets we cannot constrain the growth of perturbations with arbitrary resolution. Instead we adopt the following simple parametrisation similar to <cit.>: We rescale the linear power spectrum as follows
P_lin^new(k, z) = P_lin^input(k, z) A(z),
with A(z) = A_0 for 0 ≤ z < z_1, A_1 for z_1 ≤ z < z_2, and A_2 for z_2 ≤ z,
where P_lin^input(k, z) is the linear matter power spectrum computed with a Boltzmann code at a given cosmology and A_0, A_1, and A_2 are free parameters. The redshift bins, z_1=1.15 and z_2=2.4, are motivated by the redshift origin of the signal for the two cross-correlation measurements and are chosen to separate the two samples as optimally as possible. A_0 is primarily constrained by the Blue sample of unWISE galaxies, while the Green sample is primarily sensitive to A_1. The lensing auto-correlation receives contributions from a wide range of redshifts, but in combination with the two cross-correlation measurements allows us to constrain the amplitude at high redshift, A_2.
We marginalise over A_i with uniform priors in the range 0 to 2. The parameters of interest are then σ_8 √(A_i). Since our model depends on the non-linear matter power spectrum (see <cit.> and <cit.> for detailed descriptions of the models used to fit the lensing auto- and cross-correlations respectively) we take
P_non-lin^new(k, z) = P_lin^new(k, z) + P_non-lin^input(k, z) - P_lin^input(k, z)
= P_lin^input(k, z) [A(z) - 1] + P_non-lin^input(k, z).
The non-linear contributions to the matter power spectrum have a non-trivial dependence on the amplitude of linear fluctuations. In our parametrisation σ_8 on its own (in contrast to its product with √(A_i)) is only constrained by the size of the non-linear terms. This approach effectively allows us to marginalise over the size of these non-linear contributions.
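For illustration, this rescaling can be implemented in a few lines; the sketch below is our own schematic rather than the analysis pipeline, with placeholder arrays standing in for the tabulated spectra, arbitrary amplitude values, and the bin edges z_1=1.15 and z_2=2.4 taken from the text.

```python
import numpy as np

# Schematic implementation of the piecewise amplitude rescaling described above.
# P_lin and P_nonlin stand in for spectra tabulated on a (k, z) grid by a
# Boltzmann code; here they are placeholder arrays.
z = np.linspace(0.0, 10.0, 201)                  # redshift grid
k = np.logspace(-4, 1, 300)                      # wavenumbers
P_lin = np.ones((k.size, z.size))                # placeholder linear P(k, z)
P_nonlin = 1.1 * np.ones((k.size, z.size))       # placeholder non-linear P(k, z)

def rescaled_spectra(P_lin, P_nonlin, z, A0, A1, A2, z1=1.15, z2=2.4):
    """Return (P_lin_new, P_nonlin_new) under the piecewise amplitude A(z)."""
    A = np.where(z < z1, A0, np.where(z < z2, A1, A2))   # A(z) on the z grid
    P_lin_new = P_lin * A                                 # P_lin^new = A(z) P_lin^input
    # Non-linear prescription: P_nonlin^new = P_lin^input (A - 1) + P_nonlin^input
    P_nonlin_new = P_lin * (A - 1.0) + P_nonlin
    return P_lin_new, P_nonlin_new

Pl_new, Pnl_new = rescaled_spectra(P_lin, P_nonlin, z, A0=0.9, A1=1.0, A2=1.1)
print(Pl_new.shape, Pnl_new.shape)
```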
Fig. <ref> shows the results of this analysis in the form of the posteriors on σ_8 √(A_i) for the 3x2pt, 2x2pt and lensing-only data sets. In each case we combine with BAO data (described in Sec. <ref>) to break the degeneracy between σ_8 and Ω_m. We find good consistency with the amplitude of fluctuations predicted by a ΛCDM fit to the primary CMB in each of the three bins.
We can also use these results to reconstruct σ_8(z), which we show in Fig. <ref>. We show the 1σ confidence intervals from our inference chains on the combination √(A_i)σ_8(z=0) D(z) within the relevant bins. We do not separately constrain the shape of D(z) within each bin, but rather assume a single D(z) across all bins with its shape determined primarily by Ω_m. Within each bin we compute the median redshift of the joint signal as we did above for the individual measurements. The parameter constraints are summarised in Table <ref>. The representative constraints at the median signal redshift can be compared to results from the literature, for example from the cross-correlation of CMB lensing with DESI LRGs <cit.> or the Quaia quasar catalog <cit.>. Our results are broadly consistent with these external reconstructions across the entire redshift range. At low redshifts (z≲1) we achieve similar precision (∼4%) to constraints presented in <cit.> but cannot match the redshift resolution of the DESI samples. In the redshift range 1 ≲ z ≲ 2.5 our results represent some of the tightest constraints on the amplitude of matter density fluctuations, tighter than results in a similar redshift range from <cit.> by a factor of about 4. The main power of our method, however, is the ability to place constraints on structure formation at higher redshifts, typically inaccessible with cross-correlation analysis. The median signal redshift within our highest redshift bin is z_ Med = 5.6, significantly higher than the highest redshift constraint available from cross-correlation with Quaia (z=2.7). In this high redshift range we obtain a ∼4% constraint on the amplitude of matter density fluctuations, consistent with predictions based on a ΛCDM fit to the primary CMB from Planck.
As one can see in Fig. <ref> some residual correlations remain between the redshift bins. We find that bin 1 and 2 are about 37% correlated while bin 2 and 3 are 45% correlated. The correlation between bin 1 and 3 is about 24%.
When using only the lensing auto- and cross-correlations as well as the unWISE auto-correlation, but without BAO, σ_8 √(A_i) is degenerate with Ω_m, as expected. In each of the three bins we therefore determine the best constrained S_8-equivalent combination for our 3x2pt data alone. Using all lensing data from ACT and Planck we find
σ_8 √(A_0) (Ω_m/0.3)^0.66 = 0.784± 0.035,
σ_8 √(A_1) (Ω_m/0.3)^0.49 = 0.839± 0.042, and
σ_8 √(A_2) (Ω_m/0.3)^0.39 = 0.783± 0.033.
We can see that, as expected, the lower redshift results are more degenerate with Ω_m.
§.§ Constraints on beyond Λ CDM cosmologies
We explore several extensions beyond the standard flat ΛCDM cosmology with minimum mass neutrinos. Generally we combine our data for this purpose with primary CMB observations. We also add BAO and supernovae data in some cases. Unless otherwise noted the constraints reported in the following sections consider the 3x2pt data set using both ACT and Planck lensing data. We summarise the constraints on the beyond ΛCDM parameters in Table <ref>.
§.§.§ νΛ CDM
Observations of neutrino flavour oscillations require that neutrinos have non-vanishing mass. Since any mechanism to give neutrinos mass requires physics beyond the standard model, this has significant implications for our understanding of fundamental physics. However, oscillation experiments only constrain the difference between the squared masses of the three mass eigenstates, Δ_12 m^2 ≡ m_1^2 - m_2^2 and |Δ_32 m^2 | where |Δ_32 m^2 |≫Δ_12 m^2, but not the absolute mass scale. Current constraints dictate that the sum of the neutrino masses ∑ m_ν is at least 0.058eV in what is known as the normal hierarchy (Δ_32 m^2>0; two of the masses are significantly smaller than the third) or at least about 0.1eV in the inverted hierarchy (Δ_32 m^2<0; two of the masses are significantly larger than the third).
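As a simple worked example of where these lower limits come from, the following sketch evaluates the minimum mass sums in the two orderings; the splitting values used are representative oscillation-experiment numbers assumed here for illustration, not inputs of this analysis.

```python
import numpy as np

# Representative oscillation mass splittings (assumed values, in eV^2).
dm21_sq = 7.4e-5    # solar splitting, m2^2 - m1^2
dm31_sq = 2.5e-3    # atmospheric splitting magnitude, |m3^2 - m1^2|

# Normal hierarchy: lightest state m1 -> 0, so m2 ~ sqrt(dm21_sq), m3 ~ sqrt(dm31_sq).
sum_normal = 0.0 + np.sqrt(dm21_sq) + np.sqrt(dm31_sq)

# Inverted hierarchy: lightest state m3 -> 0, so m1 ~ m2 ~ sqrt(dm31_sq).
sum_inverted = np.sqrt(dm31_sq) + np.sqrt(dm31_sq + dm21_sq) + 0.0

print(f"minimum sum, normal hierarchy:   {sum_normal:.3f} eV")    # ~0.06 eV
print(f"minimum sum, inverted hierarchy: {sum_inverted:.3f} eV")  # ~0.10 eV
```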
Direct detection experiments like <cit.> have placed upper limits on the effective electron anti-neutrino mass, m_ν_e<0.8eV at 90% confidence, from tritium beta decay observations. However, the most stringent limits on the sum of the neutrino masses are derived from cosmological observations. After neutrinos become non-relativistic they cluster similarly to CDM on scales above the neutrino free streaming length, but on smaller scales they suppress the growth of perturbations due to their large velocity dispersion. Meanwhile the neutrino energy density also contributes to the Universe's background expansion. The scale dependence of the suppression effect is only poorly constrained by current generation data like ours, but we are sensitive to the overall suppression of the formation of structure.
For this purpose we combine our lensing auto- and cross-spectrum information which probes the matter power spectrum at late times, after neutrinos have become non-relativistic, with primary CMB data which provides information on the early time matter power spectrum before suppression due to massive neutrinos can manifest. We furthermore also add BAO information to probe the background expansion and break degeneracies, for example with the total matter density. It should be noted that the amplitude of the primordial power spectrum extracted from the small scale CMB power spectrum is completely degenerate with the optical depth to reionisation, τ. It is thus important to include the large scale CMB polarisation data from the analysis discussed in Sec. <ref>.
We extend the flat ΛCDM model by a single free parameter, ∑ m_ν, corresponding to the sum of the neutrino masses[As in <cit.> we follow <cit.> and <cit.> and consider a degenerate combination of three equally massive neutrinos.]. Using ACT and Planck lensing and cross-correlation data we find an upper limit on the sum of the neutrino masses of
∑ m_ν < 0.124 eV at 95% confidence
( 3x2pt + CMB + BAO).
This represents a small improvement over the constraint reported in <cit.> (∑ m_ν < 0.13 eV; 95% C.I.) based on the lensing power spectrum only. However, the difference is driven by the additional, higher redshift, BAO likelihoods used in this work which favour a lower value of Ω_m. When reanalysing the lensing auto-spectrum with these likelihoods we find ∑ m_ν < 0.123 eV; 95% C.I. (see also Fig. <ref>). The lack of improvement over auto-spectrum only constraints is not unexpected given that we find a slightly smaller amplitude of matter density fluctuations in the combined analysis of lensing auto- and cross-correlation data than from the auto-correlation alone, compatible with slightly more suppression due to massive neutrinos and therefore a larger neutrino mass. On idealised mock observations we show that, assuming minimum mass normal hierarchy neutrinos, the 3x2pt analysis leads to a marginally (∼5%) tighter upper limit on the neutrino mass sum than the C_ℓ^κκ-only analysis.
When including supernovae observations in addition to BAO and primary CMB, the data favours a slightly larger matter density. Since the quantity best determined by our data is approximately S_8^ 3x2pt∝σ_8 Ω_m^0.4 this results in a lower inference for σ_8 (see also results in Sec. <ref>) and compatibility with larger neutrino induced power suppression. Hence the posterior on ∑ m_ν is shifted to larger values which leads to a small degradation in the one sided constraint to
∑ m_ν < 0.137 eV at 95% c.l.
( 3x2pt + CMB + BAO + SN).
When replacing our baseline BAO likelihoods from 6dFGS, SDSS, BOSS, and eBOSS with the new DESI BAO likelihood we find significantly tighter constraints on the neutrino mass. From the 3x2pt data with primary CMB and DESI BAO from the first year of data we obtain
∑ m_ν < 0.082 eV at 95% confidence
( 3x2pt + CMB + DESI BAO)
This represents only a very marginal improvement over constraints from the lensing auto-spectrum only (∑ m_ν < 0.083 eV at 95% confidence from C_ℓ^κκ + CMB + DESI BAO using the ACT DR6 and Planck PR4 lensing power spectra[Note that this is a slightly weaker constraint than presented in <cit.> for two reasons. Firstly, we use the Planck PR4 likelihood as our default high-ℓ CMB likelihood, and secondly, the older version of the ACT lensing likelihood used in that work effectively deactivated the lensing norm corrections discussed in Appendix <ref>.]). The 3x2pt results formally disfavour the minimum mass allowed in the inverted hierarchy (min_ inv.[∑ m_ν] = 0.1eV) at about 98% confidence. We note that the significantly tighter constraint is in part explained by the DESI preference for a slightly lower value of Ω_m, leading to a larger value of σ_8 at fixed lensing amplitude, requiring less suppression due to neutrinos.
§.§.§ Non-flat Λ CDM
Within the standard model of cosmology, the Universe is predicted to be spatially flat as a consequence of inflation. However, primary CMB data from Planck has at times shown a preference for a closed universe with the curvature parameter, Ω_k<0<cit.>. Since the primary CMB anisotropies alone do not constrain curvature due to a “geometric degeneracy”<cit.> this preference is driven entirely by the lensing-induced peak smearing in Planck measurements of the CMB anisotropies which has been shown to be somewhat larger than expected from the measured lensing power spectrum <cit.>. This preference is not present in independent analyses of CMB anisotropy measurements from ACT+WMAP<cit.> and also disappears in combination with BAO.
When combining CMB lensing and unWISE cross-correlation data with primary CMB anisotropies we also no longer observe a preference for a closed universe. We find
-0.011 < Ω_k < 0.004 at 95% confidence
( 3x2pt + CMB)
using both ACT and Planck lensing data. We show these constraints in the Ω_k-Ω_Λ plane compared to an analysis of primary CMB anisotropies only as well as the combination of primary CMB anisotropies and BAO in Fig. <ref>. While this constraint represents a significant improvement over the constraint from CMB lensing alone <cit.> the combination of primary CMB anisotropies and BAO provides a tighter constraint (-0.003 < Ω_k < 0.003 at 95% confidence). Despite the fact that the combination of BAO, lensing auto- and cross-spectrum data, and primary CMB does not improve on the constraints from BAO and primary CMB only, our result still represents a valuable cross-check on the BAO derived constraints.
§.§.§ Extended dark energy models
In this work we consider two extended dark energy scenarios. In the standard ΛCDM framework dark energy is assumed to be a cosmological constant, Λ, equivalent to a cosmological fluid with equation of state w=-1. First, we consider allowing w to take on values different from -1 but remain constant in time. Our data alone is only very weakly sensitive to w and the effect is largely degenerate with other parameters. However, when combining the 3x2pt data set with the primary CMB from Planck we find
w = -1.53^+0.20_-0.31 ( 3x2pt+ CMB)
with large uncertainties but consistent with w=-1 at ∼2σ. This is not competitive with the constraint obtained using all external data sets considered in this work (CMB + BAO + SN) which yields w = -0.979 ± 0.026. Adding the 3x2pt data improves constraints only very marginally by ∼4% to
w = -0.982± 0.024 ( 3x2pt+ CMB+ BAO+ SN).
Without supernovae data we find
w = -1.027^+0.050_-0.043 ( 3x2pt+ CMB+ BAO)
compared to w = -1.022^+0.053_-0.048 from CMB and BAO without our 3x2pt data (a ∼7% improvement). These constraints are shown in Fig. <ref>.
We also consider a phenomenological parameterisation of an evolving dark energy equation of state. As in <cit.> we adopt the CPL model <cit.>:
w(a) = w_0 + (1-a) w_a
where a=1/(1+z) is the scale factor and both w_0 and w_a are free parameters. This commonly considered parametrisation has been shown to provide a good fit to several physically motivated dark energy models <cit.>.
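For reference, a minimal numerical sketch of this parametrisation, together with the standard closed-form dark-energy density evolution it implies, is given below; the parameter values are placeholders rather than the constraints reported in this section.

```python
import numpy as np

def w_cpl(a, w0, wa):
    """CPL equation of state w(a) = w0 + (1 - a) * wa."""
    return w0 + (1.0 - a) * wa

def rho_de_ratio(a, w0, wa):
    """rho_DE(a)/rho_DE(a=1) for CPL: a^(-3(1+w0+wa)) * exp(-3 wa (1-a))."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

a = 1.0 / (1.0 + np.linspace(0.0, 3.0, 7))   # scale factors for z = 0..3
for ai in a:
    print(f"a={ai:.3f}  w={w_cpl(ai, -0.9, -0.4):+.3f}  "
          f"rho_DE/rho_DE0={rho_de_ratio(ai, -0.9, -0.4):.3f}")
```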
Our constraints on w_0 and w_a are shown in Fig. <ref>. When using only 3x2pt data and the primary CMB w_0 and w_a are so significantly degenerate that we are unable to provide meaningful constraints. However, in combination with BAO and SN we find
w_0 = -0.881 ± 0.060
w_a = -0.43^+0.25_-0.22( 3x2pt + CMB + BAO + SN)
only a very minor ∼5% and ∼9% improvement on w_0 and w_a respectively over constraints from CMB, BAO and SN alone. Removing the supernovae data yields
w_0 = -0.56 ± 0.24
w_a = -1.27 ± 0.66
( 3x2pt + CMB + BAO)
not improving significantly on constraints from the external data alone (by about 4% and 3% respectively).
To alleviate some of the significant degeneracy between w_0 and w_a evident from the left panel in Fig. <ref> we also determine the `pivot' redshifts, z_p, at which we obtain the best constraints on w(a) for various data combinations. Following <cit.> we estimate z_p by considering the marginalised parameter covariance, defining
a_p = 1 + C_w_0 w_a/C_w_a w_a,
effectively obtaining the best constrained linear combination of w_0 and w_a which guarantees that w_a is uncorrelated with w_p ≡ w(a_p). The model can then be reparametrised as w(a) = w_p + (a_p - a)w_a. We find z_p=0.44, 0.55, and 0.29 when combining the 3x2pt data with CMB, CMB + BAO, and CMB + BAO + SN respectively. For comparison the pivot redshift for the external data sets alone (CMB + BAO + SN) is z_p=0.28 (or z_p=0.54 for CMB + BAO). The equation of state at the pivot redshift is constrained to
w_p = -1.56^+0.28_-0.37( 3x2pt + CMB; z_p=0.44),
w_p = -1.005^+0.062_-0.055(+ BAO; z_p=0.55), and
w_p = -0.976 ± 0.028(+ SN; z_p=0.29).
These represent small improvements over constraints from the external data sets alone of about 11%, 8% and 7% over CMB, CMB + BAO, and CMB + BAO + SN respectively, in line with the improvements we saw for the wCDM model.
We show the two dimensional posterior in the w_p-w_a plane in the right panel of Fig. <ref>. We can see that all our constraints are consistent with (w_p, w_a)=(-1, 0) within ∼2σ. However, at present the constraining power is insufficient to further shed light on recent hints at deviations from the cosmological constant model with our data similarly consistent with the values preferred by the combination of DESI BAO data and supernovae <cit.>.
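The pivot construction described above can be reproduced directly from posterior samples; the following sketch is illustrative only, with a synthetic Gaussian sample standing in for the actual chains.

```python
import numpy as np

# Stand-in posterior samples of (w0, wa); in practice these come from the chains.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[-0.88, -0.43],
                                  cov=[[0.060**2, -0.010],
                                       [-0.010, 0.24**2]], size=20000)
w0, wa = samples[:, 0], samples[:, 1]

# Marginalised parameter covariance and pivot scale factor a_p = 1 + C_{w0 wa}/C_{wa wa}.
C = np.cov(w0, wa)
a_p = 1.0 + C[0, 1] / C[1, 1]
z_p = 1.0 / a_p - 1.0

# Equation of state at the pivot, w_p = w0 + (1 - a_p) wa, evaluated per sample.
w_p = w0 + (1.0 - a_p) * wa
print(f"z_p = {z_p:.2f},  w_p = {w_p.mean():.3f} +/- {w_p.std():.3f}")
# By construction w_p is (nearly) uncorrelated with wa:
print(f"corr(w_p, wa) = {np.corrcoef(w_p, wa)[0, 1]:.3f}")
```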
§.§.§ Massive neutrinos in the context of dynamical dark energy
As has been pointed out several times, the characteristic effect of massive neutrinos on the Universe's background evolution and the suppression of structure formation due to neutrino free streaming can be mimicked by beyond-ΛCDM extensions <cit.>. In part because the scale dependence of the neutrino effect is only poorly constrained with current data, the effect is difficult to distinguish from other non-standard physics. Here we consider neutrino mass constraints in the presence of a modified dark energy equation of state[For constraints on massive neutrinos in a wider range of extended models from the CMB lensing power spectrum alone see <cit.>.]. We derive constraints on ∑ m_ν, both in the two-parameter extension, wCDM+∑ m_ν, as well as in the case of evolving dark energy, w_0 w_aCDM+∑ m_ν. Our constraints are summarised in Table <ref>. Additionally, we show the 1D marginalised posteriors on the sum of the neutrino masses in Fig. <ref>.
As expected, we generally see a degradation of constraints in extended dark energy models. One exception occurs when allowing w to vary while including supernovae in the analysis. The supernovae data set provides tight constraints on w with a slight preference for w>-1. When w>-1 structures grow more slowly and our data are consequently compatible with less suppression due to neutrinos. Within the wCDM+∑ m_ν cosmology our neutrino mass constraint is thus tightened to
∑ m_ν < 0.123 eV at 95% c.l.
( 3x2pt + CMB + BAO + SN)
compared to ∑ m_ν < 0.137eV from the same data in a ΛCDM+∑ m_ν model. In the presence of evolving dark energy we find
∑ m_ν < 0.231 eV at 95% c.l.
( 3x2pt + CMB + BAO + SN).
§ CONCLUSION
We have presented cosmological constraints from the joint analysis of the CMB lensing auto-correlation, the cross-correlation between CMB lensing and galaxy data, and the galaxy auto-correlation using CMB lensing data from ACT DR6 and Planck PR4 together with galaxy data from unWISE.
Building on previous separate analyses of these data sets we provide tight constraints on the amplitude of matter density fluctuations in the redshift range z≃ 0.2-5. Within the ΛCDM model we constrain the relevant parameter S_8^ 3x2pt≡σ_8 (Ω_m/0.3)^0.4 to 1.5% and in combination with BAO we obtain similarly tight constraints on σ_8. Our constraints are in excellent agreement with model predictions from ΛCDM fits to the primary CMB. At the same time our results are not in any statistically significant tension with other LSS probes despite favouring a slightly larger amplitude of matter density fluctuations. Nevertheless, our findings suggest that if the `S_8 tension' is indicative of new or so far neglected physics rather than systematic effects, the effect has to be confined either to very low redshifts (probably below z≃ 0.5) where our data are less sensitive or small scales (k ≳ 0.3h/Mpc) which our data equally does not probe well.
In addition, we constrained the Hubble constant independently of the sound horizon to 3.5%. Our constraints are in excellent agreement with constraints from observations of the primary CMB and BAO. While our current uncertainties are too large to provide conclusive evidence, these results raise further questions about the viability of modifications to the sound horizon to resolve the Hubble tension.
We derive constraints on the amplitude of matter density fluctuations as a function of redshift. When the amplitude of low redshift structure is constrained by the cross-correlation data, the lensing auto-spectrum provides constraints on high redshift structures. We find one of the tightest constraints to date (∼3.3%) on the integrated matter density fluctuations above z≃2.4. The reconstructed σ_8(z) is in excellent agreement with predictions from the primary CMB within the standard ΛCDM model.
Using the 3x2pt data we also revisited constraints on the neutrino mass sum previously presented only from the CMB lensing auto-spectrum. Despite the larger information content we do not tighten existing upper limits. This is because we obtain a slightly smaller amplitude of matter density fluctuations in the joint analysis than in the lensing auto-spectrum analysis, consistent with larger neutrino induced structure suppression and therefore larger neutrino masses. This analysis nevertheless represents a valuable step towards detection of massive neutrinos with future cross- and auto-correlation analyses.
We also investigate, for the first time with this data, a series of beyond-ΛCDM models including spatial curvature and extended dark energy models. We show that our data marginally tightens some of the existing constraints and provides competitive cross-checks on others. We also make our data and likelihood publicly available enabling the community to perform further investigation into models not explored in this work (see Appendix <ref> for details).
This work represents the first 3x2pt analysis with the new ACT DR6 lensing reconstruction. Such analyses, combining all possible auto- and cross-correlations between a galaxy sample and the lensing reconstruction, have become the standard in the galaxy weak lensing field. We showed the excellent constraining power of this analysis approach also for CMB lensing data. Similar analyses using future CMB lensing data for example from Simons Observatory <cit.> and CMB-S4 <cit.>, and galaxy samples from future surveys such as Euclid <cit.> and eventually the Vera C. Rubin Observatory's Legacy Survey of Space and Time <cit.> will provide further improved constraints on cosmological parameters and contribute to improving our understanding of the cosmos.
§ ACKNOWLEDGEMENTS
The authors wish to thank Hironao Miyatake and the HSC team for making HSC chains available to us slightly before their public release. We thank Noah Sailer for useful discussions.
Support for ACT was through the U.S. National Science Foundation through awards AST-0408698, AST-0965625, and AST-1440226 for the ACT project, as well as awards PHY-0355328, PHY-0855887 and PHY-1214379. Funding was also provided by Princeton University, the University of Pennsylvania, and a Canada Foundation for Innovation (CFI) award to UBC. The development of multichroic detectors and lenses was supported by NASA grants NNX13AE56G and NNX14AB58G. Detector research at NIST was supported by the NIST Innovations in Measurement Science program.
ACT operated in the Parque Astronómico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigación y Desarrollo (ANID). We thank the Republic of Chile for hosting ACT in the northern Atacama, and the local indigenous Licanantay communities whom we follow in observing and learning from the night sky.
Computing was performed using the Princeton Research Computing resources at Princeton University, the Niagara supercomputer at the SciNet HPC Consortium, and the Symmetry cluster at the Perimeter Institute. This research also used resources provided through the STFC DiRAC Cosmos Consortium and hosted at the Cambridge Service for Data Driven Discovery (CSD3). SciNet is funded by the CFI under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund–Research Excellence, and the University of Toronto. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAPmp107.
GSF acknowledges support through the Isaac Newton Studentship and the Helen Stone Scholarship at the University of Cambridge.
GSF, FJQ, CEV, and BDS acknowledge support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 851274). BDS further acknowledges support from an STFC Ernest Rutherford Fellowship. SF is supported by Lawrence Berkeley National Laboratory and the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
EC acknowledges support from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 849169). CEV received the support from “la Caixa” Foundation (ID 100010434, fellowship code LCF/BQ/EU22/11930099). JK acknowledges support from NSF grants AST-2307727 and AST-2153201. KM acknowledges support from the National Research Foundation of South Africa. MM acknowledges support from NSF grants AST-2307727 and AST-2153201 and NASA grant 21-ATP21-0145. CS acknowledges support from the Agencia Nacional de Investigación y Desarrollo (ANID) through Basal project FB210003.
§.§ Software
We acknowledge use of the <cit.> package and the Python Image Library for producing plots in this paper. Furthermore, we acknowledge use of the <cit.> and <cit.> packages. We use the Boltzmann code <cit.> for calculating theory spectra, and use <cit.> and <cit.> for likelihood analysis.
§ DATA AVAILABILITY
Pre-release versions of the Markov Chain Monte Carlo runs from this paper and <cit.> are available through the NERSC (National Energy Research Scientific Computing Center) Science Gateway at <https://portal.nersc.gov/project/act/act_x_unWISE_xcorr+3x2pt>. Alongside this publication we also distribute various data products and software relevant to this work. The likelihood, bandpowers, covariances and various auxiliary data products required to perform the analysis presented here will be made available upon publication. The likelihood can then be found at <https://github.com/ACTCollaboration/unWISExLens_lklh> and all required data products will be available through the NERSC Science Gateway above. We caution that, because of significant correlations, this likelihood should not be combined with any other CMB lensing or CMB lensing cross-correlation likelihoods, like those from <cit.> or <cit.>. Instead the likelihood provided here can be used to include the auto-correlation measurements in a self consistent manner.
§ LIKELIHOOD CORRECTIONS
The CMB lensing reconstruction is obtained using quadratic estimators which depend on two powers of the observed CMB fields. The normalisation of the estimator and the bias corrections which are required for the lensing power spectrum depend in principle on the underlying CMB power spectra and the lensing convergence power spectrum. In practice we compute the normalisation and bias corrections given a fiducial choice of spectra[The ACT lensing reconstruction adopts fiducial spectra from a ΛCDM model fit to Planck 2015 TTTEEE data with an updated τ prior as in <cit.>.]. While the CMB power spectrum is well constrained by Planck, some residual uncertainty remains that must be propagated to our likelihood analysis.
Let us denote a set of cosmological parameters as θ and the assumed fiducial cosmology as θ_0. As discussed in more detail in <cit.> the unnormalised lensing reconstruction is sensitive to the product of the lensing convergence field, κ_L m(θ), and the lensing response function, ℛ_L(θ). The latter is computed within the fiducial cosmology. When comparing theory to observations we have to account for this fact in the lensing auto- and cross-spectra as
C_L^κ g, obs(θ) = ℛ^-1_L(θ_0)/ℛ^-1_L(θ) C_L^κ g, th(θ), and
C_L^κκ, obs(θ) = [ℛ^-1_L(θ_0)]^2/[ℛ^-1_L(θ)]^2C^κκ_L(θ)-N^1_L(θ_0)+N^1_L(θ).
Here we have also included the N^1-bias in the lensing power spectrum which we compute from simulations and which depends on the fiducial CMB power spectra as well as the lensing power spectrum present in the simulations (see for more details on the N^1-bias).
Fully calculating the above for each point in the sampled parameter space is unfeasible, and hence we follow the approach of <cit.> and <cit.> and forward model the linearised corrections to the theory spectrum due to the parameter deviations from the fiducial cosmology. Given the excellent constraints on the CMB power spectra any deviations from the fiducial spectrum, C^CMB_ℓ(θ_0), are expected to be small justifying this expansion. The spectra to be compared to the observed data spectra are thus given by
C_L^κ g, obs(θ) ≈ C_L^κ g, th(θ)[1 - M_L^ℓ[C^ CMB_ℓ(θ)-C^ CMB_ℓ(θ_0)]], and
C_L^κκ, obs(θ) ≈
C^κκ_L(θ)-2M_L^ℓ[C^ CMB_ℓ(θ)-C^ CMB_ℓ(θ_0)]C^κκ_L(θ_0)
+dN^1_L/dC^ CMB_ℓ[C^ CMB_ℓ(θ)-C^ CMB_ℓ(θ_0)]+dN^1_L/dC^κκ_L^'[C^κκ_L^'(θ)-C^κκ_L^'(θ_0)]
where M_L^ℓ = ∂lnℛ^-1_L / ∂ C_ℓ^CMB|_θ_0 is the linearised normalisation-correction matrix.
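Applying the linearised normalisation correction to a cross-spectrum then amounts to a single matrix-vector product, as in the schematic sketch below; the correction matrix, fiducial spectrum and theory spectrum are placeholder arrays here, not the products shipped with the likelihood.

```python
import numpy as np

# Placeholder dimensions and arrays; in the real likelihood M_L^ell, the fiducial
# CMB spectrum and the theory cross-spectrum are precomputed data products.
n_L, n_ell = 40, 3000
M = np.zeros((n_L, n_ell))                 # d ln R^-1_L / d C_ell^CMB at the fiducial
cl_cmb_fid = np.ones(n_ell)                # fiducial CMB power spectrum
cl_cmb_theory = 1.01 * np.ones(n_ell)      # CMB spectrum at the sampled cosmology
clkg_theory = np.ones(n_L)                 # theory C_L^{kappa g} at the sampled cosmology

def corrected_clkg(clkg_theory, cl_cmb_theory, cl_cmb_fid, M):
    """Apply C_L^{kg,obs} ~= C_L^{kg,th} * (1 - M @ (C^CMB - C^CMB_fid))."""
    delta = cl_cmb_theory - cl_cmb_fid
    return clkg_theory * (1.0 - M @ delta)

print(corrected_clkg(clkg_theory, cl_cmb_theory, cl_cmb_fid, M)[:3])
```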
For joint constraints with CMB anisotropy spectra, we correct the normalisation and N^1-bias subtraction at each point in parameter space according to Eqs. <ref> and <ref>. For cosmology runs that do not include information from the primary CMB, we propagate the uncertainty in the lensing normalisation due to possible fluctuations in the CMB power spectrum into an additional contribution to the covariance. Specifically, we obtain 1000 posterior samples from the ACT DR4 + Planck primary CMB chains and propagate these to the covariance matrix as described in Appendix B of <cit.> and Sec. 6.1 of <cit.>. This step is done consistently to both the ACT and Planck parts of the covariance matrix.
|
http://arxiv.org/abs/2409.03075v1 | 20240904210018 | Elephant trunk wrinkles: A mathematical model of function and form | [
"Yang Liu",
"Alain Goriely",
"L. Angela Mihai"
] | physics.bio-ph | [
"physics.bio-ph",
"cond-mat.soft",
"74B20, 74G10, 74G60, 9210"
] |
Elephant trunk wrinkles:
A mathematical model of function and form
Yang Liu[Mathematical Institute, University of Oxford, Woodstock Road, Oxford, OX2 6GG, UK, Department of Mechanics, School of Mechanical Engineering, Tianjin University, Tianjin 300354, China]
Alain Goriely[Mathematical Institute, University of Oxford, Woodstock Road, Oxford, OX2 6GG, UK]
L. Angela Mihai[Corresponding author: L.A. Mihai, Email: , School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, UK]
§ ABSTRACT
A remarkable feature of the elephant trunk is the pronounced wrinkling that enables its great flexibility. Here, we devise a general mathematical model that accounts for characteristic skin wrinkles formed during morphogenesis in elephant trunk. Using physically realistic parameters and operating within the theoretical framework of nonlinear morphoelasticity, we elucidate analytically and numerically the effect of skin thickness, relative stiffness and differential growth on the physiological pattern of transverse wrinkles distributed along the trunk. We conclude that, since the skin and muscle components have similar material properties, geometric parameters, such as curvature, play important roles. In particular, our model predicts that, in the proximal region close to the skull, where curvature is lower, fewer wrinkles form and sooner than in the distal narrower region where more wrinkles develop. Similarly, less wrinkling is found on the ventral side, which is flatter, compared to the dorsal side. In summary, the mechanical compatibility between the skin and the muscle enables them to grow seamlessly, while the wrinkled skin acts as a protective barrier that is both thicker and more flexible than the unwrinkled skin.
Keywords: hyperelastic solids, nonlinear deformation, instabilities, wrinkling, bilayer systems, mathematical modeling.
Mathematics Subject Classification: 74B20, 74G10, 74G60, 9210.
“Elephants have wrinkles, wrinkles, wrinkles,
Elephants have wrinkles, wrinkles everywhere.
On their trunks, on their ears, on their knees,
On their toes, no one knows, no one knows
Why-I-I-I-Ih!”– Nursery Rhyme
§ INTRODUCTION
The elephant trunk (Latin proboscis, Ancient Greek προβοσκίς or proboskís) is an iconic, highly versatile limb containing approximately 100,000 radial and lateral muscle fibers, connected to the elephant's head by an opening in the skull and controlled by a proboscis nerve <cit.>. Due to its exceptional characteristics, it has attracted much interest from the research community motivated by biomimicking its properties and movements <cit.>.
A salient feature contributing to elephant trunk's agility is its rich pattern of wrinkles (see Figure <ref>), which form and develop throughout elephant's life <cit.>. During fetal growth, transverse trunk wrinkles and furrows appear gradually <cit.>. In adult elephants, the number of transverse wrinkles differs longitudinally, with more wrinkles formed distally (at the tip) than proximally (near the skull), and more dorsal (top) than ventral wrinkles. The number of wrinkles further changes with trunk-lateralisation. MicroCT-imaging indicates that the outer trunk skin has a relatively constant thickness, while the inner skin is thinner within folds than between folds. Thus the generation of wrinkles in elephant trunk is primarily a result of differential growth appearing during development <cit.>.
Wrinkling instability is a ubiquitous mechanism involving deformations across the scales. A general theory of small strain superposed on large strain deformations for homogeneous isotropic hyperelastic materials was developed by <cit.>. Particular cases of infinitesimal deformations superposed on finite extension or compression were analyzed in <cit.>. The stability of a solid circular cylinder subject to finite extension and torsion was treated in <cit.>. For tubes of Mooney-Rivlin material with arbitrary length and thickness, by applying the Stroh formalism <cit.>, an explicit formulation for the Euler buckling load with its first nonlinear corrections was derived in <cit.>. A similar approach was employed in <cit.> to calculate the nonlinear buckling load of a compressible elastic cylinder. Surface wrinkles formed by straightening of a circular sector or by bending of a cylindrical sector were examined in <cit.> and <cit.>, respectively. Wrinkling due to bending of an inflated cylindrical shell was discussed in <cit.>. An extension of Flügge's formalism <cit.> applied to the buckling of thin-walled cylinders of nonlinear elastic material was presented in <cit.>. The instability of a thick-walled hyperelastic tube subject to external pressure and axial compression was considered in <cit.>. Other forms of instability, like necking or bulging, also arise during inflation of elastic tubes. These phenomena were studied extensively by <cit.>. The influence of residual stresses on the stability of circular cylinders, and in particular on stretch-induced localized bulging and necking, was addressed by <cit.>. Post-buckling modes for a core-shell cylindrical system of neo-Hookean material under axial compression with a perfectly bonded interface were investigated experimentally, theoretically and computationally in <cit.>. The influence of geometrical incompatibility on pattern selection of growing bilayer tubes was modelled in <cit.>. A cylinder with shear modulus arbitrarily varying along the radial direction which buckled and wrinkled under axial compression was examined semi-analytically in <cit.>. The onset of wrinkling in an anisotropically growing fibre-reinforced tube was analyzed in <cit.>. Wrinkling patterns in a cylindrical bilayer where only the outer layer grew were explored in <cit.>. Radially distributed wrinkles induced by a thin elastic film growing on a soft elastic substrate were presented in <cit.>.
In this paper, building on the rich methodology for elastic instabilities, we construct a mathematical model that accounts for transverse skin wrinkling in elephant trunk. Working within the theoretical framework of large strain morphoelasticity <cit.>, we address the following key question: What is the effect of the relative thickness, stiffness, and growth of the skin and muscle substrate on wrinkles formation along the elephant trunk? In Section <ref>, we model the trunk as a tubular cylindrical bilayer where the skin forms the thin outer layer, the muscle constitutes the thick inner layer, and a perfect bond exists between these two constituents. To account for large elastic strains, we describe each layer as an incompressible homogeneous isotropic nonlinear hyperelastic material. Assuming axisymmetric deformations, in Section <ref>, we present extensive numerical results for skin wrinkling when some model parameters change while others are fixed. In Section <ref>, we treat analytically and numerically different limiting cases. From our comprehensive analysis, we conclude that relative growth, geometry and material properties, together with loading conditions, compete to generate the characteristic pattern of transverse wrinkles in elephant trunks. While this study is motivated by a specific application scenario, our fundamental results extend further and are relevant to other applications as well.
§ PROBLEM FORMULATION
We model (a segment of) the elephant trunk as a cylindrical tube bilayer composed of two concentric homogeneous isotropic incompressible hyperelastic tubes. The skin forms the outer shell, while the muscle substrate constitutes the inner thicker core. We denote the reference state by ℬ_0, and set R_0, R_1, h_0 and l_0 the inner and outer radii of the core, the uniform radial shell thickness, and the axial length of the cylindrical system, respectively, where R_0<R_1 and h_0≪ R_1-R_0. Assuming that both the shell and the core grow until a deformed state is attained, we designate ℬ as the current configuration where the cylindrical geometry is maintained and the geometrical parameters become r_0, r_1, h, and l, respectively, with r_0<r_1 and h≪ r_1-r_0. We further assume that, at one end, the cylindrical system is fixed (homogeneous Dirichlet boundary condition), while at the other end, the system is constrained elastically (Robin boundary condition <cit.>) by a linear (Hookean) spring of stiffness K (Hooke's constant). Our bilayer system is represented schematically in Figure <ref>.
Within the usual system of cylindrical polar coordinates, we denote by 𝐗=(R,Θ,Z) and 𝐱=(r,θ,z) a material point in the reference and current configuration, respectively, where
R_0⩽ R⩽ R_1+h_0, 0⩽Θ⩽ 2π, 0⩽ Z⩽ l_0,
r_0⩽ r⩽ r_1+h, 0⩽θ⩽ 2π, 0⩽ z⩽ l.
For both the shell and the core, the deformation gradient 𝐅 from the reference to the current configuration takes the form <cit.>
𝐅=𝐀𝐆,
where A is the elastic deformation tensor and G is the growth tensor.
Given the strain-energy function W(𝐀) of an incompressible homogeneous isotropic hyperelastic material, the corresponding nominal stress tensor 𝐒 and Cauchy stress tensor σ are, respectively,
𝐒=J𝐆^-1∂ W/∂𝐀-pJ𝐆^-1𝐀^-1 and σ=𝐀∂ W/∂𝐀-p𝐈,
where J=det𝐅=det𝐆 represents the volume change due to biological growth, p denotes the Lagrange multiplier associated with the incompressibility constraint for the isochoric elastic deformation,
det𝐀=1, and 𝐈 is the second-order identity tensor.
In the current configuration, the equilibrium equation for a quasi-static problem without body forces takes the form
÷ σ=0.
For the axisymmetric deformation considered here, the only non-trivial equation is
dσ_11/d r+σ_11-σ_22/r=0,
where the indices 1,2,3, correspond to r-, θ-, z-directions, respectively.
In the subsequent analysis, we add the subscripts s and m to distinguish between physical quantities associated with the skin shell and the muscle core, respectively. For example, W_s represents the strain-energy function for the skin and W_m for the muscle. Physical quantities that are valid for both these components will be written without their subscripts.
The primary deformation from ℬ_0 to ℬ is governed by the equations
r=r(R), θ=Θ, z=λ_zZ,
where the longitudinal stretch ratio λ_z is constant. The deformation gradient then simplifies as follows,
𝐅=d r/d R𝐞_r⊗𝐞_r+r/R𝐞_θ⊗𝐞_θ+λ_z𝐞_z⊗𝐞_z,
where {𝐞_r,𝐞_θ,𝐞_z} is the usual orthonormal basis for the coordinate system.
Denoting by {λ_i}_i=1, 2, 3 the principal stretch ratios in the radial, azimuthal and axial direction, respectively, by equation (<ref>), we obtain
λ_1=dr/dR, λ_2=r/R, λ_3=λ_z.
We further consider the following growth tensors for the skin and muscle, respectively:
{ 𝐆_s = 𝐞_r⊗𝐞_r+𝐞_θ⊗𝐞_θ+γ g𝐞_z⊗𝐞_z,
𝐆_m= 𝐞_r⊗𝐞_r+𝐞_θ⊗𝐞_θ+g𝐞_z⊗𝐞_z,
.
where g≥1 is their common growth factor and γ≥1 is the differential (or relative) growth factor between the skin (outer layer) and the muscle (inner layer) in the axial (length) direction. Hence,
𝐆_s=𝐆_γ𝐆_m,
𝐆_γ = 𝐞_r⊗𝐞_r+𝐞_θ⊗𝐞_θ+γ𝐞_z⊗𝐞_z.
From equation (<ref>), we deduce that
{ 𝐀_s = α_s1𝐞_r⊗𝐞_r+α_s2𝐞_θ⊗𝐞_θ+α_s3𝐞_z⊗𝐞_z,
𝐀_m= α_m1𝐞_r⊗𝐞_r+α_m2𝐞_θ⊗𝐞_θ+α_m3𝐞_z⊗𝐞_z,
.
and, by equation (<ref>), we have
α_m3=γα_s3=λ_zg^-1.
From the incompressibility conditions det𝐀_s=α_s1α_s2α_s3=1 and det𝐀_m=α_m1α_m2α_m3=1, together with the multiplicative decomposition (<ref>), we infer that
{ rdr/dR = λ_z^-1γ gR, r_1⩽ r⩽ r_1+h,
rdr/dR = λ_z^-1gR, r_0⩽ r⩽ r_1.
.
By integration, we obtain
{ r^2 = λ_z^-1γ g(R^2-R_1^2)+r_1^2, r_1⩽ r⩽ r_1+h,
r^2 = λ_z^-1g(R^2-R_0^2)+r_0^2, r_0⩽ r⩽ r_1.
.
The geometry of the system in the current configuration ℬ is described by
{ r_1 = [λ_z^-1g(R_1^2-R_0^2)+r_0^2]^1/2,
h = {λ_z^-1gγ[(R_1+h_0)^2-R_1^2]+r_1^2}^1/2-r_1,
l = λ_z l_0,
.
where the deformed inner radius r_0 remains to be determined.
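A short numerical sketch of these geometric relations is given below; the parameter values are illustrative (dimensionless) and the trial value of r_0 is arbitrary, since r_0 is only fixed later by the boundary conditions.

```python
import numpy as np

def deformed_geometry(r0, R0, R1, h0, l0, g, gamma, lam_z):
    """Current geometry (r1, h, l) from the growth-adjusted incompressibility relations."""
    r1 = np.sqrt(g * (R1**2 - R0**2) / lam_z + r0**2)
    h = np.sqrt(g * gamma * ((R1 + h0)**2 - R1**2) / lam_z + r1**2) - r1
    l = lam_z * l0
    return r1, h, l

# Illustrative values only; r0 is a trial guess, not the solution.
r1, h, l = deformed_geometry(r0=0.11, R0=0.10, R1=0.30, h0=0.01,
                             l0=1.0, g=1.2, gamma=1.3, lam_z=1.05)
print(f"r1 = {r1:.4f}, h = {h:.4f}, l = {l:.4f}")
```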
Expressing the strain-energy function in terms of the three principal elastic stretches as W(α_1,α_2,α_3), the non-zero components of Cauchy stress tensor take the following form,
{ σ_sii = α_siW_si-p_s,
σ_mii= α_miW_mi-p_m,
.
where W_si=∂ W_s/∂α_i and W_mi=∂ W_m/∂α_i, i=1,2,3 (there is no summation for repeated indices in the above equations). Assuming that both the outer and inner surfaces are free and the interface between the shell and the core remains continuous (perfectly bonded), we have
σ_m11|_r=r_0=0, σ_s11|_r=r_1+h=0, (σ_s11-σ_m11)|_r=r_1=0.
Since α_1α_2α_3=1, we can define the following function depending only on two variables,
w(α,α_3)=W(α_1,α_2,α_3),
where α_2=α and α_1=α^-1α_3^-1. Applying the chain rule, we obtain
w_1=∂ w/∂α=α^-1(α_2W_2-α_1W_1),
w_2=∂ w/∂α_3=α_3^-1(α_3W_3-α_1W_1).
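These chain-rule identities are straightforward to verify symbolically; the following sketch (our own check, using SymPy and, as a concrete test case, the neo-Hookean energy adopted later in this section) confirms both relations.

```python
import sympy as sp

al, al3, mu = sp.symbols('alpha alpha_3 mu', positive=True)
a1_, a2_, a3_ = sp.symbols('a1 a2 a3', positive=True)

# Concrete test energy: the neo-Hookean form used later in the paper.
W = mu/2 * (a1_**2 + a2_**2 + a3_**2 - 3)
W1, W2, W3 = [sp.diff(W, a) for a in (a1_, a2_, a3_)]

# Substitution alpha_2 = alpha, alpha_1 = 1/(alpha*alpha_3): reduced energy w(alpha, alpha_3).
subs = {a1_: 1/(al*al3), a2_: al, a3_: al3}
w = W.subs(subs)

lhs_w1 = sp.diff(w, al)
rhs_w1 = ((a2_*W2 - a1_*W1)/al).subs(subs)      # alpha^-1 (alpha_2 W_2 - alpha_1 W_1)

lhs_w2 = sp.diff(w, al3)
rhs_w2 = ((a3_*W3 - a1_*W1)/al3).subs(subs)     # alpha_3^-1 (alpha_3 W_3 - alpha_1 W_1)

print(sp.simplify(lhs_w1 - rhs_w1))  # 0
print(sp.simplify(lhs_w2 - rhs_w2))  # 0
```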
From (<ref>)-(<ref>), it follows that
{ σ_s11 = ∫_α_h^α_s w_s1/(1-α^2α_s3) dα,
σ_m11= ∫_α_r_1^α_m w_m1/(1-α^2α_m3) dα+∫_α_h^α_r_1 w_s1/(1-α^2α_s3) dα,
0 = ∫_α_r_1^α_r_0 w_m1/(1-α^2α_m3) dα+∫_α_h^α_r_1 w_s1/(1-α^2α_s3) dα,
.
where
α_r_0=r_0/R_0, α_r_1=r_1/R_1, α_h=(r_1+h)/(R_1+h_0).
In view of (<ref>), the above elastic stretches are connected by
{ α_r_1^2=[α_r_0^2R_0^2+g(R_1^2-R_0^2)]/R_1^2,
α_h^2=(α_r_0^2R_0^2+g(R_1^2-R_0^2)+gγ[(R_1+h_0)^2-R_1^2])/(R_1+h_0)^2.
.
The associated Lagrange multipliers can be determined from the following identities:
{ p_s=α_s1W_s1-σ_s11,
p_m=α_m1W_m1-σ_m11.
.
The resultant axial force is equal to
N= 2π(∫_r_0^r_1σ_m33rdr+∫_r_1^r_1+hσ_s33rdr)
= π∫_r_0^r_1[2(α_m3W_m3-α_m1W_m1)-(α_m2W_m2-α_m1W_m1)]rdr
+π∫_r_1^r_1+h[2(α_s3W_s3-α_s1W_s1)-(α_s2W_s2-α_s1W_s1)]rdr
= π R_0^2(λ_zα_r_0^2-g)∫_α_r_1^α_r_0(2α_m1w_m2-α w_m1)/[(1-α^2α_m3)(g-α^2α_m3)] αdα
+π R_1^2(λ_zα_r_1^2-gγ)∫_α_h^α_r_1(2α_s1w_s2-α w_s1)/[(1-α^2α_s3)(gγ-α^2α_s3)] αdα.
Since the axial displacement is restricted at one end while the other end is attached to a spring of stiffness K, we can express the axial force as
N=K(l-l_0)=K(λ_z-1)l_0.
Then, for a given value of g, we can derive the axial extension λ_z and the deformed inner radius r_0 from the system of equations:
N(r_0,λ_z)=K(λ_z-1)l_0, σ_m11(r_0,λ_z)=0,
as illustrated in Figure <ref>.
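Numerically, the pair (r_0, λ_z) can be obtained with a standard two-dimensional root finder once the two residuals are implemented; in the schematic sketch below, the functions axial_force and radial_stress_inner are stand-ins for numerical quadrature of the integral expressions above, not the actual constitutive evaluations.

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder residuals: in practice these evaluate, by numerical quadrature, the
# integral expressions for N(r0, lam_z) and for sigma_m11 at r = r0 given above.
def axial_force(r0, lam_z, params):
    return 0.5 * (lam_z - 1.0) + 0.1 * (r0 - params["R0"])   # stand-in only

def radial_stress_inner(r0, lam_z, params):
    return r0 - params["R0"] * lam_z**-0.5                    # stand-in only

def residuals(x, params):
    r0, lam_z = x
    res1 = axial_force(r0, lam_z, params) - params["K"] * (lam_z - 1.0) * params["l0"]
    res2 = radial_stress_inner(r0, lam_z, params)
    return [res1, res2]

params = {"R0": 0.1, "l0": 1.0, "K": 1.0}
r0_sol, lamz_sol = fsolve(residuals, x0=[0.1, 1.0], args=(params,))
print(r0_sol, lamz_sol)
```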
For definiteness of our problem, we consider the incompressible neo-Hookean-type functions:
{ W_s=μ_s/2(λ_1^2+λ_2^2+λ_3^2-3),
W_m=μ_m/2(λ_1^2+λ_2^2+λ_3^2-3),
.
where μ_s>0 and μ_m>0 are shear moduli at infinitesimal strain. The respective non-dimensionalized axial Cauchy stress components are indicated in Figure <ref>.
It is useful to introduce the following dimensionless quantities:
R̂_0=R_0/l_0, R̂_1=R_1/l_0, ĥ_0=h_0/l_0, ζ=h_0/R_1,
N̂=N/{πμ_m[(R_1+h_0)^2-R_0^2]}, σ̂=σ/μ_m, K̂=K/[πμ_m(R_1+h_0+R_0)], β=μ_s/μ_m.
Then, by equation (<ref>), the dimensionless resultant axial force is
N̂=-K̂(λ_z-1)/(R̂_1+ĥ_0-R̂_0).
Since the skin shell is stiffer than the muscle core, surface wrinkles will be generated at some critical growth. For simplicity, henceforth, we drop the over-hat from our notation.
§ LINEAR BIFURCATION ANALYSIS
In this section, we derive the critical condition for transverse wrinkling (see Figure <ref>) using the Stroh formalism <cit.>. We then construct a robust numerical scheme using the surface impedance matrix method <cit.> to solve the eigenvalue problem arising from wrinkling instability.
§.§ Incremental theory
To obtain the incremental equations for the stability analysis, we denote the perturbed state of the bilayer system by ℬ̃, with associated position vector 𝐱̃. The relation between the position vectors in ℬ̃ and ℬ is
𝐱̃(r,z)=𝐱+𝐮,
where 𝐮=u_1(r,z)𝐞_r+u_3(r,z)𝐞_z is the incremental displacement when θ is fixed. We focus on the axial instability <cit.> since major wrinkles in elephant trunk tend to develop in the longitudinal direction <cit.>.
The deformation gradient from the reference configuration ℬ_0 to the perturbed configuration ℬ can be expressed as follows,
𝐅=∂𝐱/∂𝐗=(𝐈+η)𝐅,
where η=𝐮 stands for the displacement gradient given by
η=[
[ u_1,1 u_1,2-u_2/r u_1,3; u_2,1 u_1/r+u_2,2 u_2,3; u_3,1 u_3,2 u_3,3 ]] =[
[ u_1,1 0 u_1,3; 0 u_1/r 0; u_3,1 0 u_3,3 ]],
with u_i,1=∂ u_i/∂ r, u_i,2=∂ u_i/(r∂θ), u_i,3=∂ u_i/∂ z, i=1,2,3.
Assuming that the growth tensor 𝐆 is constant, we have
𝐀𝐆=𝐅=(𝐈+η)𝐅=(𝐈+η)𝐀𝐆,
and therefore,
𝐀=(𝐈+η)𝐀.
Recalling that the elastic deformation is isochoric, incompressibility implies det(𝐈+η)=1, which in its linearized form specialises to
trη=u_1,1 +u_1/r+u_3,3 =0,
where `tr' is the trace operator.
The nominal stress tensor in ℬ takes the form
𝐒=J𝐆^-1∂ W/∂𝐀-pJ̃𝐆^-1𝐀^-1,
with p the associated Lagrange multiplier.
We define the Lagrange multiplier increment
p=p-p
and introduce the incremental stress tensor
χ^T=J^-1(-)^T,
where `T' denotes the transpose operator.
The incremental equilibrium equation reads
divχ^T=0.
As the magnitude of each component of η is small, we can expand χ in terms of η, retaining the linear terms, i.e.,
χ_ij=𝒜_jilkη_kl+pη_ji-pδ_ji+𝒪(|η_ij|^2), i,j,k,l=1,2,3,
where the summation convention for repeated indices is applied and 𝒜=(𝒜_jilk)_i,j,k,l=1,2,3 denotes the first-order instantaneous modulus tensor with the following nonzero entries:
{ 𝒜_iijj=α_iα_j W_ij,
𝒜_ijij=(α_iW_i-α_jW_j)α_i^2/(α_i^2-α_j^2), i≠ j and α_i≠α_j,
𝒜_ijij=(𝒜_iiii-𝒜_iijj+α_iW_i)/2, i≠ j and α_i=α_j,
𝒜_ijji=𝒜_ijij-α_iW_i, i≠ j.
.
Note that there is no summation for repeated indices in the above expressions and the tensor 𝒜 has pairwise symmetry, 𝒜_ijkl=𝒜_klij.
By (<ref>), the nonzero incremental stress components are
{ χ_11=𝒜_1111η_11+𝒜_1122η_22+𝒜_1133η_33+p η_11-p,
χ_22=𝒜_2211η_11+𝒜_2222η_22+𝒜_2233η_33+p η_22-p,
χ_33=𝒜_3311η_11+𝒜_3322η_22+𝒜_3333η_33+p η_33-p,
χ_13=𝒜_3131η_13+𝒜_3113η_31+p η_31 ,
χ_31=𝒜_1313η_31+𝒜_1331η_13+p η_13.
.
Then equation (<ref>) reduces to
{ (rχ_11)_,1+rχ_13,3-χ_22=0,
(rχ_31)_,1+rχ_33,3=0.
.
We emphasize that the above general formulae are valid for both the skin and muscle layers. To specify a layer, the related subscript will be added to indicate its affiliation.
The corresponding boundary and interface conditions are, respectively:
χ_s𝐞_r|_r=r_1+h= 0, (χ_s-χ_m)𝐞_r|_r=r_1= 0
and
(u_s1-u_m1)|_r=r_1=0, (u_s3-u_m3)|_r=r_1=0.
The sliding conditions at the ends are
𝐞_r·χ_s𝐞_z|_z=0,l=0, 𝐞_r·χ_m𝐞_z|_z=0,l=0.
§.§ Stroh formulation
Employing the Stroh method <cit.>, we look for periodic solutions of the form:
{ u_1(r,z)=U(r)cos kz, u_3(r,z)=W(r)sin kz,
χ_11(r,z)=T_11(r)cos kz, χ_31(r,z)=T_31(r)sin kz,
.
where k represents the wave number in the axial direction. In view of equations (<ref>), (<ref>), denoting the dimensionless wave number by n, the sliding conditions (<ref>) yield
k=nπ/l, n=1,2,3,⋯.
We further define a displacement-traction vector
Γ=[𝐔(r),𝐓(r)]^T, 𝐔(r)=[U(r),W(r)]^T 𝐓(r)=[T_11(r),T_31(r)]^T.
From (<ref>), (<ref>), and (<ref>)_1,5, we obtain
Γ'(r)=1/r𝐍(r)Γ(r),
with the prime denoting differentiation with respect to r and the Stroh block matrix taking the form
𝐍=[[ 𝐍_1 𝐍_2; 𝐍_3 -𝐍_1^T ]],
where 𝐍_i (i=1,2,3) are 2×2 sub-matrices, such that 𝐍_2 and 𝐍_3 are symmetric. These sub-matrices can be expressed as follows,
𝐍_1=[[ -1 -rk; rk(𝒜_1331+p)/𝒜_1313 0 ]], 𝐍_2=[[ 0 0; 0 1/𝒜_1313 ]], 𝐍_3=[[ t_11 t_12; t_12 t_22 ]],
where
{ t_11=𝒜_1111-2𝒜_1122+𝒜_2222+r^2k^2𝒜_3131-r^2k^2/𝒜_1313(𝒜_1331+p)^2+2p,
t_12=rk(𝒜_1111-𝒜_1122-𝒜_1133+𝒜_2233+p),
t_22=r^2k^2(𝒜_1111-2𝒜_1133+𝒜_3333+2p).
.
§.§ The surface impedance matrix method
Next, we apply the surface impedance matrix method <cit.>, and introduce the conditional impedance matrix 𝐙(r) which satisfies
𝐓=𝐙(r)𝐔.
From the relations
𝐔'=1/r(𝐍_1𝐔+𝐍_2𝐓), 𝐓'=1/r(𝐍_3𝐔-𝐍_1^T𝐓),
we obtain the Riccati equation
𝐙'=1/r(𝐍_3-𝐍_1^T𝐙-𝐙𝐍_1-𝐙𝐍_2𝐙).
We then apply the above general expressions to the bilayer system. For the skin layer, we use the boundary condition 𝐙(r_1+h)=0 determined from (<ref>) to integrate (<ref>) from r_1+h to r_1 and find 𝐙_s(r_1). Applying the same procedure to the muscle layer yields 𝐙_m(r_1). The displacement and traction continuities at the interface result in (𝐙_m(r_1)-𝐙_s(r_1))𝐔_s(r_1)=0. For surface wrinkling, the existence of a non-trivial solution for 𝐔 finally leads to the bifurcation condition
Φ(β,γ,n,g,R_0,R_1,h_0,K)=0,
where
Φ(β,γ,n,g,R_0,R_1,h_0,K)=det(𝐙_m(r_1)-𝐙_s(r_1)).
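To make the numerical workflow of the surface impedance matrix method concrete, the following is a minimal Python sketch: it integrates the Riccati equation for 𝐙(r) with solve_ivp from the traction-free surfaces towards the interface and evaluates det(𝐙_m(r_1)-𝐙_s(r_1)). The Stroh blocks used here are toy constant-pattern matrices chosen only so that the script runs on its own; in the actual problem they follow the expressions above and depend on the moduli 𝒜_ijkl and the Lagrange multiplier p.

```python
# Sketch of the surface impedance matrix method; the Stroh blocks are TOY data.
import numpy as np
from scipy.integrate import solve_ivp

def stroh_blocks(r, k):
    # Placeholder blocks with the correct symmetry pattern (N2 and N3 symmetric).
    N1 = np.array([[-1.0, -r * k], [0.5 * r * k, 0.0]])
    N2 = np.array([[0.0, 0.0], [0.0, 1.0]])
    N3 = np.array([[2.0 + (r * k) ** 2, r * k], [r * k, (r * k) ** 2]])
    return N1, N2, N3

def riccati_rhs(r, z_flat, k):
    Z = z_flat.reshape(2, 2)
    N1, N2, N3 = stroh_blocks(r, k)
    dZ = (N3 - N1.T @ Z - Z @ N1 - Z @ N2 @ Z) / r
    return dZ.ravel()

def impedance(r_start, r_end, k, Z_start):
    sol = solve_ivp(riccati_rhs, (r_start, r_end), Z_start.ravel(),
                    args=(k,), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

r0, r1, h, k = 0.1, 0.4, 0.01, 12 * np.pi
# Skin: integrate inwards from the traction-free outer surface, Z(r1+h) = 0.
Z_s = impedance(r1 + h, r1, k, np.zeros((2, 2)))
# Muscle: integrate outwards from the traction-free inner surface, Z(r0) = 0.
Z_m = impedance(r0, r1, k, np.zeros((2, 2)))
# A root of this determinant in (gamma, n) signals a non-trivial wrinkle mode.
print("det(Z_m - Z_s) =", np.linalg.det(Z_m - Z_s))
```

In practice one would sweep the growth parameter and wave number, recomputing the two impedance matrices at each step and locating sign changes of the determinant, which is how the bifurcation curves shown below can be traced.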
§.§ Numerical examples
In this section, the interplay between different model parameters is captured by numerical examples where some model parameters vary while others are kept fixed. For the elephant trunk, in <cit.>, the Young moduli of skin and muscle tissue are estimated as E_s=3μ_s≤ 1190±120 kPa and E_m=3μ_m≈ 938 kPa, respectively, and the skin is found to be at most 1.27 times stiffer than the muscle.
Nonetheless, we should mention that those moduli were measured by deforming wrinkled skin samples, first at small strain then at large strains, and therefore, the large strain measurements would be closer to those for the unwrinkled skin, which are not available. Additionally, these quantities will be different in the unborn elephant, where wrinkles first form <cit.>, than in calf and in adult elephants where wrinkles continue to evolve. Further measurements are needed. Meanwhile, our relevant plots in this paper are those where the modulus ratio is β=1.27. Notwithstanding these estimates, our modelling approach is valid more generally, and we include a wide range of examples which are valuable in their own right and can be useful to this and other applications as well.
In Figure <ref>, we show bifurcation curves identified from the solutions of (<ref>) for different parameter values. These plots suggest that the relative growth factor γ decreases as a function of the wave number n when the pre-growth factor g increases and also when the relative modulus β increases. For each curve, the local minimum gives the first bifurcation point with the critical relative growth γ_cr and the critical wave number n_cr. In particular, when β=1.27, the relative growth factor γ∼ 2 appears to be optimal.
For large β, Figure <ref> shows the bifurcation curves: all closed curves shrink with increasing β, indicating that a higher modulus ratio delays the critical relative growth γ_cr. Additionally, the bifurcation condition (<ref>) has no solution for some ranges of n values. This feature is quite different from the previously known result that a stiffer film triggers an earlier instability. Another inference is that, for a large enough β, the bifurcation curve may vanish. To study this feature, we plot in Figure <ref> the function Φ, given by equation (<ref>), with the differential growth factor γ increasing while n=12 and the other parameters as in Figure <ref>. The dots where the black arrow crosses the curves in Figure <ref> highlight the minimum points, not the bifurcation points (note that the bifurcation point satisfies the bifurcation condition Φ=0). Meanwhile, the arrow indicates that all curves increase with increasing β, until a critical value β_max^(n=12), where the function Φ is tangential to the line Φ=0. When β exceeds this value, the bifurcation condition has no solution. In general, for n=n_cr, the critical value is β_max, which gives the global maximum value of β for generating wrinkles. This optimal value can be identified by solving the simultaneous equations:
Φ=0, ∂Φ/∂ n=0, ∂Φ/∂γ=0.
where the bifurcation function Φ is described by equation (<ref>). For the elephant trunk, the relevant results are those where β is close to 1.
Figure <ref> displays the monotonically increasing β_max as a function of K. When K→0, corresponding to a free boundary, we find β_max≈0.6466. With a free end, if the skin modulus is higher or comparable to that for the muscle, so that β⩾1, surface wrinkles are impossible, even when compression is generated by differential growth, i.e., when γ>1. The underlying mechanism, in this case, is that growth happens through free trunk elongation. In the other limiting case, where K→∞, the spring constraint becomes a fixed end (Dirichlet boundary) condition, which is usually adopted in the context of growth-induced wrinkling <cit.>. In this scenario, regardless of the value of β, surface wrinkles will form at a critical growth.
To investigate the effect of different parameters on the critical state, we set β=1.27. Figure <ref> shows how the critical relative growth factor and wave number vary with respect to the skin thickness h_0. It suggests that a thicker skin always retards surface wrinkling and, at the same time, reduces the number of wrinkles. This is consistent with other reported results for growth-induced wrinkling <cit.>. Since R_1=0.25, a thicker skin layer will give rise to a lower curvature. As a result, we have that curvature increase will cause an earlier instability if the muscle thickness is fixed.
Next, we comment on the plots in Figure <ref> showing the dependence of γ_cr and n_cr on the radius of the bilayer tube R_1+h_0. In particular, we keep the thickness ratio ζ=h_0/R_1 constant, meaning that both h_0 and R_1 will increase when R_1+h_0 increases. It can be seen from Figure <ref> that the critical differential growth γ_cr is non-monotonic but varies only in a small interval [2.099,2.199]. Meanwhile, the critical wave number n_cr drops uniformly. This implies that, when the thickness ratio ζ is fixed, the critical differential growth does not alter much as the bilayer tube becomes thicker or thinner. However, the wave number changes rapidly. This may explain why there are more wrinkles in the distal narrower part of the elephant trunk than in the proximal wider part near the skull (see Figure <ref>).
For planar film/substrate bilayers, surface wrinkles are usually not affected by the thickness of the substrate which can be seen as a half-space <cit.>. To further check whether this is true for curved structures, we display the variations of γ_cr and n_cr as functions of R_1 by fixing all other parameters. As expected, R_1 alone has no effect on the wrinkled pattern but it does affect the critical differential growth γ_cr. We find that a thicker muscle substrate will cause an earlier instability. Remember that, since h_0 is constant in this case, a higher R_1 corresponds to a lower curvature. Therefore, we summarize by stating that curvature increase can delay surface wrinkles if the thickness of the skin layer is fixed. Furthermore, we discover that the inner radius R_0 has no influence on both the critical differential growth and critical wave number, thus we do not show the associated results here.
Figure <ref> illustrates the effect of the spring stiffness K on the critical state, namely that the critical differential growth γ_cr decreases as K increases, while the critical wave number n_cr is a two-step function of K.
Finally, Figure <ref> shows the critical relative growth factor γ_cr and wave number n_cr when the relative modulus β varies. In this case, we set K=20, instead of K=1 adopted in previous examples, since this latter value is too small to generate wrinkles when β is large. From Figure <ref>, we have β_max≈4.4746 when K=1. However, if K=20, then the associated value becomes β_max≈73.5354. So wrinkles are only possible provided that β<β_max, as seen from Figure <ref>.
We observe from Figure <ref> that the dependence of γ_cr on β is non-monotonic, which is different from the case when K→∞ <cit.>. By carefully analyzing the deformation, we find that the bifurcation is induced by the resultant axial force N. By equation (<ref>), N is proportional to both K and λ_z. When β increases, corresponding to a stiffer outer layer, a larger axial force N is required to generate an instability. Furthermore, Figure <ref> shows the critical stretch λ_z^cr for bifurcation as a function of β. As an increasing function is obtained, more force is necessary to trigger surface wrinkling for higher β. In Figure <ref>, the critical wave number n_cr reduces with increasing β, consistent with the other results.
§ ASYMPTOTIC ANALYSIS
In this section, we present approximate analytical results for the primary deformation and the bifurcation condition.
§.§ Primary deformation
To make analytical progress, we set λ_z=1, corresponding to the case with Dirichlet boundary conditions at both ends of the cylindrical system. This is also the limiting case when the elastic spring is infinitely stiff, i.e., K→∞. Given that the inner surface of the tubular system is free, the equation for the elastic hoop stretch α_r_0 is
g/α_r_0^2-g/α_r_1^2+βγ^2g/α_r_1^2-βγ^2g/α_h^2+logα_r_1^2/α_r_0^2+βγlogα_h^2/α_r_1^2=0,
where α_r_1^2 and α_h^2 depend on α_r_0^2 through the relations (<ref>). Assuming
α_r_0^2∼ g, α_r_1^2∼ g, α_h^2∼ g,
we have the Taylor expansion
logα_r_1^2/α_r_0^2≈α_r_1^2/α_r_0^2-1, logα_h^2/α_r_1^2≈α_h^2/α_r_1^2-1.
Then (<ref>) is approximated as follows,
g/α_r_0^2-g/α_r_1^2+βγ^2g/α_r_1^2-βγ^2g/α_h^2+α_r_1^2/α_r_0^2-1+βγ(α_h^2/α_r_1^2-1)=0,
or equivalently,
α_h^2/g(α_r_1^2/g-α_r_0^2/g)(α_r_1^2/g+1)+βγα_r_0^2/g(α_h^2/g-α_r_1^2/g)(α_h^2/g+γ)=0.
Expressing α_r_1^2 and α_h^2 in terms of α_r_0^2, as in (<ref>), yields a cubic equation in x=α_r_0^2/g, which admits only one real solution and can be solved directly. In particular, if γ=1, then α_r_0=α_r_1=α_h=g^1/2.
When R_0∼ h_0^1/2, we can approximate α_r_0∼ g^1/2 by seeking a solution of the form
α_r_0≈ x_0+x_1h_0+x_2h_0^2+x_3h_0^3+⋯,
where x_i, i=0,1,2,⋯, are unknowns to be determined. After inserting the above formula into the cubic equation for α_r_0^2/g, then expanding in powers of h_0 and collecting the coefficients of h_0^i, i=0,1,2,⋯, we obtain infinitely many algebraic equations. Therefore we are able to solve for x_0, x_1, x_2 and find an asymptotic solution for r_0=α_r_0 R_0 of the form
r_0≈ g^1/2R_0[1+βγ h_02R_1(1+8 R_0^2R_1^2)(γ ^2-1)+3
βγ(γ^2-1)^2-2(4γ^2-γ+3)(γ-1)8 R_1^2].
Note that r_0→ g^1/2R_0 as γ→ 1, which is the exact solution when there is no differential growth, i.e., the skin and the muscle grow simultaneously at the same rate.
Figure <ref> illustrates the exact and asymptotic solutions for r_0 as functions of the differential growth factor γ, when the pre-growth factor g and muscle outer radius R_1 are given, the inner radius is R_0=0.1, the skin thickness is h_0=0.01, and the relative stiffness is β=1.27, as estimated in elephant trunk.
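The exact root of the cubic equation can also be obtained numerically. Below is a short sketch, assuming λ_z=1 and the incompressibility relations quoted above for α_r_1^2 and α_h^2 in terms of α_r_0^2; the parameter values are only illustrative and the bracketing interval for the root is a choice of convenience.

```python
# Numerical solution of the cubic for x = alpha_{r0}^2/g (lambda_z = 1 assumed).
import numpy as np
from scipy.optimize import brentq

def hoop_ratios(x, gamma, R0, R1, h0):
    """A = alpha_{r1}^2/g and B = alpha_h^2/g as functions of x = alpha_{r0}^2/g,
    using the assumed incompressibility relations of the primary deformation."""
    A = (x * R0**2 + (R1**2 - R0**2)) / R1**2
    B = (x * R0**2 + (R1**2 - R0**2) + gamma * ((R1 + h0)**2 - R1**2)) / (R1 + h0)**2
    return A, B

def residual(x, gamma, beta, R0, R1, h0):
    A, B = hoop_ratios(x, gamma, R0, R1, h0)
    return B * (A - x) * (A + 1.0) + beta * gamma * x * (B - A) * (B + gamma)

def inner_radius(g, gamma, beta, R0=0.1, R1=0.25, h0=0.01):
    # The equation is independent of g; g only rescales alpha_{r0} = sqrt(g x).
    x = brentq(residual, 0.2, 5.0, args=(gamma, beta, R0, R1, h0))
    return np.sqrt(g * x) * R0          # deformed inner radius r_0

for gamma in (1.0, 1.5, 2.0):
    print(f"gamma = {gamma:4.2f}  ->  r_0 = {inner_radius(1.2, gamma, 1.27):.6f}")
print("check: g^(1/2) R_0 =", np.sqrt(1.2) * 0.1)   # recovered exactly at gamma = 1
```

At γ=1 the numerical root reduces to r_0=g^1/2R_0, consistent with the limiting behaviour noted above, and for γ>1 it provides the exact curve against which the asymptotic approximation can be compared.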
§.§ WKB approximation
To derive analytical formulae for the critical stretch and critical wave number, we employ the WKB (Wentzel–Kramers–Brillouin) method <cit.>, which has been found useful in the context of wrinkling in curved and graded structures <cit.>.
First, we see from Figure <ref> that the boundary spring stiffness K has a marginal influence on the critical wave number n_cr. Therefore, we can assume K→∞, which is equivalent to λ_z=1 (i.e., Dirichlet boundary conditions are imposed at both ends of the tubular system).
Second, we consider γ=1, so that there is no differential growth and the pre-growth factor g acts as the control parameter. As the main purpose of the asymptotic analysis is to gain analytical insight into the effect of different geometrical and material parameters on the critical state, this is a reasonable simplification.
Then the primary deformation can be analytically characterized, as follows. Starting from the original eigenvalue problem arising from equations (<ref>), we introduce a stream function
ψ(r,z)=f(r)sin(kz).
Then the incremental displacements are
u(r,z)=1/r∂ψ/∂ z, w(r,z)=-1/r∂ψ/∂ r,
and the linearized incremental incompressibility condition is automatically satisfied.
In terms of the stream function, the incremental equations and boundary conditions become, respectively,
r^3𝒜_1313f''''+2r^2(r𝒜_1313'-𝒜_1313)f'''+a_2(r)f''+a_1(r)f'+a_0(r)f=0,
and
{ ϱ_s11(r_1+h)=0, ϱ_s31(r_1+h)=0, ϱ_m11(r_0)=0, ϱ_m31(r_0)=0,
ϱ_s11(r_1)=ϱ_m11(r_1), ϱ_s31(r_1)=ϱ_m31(r_1), f_s(r_1)=f_m(r_1), f^'_s(r_1)=f^'_m(r_1),
.
where
{ ϱ_11=r^2𝒜_1313f'''+(r^2𝒜_1313'-r𝒜_1313)f''+b_1(r)f'+b_0(r)f,
ϱ_31=r𝒜_1313f''-𝒜_1313f'+k^2r(𝒜_1313+p)f,
.
with the coefficients {a_i}_i=0,1,2 and {b_i}_i=0,1 taking the following forms:
{ a_0=k^2r(𝒜_2222-𝒜_1111+2𝒜_1133-2𝒜_2233+𝒜_1111'-𝒜_1122'-𝒜_1331'-𝒜_1133'+𝒜_2233')
+k^4r^3𝒜_3131+k^2r^3𝒜_1331''+k^2r^3p'',
a_1=k^2r^2(𝒜_1111-2𝒜_1331-2𝒜_1133+𝒜_3333)+k^2r^3(2𝒜_1133'-𝒜_1111'+2𝒜_1331'-𝒜_3333')
-3𝒜_1313+3r𝒜_1313'-r^2𝒜_1313'',
a_2=k^2r^3(2𝒜_1133-𝒜_1111+2𝒜_1331-𝒜_3333)+3r𝒜_1313-3r^2𝒜_1313'+r^3𝒜_1313',
b_0=k^2r(𝒜_1111-𝒜_1122-𝒜_1133+𝒜_2233+p+𝒜_1331'+p'),
b_1=k^2r(2𝒜_1133-𝒜_1111+𝒜_1331-𝒜_3333-p')+𝒜_1313-r𝒜_1313'.
.
For the WKB-type solution to (<ref>), we have
f=exp(∫_r_c^r𝒮(τ)dτ),
where 𝒮(r) is a function to be determined, the lower limit of integration is r_c=r_1 for the skin and r_c=r_0 for the muscle.
To solve the eigenvalue problem with variable coefficients, arising from the mathematical model for transverse wrinkles when the wave number n=k/π is large, we look for solutions of the form
𝒮(r)=n 𝒮_0(r)+𝒮_1(r)+1/n𝒮_2(r)+⋯,
where 𝒮_i (i=1,2,3,⋯) are unknown functions.
Substituting the forms (<ref>) and (<ref>) into (<ref>) then equating the coefficient of n to zero provides an algebraic equation for 𝒮_0. Solving this equation directly yields four independent solutions. Similarly, we are able to derive 𝒮_1 and 𝒮_2 in a systematic manner.
For the neo-Hookean material and the restricted growth case considered here, namely, γ=1 and λ_z=1, we obtain the concise formulae given by
𝒮_0^(1,2)=±π, 𝒮_0^(3,4)=±π g^-1/2, 𝒮_1^(1,2,3,4)=1/(2r), 𝒮_2^(1,2)=±3/(8π r^2), 𝒮_2^(3,4)=±3 g^1/2/(8π r^2).
As a result, we express the general solutions as follows
{ f_m(r)=∑_i=1^4 C_iexp(∫_r_0^r𝒮^(i)_m(τ)dτ), r_0⩽ r ⩽ r_1,
f_s(r)=∑_i=1^4 C_i+4exp(∫_r_1^r𝒮^(i)_s(τ)dτ), r_1⩽ r ⩽ r_1+h.
.
Substituting these into the boundary and interface conditions (<ref>), we obtain
𝐌𝐂=0,
where
𝐂=[C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8]^T,
and
𝐌=[
[ M_11 M_12; M_21 M_22, ]],
with the sub-matrices
𝐌_11=[[ 0 0 0 0; 0 0 0 0; 𝔄_m1(r_0) 𝔄_m2(r_0) 𝔄_m3(r_0) 𝔄_m4(r_0); 𝔅_m1(r_0) 𝔅_m2(r_0) 𝔅_m3(r_0) 𝔅_m4(r_0) ]],
𝐌_12=[[ E_s1𝔄_s1(r_1+h) E_s2𝔄_s2(r_1+h) E_s3𝔄_s3(r_1+h) E_s4𝔄_s4(r_1+h); E_s1𝔅_s1(r_1+h) E_s2𝔅_s2(r_1+h) E_s3𝔅_s3(r_1+h) E_s4𝔅_s4(r_1+h); 0 0 0 0; 0 0 0 0 ]],
𝐌_21=[[ -E_m1𝔄_m1(r_1) -E_m2𝔄_m2(r_1) -E_m3𝔄_m3(r_1) -E_m4𝔄_m4(r_1); -E_m1𝔅_m1(r_1) -E_m2𝔅_m2(r_1) -E_m3𝔅_m3(r_1) -E_m4𝔅_m4(r_1); -E_m1 -E_m2 -E_m3 -E_m4; -E_m1𝒮_m^(1)'(r_1) -E_m2𝒮_m^(2)'(r_1) -E_m3𝒮_m^(3)'(r_1) -E_m4𝒮_m^(4)'(r_1) ]],
𝐌_22=[[ 𝔄_s1(r_1) 𝔄_s2(r_1) 𝔄_s3(r_1) 𝔄_s4(r_1); 𝔅_s1(r_1) 𝔅_s2(r_1) 𝔅_s3(r_1) 𝔅_s4(r_1); 1 1 1 1; 𝒮_s^(1)'(r_1) 𝒮_s^(2)'(r_1) 𝒮_s^(3)'(r_1) 𝒮_s^(4)'(r_1); ]],
and the components
.
E_mi=exp(∫_r_0^r_1𝒮_m^(i)dr), E_si=exp(∫_r_1^r_1+h𝒮_s^(i)dr),
𝔄_i=b_0+b_1𝒮^(i)'+r(r𝒜_1313'-𝒜_1313)(𝒮^(i)')^2+r^2𝒜_1313(𝒮^(i)')^3
+r(r𝒜_1313'-𝒜_1313)𝒮_s^(i)''+3r^2𝒜_1313𝒮_s^(i)'𝒮_s^(i)''+r^2𝒜_1313𝒮_s^(i)''',
𝔅_i=rk^2(𝒜_1313+p)-𝒜_1313𝒮^(i)'+r𝒜_1313(𝒮^(i)')^2+r𝒜_1313𝒮^(i)'',
} i=1,2,3,4.
Pursuing a non-trivial solution yields
det𝐌=0.
It can further be deduced from (<ref>) that E_m1 and E_m3 are exponentially large. On the other hand, as the skin is very thin, i.e., h is also small, E_si, i=1,2,3,4, no longer possess the exponentially large or small nature. In view of these facts, the terms proportional to E_m1E_m3 are dominant, such that
det𝐌/(E_m1E_m3)=Ψ+⋯.
We therefore obtain an approximate but explicit bifurcation condition:
Ψ(g,n,h_0,β,R_0,R_1)=0.
Before proceeding further, we specify the parameter values to be used in the subsequent calculations. To this end, we plot in Figure <ref> the exact bifurcation curves, in the absence of relative growth (i.e., setting γ=1), for several fixed values of the radius R_1, with R_0=0.1, h_0=0.01, and β=1.27. We find that each curve has two local minima: a first one which increases as R_1 increases, with its critical wave number decreasing, and a second one which is independent of R_1, and thus remains the same for all the curves. For example, when R_1=0.25, the first local minimum is the global minimum, attained at the wave number n=2, while when R_1=0.35, the second local minimum is the global minimum attained at n=17. This suggests a possible mode transition as the aspect ratio of the bilayer tube increases. Similar mode transitions have also been reported for compressed tubes <cit.> and film/substrate structures <cit.>.
For the current problem, we can determine the critical value for the geometric parameter R_1 where the mode transition occurs by applying the following two-step procedure:
(I) First, setting R_1=0.35 say, we determine g_min and n_min for the global minimum of the bifurcation curve by solving simultaneously the equations
Φ(g,n)=0, ∂Φ(g,n)/∂ n=0.
(II) Second, substituting g=g_min in the bifurcation condition Φ, we identify R_1=R_1^(m) and the associated small wave number n^(m) by solving the following two equations
Φ(R_1,n)=0, ∂Φ(R_1,n)/∂ n=0.
By applying the above procedure when R_0=0.1, h_0=0.01, β=1.27 and γ=1, as in Figure <ref>, we obtain R_1^(m)≈ 0.2989.
This interesting mode transition deserves to be treated in detail in a separate study. As our attention here is focused on wrinkling, i.e., on the case when the wave number is large, we shall select R_1=0.4 in all subsequent examples and thus avoid the above mode transition.
Figure <ref> compares the exact and asymptotic bifurcation curves for the pre-growth factor g as a function of the wave number n, without differential growth (i.e., we set γ=1), when the skin thickness h_0 is given, the muscle inner radius is R_0=0.1, its outer radius is R_1=0.25, and their modulus ratio is β=1.27 (small) or β=50 (large). Excellent agreement is found for all illustrated cases.
Next, we concentrate on two distinct limiting cases, namely when β∼ 1 and when β→∞, respectively.
§.§.§ The limiting case when the shear modulus ratio is close to unity
In the elephant trunk, the skin and muscle substrate have comparable stiffness <cit.>, thus β∼ 1. In this case, it can be seen from Figure <ref> that the critical pre-growth factor g_cr is far from 1, even when the skin layer is very thin, i.e., 0<h_0≪ 1. We then have two small parameters h_0 and 1/n, and approximate the critical growth factor and critical wave number as follows, <cit.>
g_cr=g_0+g_1h_0+g_2h_0^2+⋯
and
n_cr=h_0^-1(n_0+n_1h_0+n_2h_0^2+⋯).
Inserting the above forms into the simultaneous equations given by the bifurcation condition (<ref>) and the equation ∂Ψ/∂ n=0, we find that the two leading order equations for g_0 and n_0 can only be solved numerically. Once they are solved, all higher order unknowns are then derived recursively. Although we do not include here the lengthy expressions for the leading-order equations and the recursive relations, we point out that g_0 and n_0 are both independent of R_0 and R_1, and are only related to β, indicating that the critical pre-growth g_cr is dominated by β only. This is useful in explaining Figure <ref> where γ_cr belongs in the small interval [2.099,2.199]. Similarly, n_cr h_0 is governed by β, and thus the critical wave number n_cr decreases as h_0 increases, which explains the result in Figure <ref>.
The three-term approximations (<ref>)-(<ref>) for the critical pre-growth factor g_cr and critical wave number n_cr when R_0=0.1, R_1=0.4, and ζ=0.0025 (h_0=0.01) are compared with the exact solutions in Figure <ref>. It can be seen from these plots that, for β close to 1, an excellent agreement is found, validating our asymptotic solutions.
§.§.§ The limiting case when the shear modulus ratio is large
When the modulus ratio β is large, we observe from Figure <ref> that the critical pre-growth factor is close to 1. So there are in total four small parameters, namely, g_cr-1, 1/β, 1/n, and h_0. Again, we assume that there is no differential growth, i.e., γ=1. To derive analytical formulae for the critical stretch and critical wave number, we set the scalings <cit.>
g_cr-1∼𝒪(β^-2/3), n_cr∼𝒪(β^2/3), h_0∼𝒪(β^-1),
then pursue an asymptotic solution in terms of the small parameter 1/β. In a similar manner as in the previous subsection, we obtain the following explicit forms for the critical pre-growth factor
g_cr =1+3^2/3β^-2/3+7/203^4/3β^-4/3+h_0^2/R_1^23^1/3β^2/3-h_0/2R_1β^-1
+1/23^2/3β^-5/3+1009/2800 3β^-2
+79 h_0^2/20 R_1^2-h_0^4/R_1^4β^2+𝒪(β^-7/3),
and the critical wave number
n_cr =1/π h_0[3^1/3β^-1/3-1/203^3β^-1+h_0^2/R_1^2β-h_0/2R_13^-1/3β^-2/3.
.+47 h_0^2/40R_1^23^-1/3β^1/3-2 h_0^4/R_1^43^-1/3β^7/3+1229/29003^2/3β^-5/3+𝒪(β^-1)].
From equations (<ref>)-(<ref>) we infer that the inner radius R_0 is not involved up to the truncated order. Meanwhile, the muscle radius R_1 and the skin thickness h_0 affect the critical pre-growth factor through higher order terms, indicating that their influence is relatively minor compared to that of the relative modulus β. In addition, both h_0 and β feature in the leading-order term in (<ref>). Recalling the notation ζ=h_0/R_1, we can rewrite
g_cr =1+3^2/3β^-2/3+7/203^4/3β^-4/3+ζ^2 3^1/3β^2/3-1/2ζβ^-1
+1/23^2/3β^-5/3+1009/2800 3β^-2+79/20ζ^2-ζ^4β^2+𝒪(β^-7/3),
and
n_cr =1/π h_0[3^1/3β^-1/3-1/203^3β^-1+ζ^2β-ζ/23^-1/3β^-2/3.
.+47/40ζ^2 3^-1/3β^1/3-2ζ^4 3^-1/3β^7/3+1229/29003^2/3β^-5/3+𝒪(β^-1)].
It can be seen from the above equations that g_cr and n_crπ h_0 depend on h_0 and R_1 only through ζ. In particular, when β and ζ are fixed, the critical pre-growth factor g_cr is constant.
Figure <ref> shows the exact and asymptotic solutions for the critical pre-growth factor g_cr given by equation (<ref>) and the critical wave number n_cr given by equation (<ref>), as functions of the modulus ratio β, when R_0=0.1, R_1=0.4, and ζ=0.04 (h_0=0.01), in the absence of differential growth (i.e., when γ=1).
We also remark that, in equation (<ref>) for transverse wrinkles distributed along the tube length, the leading-order term 3^1/3β^-1/3 is the same as those for the circumferential wrinkles in growing bilayer tubes
<cit.>, core/shell cylinders <cit.>, and surface wrinkles in planar film/substrate bilayers <cit.>. This analysis further confirms the universal scaling between the modulus ratio and the number of wrinkles.
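As a quick numerical check of this universal scaling, the short script below evaluates only the leading-order terms of the large-β expansions quoted above, g_cr ≈ 1 + 3^2/3β^-2/3 and n_cr ≈ 3^1/3β^-1/3/(π h_0); higher-order corrections are deliberately omitted here, so the numbers are indicative rather than exact.

```python
# Leading-order large-beta estimates of the critical pre-growth factor and
# critical wave number for a stiff thin skin on a soft muscle core.
import numpy as np

def g_cr_leading(beta):
    return 1.0 + 3.0**(2.0 / 3.0) * beta**(-2.0 / 3.0)

def n_cr_leading(beta, h0):
    return 3.0**(1.0 / 3.0) * beta**(-1.0 / 3.0) / (np.pi * h0)

for beta in (10.0, 50.0, 200.0):
    print(f"beta = {beta:6.1f}   g_cr ~ {g_cr_leading(beta):.4f}   "
          f"n_cr ~ {n_cr_leading(beta, h0=0.01):.1f}")
```

The β^-1/3 dependence of the wave number is the same scaling that governs circumferential wrinkles in tubes and wrinkles in planar bilayers, which is the point made above.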
§ CONCLUSION
This work stems from the need to understand wrinkling in the elephant trunk, where physical measurements suggest that skin and muscle substrate have comparable stiffness. Due to their mechanical compatibility, the two components can grow together seamlessly, and the wrinkled skin acts as a protective barrier that is at the same time thicker and more flexible than the unwrinkled skin. Moreover, geometric parameters, such as curvature, play key roles in the formation of transverse wrinkles. In particular, our model predicts that fewer wrinkles form, and form earlier, in the proximal region, where curvature is lower, than in the distal region, which has larger curvature, as observed in elephant trunks. Similarly, the dorsal side presents more wrinkling than the ventral side, since curvature is higher dorsally than ventrally.
While our theoretical and numerical results describe mathematically how transverse wrinkles form, our investigation extends beyond that and can be useful to many other applications. Follow-up models should take into account additional deformations and loading conditions, e.g., bending, unbending, and inflation under internal pressure, as in a water-filled elephant trunk. More sophisticated hyperelastic strain-energy functions for skin and muscle tissues could also be considered.
The elephant trunk is a rich source of inspiration for bio-mechanical devices, but its physiology is yet to be understood. We hope that our analysis will stimulate further quantitative studies of elephant trunk and elephant skin more generally.
Acknowledgment
We gratefully acknowledge the UKRI Horizon Europe Guarantee MSCA (Marie Skłodowska-Curie Actions) Postdoctoral Fellowship to Yang Liu (EPSRC Grant No. EP/Y030559/1). Yang Liu further acknowledges the financial support from the National Natural Science Foundation of China (Project No. 12072227).
Data availability statement
There are no additional data associated with this article.
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
[Ayer and Mariappa, 1950]Ayer:1950:AM Ayer AA, Mariappa D. 1950. External characters of three foetuses of the Indian elephant. Proceedings of the Indian Academy of Sciences B 31(4), 193–209 (doi: 10.1007/BF03050577).
[Ben Amar and Goriely, 2005]BenAmar2005JMPS
Ben Amar M., Goriely A., 2005.
Growth and instability in elastic tissues.
Journal of the Mechanics and Physics of Solids 53, 2284–2319 (doi: 10.1016/j.jmps.2005.04.008).
[Biryukov, 1985]Biryukov1985
Biryukov, S.V., 1985.
Impedance method in the theory of elastic surface waves.
Sov. Phys. Acoust. 31, 350–354.
[Biryukov et al., 1995]Biryukov1995
Birykov, S.V., Gulyaev, Y.V., Krylov, V.V., Plessky, V.P., 1995.
Surface acoustic waves in inhomogeneous media.
Springer Berlin, Heidelberg.
[Boas and Paulli, 1908]Boas:1908:BP Boas J.E.V., Paulli S., 1908. The Elephant's Head: Studies in the Comparative Anatomy of the Organs of the Head of the Indian Elephant and Other Mammals, published at the cost of the Carlsberg-Fund, Copenhagen.
[Dagenais et al., 2021]Dagenais:2021:etal Dagenais P., Hensman S., Haechler V., 2021. Elephants evolved strategies reducing the biomechanical complexity of their trunk. Current Biology 31(21), 4727–-4737 (doi: 10.1016/j.cub.2021.08.029).
[Dai and Liu, 2014]Dai2014EPL Dai, H.-H., Liu, Y., 2014. Critical thickness ratio for buckled and wrinkled fruits and vegetables. Europhysics Letters 108(4), 44003 (doi: 10.1209/0295-5075/108/44003).
[De Pascalis et al., 2010]DePascalis:2011:PDG De Pascalis R., Destrade M., Goriely A., 2010. Nonlinear correction to the Euler buckling formula for compressed cylinders with guided-guided end conditions. Journal of Elasticity 102 (2), 191–200 (doi: 10.1007/s10659-010-9265-6).
[Destrade et al., 2014]Destrade:2014:DOSV Destrade M., Ogden R.W., Sgura I., Vergori L., 2014. Straightening wrinkles, Journal of the Mechanics and Physics of Solids 65, 1–11 (doi: 10.1016/j.jmps.2014.01.001).
[Duka et al., 1993]Duka1993AM Duka E.D., England A.H., Spencer A.J.M., 1993. Bifurcation of a solid circular cylinder under finite extension and torsion. Acta Mechanica 98, 107–121 (doi: 10.1007/BF01174297).
[Eales, 1926]Eales:1926 Eales NB. 1926. XI—The Anatomy of the head of a foetal African elephant, Elephas africanus (Loxodonta africana). Transactions of the Royal Society of Edinburgh 54(3), 491–551. (doi: 10.1017/S0080456800016082).
[Emery and Fu, 2021]Emery:2021:EF Emery D., Fu Y., 2021. Elasto-capillary circumferential buckling of soft tubes under axial loading: existence and competition with localised beading and periodic axial modes. Mechanics of Soft Materials 3, 3 (doi: 10.1007/s42558-021-00034-x).
[Flügge, 1962]Flugge:1962 Flügge W., 1962. Statik und Dynamik der Schalen. Berlin, Springer.
[Flügge, 1973]Flugge:1973 Flügge W., 1963. Stresses in Shells. Berlin, Heidelberg, New York, Springer-Verlag.
[Fosdick and Shield, 1963]Fosdick:1963:FS Fosdick R.A., Shield R.T., 1963. Small bending of a circular bar superposed on finite extension or compression. Archive for Rational Mechanics and Analysis 12, 223–248 (doi: 10.1007/BF00281227).
[Fu et al., 2016]Fu:2016:FLF Fu Y., Liu J.L., Francisco G.S., 2016. Localized bulging in an inflated cylindrical tube of arbitrary thickness - the effect of bending stiffness. Journal of the Mechanics and Physics of Solids 90, 45–60 (doi: 10.1016/j.jmps.2016.02.027).
[Fu et al., 2021]Fu:2021:FLG Fu Y., Jin L., Goriely A., 2021. Necking, beading, and bulging in soft elastic cylinders. Journal of the Mechanics and Physics of Solids 147, 104250 (doi: 10.1016/j.jmps.2020.104250).
[Goriely, 2017]goriely17 Goriely A., 2017. The Mathematics and Mechanics of Biological Growth, Springer-Verlag, New York.
[Goriely et al., 2008]Goriely2008PRSA Goriely A., Vandiver R., Destrade M., 2008. Nonlinear Euler buckling. Proceedings of the Royal Society A 464(2099), 3003–3019 (doi:10.1098/rspa.2008.0184).
[Green et al., 1952]Green:1952:GRS Green A.E., Rivlin R.S., Shield R.T., 1952. General theory of small elastic deformations superposed on finite elastic deformations. Proceedings of the Royal Society A 211, 128–154 (doi: 10.1098/rspa.1952.0030).
[Gustafson and Abe, 1998]Gustafson:1998:GA Gustafson K., Abe T., 1998. The third boundary condition – Was it Robin's?. The Mathematical Intelligencer 20, 63–71 (doi: 10.1007/BF03024402).
[Hildebrandt et al., 2007]Hildebrandt:2007:etal Hildebrandt T., Drews B., Gaeth A.P., Goeritz F., Hermes R., Schmitt D., Gray C., Rich P., Streich W.J., Short R.V., Renfree M.B., 2007. Foetal age determination and development in elephants. Proceedings of the Royal Society B 274, 323–331 (doi: 10.1098/rspb.2006.3738).
[Hinch, 1991]Hinch1991 Hinch, E.J., 1991. Perturbation Methods. Cambridge University Press.
[Ilichev and Fu, 2014]Ilichev:2014:IF Il'ichev A.T., Fu Y., 2014. Stability of an inflated hyperelastic membrane tube with localized wall thinning. International Journal of Engineering Science 80, 53–61 (doi: 10.1016/j.ijengsci.2014.02.031).
[Jia et al., 2014]Jia:2014:etal Jia F., Cao Y.P., Zhao Y., Feng X.Q., 2014. Buckling and surface wrinkling of an elastic graded cylinder with elastic modulus arbitrarily varying along radial direction. International Journal of Applied Mechanics 6(1), 1450003 (doi: 10.1142/S1758825114500033).
[Jia et al., 2015]Jia2015PRE Jia F., Li B., Cao Y.P., Xie W.H., Feng X.Q., 2015. Wrinkling pattern evolution of cylindrical biological tissues with differential growth. Physical Review E 91(1), 012403 (doi: 10.1103/PhysRevE.91.012403).
[Jia et al., 2018]Jia:2018:etal Jia F., Pearce S.P., Goriely A., Curvature delays growth-induced wrinkling. Physical Review E 98, 033003 (doi: 10.1103/PhysRevE.98.033003).
[Jin et al., 2018]Jin:2018:etal Jin L., Liu Y., Cai Z., Asymptotic solutions on the circumferential wrinkling of growing tubular tissues. International Journal of Engineering Science 128, 31–43 (doi: 10.1016/j.ijengsci.2018.03.005).
[Kaczmarski et al., 2024a]Kaczmarski:2024a:etal Kaczmarski B., Leanza S., Zhao R., Kuhl E., Moulton D.E., Goriely A., 2024. Minimal design of the elephant trunk as an active filament. Physical Review Letters 132, 248402 (doi: 10.1103/PhysRevLett.132.248402).
[Kaczmarski et al., 2024b]Kaczmarski:2024b:etal Kaczmarski B., Moulton D.E., Goriely A., Kuhl E., 2024. Minimal activation with maximal reach: Reachability clouds of bio-inspired slender manipulators. Extreme Mechanics Letters Volume 71, 102207 (doi: 10.1016/j.eml.2024.102207).
[Kier and Smith, 1985]Kier:1985:KS Kier W.M., Smith K.K., 1985. Tongues, tentacles and trunks: The biomechanics of movement in muscular-hydrostats. Zoological Journal of the Linnean Society 83(4), 307–-324 (doi: 10.1111/j.1096-3642.1985.tb01178.x).
[Leanza et al., 2024]Leanza:2024:etal Leanza S., Lu-Yang J., Kaczmarski B., Wu S., Kuhl E., Zhao R.R., 2024. Elephant trunk inspired multimodal deformations and movements of soft robotic arms. Advanced Functional Materials, 202400396 (doi: 10.1002/adfm.202400396).
[Li and Yen, 1972]Li:1972:LY Li R.C.M., Yen K.H., 1972. Elastic waves guided by a solid layer between adjacent substrates. IEEE Transactions on Microwave Theory and Techniques 20(7), 477–486 (doi: 10.1109/TMTT.1972.1127788).
[Liu and Dai, 2014]Liu2014IJES Liu Y., Dai H.-H., 2014. Compression of a hyperelastic layer-substrate structure: Transitions between buckling and surface modes. International Journal of Engineering Science 80, 74–89 (doi: 10.1016/j.ijengsci.2014.02.020).
[Liu and Dorfmann, 2024]Liu2024MMS Liu Y., Dorfmann L., 2024. Localized necking and bulging of finitely deformed residually stressed solid cylinder. Mathematics and Mechanics of Solids 29(6), 1153–1175 (doi: 10.1177/10812865231186951).
[Liu et al., 2022]Liu:2022:etal Liu C., Du Y., Li K., Zhang Y., Han Z., Zhang Y., Qu S., Lü C., 2022. Geometrical incompatibility guides pattern selection in growing bilayer tubes. Journal of the Mechanics and Physics of Solids 169, 105087 (doi: 10.1016/j.jmps.2022.105087).
[Liu et al., 2024]Liu:2024:etal Liu R.-C., Liu Y., Goriely A., 2024. Surface wrinkling of a film coated to a graded substrate. Journal of the Mechanics and Physics of Solids, 186, 105603 (doi: 10.1016/j.jmps.2024.105603).
[Pearce and Fu, 2010]Pearce:2010:PF Pearce S.P., Fu Y., 2010. Characterization and stability of localized bulging/necking in inflated membrane tubes. IMA Journal of Applied Mathematics 75(4), 581–602 (doi: 10.1093/imamat/hxq026).
[Schulz et al., 2020]Schulz:2020:SFSSH Schulz A.K., Fourney E., Sordilla S., Sukhwani A., Hu D.L., 2020. Elephant trunk skin: Nature's flexible kevlar. International Conference on Intelligent Robots and Systems (IROS), 10.
[Schulz et al., 2022a]Schulz:2022a:SBBSRHARHH Schulz A.K., Boyle M., Boyle C., Sordilla S., Rincon C., Hooper S., Aubuchon C., Reidenberg J.S., Higgins. C, Hu D.L., 2022. Skin wrinkles and folds enable asymmetric stretch in the elephant trunk. PNAS Biophysics and Computational Biology 119(31), e2122563119 (doi: 10.1073/pnas.2122563119).
[Schulz et al., 2022b]Schulz:2022b:SRWTSMEH Schulz A.K., Reidenberg J.S., Wu J.N., Tang C.Y., Seleb B., Mancebo J., Elgart N., Hu D.L., 2022. Elephants trunks use an adaptable prehensile grip. Bioinspiration & Biomimetics 18, 026228 (doi: 10.1088/1748-3190/acb477).
[Schulz et al., 2023]Schulz:2023:SSZS Schulz A.K., Schneider N., Zhang M., Singal K., 2023. A year at the forefront of hydrostatic motion. Biology Open 12, bio059834 (doi: 10.1242/bio.059834).
[Schulz et al., 2024]Schulz:2024:SRKRHB Schulz A.K., Reveyaz N., Kaufmann L., Ritter C., Hildebrandt T., Brecht M., 2023. Elephants develop wrinkles through both form & function. Society of Integrative and Comparative Biology, Seattle, USA, 1 (doi: 10.1101/2023.08.24.554618).
[Shuvalov, 2003a]Shuvalov2003PRSA
Shuvalov, A.L., 2003.
A sextic formalism for three-dimensional elastodynamics of cylindrically anisotropic radially inhomogeneous materials.
Proc. R. Soc. Lond. A 459, 1611–1639.
[Shuvalov, 2003b]Shuvalov2003QJMAM
Shuvalov, A.L., 2003.
The frobenius power series solution for cylindrically anisotropic radially inhomogeneous elastic materials.
Q. J. Mech. Appl. Math, 56(3), 327–345.
[Sigaeva et al., 2018]Sigaeva:2018:etal Sigaeva T., Mangan R., Vergori L., Destrade M., Sudak L., 2018. Wrinkles and creases in the bending, unbending and eversion of soft sectors. Proceedings of the Royal Society A 474, 20170827 (doi: 0.1098/rspa.2017.0827).
[Springhetti et al., 2023]Springhetti:2023:SRB Springhetti R., Rossetto G., Bigoni D., 2023. Buckling of thin-walled cylinders from three dimensional nonlinear elasticity. Journal of Elasticity 154(1-4), 297–323 (doi: 10.1007/s10659-022-09905-4).
[Stroh, 1962]Stroh:1962 Stroh A.N., 1962. Steady state problems in anisotropic elasticity. Journal of Mathematics and Physics 41, 77–103 (doi: 10.1002/sapm196241177).
[Thompson, 1942]Thompson:1942 Thompson D.W., 1942. On Growth and Form, Cambridge University Press, Cambridge.
[Trivedi et al., 2008] Trivedi:2008:etal Trivedi D., Rahn C.D., Kier W.M., Walker I.D., 2008. Soft robotics: biological inspiration, state of the art, and future research. Applied Bionics and Biomechanics 5(3), 99–117 (doi: 10.1080/11762320802557865).
[Wang and Fu, 2021]Wang:2021:WF Wang M., Fu Y., 2021. Necking of a hyperelastic solid cylinder under axial stretching: Evaluation of the infinite-length approximation. International Journal of Engineering Science 159, 103432 (doi: 10.1016/j.ijengsci.2020.103432).
[Wang et al., 2019]Wang:2019:etal Wang S.B., Guo G.M., Zhou L., Li L.A., Fu Y., 2019. An experimental study of localized bulging in inflated cylindrical tubes guided by newly emerged analytical results. Journal of the Mechanics and Physics of Solids 124, 536–554 (doi: 10.1016/j.jmps.2018.11.011).
[Wilkes, 1955]Wilkes1955 Wilkes E.W., 1955. On the stability of a circular tube under end thrust. The Quarterly Journal of Mechanics and Applied Mathematics 8, 88–100 (doi:10.1093/qjmam/8.1.88).
[Wilson et al., 1991]Wilson:1991:etal Wilson JF, Mahajan U, Wainwright SA, Croner LJ. 1991. A continuum model of elephant trunks. ASME Journal of Biomechanical Engineering 113(1), 79–84 (doi: 10.1115/1.2894088).
[Woo and Shield, 1962]Woo:1962:WS Woo T.C., Shield R.T., 1962. Fundamental solutions for small deformations superposed on finite biaxial extension of an elastic body. Archive for Rational Mechanics and Analysis 9, 196–224 (doi: 10.1007/BF00253345).
[Wu et al., 2024]Wu:2024:etal Wu W., Yin Y., Li Y., Fan X., 2024. Theoretical analysis of inflated tube wrinkling behavior under pure bending. International Journal of Mechanical Sciences 273, 109166 (doi: 10.1016/j.ijmecsci.2024.109166).
[Ye et al., 2020]Ye:2020:YLF Ye Y., Liu Y., Fu Y., 2020. Weakly nonlinear analysis of localized bulging of an inflated hyperelastic tube of arbitrary wall thickness. Journal of the Mechanics and Physics of Solids 135, 103804 (doi: 10.1016/j.jmps.2019.103804).
[Ye et al., 2019]Ye:2019:etal Ye S., Yin S.F., Li B., Feng X.Q., 2019. Torsion instability of anisotropic cylindrical tissues with growth. Acta Mechanica Solida Sinica 32(5), 621–632 (doi: 10.1007/s10338-019-00087-6).
[Zhang et al., 2023]Zhang:2023:etal Zhang J., Li Y., Kan Z., Yuan Q., Rajabi H., Wu Z., Peng H., Wu J., 2023. A preprogrammable continuum robot inspired by elephant trunk for dexterous manipulation. Soft Robotics 10(3), 636-646 (doi: 10.1089/soro.2022.0048).
[Zhao et al., 2014]Zhao2014JMPS Zhao Y., Cao Y.P., Feng X.Q., Ma K., 2014. Axial compression-induced wrinkles on a core–shell soft cylinder: Theoretical analysis, simulations and experiments. Journal of the Mechanics and Physics of Solids 73, 212–227 (doi: 10.1016/j.jmps.2014.09.005).
[Zhu et al., 2008]Zhu:2008:ZLO Zhu Y., Luo X.Y., Ogden R.W., 2008. Asymmetric bifurcations of thick-walled circular cylindrical elastic tubes under axial loading and external pressure. International Journal of Solids and Structures 45, 3410–3429 (doi: 10.1016/j.ijsolstr.2008.02.005).
|
http://arxiv.org/abs/2409.03278v1 | 20240905064141 | Magnitude homology and homotopy type of metric fibrations | [
"Yasuhiko Asao",
"Yu Tajima",
"Masahiko Yoshinaga"
] | math.AT | [
"math.AT"
] |
Magnitude homology and homotopy type of metric fibrations
Yasuhiko Asao, Yu Tajima, Masahiko Yoshinaga
September 5, 2024
=============================================================================================================================
§ ABSTRACT
In this article, we show that any two metric fibrations with a common base and a common fiber have isomorphic magnitude homology, and even more, the same magnitude homotopy type. This can be considered as a generalization of the fact, proved by T. Leinster, that the magnitude of a metric fibration with finitely many points is the product of those of the base and the fiber. We also show that the definition of the magnitude homotopy type due to the second and the third authors is equivalent to the geometric realization of Hepworth and Willerton's pointed simplicial set.
§ INTRODUCTION
The notion of a metric fibration was defined by T. Leinster in his study of magnitude (<cit.>). It is a “fibration in the category of metric spaces”, defined analogously to the Grothendieck fibrations of small categories, where one sees a metric space as a category enriched over ([0, ∞), ≥, +). Based on the fact that a Grothendieck fibration can also be considered as a lax functor, the first author later provided an analogous description for the metric fibration (<cit.>). A remarkable property of the metric fibration is that the magnitude of the total space of a metric fibration is a product of those of the base and the fiber if they are finite metric spaces (<cit.> Theorem 2.3.11). In this article, we show that the same is true for the magnitude homology and the magnitude homotopy type of a metric fibration possibly with infinitely many points. Namely we have the following.
Let π : E → B be a metric fibration, and let F be its fiber. For ℓ>0, we have a homotopy equivalence
MC^ℓ_∗(E) ≃⊕_ℓ_1 + ℓ_2 = ℓMC^ℓ_1_∗(F)⊗MC^ℓ_2_∗(B),
where MC denotes the magnitude chain complex.
Let π : E → B be a metric fibration and let F be its fiber. Then we have a homotopy equivalence
| M^ℓ_∙(E)| ≃⋁_ℓ_1 + ℓ_2 = ℓ| M^ℓ_1_∙(F)| ∧ | M^ℓ_2_∙(B)|,
where | M^ℓ_∙(-)| is the geometric realization of the Hepworth and Willerton's pointed simplicial set (<cit.>).
In particular, we give an another proof for the Künneth theorem for magnitude homology proved by Hepworth and Willerton (<cit.> Proposition 8.4).
We use the terminology magnitude homotopy type as a CW complex whose singular homology is isomorphic to the magnitude homology of some metric space. Such a topological space first appeared in Hepworth and Willerton's paper (<cit.> Definition 8.1), and later the second and the third author gave another definition (<cit.>) by generalizing the construction for graphs due to the first author and Izumihara (<cit.>). In their paper, the second and the third author stated that both definitions of the magnitude homotopy type, theirs and Hepworth-Willerton's, are equivalent without a proof. We give a proof of it in the appendix (Proposition <ref>).
The main idea of the proof of our main results is to construct a contractible subcomplex D^ℓ_∗(E) of the magnitude chain complex MC^ℓ_∗(E) for a metric fibration π : E → B. We have the following isomorphism (Proposition <ref>)
MC^ℓ_∗(E)/D_∗^ℓ(E)≅⊕_ℓ_1 + ℓ_2 = ℓMC^ℓ_1_∗(F)⊗MC^ℓ_2_∗(B),
where F is the fiber of π. To find such a subcomplex D^ℓ_∗(E), we use the classification horizontal, vertical, tilted, of pairs of points of E as in Figure <ref>. We define (Definition <ref>) a submodule D^ℓ_n(E) of MC^ℓ_n(E) as the one generated by tuples (x_0, …, x_n) ∈ P^ℓ_n(E) ⊂ E^n+1 that contain a tilted pair (x_s, x_s+1) earlier than a horizontal-vertical triple (x_t, x_t+1, x_t+2) (namely s+1 ≤ t), or contain a horizontal-vertical triple (x_t, x_t+1, x_t+2) earlier than a tilted pair (x_s, x_s+1) (namely t+2 ≤ s). We show that D^ℓ_∗(E) is a subcomplex of MC^ℓ_∗(E) (Lemma <ref>), and that it is contractible (Proposition <ref>) by using algebraic Morse theory. For the magnitude homotopy type, we basically follow the same argument using Δ-sets instead of chain complexes (Section <ref>).
In the remained part of this article, we show the isomorphism of magnitude homology in Section <ref>, and show the equvalence of magnitude homotopy type in Section <ref>. The Section <ref> is an appendix section in which we show the equivalence of definions of the magnitude homotopy type.
§.§.§ Acknowledgements
Y. A. was supported by JSPS KAKENHI 24K16927. Y. T. was supported by JST SPRING JPMJSP2119.
M. Y. was partially supported by JSPS KAKENHI JP22K18668 and JP23H00081.
§ ISOMORPHISM AT HOMOLOGY LEVEL
§.§ magnitude homology
Let (X, d) be a metric space.
* For ℓ∈ℝ_≥ 0 and n ∈ℤ_≥ 0, we define
P_n^ℓ(X) := {(x_0, …, x_n) ∈ X^n+1| x_i≠ x_i+1, ∑_i=0^n-1d(x_i, x_i+1) = ℓ},
and P_n(X) := ∪_ℓ P_n^ℓ(X).
* For x, y, z ∈ X, we write x ≺ y ≺ z if d(x, z) = d(x, y) + d(y, z).
* The magnitude chain complex (MC^ℓ_∗(X), ∂^ℓ_∗) is defined by MC^ℓ_n(X) = ℤP^ℓ_n(X) and
∂_n (x_0, …, x_n) := ∑_x_i-1≺ x_i ≺ x_i+1(-1)^i(x_0, …, x̂_i, …, x_n).
Its homology MH^ℓ_∗(X) is called the magnitude homology of X.
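For readers who prefer to experiment, the following is a small computational sketch of this definition for a finite metric space: it enumerates the generators P_n^ℓ(X), assembles the boundary matrices from the condition x_i-1≺ x_i ≺ x_i+1, and computes the ranks of MH^ℓ_n over ℚ by rank–nullity. The example space, a 4-cycle with its shortest-path metric, is only an illustration and is not taken from this paper.

```python
# Magnitude homology ranks (over Q) of a small finite metric space.
from itertools import product
import numpy as np

def cycle_metric(n):
    return {(i, j): min(abs(i - j), n - abs(i - j)) for i in range(n) for j in range(n)}

def generators(d, points, ell, n):
    """Tuples (x_0,...,x_n) with consecutive entries distinct and total length ell."""
    gens = []
    for tup in product(points, repeat=n + 1):
        if any(tup[i] == tup[i + 1] for i in range(n)):
            continue
        if sum(d[tup[i], tup[i + 1]] for i in range(n)) == ell:
            gens.append(tup)
    return gens

def boundary_matrix(d, gens_n, gens_nm1):
    """Matrix of the boundary MC^l_n -> MC^l_{n-1} in the generator bases."""
    index = {g: k for k, g in enumerate(gens_nm1)}
    B = np.zeros((len(gens_nm1), len(gens_n)))
    for col, g in enumerate(gens_n):
        n = len(g) - 1
        for i in range(1, n):                      # only 1 <= i <= n-1 can be smooth
            if d[g[i - 1], g[i]] + d[g[i], g[i + 1]] == d[g[i - 1], g[i + 1]]:
                B[index[g[:i] + g[i + 1:]], col] += (-1) ** i
    return B

def magnitude_homology_ranks(d, points, ell, max_n):
    gens = [generators(d, points, ell, n) for n in range(max_n + 2)]
    ranks = []
    for n in range(max_n + 1):
        dim_n = len(gens[n])
        rk_in = np.linalg.matrix_rank(boundary_matrix(d, gens[n], gens[n - 1])) if n >= 1 and dim_n and gens[n - 1] else 0
        rk_out = np.linalg.matrix_rank(boundary_matrix(d, gens[n + 1], gens[n])) if gens[n + 1] and dim_n else 0
        ranks.append(dim_n - rk_in - rk_out)       # rank of MH^l_n over Q
    return ranks

d = cycle_metric(4)
for ell in (1, 2):
    print("l =", ell, "MH ranks:", magnitude_homology_ranks(d, range(4), ell, max_n=3))
```

For instance, the rank of MH^1_1 equals the number of ordered pairs of points at distance 1, which is a standard first sanity check for such computations.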
§.§ metric fibration
A Lipschitz map π : E → B is a metric fibration if it satisfies the following: for all x ∈ E and b ∈ B, there uniquely exists x^b ∈π^-1b satisfying
* d(x, x^b) = d(π x, b),
* d(x, y) = d(x, x^b) + d(x^b, y) for all y ∈π^-1b.
Let π : E → B be a metric fibration. For b, b' ∈ B, the map π^-1b →π^-1b' ; x ↦ x^b' is an isomorphism of metric spaces.
<cit.> Lemma 2.3.10, <cit.> Lemma 3.4.
* Let W be the monoid freely generated by the words h, v, t. We denote the subset of W that consists of words of length n by W_n.
* For a metric fibration π : E → B, we define a map T : P_1(E) → W_1 by
T(x, x') = h if d(x, x') = d(π x, π x'),
v if d(π x, π x') = 0,
t if 0 < d(π x, π x') < d(x, x').
We extend this map to a map T : P_n(E) → W_n by T(x_0, …, x_n) = T(x_0, x_1)… T(x_n-1, x_n).
* For xy∈ W_2 and z∈ W_1, we write xy = z if there is a metric fibration π : E → B and (x, y, z) ∈ P_2(E) satisfying that x≺ y ≺ z, T(x, y, z) = xy and T(x, z) = z. We also define { xy} = { z∈ W_1 | xy = z}.
The words h, v, t are abbreviations of horizontal, vertical and tilted respectively.
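The classification of pairs is easy to compute explicitly. The sketch below does so for the trivial (product) metric fibration E = B × F with the ℓ^1 metric, which is an illustrative choice on my part; a non-trivial fibration would only change the distance function and the projection, not the classifier itself.

```python
# Horizontal / vertical / tilted classification of pairs in a metric fibration,
# illustrated on the trivial product fibration E = B x F with d_E = d_B + d_F.
from itertools import product

def path_metric(n):
    """Shortest-path metric on the path graph I_n with vertices 0,...,n-1."""
    return lambda a, b: abs(a - b)

dB, dF = path_metric(3), path_metric(2)          # base I_3, fiber I_2
E = list(product(range(3), range(2)))            # points (b, f)
proj = lambda x: x[0]                            # the fibration pi : E -> B
dE = lambda x, y: dB(x[0], y[0]) + dF(x[1], y[1])

def T(x, y):
    """The letter attached to a pair (x, y) with x != y."""
    if dE(x, y) == dB(proj(x), proj(y)):
        return "h"                               # horizontal
    if dB(proj(x), proj(y)) == 0:
        return "v"                               # vertical
    return "t"                                   # tilted

def T_word(tup):
    """Extension of T to tuples, concatenating the letters of consecutive pairs."""
    return "".join(T(tup[i], tup[i + 1]) for i in range(len(tup) - 1))

for x, y in product(E, E):
    if x != y:
        print(x, y, T(x, y))
print(T_word([(0, 0), (0, 1), (1, 1), (2, 0)]))   # e.g. a word of the form v h t
```

In the product case a pair is horizontal exactly when the fiber coordinates agree, vertical when the base coordinates agree, and tilted otherwise, which matches the three cases of the definition above.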
In the following figures, the graph on the left is (I_2 × I_2) × I_3, where I_n is the graph with vertices {1, …, n} and edges {{i, i+1}| 1 ≤ i ≤ n-1}, and the graph on the right is a non-trivial metric fibration over the complete graph K_3 with the fiber I_2. We have the following :
* 1 ≺ 2 ≺ 6, T(1, 2, 6) = , T(1, 6) = ,
1 ≺ 5 ≺ 6, T(1, 5, 6) = , T(1, 6) = ,
* 1 ≺ 2 ≺ 7, T(1, 2, 7) = , T(1, 7) = ,
1 ≺ 6 ≺ 7, T(1, 6, 7) = , T(1, 7) = ,
* 1 ≺ 5 ≺ 10, T(1, 5, 10) = , T(1, 10) = ,
1 ≺ 6 ≺ 10, T(1, 6, 10) = , T(1, 10) = ,
* 1 ≺ 6 ≺ 11, T(1, 6, 11) = , T(1, 11) =,
* 1 ≺ 2 ≺ 3, T(1, 2, 3) = , T(1, 3) =,
* a ≺ e ≺ f, T(a, e, f) = , T(a, f) =.
For each x, y∈_1, we have the following.
* { xy} = {}⇔ xy = ⇔ xy =.
* {}= {}= {}, and { x} = { x} = {} for all x∈_1.
* {} = {, }.
* For (x, y, z) ∈ P_2(E) with x≺ y ≺ z and T(x, y, z) =, we have T(x, z) = if and only if π x ≺π y ≺π z.
* Obviously we have { xy} = {}⇒ xy =. Also we have xy = ⇒{ xy} = {}. Hence it is enough to show that xy = implies xy =. Let (x, y, z) ∈ P_2(E) with x ≺ y ≺ z. We show that T(x, z) = implies T(x, y) = T(y, z) =. If T(x, z) =, we have π x = π z, which implies
d(x, y) + d(y, z) = d(π x, π y) + d(x, y^π x) + d(π y, π z) + d(y^π z, z)
= d(x, y^π x) + d(y^π x, z) + 2d(π x, π y)
≥ d(x, z) + 2d(π x, π y).
Since we have x ≺ y ≺ z, we obtain that d(π x, π y) = d(π z, π y) = 0, namely T(x, y) = T(y, z) =.
* Note that we have = and = by Example <ref> (1), and we also have ( = ) and ( = ) by the definition of the metric fibration. Hence we obtain {}= {} = {}. Suppose that T(x, y, z) = x for (x, y, z) ∈ P_2(E), x∈_1 and x ≺ y ≺ z. Then we have d(x, z) = d(x, y) + d(y, z) > d(π x, π y) + d(π y, π z) ≥ d(π x, π z) > 0 by T(y, z) = and (1). Hence we obtain T(x, z) =, and by Example <ref> (2), (3) and (4), we obtain { x} = {}. We can similarly show that { x} = {}.
* We have {}⊂{, } by (1), and the inverse inclusion follows from Example <ref> (5) and (6).
* By T(x, y, z) = and x≺ y ≺ z, we have
d(x, z) = d(x, y) + d(y, z) = d(π x, π y) + d(π y, π z).
Hence T(x, z) = implies that d(π x, π z) = d(x, z) = d(π x, π y) + d(π y, π z), and π x ≺π y ≺π z implies that d(x, z) = d(π x, π z).
§.§ a subcomplex D_∗^ℓ(E)⊂ MC^ℓ_∗(E)
In the following, we construct a chain subcomplex D^ℓ_∗(E) ⊂ MC^ℓ_∗(E) that consists of tuples of two special types P_n^ℓ, t(E) and P_n^ℓ, hv(E). We define the set P_n^ℓ, t(E) ⊂ P^ℓ_n(E) as the set of tuples containing a tilted pair (x_s, x_s+1) earlier than any horizontal-vertical triple (x_t, x_t+1, x_t+2) (namely s+1 ≤ t). Dually, we define the set P_n^ℓ, hv(E) ⊂ P^ℓ_n(E) as the set of tuples containing a horizontal-vertical triple (x_t, x_t+1, x_t+2) earlier than any tilted pair (x_s, x_s+1) (namely t+2 ≤ s). Formally we define them as follows.
For a metric fibration π : E → B, we define subsets P_n^ℓ, t(E), P_n^ℓ, hv(E) ⊂ P^ℓ_n(E) by
P_n^ℓ, t(E) := {x ∈ P^ℓ_n(E) | Tx ∈ v^m h^m' t W for m, m'≥ 0},
P_n^ℓ, hv(E) := {x ∈ P^ℓ_n(E) | Tx ∈ v^m h^m'+1 v W for m, m'≥ 0}.
We also define a submodule D^ℓ_n(E) := ℤP_n^ℓ, t, hv(E) ⊂ MC^ℓ_n(E), where P_n^ℓ, t, hv(E) = P_n^ℓ, t(E)∪ P_n^ℓ, hv(E).
We have ∂_n x ∈ D^ℓ_n-1(E) for x ∈ P_n^ℓ, t, hv(E). Namely, D^ℓ_∗(E) ⊂ MC^ℓ_∗(E) is a chain subcomplex.
It follows from Lemma <ref>.
Let π : E → B be a metric fibration. We fix b ∈ B and F := π^-1b. Then we have an isomorphism of chain complexes
MC^ℓ_∗(E)/D_∗^ℓ(E)≅⊕_ℓ_1 + ℓ_2 = ℓMC^ℓ_1_∗(F)⊗MC^ℓ_2_∗(B).
Note that the module MC^ℓ_n(E)/D_n^ℓ(E) is freely generated by tuples x ∈ P^ℓ_n(E) with Tx = v^m h^n-m for some 0 ≤ m ≤ n. For each n≥ 0, we define a homomorphism φ_n : MC^ℓ_n(E)/D_n^ℓ(E) →⊕_ℓ_1 + ℓ_2 = ℓ, m≥ 0 MC^ℓ_1_m(F)⊗MC^ℓ_2_n-m(B) by
φ_n(x_0, …, x_n) = (x_0^b, …, x_m^b)⊗ (π x_m, …, π x_n),
where we suppose that T(x_0, …, x_n) = v^m h^n-m. This homomorphism has an inverse ψ_n defined by
ψ_n((f_0, …, f_m)⊗ (b_0, …, b_n-m)) = (f_0^b_0, …, f_m^b_0, f_m^b_0b_1, f_m^b_0b_1b_2, …, f_m^b_0… b_n-m),
where we denote a point (f_m^b_0)^b_1 by f_m^b_0b_1 and similarly for further iterations.
Hence it reduces to show that φ_∗ is a chain map. We denote the boundary operator on ^ℓ_∗(E)/D_∗^ℓ(E) induced from ^ℓ_∗ by [^ℓ]_∗ in the following. For (x_0, …, x_n) ∈^ℓ_n(E)/D_n^ℓ(E) with T(x_0, …, x_n) = ^m^n-m, we have
[^ℓ]_n(x_0, …, x_n) = ∑_x_i-1≺ x_i ≺ x_i+1
T(x_i-1, x_i+1) ≠(-1)^i(x_0, …, x̂_i, …, x_n)
= ∑_x_i-1≺ x_i ≺ x_i+1
1 ≤ i ≤ m-1(-1)^i(x_0, …, x̂_i, …, x_m, …, x_n)
+ ∑_π x_i-1≺π x_i ≺π x_i+1
m+1 ≤ i ≤ n-1(-1)^i(x_0, …, x_m, …, x̂_i, …, x_n),
by Lemma <ref> (2) and (4). Hence we obtain that
φ_n-1[^ℓ]_n(x_0, …, x_n) = ∑_x^b_i-1≺ x^b_i ≺ x^b_i+1
1 ≤ i ≤ m-1(-1)^i(x^b_0, …, x̂_i^b, …, x^b_m) ⊗ (π x_m, …, π x_n)
+ ∑_π x_i-1≺π x_i ≺π x_i+1
m+1≤ i ≤ n-1(-1)^i(x^b_0, …, x^b_m) ⊗ (π x_m, …, π x_i, …, π x_n).
On the other hand, for φ_n(x_0, …, x_n) = (x^b_0, …, x^b_m)⊗ (π x_m, …, π x_n) ∈^ℓ__m(F)⊗^ℓ__n-m(B), we have
(_m^ℓ_⊗_n-m^ℓ_)φ_n(x_0, …, x_n) = ∑_x^b_i-1≺ x^b_i ≺ x^b_i+1
1 ≤ i ≤ m-1(-1)^i(x^b_0, …, x̂_i^b, …, x^b_m)⊗ (π x_m, …, π x_n)
+ ∑_π x_i-1≺π x_i ≺π x_i+1
m+1≤ i ≤ n-1(-1)^i(x^b_0, …, x^b_m)⊗ (π x_m, …, π x_i, …, π x_n).
Thus we obtain that φ_n-1[^ℓ]_n = (_m^ℓ_⊗_n-m^ℓ_)φ_n.
§.§ Algebraic Morse Theory
We recall the algebraic Morse theory studied in <cit.>. Let C_∗ = (C_∗, ∂_∗) be a chain complex with a decomposition C_n = ⊕_a ∈ I_n C_n, a and C_n, a≅ℤ for each n. For a ∈ I_n+1 and b ∈ I_n, let f_ab : C_n+1, a→ C_n, b be the composition
C_n+1, a↪ C_n+1→ C_n↠ C_n, b. We define a directed graph Γ_C_∗ with vertices ∐_n I_n and directed edges {a → b | f_ab≠ 0}.
* A matching M of a directed graph Γ is a subset of directed edges M ⊂ E(Γ) such that each two distinct edges in M have no common vertices.
* For a matching M of a directed graph, vertices that are not the endpoints of any edges in M are called critical.
* For a matching M of a directed graph Γ, we define a new directed graph Γ^M by inverting the direction of all edges in M.
A matching M on Γ_C_∗ is called a Morse matching if it satisfies the following.
* f_ab is an isomorphism if a → b ∈ M.
* Γ_C_∗^M is acyclic, that is, there are no closed paths in Γ_C_∗^M of the form
a_1→ b_1→…→ b_p-1→ a_p=a_1
with a_i∈ I_n+1 and b_i∈ I_n for some p.
For a matching M on Γ_C_∗, we denote the subset of I_n that consists of critical vertices by Ĩ_n.
For a Morse matching M on Γ_C_∗, we have a chain complex (C̃_n = ⊕_a ∈Ĩ_n C_n, a, ∂̃_∗) that is homotopy equivalent to (C_∗, ∂_∗).
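The two conditions in the definition of a Morse matching are mechanical to verify, and a small generic checker may help fix ideas. The sketch below works with a based complex given by its nonzero structure coefficients and a candidate matching; the toy data at the end are purely illustrative and have nothing to do with the complex D^ℓ_∗(E) constructed in the next subsection.

```python
# Check the Morse-matching conditions: invertible matched coefficients (here +-1)
# and acyclicity of the graph with matched edges reversed.
def is_morse_matching(coeff, matching):
    """coeff: dict {(a, b): f_ab} over basis elements a in I_{n+1}, b in I_n;
    matching: set of edges (a, b) proposed as the matching M."""
    used = set()
    for (a, b) in matching:                       # condition 1 + disjointness
        if coeff.get((a, b), 0) not in (1, -1):
            return False
        if a in used or b in used:
            return False
        used.update({a, b})
    graph = {}                                    # condition 2: build Gamma^M
    for (a, b), c in coeff.items():
        if c == 0:
            continue
        src, dst = (b, a) if (a, b) in matching else (a, b)
        graph.setdefault(src, []).append(dst)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}
    def dfs(v):
        colour[v] = GREY
        for w in graph.get(v, []):
            if colour.get(w, WHITE) == GREY:
                return False                      # back edge => directed cycle
            if colour.get(w, WHITE) == WHITE and not dfs(w):
                return False
        colour[v] = BLACK
        return True
    return all(colour.get(v, WHITE) != WHITE or dfs(v) for v in list(graph))

# Toy complex: two 1-cells over two 0-cells with boundary matrix [[1, 1], [-1, 1]].
coeff = {("a1", "b1"): 1, ("a1", "b2"): -1, ("a2", "b1"): 1, ("a2", "b2"): 1}
print(is_morse_matching(coeff, {("a1", "b1")}))                  # True
print(is_morse_matching(coeff, {("a1", "b1"), ("a2", "b2")}))    # False: a cycle appears
```

The second call fails precisely because reversing both matched edges creates the directed cycle b1 → a1 → b2 → a2 → b1, which is the kind of obstruction that the acyclicity condition rules out.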
§.§ matching on D_∗^ℓ(E)
We apply algebraic Morse theory to the chain complex (D^ℓ_∗(E), ∂^ℓ_∗) with the decomposition D^ℓ_n(E) = ⊕_a ∈ P^ℓ, t, hv_n(E) D_n, a and D_n, a≅ℤ. For a = (x_0, …, x_n+1) ∈ P^ℓ, t, hv_n+1(E) and b ∈ P^ℓ, t, hv_n(E), we write b = ∂^ℓ_n+1, ia if b = (x_0, …, x̂_i, …, x_n+1). It is immediately verified that f_ab is an isomorphism for a ∈ P^ℓ, t, hv_n+1(E) and b ∈ P^ℓ, t, hv_n(E) if and only if b = ∂^ℓ_n+1, ia for some i.
* For a = (x_0, …, x_n) ∈ P^ℓ, t_n(E) with Ta ∈ v^m h^m' t W, we define
a^t := (x_0, …, x_m+m', x_m+m'^π x_m+m'+1, x_m+m'+1, …, x_n).
* For (x_0, …, x_n) ∈ P^ℓ_n(E), we define
|(x_0, …, x_n)| := ∑_T(x_i, x_i+1)= t i.
Namely, we obtain a tuple a^t by filling the gap of the first tilted part of a. The filled part becomes a horizontal-vertical triple.
Let a_1 ≠ a_2 ∈ P^ℓ, t_n(E). If a_2 = ∂^ℓ_n+1, ia_1^t for some i, then we have |a_1^t| < |a_2^t|.
Suppose that Ta_1 = ^m^m' xw for some x∈_1 and w∈. Then we have Ta_1^ = ^m^m'+1 xw. If we have ^ℓ_n+1, ia_1^ = a_2 ∈ P^ℓ, _n(E), then we should have
Ta_2 ∈{^m-1^m' xw, ^m^m”^m'-m”-2 xw, ^m^m'+1 w},
by Lemma <ref>. In each case, we have
Ta_2^∈{^m-1^m' xw, ^m^m”+1^m'-m”-2 xw, ^m^m'+2 w}
respectively. In all cases, we have |a_1^| < |a_2^|.
We define a matching M on D_∗^ℓ(E) by
M = {f_a^ta : a^t→ a | a ∈ P^ℓ, t_n(E) }.
This is apparently a matching, and is also acyclic by Lemma <ref>. Further, there is no critical vertex in Γ_D_∗^ℓ(E). Thus we obtain the following by Proposition <ref>.
The chain complex D_∗^ℓ(E) is contractible.
Let π : E → B be a metric fibration, and let F be its fiber. For ℓ>0, we have a homotopy equivalence and an isomorphism
MC^ℓ_∗(E) ≃ MC^ℓ_∗(E)/D^ℓ_∗(E) ≅⊕_ℓ_1 + ℓ_2 = ℓMC^ℓ_1_∗(F)⊗MC^ℓ_2_∗(B).
It follows from Propositions <ref>, <ref> and the fact that each quasi-isomorphism between levelwise free chain complexes is induced from a homotopy equivalence.
Note that, by Corollary <ref>, we reprove the Künneth theorem in <cit.> Proposition 8.4, namely MH^ℓ_∗(F× B) ≅ H_∗(⊕_ℓ_1 + ℓ_2 = ℓMC^ℓ_1_∗(F)⊗MC^ℓ_2_∗(B)).
§ EQUIVALENCE OF MAGNITUDE HOMOTOPY TYPE
§.§ Δ-set
We denote by Δ the category of finite ordinals {0 < 1 < … < n} =: [n] and order preserving maps between them. We define maps δ_n,i : [n-1] → [n] and σ_n,i : [n+1] → [n] for 0 ≤ i ≤ n by δ_n,i(j) = j if j < i and δ_n,i(j) = j+1 if j ≥ i, and by σ_n,i(j) = j if j ≤ i and σ_n,i(j) = j-1 if j > i. We abbreviate them to δ_i and σ_i. Note that every order preserving map f : [m] → [n] can be uniquely decomposed as a composition f = ϕ_1(f)ϕ_2(f) of order preserving maps such that ϕ_1(f) is injective and ϕ_2(f) is surjective. Also, we can decompose ϕ_1(f) and ϕ_2(f) into compositions of δ_i's and σ_i's respectively.
A family of sets X_∙ = {X_n}_n≥ 0 equipped with maps d_i : X_n → X_n-1 (0 ≤ i ≤ n) is called a Δ-set if it satisfies d_id_j = d_j-1d_i for i<j. Equivalently, a Δ-set is a functor Δ_ inj^ op → Set, where Δ_ inj is the category of finite ordinals and order preserving injections, which are generated by the δ_i's. We define the category of Δ-sets by Δ := Set^Δ_ inj^ op.
Note that the inclusion j : Δ_ injΔ induces a functor j^∗ : Δ. Namely, for a simplicial set S_∙, we can obtain a Δ-set j^∗ S_∙ by forgetting the degeneracy maps. The functor j^∗ has the left adjoint (<cit.> Theorem 1.7) j_! : Δ defined by
(j_!X_∙)_n = {(p, f) | p ∈ X_n-k, f : [n] ↠ [n-k] ∈Δ , 0≤ k ≤ n}.
The structure maps d_i : (j_!X_∙)_n (j_!X_∙)_n-1, s_i : (j_!X_∙)_n (j_!X_∙)_n+1 for 0≤ i ≤ n are defined by
d_i(p, f) = ((ϕ_1(fδ_i))^∗ p, ϕ_2(fδ_i)),
s_i(p, f) = (p, fσ_i),
where we use the following composition and factorization of maps:
(diagram: the composite [n-1] →^δ_i [n] →^f [n-k] factors as ϕ_1(fδ_i)∘ϕ_2(fδ_i), with surjection ϕ_2(fδ_i) : [n-1] ↠ [m] and injection ϕ_1(fδ_i) : [m] ↪ [n-k].)
* For a metric space X, ℓ∈_≥ 0 and n ∈_≥ 0, we define m^ℓ_n(X) := P^ℓ_n(X)∪{∗}. We also define maps d_i : m^ℓ_n(X) m^ℓ_n-1(X) for 0 ≤ i ≤ n by
d_i(∗) = ∗, and d_i(x_0, …, x_n) = (x_0, …, x̂_i, …, x_n) if x_i-1≺ x_i ≺ x_i+1 and 1 ≤ i ≤ n-1, and d_i(x_0, …, x_n) = ∗ otherwise.
Then it is immediate to verify that m^ℓ_∙(X) is a Δ-set.
* For a metric space X, we denote Hepworth and Willerton's simplicial set (<cit.> Definition 8.1) by M^ℓ_∙(X). That is defined by
M^ℓ_n(X) = {(x_0, …, x_n) ∈ X^n+1|∑_i=0^n-1d(x_i, x_i+1) = ℓ}∪{∗},
for ℓ∈_≥ 0 and n ∈_≥ 0. The maps d_i's are defined by the same formula as those of m^ℓ_∙, and s_i's are defined by s_i(x_0, …, x_n) = (x_0, …, x_i, x_i, …, x_n) and s_i(∗) = ∗.
* For a point ∗∈, defined by ∗_n = {∗}, we have
(j_!j^∗∗)_n ≅{f : [n] ↠ [n-k] | 0 ≤ k ≤ n},
and d_if = ϕ_2(fδ_i), s_i f = fσ_i for f : [n] ↠ [n-k]. Note that the non-degenerate simplices of (j_!j^∗∗)_∙ are only the identities id_[n], and its geometric realization |(j_!j^∗∗)_∙| is S^∞.
* For a metric space X and ℓ∈_≥ 0, we define a simplicial set M^ℓ_∙(X) by
M^ℓ_n(X) = {(x_0, …, x_n) ∈ X^n+1|∑_i=0^n-1d(x_i, x_i+1) = ℓ}∪{f : [n] ↠ [n-k] | 0 ≤ k ≤ n}.
We define
d_i(f) = ϕ_2(fδ_i),
d_i(x_0, …, x_n) = (x_0, …, x̂_i, …, x_n) if x_i-1≺ x_i ≺ x_i+1 and 1 ≤ i ≤ n-1, and d_i(x_0, …, x_n) = id_[n-1] otherwise,
and
s_i(f) = fσ_i,
s_i(x_0, …, x_n) = (x_0, …, x_i, x_i, …, x_n).
We have j_! m^ℓ_∙(X) ≅ M^ℓ_∙ (X).
In the following, we denote the maps j_! m^ℓ_n(X) j_! m^ℓ_m(X) and M^ℓ_n(X) M^ℓ_m(X) induced from a map f : [m] [n] by f^ m and f^ M respectively. We also denote the structure maps d_i, s_i's of j_! m^ℓ_∙(X) and M^ℓ_∙(X) by d_i^ m, s_i^ m and d_i^ M, s_i^ M's respectively. We define a map F_n : (j_! m^ℓ_∙(X))_n M^ℓ_n (X) by
F_n(p, f) = f if p = ∗, and F_n(p, f) = f^ Mp if p ≠ ∗,
where we identify an element p ∈ P^ℓ_n-k(X) ⊂ m^ℓ_n-k(X) with an element p ∈ M^ℓ_n-k (X). This map is obviously a bijection, hence it reduces to show that this defines a morphism of simplicial sets. Now we have
F_n+1 s_i^ m(p, f) = F_n+1(p, fσ_i), which equals fσ_i if p = ∗ and s_i^ Mf^ Mp if p ≠ ∗; in both cases this is s_i^ MF_n(p,f).
We also have
F_n-1 d_i^ m(p, f) = F_n-1(ϕ_1^ m p, ϕ_2), which equals ϕ_2 if ϕ_1^ m p = ∗ and ϕ_2^ Mϕ_1^ M p if ϕ_1^ m p ≠ ∗,
where we abbreviate ϕ_1(fδ_i), ϕ_2(fδ_i) to ϕ_1, ϕ_2 respectively, and we identify ϕ^ m_1p ∈ m^ℓ_∙(X) with ϕ^ M_1p ∈ M^ℓ_∙ (X). Also, we have
d_i^ MF_n(p, f) = d_i^ Mf if p = ∗, and d_i^ Mf^ M p if p ≠ ∗;
that is, d_i^ MF_n(p, f) = ϕ_2 if p = ∗, and ϕ_2^ Mϕ_1^ M p if p ≠ ∗.
Splitting the second case according to whether ϕ_1^ m p = ∗ or not, we obtain ϕ_2 if p = ∗; ϕ_2 if p ≠ ∗ and ϕ_1^ m p = ∗; and ϕ_2^ Mϕ_1^ M p if p ≠ ∗ and ϕ_1^ m p ≠ ∗.
Hence d_i^ MF_n(p, f) = ϕ_2 if ϕ_1^ m p = ∗, and ϕ_2^ Mϕ_1^ M p if ϕ_1^ m p ≠ ∗.
Hence F_∙ is an isomorphism of simplicial sets.
We have a homotopy equivalence | M^ℓ_∙(X)| ≃ | M^ℓ_∙(X)|.
Obviously we have an inclusion j_!j^∗∗ M^ℓ_∙(X), and its quotient map M^ℓ_∙(X) M^ℓ_∙(X). Hence it induces a sequence |j_!j^∗∗| | M^ℓ_∙(X)| | M^ℓ_∙(X)|. Since |j_!j^∗∗| ≃ S^∞ is a subcomplex of | M^ℓ_∙(X)|, we conclude that | M^ℓ_∙(X)| ≃ | M^ℓ_∙(X)|.
§.§ D^ℓ_∙(E) ⊂ m^ℓ_∙(E)
For a metric fibration π : E → B, we define a Δ-subset D^ℓ_∙(E) ⊂ m^ℓ_∙(E) by D^ℓ_n(E) = P^ℓ, , _n(E)∪{∗} for ℓ∈_≥ 0.
We can verify that D^ℓ_∙(E) is indeed a Δ-set by Lemma <ref>.
|j_!D^ℓ_∙(E)| is contractible.
By the same argument as the proof of Proposition <ref>, |j_!D^ℓ_∙(E)| is homotopy equivalent to the geometric realization of a simplicial subset K_∙⊂ M^ℓ_∙(E) generated from the family of sets P^ℓ, , _∙(E). Since the non-degenerate simplices of K_∙ are elements of P^ℓ, , _n(E)'s, the chain complex C_∗ K is homotopy equivalent to the chain complex D^ℓ_∗(E) of Definition <ref>, which is contractible. Therefore it reduces to show that |K_∙| is simply connected. Recall that the fundamental groupoid Π_1 |K_∙| is equivalent to the fundamental groupoid Π_1 K_∙, whose objects are vertices of K_∙ and morphisms are generated by edges of K_∙ with the identification d_0σ d_2σ∼ d_1σ for σ∈ K_2. Now Π_1 K_∙ has only one object, and each morphism is a sequence of tuples (x_0, x_1) with T(x_0, x_1) =. Since we have (x_0, x_1)=d_1(x_0, x_1)^∼ d_0(x_0, x_1)^d_2(x_0, x_1)^∼∗, this groupoid is a trivial group.
We have a homotopy equivalence |j_! m^ℓ_∙(E)| ≃ |j_! m^ℓ_∙(E)/j_!D^ℓ_∙(E)|.
Same as Proposition <ref>.
We have m^ℓ_∙(E)/D^ℓ_∙(E) ≅ m^ℓ_∙(F× B)/D^ℓ_∙(F× B), where F = π^-1b for a fixed b∈ B.
We define a map φ_∙ : m^ℓ_∙(E)/D^ℓ_∙(E) m^ℓ_∙(F× B)/D^ℓ_∙(F× B) by φ_n(∗) = ∗ and
φ_n(x_0, …, x_n)
=((x_0^b, π x_0), …, (x_i^b, π x_i), …, (x_m^b, π x_m),(x_m^b, π x_m+1), …, (x_m^b, π x_m+j) …, (x_m^b, π x_n)),
where we suppose that T(x_0, …, x_n) = ^m^n-m. This map has an inverse ψ_∙ defined by
ψ_n((f_0, b_0), …, (f_m, b_0), (f_m, b_1), …, (f_m, b_n-m))= (f_0^b_0, …, f_m^b_0, f_m^b_0b_1, f_m^b_0b_1b_2, …, f_m^b_0… b_n-m).
Hence it remains to show that φ_∙ is a morphism of Δ-sets, which can be verified in the same manner as Proposition <ref>.
Let π : E → B be a metric fibration and let F be its fiber. Then we have a homotopy equivalence | M^ℓ_∙(E)| ≃ | M^ℓ_∙(F× B)|.
We have homotopy equivalences
| M^ℓ_∙(E)| ≃ |j_! m^ℓ_∙(E)| ≃ | j_! m^ℓ_∙(E)/j_!D^ℓ_∙(E)|
≅ |j_! m^ℓ_∙(F× B)/j_!D^ℓ_∙(F× B)| ≃ | j_! m^ℓ_∙(F× B)| ≃ | M^ℓ_∙(F× B)|,
by Propositions <ref>, <ref>, <ref> and <ref>. Note that j_! commutes with quotients since it is a left adjoint.
From Tajima and Yoshinaga's Künneth theorem for magnitude homotopy type (<cit.> Theorem 4.27) together with the coincidence of two definitions of magnitude homotopy types (Proposition <ref>), we have the following.
Let π : E → B be a metric fibration and let F be its fiber. Then we have a homotopy equivalence | M^ℓ_∙(E)| ≃⋁_ℓ_ + ℓ_ = ℓ| M^ℓ__∙(F)| ∧ | M^ℓ__∙(B)|.
§ APPENDIX
In this appendix, we prove the following proposition which is stated in <cit.> without a proof.
Let X be a metric space and ℓ∈_≥ 0. Tajima and Yoshinaga's magnitude homotopy type ℳ^ℓ(X) is homeomorphic to the geometric realization | M^ℓ_∙(X)| of Hepworth and Willerton's simplicial set M^ℓ_∙(X).
Recall from <cit.> that the CW complex ℳ^ℓ(X) is defined as the quotient |Δ Cau^ℓ(X)|/|Δ' Cau^ℓ(X)| of the geometric realization of simplicial complexes Δ Cau^ℓ(X) and Δ' Cau^ℓ(X). Here, the simplicial complex Δ Cau^ℓ(X) is the order complex of the poset Cau^ℓ(X) = ∐_a, b ∈ X Cau^ℓ(X ; a, b) defined by
Cau^ℓ(X ; a, b) = {(x, t) ∈ X × [0, ℓ] | d(a, x) ≤ t, d(x, b) ≤ℓ - t},
where (x, t) ≤ (x', t') if and only if d(x, x') ≤ t'-t. Then the simplicial complex Δ Cau^ℓ(X) = ∐_a, b ∈ XΔ Cau^ℓ(X ; a, b) is defined by
Δ Cau^ℓ(X ; a, b) = {{(x_0, t_0), …, (x_n, t_n)}| d(x_i, x_i+1) ≤ t_i+1-t_i for -1 ≤ i ≤ n },
where we put x_-1 = a, x_n+1 = b, t_-1 = 0, t_n+1 = ℓ. Since we can extend each partial order to a total order, the simplicial complex Δ Cau^ℓ(X ; a, b) can be considered as an ordered simplicial complex, and each face of it can be expressed as a tuple ((x_0, t_0), …, (x_n, t_n)) which is not just a set of points {(x_0, t_0), …, (x_n, t_n)}. The simplicial subcomplex Δ' Cau^ℓ(X) = ∐_a, b ∈ XΔ' Cau^ℓ(X ; a, b) is defined by
Δ' Cau^ℓ(X ; a, b) = {((x_0, t_0), …, (x_n, t_n)) ∈Δ Cau^ℓ(X ; a, b) |∑_i=0^n-1d(x_i, x_i+1) <ℓ},
which is also ordered. Here we note that we have d(x_i, x_i+1) = t_i+1-t_i for all -1≤ i ≤ n if and only if ∑_i=0^n-1d(x_i, x_i+1) = ℓ by Proposition 4.2 of <cit.>.
Note first that each ordered simplicial complex X can be turned into a Δ-set X in a natural manner, and we obtain a simplicial set j_!X. Obviously, the geometric realization of the ordered simplicial complex X is homeomorphic to the geometric realization |j_!X| by the definitions. Also, for a pair Y ⊂ X of ordered simplicial complexes, we have |X|/|Y| ≅ |j_!X|/|j_!Y| ≅ |j_!X/j_!Y|. Hence we have
ℳ^ℓ(X) = |Δ Cau^ℓ(X)|/|Δ' Cau^ℓ(X)|
≅⋁_a, b|Δ Cau^ℓ(X; a, b)|/|Δ' Cau^ℓ(X; a, b)|
≅⋁_a, b|j_!Δ Cau^ℓ(X; a, b)/j_!Δ' Cau^ℓ(X; a, b)|
≅ |⋁_a, bj_!Δ Cau^ℓ(X; a, b)/j_!Δ' Cau^ℓ(X; a, b)|.
Now, by Proposition 4.2 of <cit.>, we have ⋁_a, bj_!Δ Cau^ℓ(X; a, b)/j_!Δ' Cau^ℓ(X; a, b) = M^ℓ_∙(X).
A2 Y. Asao, Classification of metric fibrations. arXiv:2307.04387, 2023.
AI Y. Asao and K. Izumihara, Geometric approach to graph magnitude homology. Homology Homotopy Appl. 23, No. 1, 297-310 (2020).
HW R. Hepworth and S. Willerton, Categorifying the magnitude of a graph. arXiv:1505.04125; Homology, Homotopy and Applications 19(2) (2017), 31–60.
L2 T. Leinster, The magnitude of metric spaces. arXiv:1012.5857; Documenta Mathematica 18 (2013), 857–905.
RS C. P. Rourke and B. J. Sanderson, Δ-sets. I. Homotopy theory. Quart. J. Math. Oxford Ser. (2) 22 (1971), 321–338.
Sk E. Sköldberg, Morse theory from an algebraic viewpoint, Trans. Amer. Math. Soc. 358 (2006), 115–129.
TY Y. Tajima and M. Yoshinaga, Causal order complex and magnitude homotopy type of metric spaces, arXiv:2302.09752, 2023; International Mathematics Research Notices 4 (2024), 3176–3222.
|
http://arxiv.org/abs/2409.02460v1 | 20240904055526 | Weak decays of $B_s$ to $D_s$ based on the helicity analysis | [
"Sara Rahmani",
"Mostafa Ahwazian"
] | hep-ph | [
"hep-ph"
] | |
http://arxiv.org/abs/2409.03492v1 | 20240905125938 | Distributionally Robust Optimisation with Bayesian Ambiguity Sets | [
"Charita Dellaporta",
"Patrick O'Hara",
"Theodoros Damoulas"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Distributionally Robust Optimisation with Bayesian Ambiguity Sets
Charita Dellaporta, Patrick O'Hara, Theodoros Damoulas
==================================================================
*These authors contributed equally to this work.
§ ABSTRACT
Decision making under uncertainty is challenging since the data-generating process (DGP) is often unknown.
Bayesian inference proceeds by estimating the DGP through posterior beliefs about the model's parameters.
However, minimising the expected risk under these posterior beliefs can lead to sub-optimal decisions due to model uncertainty or limited, noisy observations.
To address this, we introduce Distributionally Robust Optimisation with Bayesian Ambiguity Sets (DRO-BAS) which hedges against uncertainty in the model by optimising the worst-case risk over a posterior-informed ambiguity set. We show that our method admits a closed-form dual representation for many exponential family members and showcase its improved out-of-sample robustness against existing Bayesian DRO methodology in the Newsvendor problem.
§ INTRODUCTION
Decision-makers are regularly confronted with the problem of optimising an objective under uncertainty.
Let x ∈^d be a decision-making variable that minimises a stochastic objective function f: ^d ×Ξ→, where Ξ is the data space and let ^⋆∈(Ξ) be the data-generating process (DGP) where (Ξ) is the space of Borel distributions over Ξ. In practice, we do not have access to ^⋆ but to n independently and identically distributed (i.i.d.) observations := ξ_1:n∼^⋆. Without knowledge of the DGP, model-based inference considers a family of models _Θ := {_θ : θ∈Θ}⊂(Ξ) where each _θ has probability density function p(ξ | θ) for parameter space Θ⊆^k. In a Bayesian framework, data is combined with a prior π(θ) to obtain posterior beliefs about θ through Π(θ | ). Bayesian Risk Optimisation <cit.> then solves a stochastic optimisation problem:
min_x ∈^d _θ∼Π(θ|)[_ξ∼_θ[f(x, ξ)]].
However, our Bayesian estimator is likely different from the true DGP due to model and data uncertainty: the number of observations may be small; the data noisy; or the prior or model may be misspecified.
The optimisation problem (<ref>) inherits any estimation error, and leads to overly optimistic decisions on out-of-sample scenarios even if the estimator is unbiased: this phenomenon is called the optimiser's curse <cit.>.
For example, if the number of observations is small and the prior is overly concentrated, then the decision is likely to be overly optimistic.
To hedge against the uncertainty of the estimated distribution, the field of Distributionally Robust Optimisation (DRO) minimises the expected objective function under the worst-case distribution that lies in an ambiguity set U ⊂(Ξ).
Discrepancy-based ambiguity sets contain distributions
that are close to a nominal distribution in the sense of some discrepancy measure such as the Kullback-Leibler (KL) divergence <cit.>, Wasserstein distance <cit.> or Maximum Mean Discrepancy <cit.>.
For example, some model-based methods <cit.> consider a family of parametric models and create discrepancy-based ambiguity sets centered on the fitted model.
However, uncertainty about the parameters is not captured in these
works, which can lead to a nominal distribution far away from the DGP when the data is limited. The established framework for capturing such uncertainty is Bayesian inference.
The closest work to ours, using parametric Bayesian inference to inform the optimisation problem, is Bayesian DRO (BDRO) by <cit.>.
BDRO constructs discrepancy-based ambiguity sets with the KL divergence and takes an expected worst-case approach, under the posterior distribution.
More specifically, let U_ϵ(_θ) := {∈(Ξ) : (‖_θ) ≤ϵ} be the ambiguity set centered on distribution _θ with parameter ϵ∈ [0, ∞) controlling the size of the ambiguity set.
Under the expected value of the posterior, Bayesian DRO solves:
min_x ∈^d _θ∼Π(θ|) [sup_∈ U_ϵ (_θ) _ξ∼[f(x, ξ)]],
where _θ∼Π(θ|)[Y] := ∫_Θ Y(θ)Π(θ|) dθ denotes the expectation of random variable Y: Θ→ with respect to Π(θ|).
A decision maker is often interested in protecting against and quantifying the worst-case risk, but BDRO does not correspond to a worst-case risk analysis.
Moreover, the BDRO dual problem is a two-stage stochastic problem that involves a double expectation over the posterior and likelihood.
To get a good approximation of the dual problem, a large number of samples are required, which increases the solve time of the dual problem.
We introduce DRO with Bayesian Ambiguity Sets (DRO-BAS), an alternative optimisation objective for Bayesian decision-making under uncertainty, based on a posterior-informed ambiguity set. The resulting problem corresponds to a worst-case risk minimisation over distributions with small expected deviation from the candidate model. We go beyond ball-based ambiguity sets, which are dependent on a single nominal distribution, by allowing the shape of the ambiguity set to be informed by the posterior.
For many exponential family models, we show that the dual formulation of DRO-BAS is an efficient single-stage stochastic program.
§ DRO WITH BAYESIAN AMBIGUITY SETS
We propose the following DRO-BAS objective:
min_x ∈^d sup_: _θ∼Π[D(, _θ)] ≤ϵ _ξ∼ [ f_x(ξ) ],
where ∈(Ξ) is a distribution in the ambiguity set, f_x(ξ) := f(x, ξ) is the objective function, D: (Ξ) ×(Ξ) → is a divergence, and ϵ∈ [0, ∞) is a tolerance level.
The ambiguity set is informed by the posterior distribution Π by considering all probability measures ∈(Ξ) which are ϵ-away from _θ in expectation, with ϵ dictating the desired amount of risk in the decision.
The shape of our ambiguity set is flexible and driven by the posterior distribution. This is contrary to standard ambiguity sets which correspond to a ball around a fixed nominal distribution. The DRO-BAS problem (<ref>) is still a worst-case approach, keeping with DRO tradition, instead of BDRO's expected worst-case formulation (<ref>), see <Ref>.
The Bayesian posterior Π(θ|) targets the KL minimiser between the model family and ^⋆ <cit.>, hence it is natural to choose D(, _θ) to be the KL divergence of with respect to _θ denoted by (‖_θ). This means that as n →∞ the posterior collapses to θ_0 := _θ∈Θ(^⋆, _θ) and the ambiguity set is just a KL-ball around _θ_0.
Using the KL divergence in the DRO-BAS problem in (<ref>), it is straight-forward to obtain an upper bound of the worst-case risk for general models (see <Ref> for a proof):
sup_: _θ∼Π [(Q ‖_θ)] ≤ϵ _ξ∼[f_x(ξ)] ≤inf_γ≥ 0 γϵ + _θ∼Π[ γln_ξ∼_θ [ exp( f_x(ξ)/γ) ] ].
Exact closed-form solutions of DRO-BAS can be obtained for a wide range of exponential family models with conjugate priors. When the likelihood distribution is a member of the exponential family, a conjugate prior also belongs to the exponential family <cit.>.
In this setting, before we prove the main result, we start with an important Lemma.
Let p(ξ|θ) be an exponential family likelihood and π(θ), Π(θ|) a conjugate prior-posterior pair, also members of the exponential family.
Let τ_0, τ_n ∈ T be hyperparameters of the prior and posterior respectively, where T is the hyperparameter space.
Let θ̅_n ∈Θ depend upon τ_n and let G: T → be a function of the hyperparameters. If the following identity holds:
_θ∼Π[ ln p(ξ|θ) ] = ln p(ξ|θ̅_n) - G(τ_n),
then the expected KL-divergence can be written as:
_θ∼Π[ (‖_θ) ] = (, _θ̅_n) + G(τ_n).
The condition in (<ref>) is a natural property of many exponential family models, some of which are showcased in Table <ref>. Future work aims to prove this for all exponential family models.
It is straightforward to establish the minimum tolerance level ϵ_min required to obtain a non-empty ambiguity set. Since the KL divergence is non-negative, under the condition of Lemma <ref>, for any ∈(Ξ):
_θ∼Π [(‖_θ)] = (, _θ̅_n) + G(τ_n) ≥ G(τ_n) := ϵ_min(n).
We are now ready to prove our main result.
Suppose the conditions of Lemma <ref> hold and ϵ≥ϵ_min(n) as in (<ref>).
Let τ_n ∈ T, θ̅_n ∈Θ, and G: T →.
Then
sup_: _θ∼Π [(‖_θ)] ≤ϵ _ξ∼[f_x(ξ)] = inf_γ≥ 0 γ (ϵ - G(τ_n)) + γln_ξ∼ p(ξ|θ̅_n) [ exp( f_x(ξ)/γ) ].
To guarantee that the DRO-BAS objective upper bounds the expected risk under the DGP, the decision-maker aims to choose ϵ large enough so that ^⋆ is contained in the ambiguity set. The condition in (<ref>) yields a closed-form expression for the optimal radius ϵ^⋆ by noting that:
ϵ^⋆ = _θ∼Π [(^⋆‖_θ)] = (^⋆, _θ̅_n) + G(τ_n).
If the model is well-specified, and hence ^⋆ and _θ̅_n belong to the same exponential family, it is straightforward to obtain ϵ^⋆ based on the prior, posterior and true parameter values.
We give examples in <Ref>.
In practice, since the true parameter values are unknown, we can approximate ϵ^⋆ using the observed samples. It follows that for any ϵ≥ϵ^⋆≥ϵ_min(n):
_ξ∼^⋆[f(x, ξ)] ≤sup_: _θ∼Π [(Q ‖_θ)] ≤ϵ _ξ∼[f_x(ξ)].
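To make the dual formulation above concrete, the following is a minimal sketch of how it can be solved numerically for the newsvendor loss used in the next section. It is not the authors' implementation: the function name, the holding/backorder costs, and the crude grid search over the dual variable γ (rather than optimising γ jointly via exponential-cone constraints) are our own illustrative choices, and the model expectation is replaced by a sample average over N draws from p(ξ|θ̄_n).

```python
import numpy as np
import cvxpy as cp

def dro_bas_newsvendor(xi_model, eps, G_tau, h=1.0, b=2.0,
                       gammas=np.geomspace(1e-2, 1e2, 50)):
    """Sketch of the DRO-BAS dual for the newsvendor loss.

    xi_model   -- N samples from the model p(xi | theta_bar_n)
    eps, G_tau -- tolerance level and the hyperparameter term G(tau_n)
    h, b       -- holding and backorder costs (illustrative values)
    For each fixed gamma the objective is DCP (log-sum-exp of convex pieces),
    so gamma is swept on a grid rather than optimised jointly.
    """
    N = len(xi_model)
    x = cp.Variable(nonneg=True)            # inventory level, 0 <= x <= 50
    best_val, best_x = np.inf, None
    for gamma in gammas:
        f = h * cp.maximum(0, x - xi_model) + b * cp.maximum(0, xi_model - x)
        # gamma*(eps - G(tau_n)) + gamma * log E_model[exp(f/gamma)], sample-average version
        obj = gamma * (eps - G_tau) + gamma * (cp.log_sum_exp(f / gamma) - np.log(N))
        prob = cp.Problem(cp.Minimize(obj), [x <= 50])
        prob.solve(solver=cp.ECOS)
        if prob.value is not None and prob.value < best_val:
            best_val, best_x = prob.value, float(x.value)
    return best_x, best_val
```

Joint optimisation over (x, γ) is also possible by modelling the log-partition term with exponential-cone constraints, which is presumably closer to what the CVXPY/MOSEK implementation described in the supplementary material does.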
§ THE NEWSVENDOR PROBLEM
Experiment setup. We evaluate DRO-BAS against the BDRO framework on a univariate Newsvendor problem with a well-specified univariate Gaussian likelihood with unknown mean and variance (<Ref> showcases a misspecified setting).
The goal is to choose an inventory level 0 ≤ x ≤ 50 of a perishable product with unknown customer demand ξ∈ that minimises the cost function f(x, ξ) = h max(0, x - ξ) + b max(0, ξ - x), where h and b are the holding cost and backorder cost per unit of the product respectively.
We let ^⋆ be a univariate Gaussian (μ_⋆, σ^2_⋆) with μ_⋆ = 25 and σ^2_⋆ = 100.
For random seed j = 1,…,200, the training dataset _n^(j) contains n = 20 observations and the test dataset _m^(j) contains m = 50 observations.
The conjugate prior and posterior are normal-gamma distributions (<Ref>).
N is the total number of samples from each model.
For each seed j, we run DRO-BAS and BDRO with N = 25, 100, 900 and across 21 different values of ϵ ranging from 0.05 to 3. For DRO-BAS, N is the number of samples from p(ξ|θ̅_n) and for BDRO, N = N_θ× N_ξ where N_θ is the number of posterior samples and N_ξ likelihood samples due to the double expectation present; we set N_θ = N_ξ to compare models on an equal N total samples regime. For a given ϵ, we calculate the out-of-sample mean m(ϵ) and variance v(ϵ) of the objective function f(x^(j)_ϵ, ξ̂_i) over all ξ̂_i ∈^(j)_m and over all seeds j=1,…,200, where x^(j)_ϵ is the optimal solution on training dataset ^(j)_n (see <Ref>).
Analysis. <Ref> shows that, for small sample size N = 25, 100, our framework dominates BDRO in the sense that DRO-BAS forms a Pareto front for the out-of-sample mean-variance tradeoff of the objective function f.
That is, for any ϵ_1, let m_BDRO(ϵ_1) and v_BDRO(ϵ_1) be the out-of-sample mean and variance respectively of BDRO:
then there exists ϵ_2 with out-of-sample mean m_BAS(ϵ_2) and variance v_BAS(ϵ_2) of DRO-BAS such that m_BAS(ϵ_2) < m_BDRO(ϵ_1) and v_BAS(ϵ_2) < v_BDRO(ϵ_1).
When N=900, <Ref> shows DRO-BAS and BDRO lie roughly on the same Pareto front.
To summarise, BDRO requires more samples N than DRO-BAS for good out-of-sample performance, likely because BDRO must evaluate a double expectation over the posterior and likelihood, whilst DRO-BAS only samples from p(ξ|θ̅_n).
For fixed N, the solve times for DRO-BAS and BDRO are broadly comparable (see <Ref>).
§ DISCUSSION
We proposed a novel approach to Bayesian decision-making under uncertainty through a DRO objective based on posterior-informed Bayesian ambiguity sets. The resulting optimisation problem is a single-stage stochastic program with closed-form formulation for a variety of exponential-family models. The suggested methodology has good out-of-sample performance, as showcased in <Ref>. Future work aims to extend DRO-BAS to a general formulation for exponential family models, including higher-dimensional problems, in which we expect to see further advantages of our method due to the nature of the Bayesian Ambiguity Set.
CD acknowledges support from EPSRC grant [EP/T51794X/1] as part of the Warwick CDT in Mathematics and Statistics. PO and TD acknowledge support from a UKRI Turing AI acceleration Fellowship [EP/V02678X/1] and a Turing Impact Award from the Alan Turing Institute. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) license to any Author Accepted Manuscript version arising from this submission.
Supplementary Material
The Supplementary Material is organised as follows: <Ref> provides details for all the exponential family models discussed in <Ref>, while <Ref> contains the proofs of all mathematical results appearing in the main text. <Ref> provides additional experimental details for the Newsvendor problem in <Ref>. Finally, in <Ref> we present experimental results for the newsvendor problem example with a misspecified model.
§ SPECIAL CASES
We derive the values of G(τ_n) and θ̅_n for different likelihoods and conjugate prior/posterior in <Ref>.
Each subsection contains a corollary with the result in <Ref>.
§.§ Gaussian model with unknown mean and known variance
Let the random variable ξ be univariate and have continuous support.
We assume the variance σ^2 of ξ is known.
We estimate the mean of a univariate Gaussian distribution with known variance σ^2.
The example can be found in <cit.>.
We define our parameter θ to be the unknown mean μ and we place a Gaussian prior π(μ) over it.
The likelihood is p(ξ|μ) = (μ, σ^2), the prior over μ is
π(μ) = (μ_0, σ_0^2), and the conjugate posterior is Π(μ|) = (μ|μ_n , σ_n^2),
where
μ_n := σ^2/(nσ^2_0 + σ^2) μ_0 + n σ_0^2/(n σ_0^2 + σ^2) μ̂, 1/σ^2_n := 1/σ_0^2 + n/σ^2, μ̂ := 1/n∑_i=1^n ξ_i.
Let μ_n ∈ and σ, σ_n ∈_+. Then
_μ∼(μ_n,σ_n^2)[ log(μ, σ^2) ] = log(μ_n, σ^2) - σ_n^2/(2σ^2).
The result is a special case of <cit.>.
When the likelihood is a Gaussian distribution with unknown mean and known variance σ^2 and the prior and posterior are Gaussian distributions (see <Ref>), then <Ref> holds with θ̅_n = μ_n and G(τ_n) = σ_n^2/(2σ^2).
<Ref> shows that the condition (<ref>) in <Ref> holds, thus <Ref> follows.
Tolerance level ϵ
In the well-specified case, where we assume that ^⋆ := _θ^⋆ for some θ^⋆∈Θ, it is easy to obtain the required size of the ambiguity set exactly. Let θ^⋆ := μ^⋆ and ^⋆ := N(μ^⋆, σ^2). By <Ref> it follows that:
_θ∼ N(μ_n, σ^2_n)[(^⋆, _θ) ] = ((μ^⋆ - μ_n)^2 + σ^2_n)/(2 σ^2).
So for a fixed finite sample ξ_1:n∼^⋆, if ϵ≥ϵ^⋆ := ((μ^⋆ - μ_n)^2 + σ^2_n)/(2 σ^2), it follows that DRO-BAS upper-bounds the target optimisation objective:
_ξ∼^⋆ [f_x(ξ)] ≤sup_: _θ∼Π(·|ξ_1:n) [(Q ‖ _θ)] ≤ϵ _ξ∼[f_x(ξ)].
In practice, since μ^⋆ is unknown, ϵ^⋆ can be approximated by using the sample mean.
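A short numerical sketch of this subsection (the function names and prior values are placeholders): the conjugate update for (μ_n, σ_n^2) and the sample-mean approximation of ϵ^⋆ suggested above.

```python
import numpy as np

def known_variance_posterior(xi, sigma2, mu0=0.0, sigma02=1.0):
    """Conjugate Gaussian update when the likelihood variance sigma2 is known."""
    n, xbar = len(xi), np.mean(xi)
    sigma_n2 = 1.0 / (1.0 / sigma02 + n / sigma2)
    mu_n = sigma_n2 * (mu0 / sigma02 + n * xbar / sigma2)
    return mu_n, sigma_n2

def eps_star_estimate(xi, sigma2, mu_n, sigma_n2):
    """epsilon* with the unknown true mean mu* replaced by the sample mean."""
    return ((np.mean(xi) - mu_n) ** 2 + sigma_n2) / (2.0 * sigma2)
```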
§.§ Gaussian Model with Unknown Mean and Variance
In this section, we consider a Bayesian model that estimates the unknown mean and variance of a uni-variate Gaussian distribution.
We define our model in <Ref>, then prove some preliminary results for the normal-gamma distribution, before proving our main result in <Ref>.
Following <cit.>, we place a normal-gamma prior over the mean μ and precision λ = σ^-2.
The normal-gamma prior is the conjugate to a Gaussian likelihood and results in a normal-gamma posterior distribution.
Likelihood: p(ξ|μ, λ) = (ξ|μ, λ^-1)
Prior: π(μ, λ) = NG(μ, λ|μ_0, κ_0, α_0, β_0 ) = (μ|μ_0, (κ_0 λ)^-1) · Ga(λ|α_0, β_0)
Posterior: π(μ, λ|) = NG(μ, λ|μ_n, κ_n, α_n, β_n) = (μ|μ_n , (λκ_n)^-1 ) · Ga(λ|α_n, β_n)
where
μ_n := (κ_0 μ_0 + n ξ̅_n)/(κ_0 + n), κ_n := κ_0 + n, α_n := α_0 + n/2, ξ̅_n := 1/n∑_i=1^n ξ_i,
β_n := β_0 + 1/2∑_i=1^n (ξ_i - ξ̅_n)^2 + κ_0 n (ξ̅_n - μ_0)^2/(2(κ_0 + n)).
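For reference, a small sketch of the conjugate update above (the function name is ours; the default hyperparameters are the values used in the experiments). The centre distribution entering the DRO-BAS dual is then p(ξ|θ̄_n) = N(μ_n, β_n/α_n), by the corollary below.

```python
import numpy as np

def normal_gamma_posterior(xi, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Normal-gamma conjugate update for the Gaussian with unknown mean and variance."""
    n, xbar = len(xi), np.mean(xi)
    mu_n = (kappa0 * mu0 + n * xbar) / (kappa0 + n)
    kappa_n = kappa0 + n
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((xi - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * (kappa0 + n)))
    return mu_n, kappa_n, alpha_n, beta_n
```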
In <Ref>, we derive condition <Ref> for a Gaussian model with unknown mean and unknown variance.
Before proceeding, we need to define the gamma and digamma functions and recall the moments of the normal-gamma distribution.
The gamma function Γ: → and digamma function ψ: → are
Γ(z) := (z-1)! ψ(z) := /ẓlnΓ(z).
Let NG(μ, λ|μ_n, κ_n, α_n, β_n) be a normal-gamma distribution with parameters μ_n ∈ and κ_n, α_n, β_n ∈_+.
The moments of the normal-gamma distribution are
_NG[lnλ] = ψ(α_n) - lnβ_n, _NG[λ] = α_n/β_n, _NG[λμ] = μ_n α_n/β_n, _NG[λμ^2] = 1/κ_n + μ_n^2α_n/β_n.
where ψ: → is the digamma function from <Ref>.
Let μ_n ∈ and κ_n, α_n, β_n ∈_+. Then
_NG(μ, λ|μ_n, κ_n, α_n, β_n)[ ln(ξ|μ, λ^-1 ) ] = ln( ξ|μ_n, β_n/α_n) - 1/2( 1/κ_n +lnα_n - ψ(α_n) )
where ψ: → is the digamma function from <Ref>.
First, observe that the natural logarithm of the Gaussian distribution may be re-written as
ln(ξ|μ, λ^-1) = ln( λ^1/2/(2π)^1/2exp( - λ/2 (ξ - μ)^2 ) )
= 1/2lnλ - 1/2ln 2π - λ/2 (ξ - μ)^2
= 1/2lnλ - 1/2ln 2π - 1/2λξ^2 + λμξ - 1/2λμ^2.
In what follows, for shorthand, we denote the expectation _NG(μ, λ|μ_n, κ_n, α_n, β_n) as _μ, λ∼ NG:
_μ, λ∼ NG[ ln(ξ|μ, λ^-1 ) ]
(i)=_μ, λ∼ NG[ 1/2lnλ - 1/2ln 2π - 1/2λξ^2 + λμξ - 1/2λμ^2 ]
(ii)= - 1/2ln 2π + 1/2_μ, λ∼ NG[ lnλ] - 1/2ξ^2 ·_μ, λ∼ NG[ λ] + ξ·_μ, λ∼ NG[ λμ] - 1/2_μ, λ∼ NG[ λμ^2 ]
(iii)= - 1/2ln 2π + 1/2 (ψ(α_n) - lnβ_n) - 1/2ξ^2 α_n/β_n + ξμ_n α_n/β_n - 1/2(1/κ_n + μ_n^2α_n/β_n)
(iv)= - 1/2ln 2π + 1/2(ψ(α_n) - lnβ_n - 1/κ_n) - α_n/2β_n( ξ - μ_n )^2
(v)= - 1/2ln 2π - 1/2(lnβ_n - lnα_n + lnα_n - ψ(α_n) + 1/κ_n) + lnexp(- 1/2β_n/α_n( ξ - μ_n )^2 )
(vi)= - 1/2(lnα_n - ψ(α_n) + 1/κ_n) - 1/2ln(2πβ_n/α_n) + lnexp(- 1/2β_n/α_n( ξ - μ_n )^2 )
(vii)= - 1/2( lnα_n - ψ(α_n) + 1/κ_n) + ln( 1/√(2πβ_n/α_n)exp( - (ξ - μ_n)^2/(2 β_n/α_n) ) )
(viii)= - 1/2( lnα_n - ψ(α_n) + 1/κ_n) + ln( ξ|μ_n, β_n/α_n)
where in equation (i) we take the expectation over the normal-gamma distribution;
(ii) we apply linearity of expectation;
(iii) we use the moment-generating functions from <Ref>;
(iv) we complete the square;
(v) we add and subtract lnα_n;
(vi) and (vii) we re-arrange and apply log identities;
and finally in (viii) we use the definition of a Gaussian probability density function.
When the likelihood is a Gaussian distribution with unknown mean and variance, and the conjugate prior and posterior are normal-gamma distributions (see <Ref>), then <Ref> holds with θ̅_n = (μ_n, β_n/α_n) and G(τ_n) = 1/2( lnα_n - ψ(α_n) + 1/κ_n).
<Ref> shows that the condition (<ref>) in <Ref> holds, thus <Ref> follows.
Tolerance level ϵ
In the well-specified case, where we assume that ^⋆ := _θ^⋆ for some θ^⋆∈Θ, it is easy to obtain the required size of the ambiguity set exactly. Let θ^⋆ := (μ^⋆, λ^⋆^-1) and ^⋆ := N(μ^⋆, λ^⋆^-1). Using <Ref> we obtain:
_μ, λ∼ NG(μ, λ|μ_n, κ_n, α_n, β_n)[(^⋆, (ξ|μ, λ^-1)) ]
= ( ^⋆ ‖ (μ_n, β_n/α_n) ) + 1/2( 1/κ_n + lnα_n - ψ(α_n) )
= ln(√(λ^⋆β_n/α_n)) + (λ^⋆^-1 + (μ^⋆ - μ_n)^2) α_n/(2 β_n) - 1/2 + 1/2( 1/κ_n + lnα_n - ψ(α_n) )
= 1/2( ln(λ^⋆β_n) + (λ^⋆^-1 + (μ^⋆ - μ_n)^2) α_n/β_n - 1 + 1/κ_n - ψ(α_n) ).
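A hedged numerical helper for the two quantities above (function names ours); ψ is evaluated with SciPy's digamma, and in practice the unknown μ^⋆ and λ^⋆ would be replaced by sample estimates.

```python
import numpy as np
from scipy.special import digamma

def G_tau(kappa_n, alpha_n):
    """G(tau_n) for the Gaussian model with unknown mean and variance."""
    return 0.5 * (np.log(alpha_n) - digamma(alpha_n) + 1.0 / kappa_n)

def eps_star(mu_star, var_star, mu_n, kappa_n, alpha_n, beta_n):
    """epsilon* = KL(P* || N(mu_n, beta_n/alpha_n)) + G(tau_n), well-specified Gaussian P*."""
    s2 = beta_n / alpha_n
    kl = 0.5 * (np.log(s2 / var_star) + (var_star + (mu_star - mu_n) ** 2) / s2 - 1.0)
    return kl + G_tau(kappa_n, alpha_n)
```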
§.§ Exponential likelihood with conjugate gamma prior
The likelihood p(ξ|θ) is an exponential distribution (ξ|θ) where θ > 0 is the rate parameter.
The prior π(θ) is a gamma distribution (θ|α_0, β_0) with shape α_0 > 0 and rate β_0 > 0.
The parameters α_n, β_n of the posterior π(θ|) = (θ|α_n, β_n) are given by α_n = α_0 + n and β_n = β_0 + ∑_ξ_i ∈ ξ_i.
When the likelihood is an exponential distribution with gamma prior and posterior (see <Ref>), then
_(θ|α_n, β_n)[ ln(ξ|θ) ] = ln( ξ|α_n/β_n) + ψ(α_n) - lnα_n.
Starting from the left-hand side, we take the log of the PDF of the exponential distribution, then use the logarithm expectation of the gamma distribution, and finally re-arrange using log identities:
_(θ|α_n, β_n)[ ln(ξ|θ) ] = _(θ|α_n, β_n)[ lnθ - θξ]
= ψ(α_n) - lnβ_n - α_n/β_nξ
= ψ(α_n) - lnα_n + lnα_n/β_n - α_n/β_nξ
= ψ(α_n) - lnα_n + ln( α_n/β_nexp( - α_n/β_nξ) )
= ψ(α_n) - lnα_n + ln(ξ|α_n/β_n).
The last line follows by the definition of the PDF of the exponential distribution.
When the likelihood is an exponential distribution with gamma prior and posterior, then <Ref> holds with θ̅_n = α_n/β_n and G(τ_n) = ψ(α_n) - lnα_n.
<Ref> shows that the condition (<ref>) in <Ref> holds, thus <Ref> follows.
Tolerance level ϵ
In the well-specified case, where we assume that ^⋆ := _θ^⋆ for some θ^⋆∈Θ, it is easy to obtain the required size of the ambiguity set exactly. Let θ^⋆ be the true rate parameter, i.e. ^⋆ := (θ^⋆). Using <Ref> we obtain:
_(θ|α_n, β_n)[(^⋆, (θ)) ]
= ( ^⋆ ‖ (α_n/β_n) ) + ψ(α_n) - ln(α_n)
= ln(θ^⋆) - ln(α_n/β_n) + α_n/(β_n θ^⋆) - 1 + ψ(α_n) - ln(α_n).
§ PROOFS OF THEORETICAL RESULTS
§.§ Proofs of DRO-BAS upper bound in Equation (<ref>)
Before proving the required upper bound, we recall the definition of the KL divergence and its convex conjugate.
Let μ,ν∈(Ξ) and assume μ is absolutely continuous with respect to ν (μ≪ν).
The KL-divergence of μ with respect to ν is defined as:
(μ‖ν):= ∫_Ξln(μ(dξ)/ν(dξ)) μ(dξ).
Let ν∈(Ξ) be non-negative and finite.
The convex conjugate ^⋆(·‖ν) of (·‖ν) is
^⋆(·‖ν)(h) = ln( ∫_Ξexp(h) dν).
See Proposition 28 and Example 7 in <cit.>.
The result follows from a standard Lagrangian duality argument and an application of Jensen's inequality.
More specifically, we introduce a Lagrangian variable γ≥ 0 for the expected-ball constraint on the left-hand side of (<ref>) as follows:
sup_: _θ∼Π [(Q ‖_θ)] ≤ϵ _Q[f_x]
(i)≤inf_γ≥ 0 sup_∈(Ξ) _Q[f_x] + γϵ - γ_Π[ (‖_θ) ]
(ii)=inf_γ≥ 0 γϵ + sup_∈(Ξ) _Q[f_x] - _Π[ γ(‖_θ) ]
(iii)=inf_γ≥ 0 γϵ + ( _Π[ γ(·‖_θ) ] )^⋆ (f_x)
(iv)≤inf_γ≥ 0 γϵ + _Π[ ( γ(·‖_θ) )^⋆(f_x) ]
(v)=inf_γ≥ 0 γϵ + _Π[ γln__θ[ exp(f_x/γ) ] ].
Inequality (i) holds by weak duality.
Equality (ii) holds by linearity of expectation and a simple rearrangement.
Equality (iii) holds by the definition of the conjugate function.
Inequality (iv) holds by Jensen's inequality ([·])^⋆≤[(·)^⋆] because the conjugate is a convex function.
Equality (v) holds by <Ref> and the fact that for γ≥ 0 and function ϕ, (γϕ)^⋆(y) = γϕ^⋆(y/γ).
§.§ Proof of <Ref>
Starting from the left-hand side, we have
_Π[ (‖_θ) ]
(i)=_θ∼π(θ|)[ ∫_Ξ q(ξ) ln( q(ξ)/p(ξ|θ)) ξ̣]
(ii)=_θ∼π(θ|)[ ∫_Ξ q(ξ) ln( q(ξ) ) - q(ξ) ln( p(ξ|θ)) ξ̣]
(iii)=∫_Ξ q(ξ) ln( q(ξ) ) - q(ξ) ·_θ∼π(θ|)[ ln( p(ξ|θ)) ] ξ̣
(iv)=∫_Ξ q(ξ) ln( q(ξ) ) - q(ξ) ·(ln p(ξ|θ̅_n) - G(τ_n) ) ξ̣
(v)=∫_Ξ q(ξ) ln( q(ξ)/p(ξ|θ̅_n)) ξ̣ + ∫_Ξ q(ξ) · G(τ_n) ξ̣
(vi)=(‖_θ̅_n) + _[G(τ_n)]
(vii)=(q(ξ) ‖ p(ξ|θ̅_n)) + G(τ_n).
where (i) is by the definition of the KL-divergence; (ii) follows by log properties; (iii) holds by linearity of expectation; (iv) holds by condition (<ref>) in <Ref>; (v) holds by rearrangement and properties of log; (vi) holds by the definition of the KL-divergence and the definition of _; and (vii) holds by the expected value of a constant.
§.§ Proof of <Ref>
We begin by restating the Lagrangian dual from the proof of <Ref>, but with the added claim that strong duality holds between the primal and dual problems:
sup_: _θ∼Π [(‖_θ)] ≤ϵ _[f_x] =inf_γ≥ 0 sup_∈(Ξ) _[f_x] + γϵ - γ_Π[ (‖_θ) ].
The conditions under which our claim of strong duality holds will be proved later.
Next, we substitute the right-hand side of equation (vi) above into the dual problem in (<ref>):
sup_: _θ∼Π [( ‖ _θ)] ≤ϵ _ξ∼[f_x(ξ)]
= inf_γ≥ 0 γϵ + sup_∈(Ξ) ∫_Ξ f_x(ξ) q(ξ) ξ̣ - γ( ( ‖ p(ξ|θ̅_n) ) + G(τ_n) )
= inf_γ≥ 0 γϵ + γ G(τ_n) + ( γ (· ‖ p(ξ|θ̅_n) ) )^⋆( f_x(ξ) )
= inf_γ≥ 0 γ (ϵ - G(τ_n)) + γln_ξ∼ p(ξ|θ̅_n)[ exp( f_x(ξ)/γ) ],
where the second and third equality holds by the definition of the conjugate of the KL-divergence and by <Ref>.
Finally, it remains to argue that strong duality holds. First, note that the primal problem is a concave optimisation problem with respect to distribution . Second, when ϵ > G(τ_n), then distribution = p(ξ|θ̅_n) is a strictly feasible point to the primal constraint because
_θ∼Π [( ‖ _θ)] = G(τ_n) < ϵ.
§ NEWSVENDOR PROBLEM - ADDITIONAL DETAILS
We provide additional details about our Newsvendor experiment in <Ref> when ^⋆ is a Gaussian distribution with μ_⋆ = 25 and σ^2_⋆ = 100.
Hyperparameters. The prior and posterior are normal-gamma distributions.
We set the prior hyperparameters to be μ_0 = 0 and κ_0, α_0, β_0 = 1.
The derivation of the hyperparameters can be found in <Ref>.
Values of ϵ_min and ϵ^⋆. From <Ref> and equation (<ref>), the value of ϵ_min is 0.047.
From equation (<ref>), the average value of ϵ^⋆ over all J seeds is 0.089 with standard deviation 0.048.
Implementation. We implemented the dual problems for DRO-BAS (<Ref>) and BDRO <cit.> in Python using CVXPY version 1.5.2 and the MOSEK solver version 10.1.28. Our implementation uses disciplined parametrized programming <cit.> which, after an initial warm start for seed j=1, allows us to solve subsequent seeds j = 2,…,J rapidly (see <Ref>).
We used a 12-core Dual Intel Xeon E5-2643 v3 @ 3.4 Ghz with 128GB RAM.
Out-of-sample mean and variance. For a given ϵ and seed j, let the optimal solution be x^(j)(ϵ).
We calculate the out-of-sample mean m^(j)(ϵ) = _ξ∼^(j)_m [f(x^(j)(ϵ),ξ)] and variance v^(j)(ϵ) = Var_ξ∼^(j)_m[ f(x^(j)(ϵ),ξ) ] of the objective under the empirical test distribution ^(j)_m.
For a given ϵ, the out-of-sample mean m(ϵ) and variance v(ϵ) across all seeds is
m(ϵ) = 1/J∑_j=1^J m^(j)(ϵ), v(ϵ) = 1/J∑_j=1^J v^(j)(ϵ) + 1/(J-1)∑_j=1^J (m^(j)(ϵ) - m(ϵ) )^2.
The out-of-sample variance v(ϵ) is equal to the mean of the variances v^(j)(ϵ) plus the variance of the means m^(j)(ϵ) <cit.>.
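The pooling rule above amounts to the following two lines (function name ours):

```python
import numpy as np

def out_of_sample_stats(seed_means, seed_vars):
    """Pool per-seed test means/variances into m(eps) and v(eps)."""
    m = np.mean(seed_means)
    v = np.mean(seed_vars) + np.var(seed_means, ddof=1)   # mean of variances + variance of means
    return m, v
```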
Solve time. On the initial warm-start seed j=1, for each N, DRO-BAS solves the dual problem from <Ref> faster than the BDRO dual problem.
For example, when N=900, DRO-BAS solves problems in 0.27 seconds on average, whilst BDRO solves problem in 5.56 seconds.
These results suggest that, for fixed N, if the solve is started from scratch with no warm start, then DRO-BAS will solve instances faster than BDRO.
For seeds j=2,…,J, disciplined parametrized programming (DPP) significantly speeds up the solve for BDRO: the average solve time for seeds j=2,…,J is 0.40 seconds when N=900.
In contrast, DPP does not speed up the solve for DRO-BAS: the average solve time for seeds j=2,…,J is 0.40 seconds when N=900.
We conjecture that the speed up for BDRO using DPP is because BDRO has N_θ Lagrangian dual variables compared to DRO-BAS having exactly one Lagrangian dual variable.
BDRO then benefits from the warm start because it can reuse the presolve effort spent on the N_θ dual variables spent during the warm start.
§ MISSPECIFIED SUPPLEMENTARY EXPERIMENTS - TRUNCATED NORMAL
In this section, we present additional experiments when the data-generating process ^⋆ is a truncated normal distribution.
The truncated normal has mean μ_⋆ = 10 and variance σ^2_⋆ = 100.
The likelihood is a Gaussian distribution, so our model is misspecified.
The conjugate prior and posterior are still normal-gamma distributions with the same hyperparameters as <Ref>.
The experimental setup is also the same as <Ref>: the values of ϵ, N, n, m, and J are all specified the same.
Analysis. When the likelihood is misspecified, <Ref> shows the out-of-sample mean-variance tradeoff is again a Pareto front.
This is the same conclusion as the well-specified case in <Ref>.
Furthermore, when N=900, DRO-BAS has a small advantage on the mean-variance tradeoff.
Solve time. <Ref> shows the same conclusions about the solve time from <Ref> can be made about the solve time for the truncated normal data-generating process.
|
http://arxiv.org/abs/2409.03011v1 | 20240904180945 | A New IW And-Type Star: Karachurin 12 with Tilted Disks and Diverse cycles | [
"Qi-Bin Sun",
"Sheng-Bang Qian",
"Li-Ying Zhu",
"Qin-Mei Li",
"Fu-Xing Li",
"Min-Yu Li",
"Ping Li"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE"
] |
A New IW And-Type Star: Karachurin 12 with Tilted Disks and Diverse Cycles
Qi-Bin Sun, Sheng-Bang Qian, Li-Ying Zhu, Qin-Mei Li, Fu-Xing Li, Min-Yu Li, Ping Li
====================================================================================
keywords: Binary stars; Cataclysmic variable stars; Dwarf novae; individual (Karachurin 12)
§ INTRODUCTION
Cataclysmic variables (CVs) are of considerable interest due to their unique characteristics and their contributions to astrophysical research. These systems typically consist of a white dwarf (the primary) and a late main-sequence star (the secondary). Key types of CVs include classical novae, recurrent novae, novae-like stars (NLs), dwarf novae (DNe), and magnetic CVs <cit.>. In these binary systems, the secondary star overflows its Roche lobe and transfers mass to the primary, forming a semi-detached close binary system.
When the magnetic field of the white dwarf is weak (below 1 MG), it allows for the formation of accretion disks around the white dwarf. In contrast, a stronger magnetic field (exceeding 1 MG) disrupts the accretion disk, leading to the formation of accretion curtains or columns ().
Dwarf novae (DNe) represent a subclass of CVs, which are weakly magnetic or non-magnetic CVs. In these systems, the variability is primarily driven by the accretion disks, which cause outbursts that, while less intense than those seen in novae (which involve thermonuclear reactions), occur more frequently. DNe are typically categorized into three main types: Z Camelopardalis, SU Ursae Majoris, and U Geminorum. Normal DN outbursts result from luminosity variations driven by thermal instabilities in the accretion disk. This behavior is commonly explained by the accretion disk instability model (DIM; ).
DIM posits that the accretion disk undergoes transitions between three states—cold and stable, thermally unstable, and hot and stable—driven by changes in opacity with temperature. In the low-temperature state where hydrogen is neutral (below ∼ 6000 K), the accretion disk is stable and exhibits low viscosity. As the temperature increases and hydrogen becomes partially ionized, the disk becomes hotter, more viscous, and thermally unstable. When the temperature rises sufficiently to fully ionize hydrogen (above ∼ 8000 K), the disk stabilizes again, at which point it has high viscosity, and its thermal behavior can be described by a classical S-shaped stability curve (See, e.g., ; ; for details).
Z Camelopardalis (Z Cam) are particularly distinguished by their “standstill” behavior during the decline phase of outbursts, where their brightness stabilizes approximately 0.7 magnitudes below the peak level. DIM explains typical “standstill” by describing the disk as being in a hot stable state. When the mass transfer rate from the secondary star exceeds a critical value (Ṁ_crit), the disk remains in a hot stable state, similar to that observed in Z Cam and NLs. Z Cam have mass transfer rates close to Ṁ_crit, with slight variations potentially triggering the “standstill” phenomenon <cit.>.
However, during the “Z CamPaign” observation campaign, <cit.> observed for the first time that the “standstill” in IW And and V513 Cas did not conclude by returning to the quiescent state but instead culminated in an outburst, followed immediately by a dip and then a rapid return to “standstill”. This unusual behavior was identified as the “anomalous standstill phenomenon” by <cit.>.
IW And-type phenomenon is a new challenge to DIM, and the specific physical processes about it are still under debate.
<cit.> was one of the first to systematically explain the IW And-type phenomenon, suggesting that mass-transfer bursts are the primary cause, and partially reproduced the observed effects. However, the exact trigger of these mass-transfer bursts remains unresolved.
<cit.> conducted numerical simulations based on the tilted thermal-viscous instability model, which also successfully reproduces the IW And-type phenomenon. They suggest that a tilted accretion disk allows mass from the secondary star to more easily reach the inner disk, establishing a new cycle. In this cycle, the inner disk remains nearly always in a hot state, while the outer disk undergoes repeated outbursts, producing the observed light curve characteristics of the IW And-type phenomenon.
The precession of tilted accretion disks has been observed across various types of celestial systems <cit.> and is particularly common in CVs, where the periods are typically only a few days <cit.>. In CVs, tilted accretion disks often exhibit signals known as “negative superhumps” (NSHs), with periods approximately 5% shorter than orbital period <cit.>. This phenomenon is thought to result from the retrograde precession of the tilted disk combined with the effects of mass stream from the secondary <cit.>.
Our recent research has uncovered that the depth of eclipses, the brightness minima during eclipses, the amplitude of NSHs, and their frequencies all display periodic variations that align with the precession period of the tilted disk (e.g., TV Col, SDSS J0812 and HS 2325+8205 ; ). These findings provide strong evidence for the existence of tilted disks and the origin of NSH associated with the precession of tilted disks. Furthermore, we have observed a correlation between DNe outbursts and the formation of NSHs, offering new perspectives on the origins of NSHs and the mechanisms behind DN outbursts (e.g., AH Her, ASAS J1420, TZ Per, and V392 Hya; ).
Karachurin 12 was discovered by Raul Karachurin, and we used the naming convention of the American Association of Variable Star Observers (AAVSO). Other name examples include
FBS1726+618, ASASSN-V J172710.78+614528.0, and SDSS J172710.79+614527.8. Kato et al. (2018)(vsnet-chat 7938)[http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/7938] suggests that Karachurin 12 be classified as a Z Cam-type DN.
There has been no in-depth analysis of Karachurin 12 to date, making this paper the first detailed study. This paper will determine the parameters and light curve characteristics, and use dip as an index and NSH as a probe to investigate the IW And-type phenomenon in Karachurin 12, offering new observational evidence to better understand this phenomenon.
The structure of this paper is outlined as follows:
Section <ref> details the data sources utilized in this paper.
Section <ref> focuses on identifying periodic information for Karachurin 12.
Section <ref> provides an in-depth analysis of the IW And-type phenomenon.
Section <ref> explores the evolution of the NSH in relation to the IW And phenomenon.
Section <ref> discusses the physical processes of the IW And phenomenon in the context of current research advances and the results presented in this paper.
Finally, Section <ref> presents the conclusions.
§ SOURCES OF OBSERVATION DATA
The All-Sky Automated Survey for Supernovae (ASAS-SN; ) is a long-term initiative dedicated to rapid supernova monitoring across the sky (V < 17 mag) and also tracking numerous variable stars <cit.>. Karachurin 12 was photometrically observed in the V-band by ASAS-SN from HJD 2456675.131 to HJD 2458372.751, with an average magnitude of 14.33 mag (see Fig. <ref>). Data were retrieved from the ASAS-SN Variable Stars Database [https://asas-sn.osu.edu/variables].
The Zwicky Transient Facility (ZTF; ) is a northern-sky optical survey focused on high-cadence time-domain astronomy, utilizing the 48-inch Samuel Oschin Schmidt Telescope at Palomar Observatory. ZTF operates with three custom filters: ZTF_zg, ZTF_zr, and ZTF_zi. Karachurin 12 was observed in these bands over approximately 6 years (from HJD 2458198.8895 to HJD 2460363.9715; see Fig. <ref>). Data were obtained from the Lasair database [https://lasair.roe.ac.uk/].
The Transiting Exoplanet Survey Satellite (TESS; ) is primarily designed for exoplanet detection but has also collected a significant amount of variable star data. TESS conducts sky surveys in sectors, observing each sector for approximately one month in the 600 to 1000 nm wavelength range, utilizing both long-cadence (30 minutes) and short-cadence (2 minutes) modes. It produces Simple Aperture Photometry (SAP) and Pre-Search Data Conditional Simple Aperture Photometry (PDCSAP) light curves (see and for details). Karachurin 12 was observed across 35 sectors over approximately 5 years in short-cadence mode (see Fig. <ref>), with detailed observation data listed in Table <ref>. In this paper, we used the SAP data, which were downloaded from the Mikulski Archive for Space Telescopes (MAST)[https://mast.stsci.edu/].
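The SAP light curves described above can be retrieved programmatically, for example with the lightkurve package (not used or cited in this paper, so the snippet below is only one possible route; it also assumes the MAST name resolver accepts the ASAS-SN identifier, otherwise the TIC number would be used):

```python
import lightkurve as lk

# 120-s (short-cadence) TESS data for Karachurin 12, via one of its catalogued aliases
search = lk.search_lightcurve("ASASSN-V J172710.78+614528.0",
                              mission="TESS", exptime=120)
lcs = search.download_all(flux_column="sap_flux")   # SAP light curves, as used in this work
lc = lcs.stitch().remove_nans()                     # note: stitch() normalises each sector
```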
§ PERIOD DETERMINATION
The ASAS-SN, ZTF, and TESS observations all exhibit IW And-type phenomena with notable variations in outburst amplitudes. We analyzed the periods for each dataset individually using the software (details in ). For ZTF, frequency analyses were conducted separately on the ZTF_zg, ZTF_zr, and ZTF_zi bands. The period of the IW And-type phenomena was determined to be 31.98(5) days from ASAS-SN data (see Fig. <ref> a) and 38.095(2) days from TESS data (see Fig. <ref> c). For ZTF, the cycle periods were 35.82(3) days for ZTF_zg, 35.97(3) days for ZTF_zr, and 36.60(6) days for ZTF_zi (see Fig. <ref> b). Averaging the ASAS-SN, ZTF, and TESS determinations, the period of the IW And-type phenomena is found to be 35.69(3) days.
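As a cross-check of these cycle periods (the analysis in the paper is done with a dedicated frequency-analysis package), a Lomb-Scargle periodogram over an assumed 10-100 d period range recovers the dominant cycle directly; the function name and search range below are our own choices.

```python
import numpy as np
from astropy.timeseries import LombScargle

def iw_and_cycle_period(t, mag, pmin=10.0, pmax=100.0):
    """Rough estimate of the ~30-40 d IW And cycle from a sparse light curve."""
    freq, power = LombScargle(t, mag).autopower(minimum_frequency=1.0 / pmax,
                                                maximum_frequency=1.0 / pmin,
                                                samples_per_peak=10)
    return 1.0 / freq[np.argmax(power)]
```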
The TESS data, with a 120-second exposure time, provides crucial insights into detailed variations in our study. Frequency analysis reveals periodic signals with periods of f_2 = 0.201662(6) d^-1, f_3 = 3.155674(13) d^-1, f_4 = 3.355861(9) d^-1, and f_5 = 6.311376(7) d^-1, in addition to the IW And phenomenon.
Notably, f_5 is approximately 2 × f_2, confirming it as the second harmonic of f_2. The relationship among f_2, f_3, and f_4 can be expressed as f_2 ≈ f_4 - f_3, reflecting the connection between the precession period of the accretion disk, the orbital period, and the NSH period (1/P_prec = 1/P_nsh - 1/P_orb).
From this, we infer that f_2 corresponds to a precession signal with a period of 4.9588(2) days for the tilted disk, f_3 represents an orbital period of 0.3168895(13) days, f_4 corresponds to the NSH period of 0.2979861(8) days, and f_5 denotes the second harmonic of the orbital period. No eclipses were observed in Karachurin 12, aside from the ellipsoidal modulation, suggesting that it is a low-inclination CV.
Excess is a key parameter in the study of NSHs, defined as ϵ^- = (P_nsh - P_orb) / P_orb. For Karachurin 12, ϵ^- is determined to be -0.059653(7). Comparing this value with the empirical relationship between orbital period and ϵ^- derived by <cit.> from a large sample of NSHs, we find a general agreement with their results (see Fig. <ref>).
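The quoted frequencies can be checked for internal consistency with a few lines (values copied from above; the tolerance is an arbitrary choice):

```python
f2, f3, f4 = 0.201662, 3.155674, 3.355861   # precession, orbital, NSH frequencies (1/d)
P_prec, P_orb, P_nsh = 1 / f2, 1 / f3, 1 / f4

assert abs((f4 - f3) - f2) < 2e-3            # 1/P_prec ~= 1/P_nsh - 1/P_orb
excess = (P_nsh - P_orb) / P_orb             # ~ -0.0597, the NSH excess epsilon^-
```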
We have identified for the first time that Karachurin 12 not only exhibits the IW And phenomenon but also represents a new NSH system with a tilted disk precession signal. Recent studies suggest that the IW And phenomenon may be linked to tilted disks <cit.>. The observation of both the IW And phenomenon and the tilted disk in Karachurin 12 supports these recent findings and aligns with these expectations.
§ IW AND-TYPE PHENOMENON
§.§ Identification of dips
Typical IW And objects are characterized by standstill or quasi-standstill phases interrupted by brightening events or outbursts, which are collectively referred to as outbursts. Unlike Z Cam-type stars that return to a quiescent state, these systems undergo damping oscillations following outbursts, resulting in a cycle of standstill–outburst–damping oscillations <cit.>.
In the case of Karachurin 12, we observe a notable IW And phenomenon where the typical cyclic sequence includes a quasi-standstill phase interrupted by an outburst, followed immediately by a dip. Unlike other IW And objects, the quasi-standstill phase in Karachurin 12 does not show significant damping oscillations. Instead, it reflects the signal of accretion disk precession, although this precession signal is not prominent in every quasi-standstill phase (see Fig. <ref>). In addition to the standard cycles, Karachurin 12 exhibits unique variations, with relatively stable outburst and quasi-standstill phases, and variations mainly arising from the dip. Therefore, we use the dip as an index to study these special cycles.
The IW And phenomenon has been documented through surveys conducted by ASAS-SN, ZTF, and TESS. ASAS-SN data, being relatively dispersed, contrasts with the more precise photometric results provided by TESS. Our initial approach involved identifying the positions of the dips based on their occurrence times and labeling them accordingly (e.g., dip2459830, as shown in Table <ref>). To determine the precise location of these dips, we employed Gaussian fitting and identified the minimum points of the dips.
For TESS data, where the dip profiles were relatively complete, we used Gaussian fitting to calculate the dip parameters. For ZTF and ASAS-SN data, we determined the timing of the dips from the minimum points. For ZTF data, the ZTF_zg band, with its greater detail and larger amplitude variations, was used to define the dip parameters. In the ASAS-SN dataset, only one particularly complete profile was identified. In total, we recognized 23 dips, 8 of which were determined through Gaussian fitting (see Table <ref>).
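A minimal sketch of the Gaussian dip fitting described above (function names and starting values are our own choices; the adopted dip width is twice the FWHM of the fitted Gaussian, as in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_dip(t, t0, depth, sigma, base):
    """In magnitudes a dip is a positive Gaussian-shaped excursion above the base level."""
    return base + depth * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def fit_dip(t, mag, t0_guess):
    p0 = [t0_guess, 0.3, 2.0, np.median(mag)]          # epoch, depth (mag), sigma (d), base
    popt, _ = curve_fit(gaussian_dip, t, mag, p0=p0)
    t0, depth, sigma, _ = popt
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    return t0, depth, 2.0 * fwhm                        # epoch, depth, adopted width
```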
§.§ Different types of dip and corresponding cycles
Next, we use dips as indices to study the cyclical nature of the IW And phenomenon. Among the identified dips, 16 are classified as normal cycles, characterized by a quasi-standstill phase interrupted by an outburst, followed by a dip. We designate these as Type_I dips (see Fig. <ref>). For Type_I, eight dips were analyzed using Gaussian fitting (see Figs. <ref> and <ref>). The width of each dip was determined as twice the full width at half maximum (FWHM) of the fit, with an average width of 3.86(10) days and a depth of 0.32(8) mag. The quasi-standstill brightness typically ranges from about 14.0 to 14.5 mag, and the quasi-standstill phase lasts between 10 and 20 days.
The deepest dip observed in Karachurin 12 is dip2459057 (see Fig. <ref>), with a depth of approximately 1.5 mag and a duration of about 15 days. This dip is notably 4 to 5 times wider and deeper than a typical dip. Unusually, it was discovered not following an outburst phase but rather within a cycle that included an outburst, a quasi-standstill, and then the dip (outburst - quasi-standstill - dip - outburst). This deviation from the Type_I pattern leads us to classify it as Type_II. Another example of Type_II is dip2457977 (see Fig. <ref>a).
Another distinct case is dip2458284 (see Fig. <ref>), where the cycle consists of an outburst followed by a 30-day standstill, which is then truncated by a dip. This is followed by a new standstill phase, truncated by another outburst, resulting in the cycle: outburst - quasi-standstill - dip - quasi-standstill - outburst. We designate this as Type_III. The key difference between Type_III and Type_II is that Type_III is followed by another standstill. Similar variations may be present in dip2458375 (see Fig. <ref>), though the data is less clear due to gaps.
In dips 2459336 and 2459344, two dips occur within a single IW And cycle, approximately 8 days apart (see Fig. <ref>). This pattern can be attributed to precession (4.9588(2) days) followed by a quasi-standstill. A similar pattern is observed in dips 2458318 and 2458326 (see Fig. <ref>a). Unlike Type_I, where only a single dip is observed, these double dips are classified as Type_IV. This type of variation may be analogous to the damping oscillations observed in IM Eri <cit.>. However, further observational data is needed to confirm these double-dip occurrences.
To compare the four types of dips, we calculated the time intervals between clearly defined dips and the preceding and following outburst peaks, with an uncertainty of 2 days. Results show that Type_I dips typically occur about 11 days after the previous outburst peak and 15 to 35 days before the next outburst peak (see Fig. <ref>b). This suggests that the interval between Type_I dips and the preceding outburst is relatively stable, while the duration of the quasi-standstill phase varies.
In contrast, Type_II dips occur much further from the preceding outburst and closer to the subsequent one compared to Type_I (see Fig. <ref>b). Type_III dips are separated from both preceding and following outbursts by more than 20 days (see Fig. <ref>b). For Type_IV, the first dip in the double-dip structure shows intervals to the preceding and following outbursts consistent with those of Type_I.
By indexing dips according to their position within the cycle, we observed various dip types and their corresponding cycles in Karachurin 12. Our study reveals the following:
(i) Among the 23 dips analyzed, 18 occur after an outburst, indicating that dips are more likely to follow an outburst phase.
(ii) However, a dip does not always follow an outburst phase, as seen in Type_II and Type_III (see Fig. <ref>).
(iii) A dip does not always have to be followed by a quasi-standstill and can directly transition into an outburst phase, as observed in Type_II (see Fig. <ref>).
(iv) Not every cycle must include a dip (see the yellow regions of dip2459057 in Fig. <ref>).
§ EVOLUTION OF NSH WITH THE IW AND PHENOMENON
§.§ Extraction of NSH information
Current research indicates that the IW And phenomenon is associated with tilted accretion disks. To explore the relationship between NSHs and the IW And phenomenon and to provide new observational evidence, we propose using NSHs as a diagnostic tool. The origin of NSHs is closely linked with mass transfer streams from the secondary star and variations within the accretion disk. Given their higher completeness and continuity compared to precession signals observed in Karachurin 12, NSHs are an ideal probe for this investigation.
To study the evolution of NSHs, we follow the methodology outlined by <cit.> for excluding long-term trends and orbital signals. We employed locally weighted regression (LOWESS; ) to remove the long-term trend (see Fig. <ref>a). Orbital signals were subsequently removed using software (details in ). The evolution of the NSH amplitude and period is calculated as follows (a code sketch of these steps is given after the list):
(i) Frequency Analysis: Using , we performed a frequency analysis of the light curves after removing the long-term trend for each sector, identifying signals with signal-to-noise ratios greater than 3. The periodogram for each sector is shown in Figure <ref>, and the statistical results are listed in Table <ref>.
(ii) Amplitude and Phase Determination: The amplitude and phase information for each signal were obtained using the fitting equation provided by (see Fig. <ref>c):
mag(t) = Z + ∑Amplitude_i ·sin(2π· (frequency_i · t + phase_i))
where Z, Amplitude_i, frequency_i, and phase_i are the fitted intercept, amplitude, frequency, and phase, respectively.
(iii) Isolation of NSH Signal: Based on the fit results, we adjusted the detrended light curve to isolate the NSH signal by subtracting both the orbital signal and its second harmonic (as illustrated in Fig. <ref>c). A sinusoidal fit was then applied to the residual data (see Fig. <ref>d).
(iv) Gaussian Fitting: To determine the NSH maxima and minima, we applied a Gaussian fit (see Fig. <ref>e). The NSH amplitude was calculated using the following equation:
ΔAmplitude_nsh = [ ( minima_before, mag + minima_after, mag ) / 2 - maxima_mag ] / 2
Here, minima_before, mag and minima_after, mag refer to the minima before and after the NSH maxima, respectively. The calculated amplitudes are shown as black points in Figure <ref>b. Note that some NSH amplitudes could not be determined due to significant weakening of the NSHs.
(v) Segmented Frequency Analysis: To validate the NSH results from step (iv) and address issues with unclear contours, we performed a segmented frequency analysis of the light curves after removing long-term trends and orbital signals. The data were initially divided into one-day segments for periodic analysis (indicated by the blue points in Fig. <ref>b and Fig. <ref>c). The data were then re-divided into two-day segments for further analysis (shown by the red points in Figs. <ref>b and <ref>c). This approach aimed to provide a clearer profile of the frequency and amplitude evolution of the NSHs.
(vi) Continuous Wavelet Transform (CWT): Finally, in agreement with <cit.>, we applied the CWT <cit.> to the light curves after removing long-term trends and orbital signals. This method complemented steps (iv) and (v) and facilitated cross-validation of the NSH information (see Fig. <ref>c).
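For illustration, the multi-frequency fit of step (ii) and the amplitude definition of step (iv) above can be implemented in Python along the following lines. This is only a schematic sketch under our own assumptions: the frequencies, window half-width, sampling and the synthetic light curve are placeholder choices, and the snippet does not reproduce the actual software pipeline used in this work.

    import numpy as np
    from scipy.optimize import curve_fit

    freqs = [3.356, 3.156, 6.312]   # placeholder frequencies [1/d] taken from a periodogram

    def multi_sine(t, *p):
        # p = [Z, A_1, phi_1, A_2, phi_2, ...] with the frequencies held fixed
        model = np.full_like(t, p[0])
        for i, f in enumerate(freqs):
            model += p[1 + 2 * i] * np.sin(2.0 * np.pi * (f * t + p[2 + 2 * i]))
        return model

    def gaussian(t, a, t0, sigma, c):
        return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + c

    def extremum_mag(t, mag, t_guess, half_width=0.05):
        """Gaussian fit around a hump extremum; returns the fitted magnitude at the extremum."""
        sel = np.abs(t - t_guess) < half_width
        ts, ms = t[sel], mag[sel]
        a0 = ms[np.argmin(np.abs(ts - t_guess))] - ms.mean()
        popt, _ = curve_fit(gaussian, ts, ms, p0=[a0, t_guess, half_width / 2.0, ms.mean()])
        return popt[0] + popt[3]

    # synthetic detrended light curve: an NSH of 0.03 mag amplitude plus noise
    t = np.arange(0.0, 10.0, 0.001)
    rng = np.random.default_rng(1)
    mag = 0.03 * np.sin(2 * np.pi * t / 0.298) + 0.002 * rng.normal(size=t.size)

    # step (ii): least-squares fit of the multi-frequency model
    p0 = [0.0] + [0.01, 0.0] * len(freqs)
    popt, pcov = curve_fit(multi_sine, t, mag, p0=p0)

    # step (iv): NSH amplitude from the Gaussian-fitted extrema
    m_min_before = extremum_mag(t, mag, 0.0745)   # magnitude maxima = brightness minima
    m_max        = extremum_mag(t, mag, 0.2235)   # magnitude minimum = NSH maximum
    m_min_after  = extremum_mag(t, mag, 0.3725)
    amp_nsh = ((m_min_before + m_min_after) / 2.0 - m_max) / 2.0
    print(amp_nsh)   # close to the injected 0.03 mag amplitude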
§.§ The evolution of NSH
A comparison of the results from steps (iv), (v), and (vi) reveals consistent trends with the following characteristics:
(a) The NSH shows a marked weakening starting from sector 73, becoming undetectable in sectors 76 and 77, and then weakly recovering in sectors 78 and 79. For detailed information, refer to Figures <ref> and <ref>c. Notably, no significant IW And phenomenon is observed after the outburst ends in sector 75 (see Fig. <ref>d), suggesting a potential link between the IW And phenomenon and NSHs.
(b) No discernible regularity in the period of NSHs was detected (see Fig. <ref>c), indicating that the NSH frequency does not vary with the IW And cycle in Karachurin 12.
(c) The amplitude of the NSH is significantly correlated with the outburst phases, decreasing during the rise of an outburst and increasing during its recession. The maximum NSH amplitude is observed during the quasi-standstill phase (see Figs. <ref> and <ref>).
(d) We analyzed the dip in three phases: ingress dip, minima, and egress dip. The ingress and egress dips are highlighted in yellow and magenta, respectively, in Figures <ref>a and <ref>b. A linear fit was applied to the NSH amplitude during the relatively complete ingress dip (fitting the black points). The results indicate that the NSH amplitude for the ingress dip continues to rise, reflecting the trend of the outburst recession phase. There is no evidence that the NSH amplitude evolution reverses at the dip minima, suggesting that the dip minima may not represent a turning point. Additionally, the NSH amplitudes during the egress dips of dip2458870 and dip2459711 continue to show an increasing trend.
§ DISCUSSION
The DIM explains the standstill phase observed in regular Z Cam systems by proposing that Z Cam has a mass transfer rate that approaches a critical value <cit.>. Fluctuations in this mass transfer rate can drive the system into a hot state, resulting in a brightness standstill <cit.>. According to the DIM, such a standstill should eventually end with a return to quiescence. However, the IW And phenomenon, where the standstill ends with an outburst, poses a challenge to this model. The DIM alone cannot account for recurring outbursts while the accretion disk remains in a hot state <cit.>.
The origin of NSHs is theorized to be linked to periodic variations in the energy released by the impact of mass streams from the secondary star interacting with a tilted, retrogradely precessing disk <cit.>. Several researchers have successfully modeled NSHs based on this framework <cit.>. However, there is still no consensus on the specific physical mechanisms responsible for the tilt and retrograde precession of the disk.
In this section, we propose using NSHs as a diagnostic tool to explore the origins of the IW And phenomenon, integrating recent discoveries to gain further insights.
§.§ Mass Transfer Outburst
Initially, variations in the mass transfer rate were proposed to explain DN outbursts through the mass-transfer burst model (MTB; and ). However, this explanation is not universally accepted.
<cit.> used simulations of V513 Cas, assuming that mass-transfer bursts followed by dips could reproduce the outbursts and dips seen in the IW And cycle (see figure 2 in ). Despite this approach, they did not identify the underlying physical processes and suggested that the magnetic activity of the secondary star is a more likely cause. They explained the termination of standstills with outbursts (a scenario not predicted by the DIM) as a result of mass-transfer bursts from the secondary superimposed on the thermally stable state of Z Cam, with dips resulting from fluctuations in mass transfer.
In our study, we observed that the amplitude of the NSH varied with the IW And phenomenon in Karachurin 12. Specifically, the NSH amplitude decreased as the outburst intensified and increased as the outburst diminished. Notably, the NSH amplitude continued to rise during the ingress dip, with the maximum amplitude observed during the standstill.
It is widely accepted that the accretion disk radius expands during DN outbursts, a factor frequently considered in DN outburst models (e.g., ) and crucial in explaining the superoutbursts of SU UMa systems (e.g., ). During outbursts, the disk expands due to significant angular momentum transfer to the outer edge from the accretion of large amounts of matter at the inner edge. Conversely, during quiescence, the disk's outer radius contracts as it accumulates low angular momentum mass from the secondary star’s gas stream, which has less specific angular momentum than the outer disk (e.g., ).
If, as suggested by MTB, a standstill is interrupted by an outburst due to a burst in the mass transfer rate, the behavior of the accretion disk during this period becomes crucial. Specifically, mass-transfer rate bursts would lead to a shrinking of the accretion disk, which should increase the energy released by the mass streams from the secondary as the outburst rises and decrease it as the outburst weakens. This scenario would predict that the NSH amplitude should increase as the outburst rises and decrease as the outburst weakens, contrary to our observations in Karachurin 12.
Moreover, MTB implies that a dip is caused by a decrease in the mass transfer rate. According to this model, the combined effects of disk expansion and reduced mass streams should lead to a significant dip in NSH amplitude. However, our observations of Karachurin 12 show an increase in NSH amplitude during the ingress dip, contradicting the predictions of MTB. Therefore, we conclude that MTB does not adequately explain the variations in NSH amplitude observed in Karachurin 12.
Instead, we propose that the variation in NSH amplitude during outbursts in Karachurin 12 is consistent with observations from AH Her, ASAS J1420, TZ Per, and V392 Hya <cit.>. In this scenario, the mass transfer remains relatively stable. As the outburst progresses, the accretion disk expands, causing the Lagrangian point L1 to move closer to the disk. This reduces the energy released by the mass streams, leading to a decrease in NSH amplitude as the outburst intensifies and an increase in NSH amplitude as the outburst subsides.
§.§ Tilted Thermally Unstable Disk
In studies of V507 Cyg, IM Eri, and FY Vul, <cit.> proposed that the IW And-type phenomenon represents a previously unknown limit cycle oscillation. Kato suggested that the standstill observed in IW And-type systems corresponds to an extended period during which the inner region of the accretion disk remains in a hot state. He also proposed that this standstill phase is eventually terminated by a thermal instability originating from the disk's outer regions, which leads to outbursts that interrupt the standstill.
Building on Kato's ideas, <cit.> explored different scenarios using a tilted, thermally unstable disk model (referred to as the tilted-DIM). Their three-dimensional hydrodynamic simulations, particularly Model B1, partially reproduced the IW And phenomenon under conditions of high mass transfer rates. Although they were able to reproduce some aspects of the IW And phenomenon, they acknowledged that their simulations needed further refinement to fully explain the details. They proposed that the tilted disk allows mass streams to enter the inner region, keeping it in a hot state for extended periods, thereby maintaining the standstill phase. Meanwhile, the outer disk remains relatively cool. Once enough matter accumulates, it triggers an outburst that interrupts the standstill. Oscillations during the standstill phase are caused by alternating cold and hot waves propagating through the middle region of the disk. A sufficiently strong cold wave reaching the inner disk can lead to a dip in brightness.
In our study of Karachurin 12, we analyzed IW And cycles indexed by dips and found diversity in their patterns:
(i) A dip is more likely to follow an outburst.
(ii) A dip can occur after a standstill.
(iii) A dip can be followed by an outburst without a preceding standstill.
(iv) Not every cycle includes a dip.
Except for the first pattern, the observed details in Karachurin 12 align with the simulation results of <cit.>. For example, their Model B1 shows multiple dips following a standstill (referred to as mid-brightness), and a dip can transition directly into an outburst (see figures 11 and 12 of ). Additionally, their Model C1 does not include a dip. The radius of the accretion disk increases with outbursts and decreases during dips, consistent with the changes in NSH amplitude observed in Karachurin 12 (where NSH amplitude decreases during disk expansion and increases during disk contraction). Most importantly, we observed a significant weakening of the NSH amplitude starting from sector 73, with complete undetectability and disappearance of the IW And phenomenon in sectors 76 and 77. This supports the existence of a correlation between the tilted disk and the IW And phenomenon.
Regarding the observation that a dip is more likely to occur after an outburst in Karachurin 12, this can still be explained by the tilted-DIM. During a standstill, the inner disk remains hot, the outer disk is cool, and the middle of the disk has an intermediate temperature. When a significant amount of mass accumulates in the outer disk and an outburst begins, the entire accretion disk transitions into a high-viscosity state (higher than during the standstill phase). This enhanced mass transport facilitates the propagation of a cooling wave into the inner disk as the outburst subsides, leading to the formation of a deep dip following the outburst.
In <cit.>'s simulations, they also estimated the precession rate, suggesting that the precession rate (v_pre / v_orb) of the accretion disk varies periodically with outbursts. However, we found that the NSH frequency (v_nsh = v_pre + v_orb) for Karachurin 12 does not vary significantly, indicating no substantial change in the precession rate. Therefore, we suggest that the tilted-DIM simulations need further refinement.
In conclusion, while the tilted-DIM is a valid framework for explaining the IW And phenomena observed in Karachurin 12, further optimization of the simulations is required to account for the detailed variations in the IW And cycle.
§ CONCLUSIONS
This paper presents a detailed analysis of the newly identified IW And object, Karachurin 12, using photometric data from ASAS-SN, ZTF, and TESS. Our main findings are summarized as follows:
(1) Frequency analysis of the data reveals that the IW And cycle period for Karachurin 12 is 35.69(3) days. TESS data analysis identifies Karachurin 12 as a new NSH system with an accretion disk precession signal. We have determined the accretion disk precession period, orbital period, and NSH period for Karachurin 12 to be 4.9588(2) days, 0.3168895(13) days, and 0.2979861(8) days, respectively.
(2) This paper analyzes the IW And cycles in Karachurin 12 using dips as an index. A Gaussian fit to the relatively complete dip observed in the TESS photometry yields an average dip width of 3.86(10) days and a depth of 0.32(8) mag. The primary cycle patterns observed are as follows:
(i) outburst - dip - quasi-standstill - outburst;
(ii) outburst - quasi-standstill - outburst;
(iii) outburst - quasi-standstill - dip - quasi-standstill - outburst;
(iv) outburst - dip - dip - quasi-standstill - outburst;
(v) outburst - quasi-standstill - dip - outburst.
We also measured the intervals between each dip and the preceding and following outburst peaks, which again revealed the presence of distinct cycle types.
These patterns highlight the diversity and complexity of IW And cycles. It is important to note that some of these cycles in Karachurin 12 have limited samples and require further validation.
(3) We used the difference between maxima and minima, segmented frequency analysis, and the continuous wavelet transform to extract the NSH information. The results show that the NSH amplitude decreases during the outburst rise and increases during the outburst recession. During the ingress dip phase, the NSH amplitude continues to increase, apparently continuing the trend of the outburst recession. No significant changes in the NSH period with the IW And cycle were observed. We suggest that this behaviour is caused by a decrease in the energy released by the mass stream when the accretion disk radius is larger during the outburst, and by an increase in the released energy as the disk radius shrinks during the outburst recession. Regarding the continuous increase during the ingress dip, we again suggest that it corresponds to a decrease in the radius of the accretion disk.
(4) We discuss the two dominant theories on the origin of the IW And phenomenon in conjunction with the phenomenon in Karachurin 12. We find that the mass-transfer burst model leads to changes in NSH amplitudes that are the opposite of those observed in Karachurin 12. Therefore, we suggest that the mass transfer burst model cannot explain the IW And cycle in Karachurin 12. Except for the feature of an outburst followed by a dip, most of the observed cycle details are consistent with the results from <cit.>'s simulations of a tilted, thermally unstable disk. For instance, their simulations reproduce scenarios where the standstill is interrupted by a dip, dips transition directly into outbursts, and some cycles lack significant dips. Furthermore, the tilted thermally unstable disk model can also account for the outburst followed by a dip scenario. In this model, the viscosity coefficient during an outburst is significantly higher than during the standstill phase, enhancing the material transport capability of the accretion disk. This increased transport capability facilitates the propagation of cooling waves into the inner disk as the outburst wanes.
Additionally, our results reveal that NSHs begin to decrease in sector 73 of the TESS data, with undetectability in sectors 76 and 77 coinciding with the disappearance of the IW And phenomenon. This suggests a potential link between the IW And phenomenon and a tilted disk. Therefore, we propose that the tilted thermally unstable disk model effectively explains the IW And phenomenon in Karachurin 12. However, improvements are needed in the simulations of this model to better capture the detailed dynamics of the IW And cycles.
§ ACKNOWLEDGEMENTS
This work was supported by National Key R&D Program of China (grant No. 2022YFE0116800), the National Natural Science Foundation of China (Nos. 11933008). We are grateful to the All-Sky Automated Survey for Supernovae for their valuable V-band photometric data of Karachurin 12, which were crucial for our analysis. Our thanks also go to the Zwicky Transient Facility for their high-cadence observations using their custom filters, which provided important temporal coverage of Karachurin 12. Additionally, we appreciate the contributions of the Transiting Exoplanet Survey Satellite for its comprehensive monitoring of variable stars, including Karachurin 12, which greatly enhanced our study. These datasets were instrumental in advancing our understanding of the IW And phenomenon and the behavior of Karachurin 12.
§ APPENDIX
The appendix includes the following figures and tables:
Figure <ref>: Displays 16 standard cycles of Karachurin 12.
Figure <ref>: Shows special cycles and their corresponding dips relative to outburst distance.
Figure <ref>: Presents the periodograms for each sector.
Table <ref>: Contains details of the TESS observations.
Table <ref>: Provides information on the dips.
Table <ref>: Lists statistics of significant signals in each sector.
Table: Frequency analysis results for different sectors (two entries per row).
Sector  Fre.[1/d]  err[1/d]  Amp.[mag]  err[mag]  Phase[rad]  err[rad]  S/N  Sector  Fre.[1/d]  err[1/d]  Amp.[mag]  err[mag]  Phase[rad]  err[rad]  S/N
ALL 0.026 1.10E-06 0.115 4.11E-04 0.267 5.68E-04 20.56 49 3.150 1.68E-02 0.009 2.71E-04 0.249 3.04E-02 4.14
ALL 0.202 6.17E-06 0.020 4.11E-04 0.149 3.20E-03 4.13 49 3.368 4.65E-03 0.040 3.34E-04 0.359 8.44E-03 18.76
ALL 3.156 1.33E-05 0.009 4.11E-04 0.722 6.90E-03 14.09 49 6.314 1.28E-02 0.013 2.95E-04 0.057 2.31E-02 13.74
ALL 3.356 8.87E-06 0.014 4.11E-04 0.293 4.60E-03 22.05 50 3.177 2.47E-02 0.006 2.75E-04 0.566 4.47E-02 3.04
ALL 6.311 8.62E-06 0.015 4.11E-04 0.585 4.47E-03 43.07 50 3.356 6.03E-03 0.031 3.38E-04 0.179 1.09E-02 15.29
14 3.159 1.05E-02 0.018 3.43E-04 0.348 1.90E-02 9.09 50 6.312 1.04E-02 0.016 3.02E-04 0.348 1.89E-02 13.34
14 3.360 6.11E-03 0.034 3.75E-04 0.437 1.11E-02 17.74 51 3.341 7.39E-03 0.032 4.34E-04 0.423 1.34E-02 13.26
14 6.313 1.17E-02 0.016 3.28E-04 0.781 2.12E-02 14.63 51 6.307 1.55E-02 0.013 3.67E-04 0.530 2.82E-02 9.01
15 3.154 1.45E-02 0.011 2.96E-04 0.942 2.63E-02 6.39 52 3.370 6.50E-03 0.035 4.14E-04 0.623 1.18E-02 16.69
15 3.355 8.98E-03 0.021 3.37E-04 0.851 1.63E-02 12.05 52 6.312 1.51E-02 0.013 3.66E-04 0.340 2.75E-02 11.96
15 6.307 1.04E-02 0.017 3.15E-04 0.260 1.89E-02 14.12 53 3.353 4.45E-03 0.043 3.48E-04 0.131 8.08E-03 20.67
16 3.156 1.40E-02 0.011 2.77E-04 0.385 2.54E-02 8.09 53 6.311 1.14E-02 0.015 3.09E-04 0.714 2.07E-02 13.78
16 3.348 1.15E-02 0.014 2.86E-04 0.254 2.08E-02 10.42 54 3.359 5.24E-03 0.039 3.70E-04 0.044 9.50E-03 17.81
16 6.311 1.14E-02 0.015 3.02E-04 0.246 2.07E-02 13.43 54 6.312 1.30E-02 0.013 3.09E-04 0.937 2.35E-02 11.47
17 3.155 1.01E-02 0.017 3.09E-04 0.235 1.84E-02 9.22 55 3.365 5.41E-03 0.039 3.78E-04 0.135 9.82E-03 18.09
17 3.360 7.84E-03 0.024 3.39E-04 0.927 1.42E-02 13.30 55 6.313 1.32E-02 0.014 3.44E-04 0.544 2.40E-02 12.84
17 6.313 9.85E-03 0.016 2.87E-04 0.794 1.79E-02 15.12 56 3.145 3.06E-02 0.005 2.89E-04 0.097 5.55E-02 3.12
19 3.154 8.33E-03 0.018 2.75E-04 0.010 1.51E-02 10.98 56 3.357 6.93E-03 0.026 3.29E-04 0.603 1.26E-02 15.22
19 3.352 6.88E-03 0.024 3.00E-04 0.302 1.25E-02 14.77 56 6.315 1.09E-02 0.016 3.06E-04 0.422 1.97E-02 14.04
19 6.311 1.03E-02 0.014 2.60E-04 0.307 1.87E-02 15.43 57 3.148 3.12E-02 0.007 4.23E-04 0.792 5.67E-02 3.51
20 3.152 1.00E-02 0.014 2.50E-04 0.780 1.82E-02 9.22 57 3.357 8.39E-03 0.030 4.59E-04 0.037 1.52E-02 14.41
20 3.352 6.95E-03 0.023 2.85E-04 0.334 1.26E-02 15.39 57 6.310 1.93E-02 0.012 4.34E-04 0.641 3.51E-02 10.99
20 6.313 9.99E-03 0.015 2.64E-04 0.695 1.81E-02 16.20 58 3.156 2.29E-02 0.009 3.82E-04 0.312 4.15E-02 5.11
21 3.157 7.66E-03 0.020 2.72E-04 0.580 1.39E-02 11.83 58 3.354 9.26E-03 0.025 4.21E-04 0.182 1.68E-02 14.32
21 3.360 5.80E-03 0.028 2.95E-04 0.399 1.05E-02 17.26 58 6.309 1.81E-02 0.012 4.00E-04 0.025 3.29E-02 12.32
21 6.314 8.54E-03 0.016 2.55E-04 0.693 1.55E-02 17.95 59 3.164 2.57E-02 0.011 5.14E-04 0.614 4.66E-02 5.37
22 3.157 5.97E-03 0.028 3.00E-04 0.544 1.08E-02 13.78 59 3.347 1.79E-02 0.017 5.36E-04 0.213 3.25E-02 8.20
22 3.359 5.90E-03 0.030 3.24E-04 0.119 1.07E-02 15.29 59 6.310 2.48E-02 0.012 5.25E-04 0.184 4.49E-02 8.73
22 6.314 1.04E-02 0.014 2.70E-04 0.647 1.88E-02 15.86 60 3.160 2.01E-02 0.017 6.03E-04 0.775 3.64E-02 8.15
23 3.161 1.05E-02 0.016 2.97E-04 0.763 1.90E-02 7.59 60 3.356 2.38E-02 0.014 5.84E-04 0.359 4.32E-02 6.71
23 3.357 5.32E-03 0.036 3.45E-04 0.918 9.65E-03 17.61 60 6.310 2.71E-02 0.012 5.67E-04 0.733 4.92E-02 8.65
23 6.310 1.09E-02 0.016 3.16E-04 0.342 1.97E-02 14.97 73 3.155 2.43E-02 0.011 4.88E-04 0.970 4.41E-02 5.73
24 3.152 1.39E-02 0.015 3.67E-04 0.341 2.52E-02 7.31 73 3.349 4.86E-02 0.006 4.75E-04 0.449 8.81E-02 3.01
24 3.349 1.03E-02 0.021 3.84E-04 0.714 1.86E-02 10.65 73 6.314 2.01E-02 0.014 5.06E-04 0.664 3.65E-02 9.22
24 6.311 1.50E-02 0.013 3.53E-04 0.417 2.72E-02 12.50 74 3.159 2.52E-02 0.010 4.44E-04 0.838 4.58E-02 5.98
25 3.157 9.91E-03 0.019 3.43E-04 0.485 1.80E-02 8.26 74 3.368 1.98E-02 0.013 4.53E-04 0.914 3.60E-02 8.15
25 3.364 5.68E-03 0.041 4.19E-04 0.912 1.03E-02 17.81 74 6.311 1.51E-02 0.017 4.75E-04 0.593 2.74E-02 12.92
25 6.311 9.73E-03 0.021 3.72E-04 0.422 1.76E-02 17.89 75 3.158 2.02E-02 0.013 4.76E-04 0.920 3.67E-02 5.91
26 3.156 7.72E-03 0.022 3.02E-04 0.463 1.40E-02 12.59 75 3.354 2.94E-02 0.009 4.67E-04 0.287 5.33E-02 4.20
26 3.365 7.54E-03 0.024 3.24E-04 0.998 1.37E-02 13.99 75 6.310 1.72E-02 0.016 4.89E-04 0.980 3.12E-02 12.83
26 6.313 1.19E-02 0.013 2.87E-04 0.410 2.16E-02 14.78 76 3.154 2.82E-02 0.010 5.02E-04 0.727 5.11E-02 5.66
40 3.153 8.71E-03 0.015 2.31E-04 0.652 1.58E-02 10.08 76 6.313 1.38E-02 0.021 5.19E-04 0.889 2.50E-02 14.25
40 3.358 5.71E-03 0.026 2.67E-04 0.569 1.04E-02 17.86 77 3.158 1.64E-02 0.019 5.54E-04 0.843 2.97E-02 9.86
40 6.312 8.62E-03 0.016 2.43E-04 0.363 1.56E-02 18.98 77 6.311 1.90E-02 0.015 5.30E-04 0.998 3.44E-02 9.68
41 3.158 9.31E-03 0.014 2.36E-04 0.386 1.69E-02 9.69 78 3.151 1.67E-02 0.019 5.84E-04 0.006 3.03E-02 7.87
41 3.355 6.80E-03 0.022 2.66E-04 0.845 1.23E-02 15.14 78 3.357 3.96E-02 0.008 5.49E-04 0.478 7.18E-02 3.27
41 6.312 9.34E-03 0.015 2.48E-04 0.370 1.69E-02 18.34 78 6.306 2.27E-02 0.014 5.68E-04 0.932 4.12E-02 8.54
47 3.159 1.06E-02 0.016 3.02E-04 0.518 1.92E-02 7.70 79 3.360 3.75E-02 0.005 3.64E-04 0.478 6.80E-02 4.57
47 3.349 5.66E-03 0.033 3.34E-04 0.883 1.03E-02 15.99 79 6.311 1.12E-02 0.019 3.84E-04 0.985 2.03E-02 17.86
47 6.309 1.21E-02 0.013 2.79E-04 0.870 2.20E-02 13.30 - - - - - - - -
48 3.161 1.66E-02 0.011 3.35E-04 0.209 3.01E-02 4.91 - - - - - - - -
48 3.358 5.21E-03 0.040 3.77E-04 0.849 9.44E-03 17.80 - - - - - - - -
48 6.315 1.62E-02 0.011 3.25E-04 0.195 2.94E-02 9.95 - - - - - - - -
| http://arxiv.org/abs/2409.02467v1 | 20240904063202 | Kolmogorov-size particles in homogeneous and isotropic turbulence | [ "Alessandro Chiarini", "Simone Tandurella", "Marco Edoardo Rosti" ] | physics.flu-dyn | [ "physics.flu-dyn", "physics.comp-ph" ] |
§ ABSTRACT
We investigate the fluid-solid interaction of suspensions of Kolmogorov-size spherical particles moving in homogeneous isotropic turbulence at a microscale Reynolds number of Re_λ≈ 140. Two volume fractions are considered, 10^-5 and 10^-3, and the solid-to-fluid density ratio is set to 5 and 100.
We present a comparison between interface-resolved (PR-DNS) and one-way-coupled point-particle (PP-DNS) direct numerical simulations.
We find that the modulated energy spectrum shows the classical -5/3 Kolmogorov scaling in the inertial range of scales and a -4 scaling at smaller scales, with the latter resulting from a balance between the energy injected by the particles and the viscous dissipation, in an otherwise smooth flow. An analysis of the small-scale flow topology shows that the particles mainly favour events with axial strain and vortex compression. The dynamics of the particles and their collective motion studied for PR-DNS are used to assess the validity of the PP-DNS.
We find that the PP-DNS predicts fairly well both the Lagrangian and Eulerian statistics of the particles motion for the low-density case, while some discrepancies are observed for the high-density case. Also, the PP-DNS is found to underpredict the level of clustering of the suspension compared to the PR-DNS, with a larger difference for the high-density case.
§ INTRODUCTION
Particle-laden turbulent flows have been extensively investigated over the years, because of their relevance from both the fundamental and applicative viewpoints. They are indeed ubiquitous in several natural and engineering scenarios <cit.>, such as in volcanic ash and cloud droplets in atmospheric turbulence, dust particles in protoplanetary disks, sandstorms, ocean microplastics, and fuel droplets in spray combustion.
§.§ The Maxey-Riley-Gatignol (MRG) Equation
In particle-laden turbulent flows the turbulent scales of the fluid phase are coupled in a non trivial manner with the solid phase. Properly resolving the flow around each particle is thus crucial to capture the fluid-solid interaction and describe the dynamics of the particles. Due to the prohibitive computational cost, however, most of the theoretical and numerical studies rely on approximations and models.
As an example, in the context of particle clustering and preferential sampling — i.e. the tendency of particles to explore flow regions with specific properties — we mention the recent theoretical works by <cit.>, <cit.>, <cit.> and <cit.>. Most of the models used are based on the seminal works of <cit.> and <cit.>, where the equation for small rigid spheres in a non-uniform flow (hereafter referred to as MRG equation) has been derived by exploiting linear perturbation theory. In these models, each particle is treated as a mathematical point source of mass, momentum and energy. In point-particle models, particles are indeed assumed to be much smaller than any structure of the flow, as the MRG equation holds when the fluid velocity field does not show a turbulent behaviour at the particle scale. In other words, the Reynolds number based on the particle diameter and the particle-fluid relative velocity has to be small, i.e. 𝒪(1).
The models based on the MRG equation do not resolve the flow around the particles, and the influence of the solid phase on the fluid phase has to be modelled; see for example <cit.>, <cit.> and <cit.>. However, the back-reaction of the particles on the carrier flow and the inter-particle collisions are usually negligible in the limit of very dilute regimes with Φ_V = V_s/(V_s + V_f) ≤ 10^-5 (where Φ_V is the volume fraction, and V_s and V_f are the volumes of the solid and fluid phases, respectively), small particles D_p ≤η (where η is the turbulent Kolmogorov scale) and small solid-to-fluid density ratios ρ_p/ρ_f <cit.>. In this case, the influence of the solid phase on the carrier fluid can be neglected and the fluid-particle interaction is often modelled with one-way coupling models, in which the particles move under the action of the flow, but they do not modulate it. Starting from the MRG equation, several corrections have been proposed over the years to account for several effects, and extend its range of validity to a wider range of parameters. For example, <cit.> introduced a lift force which is crucial to properly model the particle dynamics in the presence of a linear shear. This force has been later extended by <cit.> to fit the numerical data of <cit.> at large Reynolds numbers. Other corrections have been introduced to account for finite values of the particle Reynolds number. For example, we refer to the corrections to the drag term reported in <cit.>, and to the different convolutional kernels for the Basset time-history force contribution proposed by <cit.>.
The point-particle approximation coupled with direct numerical simulations of the Navier–Stokes equations (PP-DNS) has been widely used to investigate turbulent particle-laden flows in the one-way coupling regime in several scenarios <cit.>. Despite the large number of studies based on PP-DNS, however, a clear understanding of the range of validity of the underlying assumptions of the point-particle model is still lacking, and studies that investigate its limitations are needed. In this respect, a step forward has been made in recent years thanks to the introduction of several numerical methods (PR-DNS) which resolve the flow around each particle by coupling the direct numerical simulations of the Navier–Stokes equations with the immersed boundary method (IBM); see for example <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Unlike experiments, PR-DNS makes it possible to explain the underlying physical mechanisms and to investigate the limitations of the point-particle approximation.
In the context of forced homogeneous isotropic turbulence (HIT) with fixed particles, <cit.> and <cit.> found that, when compared with PR-DNS, the two-way coupled PP-DNS based on the Schiller-Naumann drag correlation captures fairly well the turbulence attenuation and the fraction of the turbulence dissipation rate due to the particles.
Surprisingly, they found a good agreement also for D_p/η = 1.5 <cit.> and D_p/η≈ 2.2 <cit.>, although the model is expected to largely under-perform for these particle sizes. <cit.> used PR-DNS to assess the validity of PP-DNS in a decaying isotropic turbulent particle-laden flow, focusing on the particle acceleration model. They found that the predictions of the PP-DNS models they considered are in excellent agreement compared with PR-DNS for small Stokes numbers. For large Stokes numbers, however, they found that PP-DNS under-predicts the true particle acceleration and that second moment quantities are not properly captured. They showed that the predictions improve once considering finite Reynolds number corrections to the model. <cit.> tested the one-way point-particle approximation in a turbulence channel flow laden with small inertial particles, with high particle-to-fluid density ratios. They considered a volume fraction of Φ_V ≈ 10^-5 to ensure that the feedback of the particles on the fluid phase was negligible. They found that in the bulk of the channel the model predicts fairly well the statistic of the particles velocity. Close to the wall, however, they observed that the model fails, as it is not able to capture the shear-induced lift force acting on the particles, which instead is well predicted by the lift force model introduced by <cit.>.
In this work, we further address the limit of the one-way point-particle approximation, and we use PR-DNS to investigate the reliability of one-way coupled PP-DNS in the context of forced homogeneous isotropic turbulence laden with Kolmogorov-size particles.
§.§ Particles in homogeneous isotropic turbulence
The fluid-solid interaction of suspension of spherical particles moving in homogeneous isotropic turbulence has been widely investigated over the years by means of both simulations and experiments. In the following, we list some of the main contributions, and we refer the interested reader to <cit.> and <cit.> for a comprehensive review.
The majority of the numerical studies dealing with small particles D_p ≪η are based on the point-particle approximation, as particle-resolved methods are prohibitively expensive in this case. Although the related model error is not known, the available numerical studies have shown that small particles may either amplify or damp the turbulence of the carrier flow, in agreement with previous experimental studies <cit.>. <cit.> used two-way coupled PP-DNS to investigate homogeneous isotropic turbulence laden with small heavy spherical particles at Re_λ = u' λ / ν≈ 38 (where u' is the average velocity fluctuation, λ is the Taylor length scale, and ν is the kinematic fluid viscosity).
Compared to the particle-free case, they reported a significant attenuation of the fluid kinetic energy and of the dissipation rate. By looking at the energy spectrum, they found that the addition of the particles results into a relative enhancement of the energy at the small scales compared to the energy content at the large scales. They also showed that heavier particles cause a less selective modification of the turbulence properties. Heavy particles are indeed more uniformly dispersed by the turbulence, and cause a more homogeneous modification of the flow properties compared to lighter particles, that instead show a stronger preferential collection in regions of low vorticity and high strain <cit.>. <cit.> used two-way coupled PP-DNS to investigate decaying homogeneous isotropic turbulence laden with small spherical particles. Besides confirming the non-uniform modulation of the energy spectrum, they observed that the energy enhancement at the small scales is accompanied by an increase of the viscous dissipation rate and, thus, by an enhancement of the rate of energy transfer from larger to smaller scales. <cit.> made use of two-way coupled PP-DNS to study the influence of small and heavy particles on forced homogeneous isotropic turbulence at Re_λ = 62.
They reported that the influence of the particles changes with their inertia, with the small-scale energy content being attenuated/enhanced by large/small particles. By investigating the spectrum of the fluid-particle energy exchange rate, they observed that particles act as a sink of kinetic energy at large scales, while they add kinetic energy to turbulence at the smallest scales. The large scale motions of the fluid drag the particles, while the small-scale fluctuations are driven by the presence of the solid phase.
<cit.> investigated the influence of small and heavy particles in decaying isotropic homogeneous turbulence at the initial Reynolds number of Re_λ = 30 and Re_λ = 50, with a focus on
particles with very small inertia and small relaxation time. Unlike the previous works, they found that the turbulent kinetic energy and the viscous dissipation rate increase at all times compared to the particle-free case. The presence of the particles, indeed, largely enhances the small-scale energy content, while slightly reduces the large-scale energy content, with positive integral variation.
This was later confirmed by <cit.>, that investigated by PP-DNS the influence of particles on decaying homogeneous isotropic turbulence with an initial Reynolds number of Re_λ = 75.
For large particles with D_p>η the numerical studies are based on PR-DNS, as in this case the point-particle approximation does not hold. <cit.> and <cit.> investigated the influence of particles with size D_p ≈λ (where λ is the Taylor length scale) in decaying homogeneous turbulence. They observed that in contrast to what happens when D_p<η, the presence of the particles damps the turbulent kinetic energy of the fluid compared to the particle-free case at all times, and that the two-way coupling rate of change is always positive. <cit.> investigated the influence of particles with a solid-to-fluid density ratio of ρ_p/ρ_f = 1.15 and 1.73 on forced homogeneous isotropic turbulence at Re_λ = u' λ / ν = 61, varying the volume fraction between Φ_V = 0.02 and Φ_V = 0.1. They found that
the energy spectrum is enhanced for wavenumbers κ > κ_p ≈ 0.75 κ_D, where κ_D = 2 π /D_p, while it is attenuated for κ < κ_p. These results were later confirmed by <cit.> at Re_λ≈ 60, ρ_p/ρ_f = 1.4 and Φ_V = 0.06. <cit.> considered particles with D_p/η≈ 5-8 and ρ_p/ρ_f = 1.5 at the larger Reynolds number of Re_λ≈ 130. They focused on the dynamics of the particles, and observed that finite-size inertial particles exhibit a moderate level of clustering, as later confirmed also by <cit.>. <cit.> considered the effect of particles on homogeneous isotropic turbulence at the larger Reynolds number of Re_λ≈ 400, which ensures a well developed inertial range of scales. They set the volume fraction at Φ_V = 0.079, and investigated the turbulence modulation by particles with size D_p/η=123 and solid-to-fluid density ratio between 1.3 ≤ρ_p/ρ_f ≤ 100. They showed that the solid phase modifies the energy cascade described by Richardson and Kolmogorov; the fluid-solid coupling drives the energy cascade at large scales, while the classical energy cascade is restored at scales smaller than the particle size. <cit.> studied the turbulence modulation due to spherical particles with 7.8 ≤ D_p/η≤ 64 by setting the volume fraction at Φ_V = 8.1 × 10^-3 and the reference Reynolds number at Re_λ≤ 100. They found that the turbulent kinetic energy content monotonically decreases with D_p, due to the increase of the energy dissipation rate in the wake of the particles. More recently, <cit.> and <cit.> investigated by PR-DNS how the flow modulation changes with D_p and ρ_p. They set the Reynolds number to Re_λ≈ 400 and the volume fraction to Φ_V = 0.079, and varied the particles size and the solid-to-fluid density ratio in the 16 ≤ D_p/η≤ 123 and 1.3 ≤ρ_p/ρ_f ≤ 100 range. <cit.> found that, in presence of an inhomogeneous mean shear, particles might modulate the largest scales of the flow towards an anisotropic state, while preserving homogeneity and isotropy at the smaller scales. <cit.> observed that interface-resolved particles enhance flow intermittency favouring events with large localised velocity gradients. For the smallest and heaviest particles, they found that the classical energy cascade is subdominant at all scales, and that the energy transfer is completely driven by the fluid-solid coupling term. <cit.> investigated the effect of the Reynolds number on the flow modulation by finte-size particles in homogeneous isotropic turbulence. Notably, they observed that the modulation of the turbulent kinetic energy has little dependence on Re_λ, and that particles modulate turbulence also at the smallest Reynolds numbers.
While a relatively larger body of literature has investigated the dynamics of particle of size larger and smaller than Kolmogorov size, fewer works have considered Kolmogorov-size particles with D_p ≈η, which are the focus of the present work. From an experimental point of view the D_p ≈η case is complex, as it requires a resolution of sub-Kolmogorov scales when measuring the velocity perturbations near the particles <cit.>. Numerical schemes based on the MRG equation, which are commonly used for D_p ≪η, are generally thought of as not valid when D_p ≈η <cit.>. PR-DNS, on the other hand, becomes prohibitively expensive as D_p decreases when Re is sufficiently large, due to the extra resolution required to properly resolve both the flow perturbations induced by the particles and all the turbulence scales. Among the few works available, we mention <cit.> that experimentally investigated the influence of a dilute dispersion of particles with D_p ≈η in forced homogeneous isotropic turbulence at Re_λ≈ 230. They observed that Kolmogorov-size particles attenuate the turbulent global kinetic energy and the viscous dissipation rate up to 40% and 50% for a mass loading Φ_M = ρ_p V_p/(ρ_f V_f) of Φ_M = 0.3. <cit.> studied by PR-DNS the interaction of decaying isotropic turbulence with finite-size D_p ≈η particles. They set the initial Reynolds number of the flow at Re_λ = 79, and varied the solid-to-fluid density ratio between 40 ≤ρ_p/ρ_f ≤ 5000 and the mass loading between 0.01 ≤Φ_M ≤ 1. They observed that in the vicinity of the particles the viscous dissipation rate of the fluid is amplified due to the large velocity gradients that are generated by the boundary conditions at the surface of the particles <cit.>. Particles also release kinetic energy to the fluid by locally accelerating the surrounding flow, similarly to what seen by <cit.> for larger particles. From a global viewpoint, <cit.> observed that, for large ρ_p/ρ_f, particles with D/η≈ 1 induce local velocity disturbances that significantly modulate the distribution and the decay of the fluid kinetic energy at all scales.
§.§ Present study
In this study, we investigate the fluid-solid interaction of suspension of Kolmogorov-size spherical particles moving in homogeneous isotropic turbulence at the relatively large microscale Reynolds number of Re_λ≈ 140 by use of direct numerical simulations. The study is based on both PR-DNS and PP-DNS. The objective of the present study is twofold; we aim (i) to investigate the modulation of forced homogeneous isotropic turbulence by finite Kolmogorov-size particles at a Reynolds number which is large enough to ensure a proper separation of scales, and (ii) to address the limits and the range of validity of the one-way-coupled PP-DNS in the simplest configuration of homogeneous isotropic turbulence. To do this, we consider a portion of the parameter space which is on the edge of the range of validity of the one-way-coupled PP-DNS <cit.>.
The structure of the work is as follows. After this introduction, the computational set up and the numerical methods are described in <ref>. Then, section <ref> is devoted to the assessment of the flow modulation, and discusses the results of the PR-DNS. The influence of the particles on the energy spectrum, on the scale-by-scale energy budget and on the local structure of the flow are discussed. Sections <ref> and <ref> deal respectively with the dynamics of the particles and with the inhomogeneity of their distribution in the flow. In these sections, we assess the validity of the one-way coupled PP-DNS. Eventually, concluding remarks are provided in <ref>.
§ MATHEMATICAL FORMULATIONS AND NUMERICAL METHOD
We consider a turbulent flow in a triperiodic box of size L=2π laden with N spherical particles; see figure <ref>. The carrier flow is governed by the incompressible Navier–Stokes equations
∂u/∂t + ∇·(uu) = - (1/ρ_f) ∇p + ν ∇^2 u + f + f^← p,     ∇·u = 0,
where u=(u,v,w) is the fluid velocity, p is the reduced pressure, and ρ_f and ν are the fluid density and kinematic viscosity. At the right-hand-side of the momentum equation f is the Arnold-Beltrami-Childress (ABC) cellular forcing <cit.> used to inject energy at the largest scales and sustain turbulence, while f^← p is the force the particles exert on the fluid phase.
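For concreteness, the ABC cellular forcing is a steady, divergence-free body force of the form f = f_0 (A sin(k_0 z) + C cos(k_0 y), B sin(k_0 x) + A cos(k_0 z), C sin(k_0 y) + B cos(k_0 x)). A minimal sketch of how such a field can be evaluated on a uniform triperiodic grid is given below; the amplitudes, the forcing wavenumber and the grid size are illustrative choices of ours, not the values used in the present simulations.

    import numpy as np

    def abc_forcing(n=64, L=2 * np.pi, A=1.0, B=1.0, C=1.0, k0=1, f0=1.0):
        """Steady ABC cellular forcing on a uniform triperiodic grid (placeholder parameters)."""
        x = np.linspace(0.0, L, n, endpoint=False)
        X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
        fx = f0 * (A * np.sin(k0 * Z) + C * np.cos(k0 * Y))
        fy = f0 * (B * np.sin(k0 * X) + A * np.cos(k0 * Z))
        fz = f0 * (C * np.sin(k0 * Y) + B * np.cos(k0 * X))
        return fx, fy, fz

    fx, fy, fz = abc_forcing()
    # the field is divergence-free by construction: each component does not depend on its own coordinate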
§.§ Particle-resolved simulations (PR-DNS)
The motion of a rigid particle can be described using the translational velocity u_p and the rotational velocity ω_p of its centre of mass, that obey the classical Newton-Euler equations for rigid body dynamics,
m_p du_p/d t = f^← f + f_p^↔ p,
I_p dω_p/d t = L_p^← f,
where m_p = πρ_p D_p^3/6 and I_p = m_p D_p^2/10 are the mass and moment of inertia of the particle, with ρ_p being the particle density and D_p the particle diameter. Here f^↔ p is the force due to particle collisions, while f^← f and L_p^← f are the force and torque due to the fluid-solid interaction, namely
f^← f = ∮_∂ V_pσ·ndA, L_p^← f = ∮_∂ V_pr×( σ·n) d A,
where σ = -p ℐ + 2 μ𝒟 is the Cauchy stress tensor, with ℐ being the identity tensor, μ the fluid dynamic viscosity, 𝒟 the strain-rate tensor, and n the outward unit vector normal to the surface of the particle.
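As an illustration of these surface integrals, the force and torque on a sphere can be approximated by a quadrature over points distributed on its surface. The sketch below assumes the Cauchy stress has already been evaluated at those points (for instance by interpolation from the Eulerian grid); the point distribution, array names and placeholder stress field are our own and do not reproduce the IBM implementation used here.

    import numpy as np

    def fibonacci_sphere(n):
        """Nearly uniform distribution of n points on the unit sphere."""
        i = np.arange(n) + 0.5
        phi = np.arccos(1.0 - 2.0 * i / n)            # polar angle
        theta = np.pi * (1.0 + 5**0.5) * i            # golden-angle azimuth
        return np.stack([np.sin(phi) * np.cos(theta),
                         np.sin(phi) * np.sin(theta),
                         np.cos(phi)], axis=1)

    def force_and_torque(sigma, D_p, normals):
        """
        Quadrature of f = ∮ sigma·n dA and L = ∮ r×(sigma·n) dA over the particle surface.
        sigma has shape (n, 3, 3): the Cauchy stress evaluated at each surface point.
        """
        R = D_p / 2.0
        dA = 4.0 * np.pi * R**2 / normals.shape[0]    # equal-area weight per point
        traction = np.einsum('nij,nj->ni', sigma, normals)
        r = R * normals                               # lever arm from the particle centre
        force = np.sum(traction, axis=0) * dA
        torque = np.sum(np.cross(r, traction), axis=0) * dA
        return force, torque

    # usage with a placeholder uniform stress sigma = -p I (gives zero net force and torque)
    normals = fibonacci_sphere(1000)
    sigma = np.tile(-1.0 * np.eye(3), (1000, 1, 1))
    f, L = force_and_torque(sigma, 0.01, normals)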
§.§ One-way-coupled point particle simulations (PP-DNS)
The interface-resolved simulations are complemented with one-way-coupled point particle simulations (PP-DNS). In this work, we consider the complete governing equation for a point particle as introduced by <cit.> and <cit.>, i.e.
ρ_p V_p du_p/dt = 3 π D_p ρ_f ν ( u - u_p + (1/6)(D_p/2)^2 ∇^2 u )                                          [Stokes drag]
               + (ρ_f V_p/2) ( 3 Du/Dt - du_p/dt + (1/10)(D_p/2)^2 d(∇^2 u)/dt )                            [added mass]
               + (3/2) D_p^2 ρ_f √(πν) ∫_-∞^t K_B(t-τ) ( du/dτ - du_p/dτ + (1/6)(D_p/2)^2 d(∇^2 u)/dτ ) dτ.   [Basset history force]
Here, V_p = π D_p^3/6 is the volume of the particle, and D/Dt denotes the material derivative. Note that the Faxén correction <cit.> proportional to ∇^2 u has been included in the Stokes drag, added mass and Basset forces. According to <cit.>, this correction reproduces dominant finite-size effects on velocity and acceleration fluctuations for neutrally buoyant particles with diameter up to D_p/η≈ 4. For the added mass, we have used the form described by <cit.>. The computation of the Basset history force presents some challenges, and its evaluation can become extremely time consuming and memory demanding; indeed, this term requires at each time step the computation of an integral over the complete time history of the particle. Over the years, several approaches have been proposed to approximate this term; see for example <cit.>, <cit.> and <cit.>. In this work, we resort to the second-order and memory-efficient algorithm developed by <cit.>, whose details are briefly reported in appendix <ref> for completeness.
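To make the structure of this equation concrete, a schematic sketch of the drag and added-mass contributions is given below, with the Basset history term omitted for brevity. The rearrangement that moves the du_p/dt part of the added mass to the left-hand side, the explicit-Euler usage example and all numerical values are our own illustrative choices; this is not the integrator used in the present solver.

    import numpy as np

    def point_particle_rhs(u_p, u_f, lap_u_f, dudt_f, dlap_dt_f, D_p, rho_p, rho_f, nu):
        """
        Particle acceleration from Stokes drag (with Faxen correction) and added mass;
        the Basset history term is omitted in this sketch. u_f, lap_u_f, dudt_f and
        dlap_dt_f are the fluid velocity, its Laplacian, the fluid acceleration Du/Dt
        and d(lap u)/dt, all evaluated at the particle position.
        """
        V_p = np.pi * D_p**3 / 6.0
        r2 = (D_p / 2.0)**2
        drag = 3.0 * np.pi * D_p * rho_f * nu * (u_f - u_p + r2 / 6.0 * lap_u_f)
        added = 0.5 * rho_f * V_p * (3.0 * dudt_f + r2 / 10.0 * dlap_dt_f)
        # the -du_p/dt part of the added-mass term has been moved to the left-hand side:
        # (rho_p + rho_f/2) V_p du_p/dt = drag + added
        return (drag + added) / ((rho_p + 0.5 * rho_f) * V_p)

    # example: a single particle relaxing towards a uniform, steady fluid velocity
    dt, nu, D_p, rho_f, rho_p = 1e-4, 1e-3, 0.01, 1.0, 100.0
    u_p = np.zeros(3)
    u_f = np.array([1.0, 0.0, 0.0])
    zero = np.zeros(3)
    for _ in range(10000):
        u_p += dt * point_particle_rhs(u_p, u_f, zero, zero, zero, D_p, rho_p, rho_f, nu)
    print(u_p)   # relaxes towards u_f on the particle response time scale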
§.§ Computational details
We consider a single-phase micro-scale Reynolds number of Re_λ = u' λ / ν≈ 140 to ensure a relatively large inertial range of scales; here u' is the root mean square of the velocity fluctuations and λ is the Taylor length scale. The particle diameter is set to D_p/η≈ 0.9, where η is the Kolmogorov length scale for the single-phase case. Two volume fractions are considered, i.e. Φ_V = V_s/(V_s + V_f)= 10^-5 and Φ_V = 10^-3 for a total number of particles of N=742 and N=74208, respectively. For each volume fraction two values of the particle density are considered, ρ_p/ρ_f = 5 and ρ_p/ρ_f=100 to consider both light and heavy particles. This leads to a total of four PR-DNS. In PP-DNS the volume fraction is not a parameter, and only two simulations have been carried out for the different density ratios.
The governing equations are numerically integrated in time using the in-house solver Fujin (<https://groups.oist.jp/cffu/code>). It solves the Navier–Stokes equations using an incremental pressure-correction scheme. The governing equations are written in primitive variables on a staggered grid, and second-order finite differences are used in all the directions. The Adams-Bashforth time scheme is used for advancing the momentum equation in time. The Poisson equation for the pressure enforcing the incompressibility constraint is solved using a fast and efficient approach based on the Fast Fourier Transform.
For the PR-DNS the governing equations for the particles are dealt with the immersed boundary method introduced by <cit.>. The fluid-solid coupling is achieved in an Eulerian framework, and accounts for the inertia of the fictitious fluid inside the solid phase, so as to properly reproduce the particles' behaviour in both the neutrally-buoyant case and in the presence of density difference between the fluid and solid phases. The soft sphere collision model <cit.> is used to prevent the interpenetration between particles. A fixed-radius near neighbours algorithm <cit.> is used for the particle interaction to avoid an otherwise prohibitive increase of the computational cost when the number of particles becomes large.
For the one-way-coupled PP-DNS the governing equation for the particle velocity is advanced in time using the second-order Adams-Bashforth time scheme. At each time step, the fluid velocity is evaluated at the position of the particle using a second-order linear interpolation.
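A minimal sketch of these two ingredients, complementing the previous one, is given below. For simplicity it assumes a collocated, uniform, periodic grid (the actual solver uses a staggered arrangement), and the array names and the random field are placeholders of our own.

    import numpy as np

    def interp_fluid_velocity(u, x_p, dx):
        """
        Trilinear (second-order) interpolation of a periodic scalar field u[n,n,n]
        at the particle position x_p, assuming a uniform grid spacing dx.
        """
        n = u.shape[0]
        s = x_p / dx                       # position in grid units
        i0 = np.floor(s).astype(int)       # lower corner of the host cell
        w = s - i0                         # interpolation weights in [0, 1)
        val = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    weight = ((1 - w[0]) if di == 0 else w[0]) \
                           * ((1 - w[1]) if dj == 0 else w[1]) \
                           * ((1 - w[2]) if dk == 0 else w[2])
                    val += weight * u[(i0[0] + di) % n, (i0[1] + dj) % n, (i0[2] + dk) % n]
        return val

    def ab2_step(u_p, a_now, a_old, dt):
        """Second-order Adams-Bashforth update given the current and previous accelerations."""
        return u_p + dt * (1.5 * a_now - 0.5 * a_old)

    # usage with a random placeholder field
    n, dx = 64, 2 * np.pi / 64
    u = np.random.default_rng(1).standard_normal((n, n, n))
    print(interp_fluid_velocity(u, np.array([0.3, 1.2, 4.0]), dx))
    print(ab2_step(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.0, 0.0]), 1e-3))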
For the PR-DNS the fluid domain is discretised on a cubic grid using N_p = 2048 points along each direction, leading to η / Δ x ≈ D_p /Δ x ≈ 6-7, where Δ x denotes the grid spacing. The accuracy of the results has been verified by running an additional simulation on a coarser grid with N_p=1440 for the case with Φ_V = 10^-3 and ρ_p/ρ_f = 100, resulting in a negligible difference in the scale-by-scale fluid energy spectrum and budget in figures <ref> and <ref>, and in the Lagrangian and Eulerian particles' statistics in figures <ref> and <ref>. For the single-phase case and the PP-DNS the fluid domain is discretised using N_p = 1024 points in the three directions, leading to η/Δ x ≈ 3-4. Excluding the initial transient period, all simulations are advanced in time for approximately 50 τ_f, where τ_f = ℒ/√(2E/3) is the average turnover time of the largest eddies; ℒ =π/(4E/3)∫_0^∞ℰ(κ)/κdκ is the fluid integral scale with ℰ(κ) being the energy spectrum, E(x,t) is the local and instantaneous fluid kinetic energy, and the · and · operators denote averages in space and time, respectively.
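As an indicative post-processing example, ℒ and τ_f can be evaluated from a discrete spectrum as sketched below; the model spectrum used here is a placeholder of ours, not the simulation data.

    import numpy as np

    # placeholder model spectrum E(kappa) on integer wavenumbers (unit spacing)
    kappa = np.arange(1.0, 683.0)
    E_k = kappa**(-5.0 / 3.0) * np.exp(-kappa / 300.0)

    E = np.sum(E_k)                                        # total kinetic energy, dk = 1
    u_rms = np.sqrt(2.0 * E / 3.0)                         # single-component rms velocity
    L_int = np.pi / (4.0 * E / 3.0) * np.sum(E_k / kappa)  # integral scale, as defined above
    tau_f = L_int / u_rms                                  # large-eddy turnover time
    print(L_int, tau_f)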
Details of the PR-DNS are reported in table <ref>. Note that, when looking at the bulk quantities, the flow modulation due to the solid phase is rather low.
§ FLOW MODULATION
In this section we discuss the PR-DNS results and focus on the influence of the particles on the carrier flow. First, we show the influence of the solid phase on the energy spectrum, on the structure functions and on the the scale-by-scale energy budget. Next, the influence of the particles on the local structure of the small scales of the flow is addressed.
§.§ Energy Spectrum
Figure <ref> shows the influence of the solid phase on the energy spectrum ℰ(κ), and highlights how Kolmogorov-size particles modulate the turbulent fluctuations scale-by-scale. The top and bottom panels are for Φ_V = 10^-5 and Φ_V=10^-3, respectively. The black solid line refers to the reference unladen case; note the inertial range of scales where ℰ∼κ^-5/3 extends for more than one decade of wavenumbers, confirming that the present Reynolds number is large enough to ensure a proper separation of scales. For validation purposes, in the bottom panel we also plot with symbols the energy spectrum obtained for the Φ_V=10^-3 and ρ_p/ρ_f=100 case with the coarser grid, showing good agreement with those obtained with the standard grid, thus ensuring the suitability of the chosen grid resolution (see <ref>).
At large scales, the spectra of the particle-laden cases substantially overlap with the unladen spectrum. We indeed observe only a weak depletion of the energy content at the intermediate scales; see the insets in the two panels. At scales smaller than a certain wavenumber κ_p, the energy spectra of the particle-laden cases deviate above the reference spectrum. Solid particles enhance the energy content of the small scales by means of their wake. Notably, this mechanism is amplified as Φ_V and/or ρ_p/ρ_f increase, as conveniently visualised by the larger values of ℰ for κ>κ_p and by the shift of κ_p towards smaller wavenumbers. However, notice that due to the low values of Φ_V considered, the flow modulation is rather low for all cases, being substantially negligible for Φ_V=10^-5 and/or ρ_p/ρ_f=5.
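For reference, a shell-averaged energy spectrum of this kind can be computed from a triperiodic velocity field as sketched below; the grid size, the random placeholder field and the normalisation conventions are our own simplifications and do not reproduce the post-processing of the solver.

    import numpy as np

    def energy_spectrum(u, v, w):
        """Shell-averaged kinetic energy spectrum of a triperiodic field on a cubic 2*pi box."""
        n = u.shape[0]
        uh, vh, wh = (np.fft.fftn(q) / n**3 for q in (u, v, w))
        e3d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)   # spectral energy density

        k1d = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers for L = 2*pi
        KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing='ij')
        kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

        shell = np.rint(kmag).astype(int).ravel()          # nearest-integer shell index
        spectrum = np.bincount(shell, weights=e3d.ravel())
        kmax = n // 2
        return np.arange(1, kmax), spectrum[1:kmax]

    # usage with a random placeholder field; the spectrum sums approximately to the kinetic energy
    rng = np.random.default_rng(0)
    n = 64
    u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))
    kappa, E_k = energy_spectrum(u, v, w)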
Figure <ref> shows that the modulated energy spectrum exhibits multiscaling behaviour. The classical ℰ(κ) ∼κ^-5/3 decay in the inertial range of scales is indeed followed by a steeper decay ℰ(κ) ∼κ^-4 for κ > κ_p.
A similar steep decay has been observed in bubbly flows at scales smaller than the bubble diameter, by means of both experiments <cit.> and simulations <cit.>.
Additionally, a similar multiscaling behaviour has also been observed in a turbulent planar Couette flow laden with small particles <cit.>, and in homogeneous isotropic turbulence laden with slender fibres <cit.>.
In the context of bubbly flows, the emergence of the κ^-α decay with α≥ 3 has been attributed to the wakes the bubbles generate in an otherwise smooth flow <cit.>; here the velocity fluctuations produced by the bubbles are directly dissipated by viscosity <cit.>. Accordingly, this scaling has indeed been observed only when the bubble Reynolds number is large enough, i.e. in the 10 ≤ Re_bub≤ 1000 range <cit.>.
To compare, we computed the local particle Reynolds number using the relative velocity between the particle and the surrounding flow, and found that it is in the 0.25 ⪅ Re_p ⪅ 2.5 range (see table <ref>). Further investigation on the link between the fluctuations induced by particles and the ℰ(κ) ∼κ^-4 decay is provided in <ref> by looking at the scale-by-scale energy budget.
Overall, figure <ref> shows that particles almost do not modulate the inertial range of scales with the present parameters, where the classical energy cascade described by Richardson and Kolmogorov is preserved, but mainly affect the (otherwhise smooth) smallest scales of the flow.
§.§ Structure function and intermittency
We extend the analysis done in the spectral domain by computing the longitudinal structure functions defined as S_p(r) = δ u(r)^p where δ u(r) = (u(x+r) - u(x) )·r/r and r = |r|. In particular, figure <ref> plots S_2, S_4 and S_6 as a function of r for (top) Φ_V=10^-5 and (bottom) Φ_V = 10^-3. In the single-phase case, we observe that S_p ∼ r^p/3 in the inertial range of scales is in agreement with the Kolmogorov prediction <cit.>, and that S_p ∼ r^p at the small scales, as a result of the differentiability of the fluid velocity field <cit.>. Recall that although S_2(r) is commonly referred to as scale energy <cit.>, its meaning slightly differs from ℰ(κ); indeed while ℰ(κ) dκ refers to the amount of energy associated with the scale r = 2π/κ, S_2(r) can be interpreted as the amount of energy associated with scales up to r.
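To make the definition operational, the longitudinal structure functions can be estimated by averaging velocity increments over separations. A simplified sketch restricted to separations along x for a periodic field is given below (the full average would include all directions and velocity components); the grid size and the random field are placeholders of ours.

    import numpy as np

    def longitudinal_structure_functions(u, orders=(2, 4, 6)):
        """
        Longitudinal structure functions S_p(r) of the x velocity component of a periodic
        field u[n,n,n], using separations along x only (a simplification of the full average).
        """
        n = u.shape[0]
        seps = np.arange(1, n // 2)
        S = {p: np.empty(seps.size) for p in orders}
        for i, r in enumerate(seps):
            du = np.roll(u, -r, axis=0) - u          # longitudinal increment delta u(r)
            for p in orders:
                S[p][i] = np.mean(du**p)
        return seps, S

    rng = np.random.default_rng(0)
    u = rng.standard_normal((64, 64, 64))
    seps, S = longitudinal_structure_functions(u)
    ess_ratio = S[6] / S[2]**3      # extended-self-similarity diagnostic discussed below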
In agreement with the modulation of the energy spectrum, figure <ref> shows that particles enhance the energy content at small scales compared to the unladen case. The energy enhancement is more intense for larger Φ_V and ρ_p/ρ_f, and becomes more evident when considering higher order structure functions. At the same time, the presence of the particles decreases the amount of energy stored at the largest scales (as seen by the blue and red curves laying below the black one). By interacting with the vortical structures of the flow, particles drain energy from scales larger than D and reinject it back at smaller scales by means of their wake.
Structure functions are commonly employed to quantify the flow intermittency, i.e. the relevance of extreme events that are localised in space and time and break the Kolmogorov similarity hypothesis <cit.>. In figure <ref>, we use the extended self similarity introduced by <cit.>, and plot S_6/S_2^3 as a function of r. In the limit case where extreme events do not occur, the S_6 ∼ S_2^3 power law holds, i.e. S_6/S_2^3 ∼ constant, and deviations from this behaviour are a measure of the flow intermittency. Accordingly with the intrinsic intermittent nature of turbulent flows, figure <ref> shows that S_6/S_2^3 deviates from the Kolmogorov prediction also in the single-phase case, and this deviation increases in the particle-laden cases, similarly to what is found for suspension of particles with size in the inertial range of scales <cit.>. This is due to the no-slip and no-penetration boundary conditions at the particles surface that give origin to localised and intense velocity gradients. Figure <ref> shows that for ρ_p/ρ_f = 5 the deviation from the single-phase case is rather small for both Φ_V = 10^-5 and Φ_V=10^-3, in agreement with the low level of flow modulation discussed above. For ρ_p/ρ_f = 100, instead, the S_6/S_2^3 curve significantly deviates from the single-phase case at small scales. For the considered parameters, heavy Kolmogorov-size particles increase the intermittency of the small scales of the flow.
§.§ Scale-by-scale energy budget
We now detail the influence of the particles on the organisation of the fluctuations, by studying the scale-by-scale energy budget equation. For the present case with three homogeneous directions, the energy balance reads
P(κ) + Π(κ) + Π_fs(κ) - D_v(κ) = 0,
where P(κ) is the scale-by-scale turbulent energy production due to the external forcing, Π(κ) is the energy flux associated with the non linear convective term, Π_fs(κ) is the fluid-solid coupling term, and D_v(κ) is the scale-by-scale viscous dissipation. Specifically, these terms are defined as
P( κ) = ∫_κ^∞1/2( f̂·û^* + f̂^* ·û) d k,
Π(κ) = ∫_κ^∞ -1/2( Ĝ·û^* + Ĝ^* ·û) d k,
Π_fs(κ) = ∫_κ^∞1/2( f̂^↔ p·û^* + f̂^↔ p,*·û) d k,
D_v(κ) = ∫_κ^∞( 2 ν k^2 ℰ) d k,
where ·̂ denotes the Fourier transform operator, and the superscript ·^* denotes the complex conjugate. The term Ĝ is the Fourier transform of the non linear term ∇· ( uu ). Note that here we integrate all the terms from κ to ∞. Π(κ) and Π_fs(κ) neither produce nor dissipate energy at any scale, but redistribute it among scales by means of the classical energy cascade and of the fluid-solid interaction. Also, note that since we integrate the viscous term from κ to ∞, D_v(0) = ϵ. For the complete derivation we refer the reader to <cit.>.
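As an illustration of how these scale-by-scale terms can be assembled in practice, the following sketch evaluates P(κ), Π(κ) and D_v(κ) for a triply periodic field using FFTs and spherical-shell sums. It is a simplified, NumPy-based example: the field names, the shell binning and, in particular, the omission of the fluid-solid coupling term Π_fs(κ) (which requires the immersed-boundary forcing field) are assumptions made for brevity.

```python
import numpy as np

def reverse_cumulative_shell_sum(density, k_mag, kbins):
    """Sum a spectral density over shells of |k| and accumulate from kappa to infinity."""
    shells = np.array([density[(k_mag >= kbins[i]) & (k_mag < kbins[i + 1])].sum()
                       for i in range(kbins.size - 1)])
    return shells[::-1].cumsum()[::-1]

def budget_terms(u, f, nu, L=2.0 * np.pi):
    """Scale-by-scale production P(kappa), nonlinear flux Pi(kappa) and viscous
    dissipation D_v(kappa) for a periodic velocity field u and forcing f, both of
    shape (3, N, N, N); all terms are integrated from kappa to infinity as in the text."""
    N = u.shape[1]
    k1 = np.fft.fftfreq(N, d=L / (2.0 * np.pi * N))       # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    uh = np.fft.fftn(u, axes=(1, 2, 3)) / N**3
    fh = np.fft.fftn(f, axes=(1, 2, 3)) / N**3
    # nonlinear term G = div(u u), computed pseudo-spectrally
    K = (kx, ky, kz)
    Gh = np.zeros_like(uh)
    for i in range(3):
        for j in range(3):
            Gh[i] += 1j * K[j] * np.fft.fftn(u[i] * u[j]) / N**3
    E_dens = 0.5 * np.sum(np.abs(uh) ** 2, axis=0)
    P_dens = 0.5 * np.sum(fh * uh.conj() + fh.conj() * uh, axis=0).real
    Pi_dens = -0.5 * np.sum(Gh * uh.conj() + Gh.conj() * uh, axis=0).real
    Dv_dens = 2.0 * nu * k_mag**2 * E_dens
    kbins = np.arange(0, N // 2 + 1)
    P = reverse_cumulative_shell_sum(P_dens, k_mag, kbins)
    Pi = reverse_cumulative_shell_sum(Pi_dens, k_mag, kbins)
    Dv = reverse_cumulative_shell_sum(Dv_dens, k_mag, kbins)
    return kbins[:-1], P, Pi, Dv
```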
Figure <ref> plots the Π, Π_fs and D_v terms of equation <ref> as a function of κ for the four particle-laden cases investigated. For validation purposes, we also plot with symbols the terms obtained with the coarser grid for Φ_V=10^-3 and ρ_p/ρ_f=100. In agreement with the multiscaling behaviour shown in the energy spectra (see figure <ref>), the energy budgets exhibit two distinct behaviours. Energy is injected at the largest scales at a rate that equals the dissipation rate P(0) = ϵ (not shown). In the inertial range of scales κ_L < κ< κ_p, the fluid-solid coupling term is subdominant and
Π(κ) ≈ D_v(κ) ≈ϵ.
Thus, Π(κ) ≈ϵ is constant with κ at these scales, exhibiting a plateau. In agreement with the Kolmogorov theory, the viscous effects are negligible (2 νκ^2 ℰ(κ) ≈ 0) and energy is transferred from larger to smaller scales at a rate that matches the energy injection rate ϵ. This corresponds to the range of scales where ℰ(κ) ∼κ^-5/3. Similarly to what is observed in the energy spectrum, the range of scales where this relation holds shrinks as Φ_V and ρ_p/ρ_f increase. For the small scales with κ > κ_p, where the spectrum shows the ℰ(κ) ∼κ^-4 decay, the non linear flux is instead negligible, Π(κ) ≈ 0. In this range of scales, the viscous effects and the fluid-solid coupling term are not negligible, and the energy budget reduces to
Π_fs(κ) ≈ D_v(κ).
Here the fluctuations that are produced by the fluid-solid interaction are directly dissipated by viscosity.
In agreement with the energy spectrum, the range of scales where this regime holds widens as Φ_V and/or ρ_p/ρ_f are increased.
§.§ The local structure of the flow
As shown in <ref>, Kolmogorov-size particles mainly modify the organisation of the velocity fluctuations at the smallest scales. Particles indeed modulate the energy spectrum and the structure functions for κ⪆κ_p only. Here we characterise the smallest scales of the flow to provide new insights into the influence of the dispersed phase on the structure of the velocity fluctuations. For the sake of brevity, in this section we consider the Φ_V = 10^-3 cases only,
and we investigate how particles modify the velocity gradient field A_ij = ∂ u_j/∂ x_i. In the neighbourhood of a given point (x_0,t), the velocity field can be approximated as u_i(x,t) = u_i(x_0,t) + A_ij(x_0,t)(x_j-x_0,j) + 𝒪(|x-x_0|^2).
This linear expansion is valid in the region around x_0, where the fluid is sufficiently smooth and the variations of A_ij are small <cit.>; for a turbulent flow the extent of this region is of the order of the Kolmogorov scale η. Based on these arguments, we study the influence of the particles on the smallest scales of the flow, by inspecting their effect on the A_ij tensor.
We decompose A_ij into its symmetric and antisymmetric parts, namely the strain-rate tensor S_ij = (A_ij + A_ji)/2 and the rotation rate tensor W_ij = (A_ij - A_ji)/2. The field of the velocity gradient is completely addressed when knowing: (i) the three principal rates of strain α≥β≥γ, i.e. the three eigenvalues of S_ij, (ii) the magnitude of the vorticity ω^2 = ω·ω, i.e. the enstrophy, and (iii) the orientation of ω relative to the three principal axes of strain, i.e. the eigenvectors of S_ij <cit.>. Note that, due to the incompressibility constraint α + β + γ = 0, meaning that α is always nonnegative, γ is always nonpositive while β can have any sign depending on the local straining state.
We start by characterising α, β and γ in figure <ref>. Following <cit.>, we use s^* which is defined as
s^* = - 3 √(6)αβγ/(α^2 + β^2 + γ^2)^3/2.
For a random velocity gradient field with no preferred structure, the distribution of s^* is uniform. When s^*=1, α = β= - γ/2, meaning that the state of straining is an axisymmetric extension, in which a small spherical fluid element moving in the flow extends symmetrically in two directions and contracts in the third one, forming thus a disk-like structure. When s^*=-1, instead, γ = β = - α/2 <0, and the state of straining is an axisymmetric compression, in which a small fluid element contracts in two directions and extends in the third one, forming thus a vortex tube. Finally, when s^*=0 we have β=0, meaning that the straining state is two-dimensional, as typical for shear dominated regions.
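To make the definition concrete, a compact sketch of how s^* can be evaluated from a local velocity-gradient sample is given below; the tensor input and the test case are illustrative assumptions, and the eigenvalue ordering follows the convention α ≥ β ≥ γ used here.

```python
import numpy as np

def strain_state_parameter(A):
    """s* = -3*sqrt(6)*alpha*beta*gamma / (alpha^2 + beta^2 + gamma^2)^(3/2)
    for a single 3x3 (traceless) velocity-gradient tensor A."""
    S = 0.5 * (A + A.T)                                         # strain-rate tensor
    alpha, beta, gamma = np.sort(np.linalg.eigvalsh(S))[::-1]   # alpha >= beta >= gamma
    denom = (alpha**2 + beta**2 + gamma**2) ** 1.5
    return -3.0 * np.sqrt(6.0) * alpha * beta * gamma / denom

# check: axisymmetric extension (alpha = beta = -gamma/2) gives s* = 1
A = np.diag([0.5, 0.5, -1.0])
print(strain_state_parameter(A))   # ~ 1.0
```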
In the absence of particles, the distribution peaks at s^*=1, in agreement with the fact that for purely Newtonian turbulence the most likely state of straining is an axisymmetric extension <cit.>. Figure <ref> shows that the addition of Kolmogorov-size particles with Φ_V ≤ 10^-3 leads to a rather small variation of the distribution of s^* for the lighter particles with ρ_p/ρ_f = 5. We observe instead that particles with ρ_p/ρ_f = 100 decrease the probability of events with large positive s^* and increase the probability of events with s^* ≤ 0. This agrees with the observation of <cit.> who found that particles with size in the inertial range favour the occurrence of events with two-dimensional straining states and with axisymmetric compression. These events indeed are associated with the shear layers that separate from the surface of the particles, that strengthen as ρ_p/ρ_f increases.
Figure <ref> shows the influence of the solid phase on (top) the square of the vorticity magnitude ω^2, i.e. the enstrophy, and on (bottom) the alignment between the vorticity ω and the principal axes of strain. The distribution of ω^2 shows that the tail becomes longer and the probability of large events increases due to the presence of the particles. This agrees with the enhanced flow intermittency discussed in <ref>. The tail of the distribution is longer for the ρ_p/ρ_f = 100 case, as the velocity gradients at the particles surface are more intense because of the larger relative velocity between the particles and the surrounding fluid phase. However, even the lighter particles with ρ_p/ρ_f = 5 are able to produce a non-negligible change of the distribution.
Instead, the bottom panel of figure <ref> shows that the alignment between ω and the principal axes of strain is only slightly influenced by the presence of the particles. The results for the single-phase case perfectly overlap with those of ρ_p/ρ_f = 5.
The presence of the heavy particles, instead, slightly reduces the alignment between ω and ê_β, as well as the anti-alignment between ω and ê_γ. Our results suggest that for D_p/η≈ 1 the perturbation field induced by the heavy particles is characterised by events where vorticity is more aligned with the directions of extension (ê_α) and compression (ê_γ), and more anti-aligned with the intermediate eigenvector (ê_β). Interestingly, this differs from what was observed by <cit.>, who found the opposite trend for particles with size in the inertial range. We presume that the difference is due to the different particle Reynolds number, which results in a different kind of wake.
We now move and consider the entire velocity gradient tensor A_ij. Any second-order tensor possesses three invariants P, Q and R, which are directly related to its eigenvalues λ by the characteristic polynomial function
λ^3 + P λ^2 + Q λ + R = 0.
Following <cit.>, it can be shown that
P = α + β + γ,
Q = - 1/2( α^2 + β^2 + γ^2 ) + ω^2/4,
R = - 1/3( α^3 + β^3 + γ^3 ) - 1/4ω_i ω_j S_ij.
Note that P=0 due to the incompressibility constraint. The Q and R invariants are commonly used to distinguish between regions of intense vorticity and regions of strong strain. In particular, the discriminant of equation <ref>, Δ = 27/4R^2 + Q^3, is used to distinguish between regions where motions are mainly vortical (i.e. regions where Δ>0, meaning that A_ij has one real and two complex conjugate eigenvalues) and regions characterised by a node-saddle streamline pattern (i.e. regions where Δ<0, where all the eigenvalues are real). When Q is large and negative the strain is intense, while the vorticity is weak; in this case, R ∼ - 1/3( α^3 + β^3 + γ^3 ) = - αβγ <cit.>, and a positive R implies a region of biaxial strain (γ<0, α> β > 0), while a negative R implies a region of axial strain (γ < β < 0 and α>0). When instead Q is large and positive the strain is locally weak and R ∼ - 1/4ω_i ω_j S_ij. In this case, a positive R implies vortex compression, while a negative R implies vortex stretching.
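The two invariants are straightforward to evaluate once the velocity-gradient tensor is available; a minimal sketch (with an illustrative rigid-rotation test case) is shown below.

```python
import numpy as np

def qr_invariants(A):
    """Second and third invariants Q and R of a traceless velocity-gradient
    tensor A (3x3), and the discriminant Delta = 27/4 R^2 + Q^3 separating
    vortical regions (complex eigenvalues) from node-saddle ones."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.trace(A @ A @ A) / 3.0
    Delta = 27.0 / 4.0 * R**2 + Q**3
    return Q, R, Delta

# check: a pure rigid rotation about z gives Q > 0, R = 0 and Delta > 0,
# i.e. one real and two complex conjugate eigenvalues (vortical region)
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
print(qr_invariants(A))
```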
Figure <ref> plots the joint distribution of Q and R for Φ_V = 10^-3; the black solid lines denote the left- and right-Vieillefosse tails with Δ=0. For completeness, the bottom panels report the distributions of Q (left) and R (right). In the absence of the particles, the Q-R joint distribution takes on a tear-drop pattern, with a clear point at the right-Vieillefosse tail with R>0 and Q<0 <cit.>. The distribution is skewed towards positive Q, but rather evenly distributed among positive and negative values of R (see the bottom panels). The largest probability is observed in the second and fourth quadrants, i.e. Q>0, R<0 and Q<0, R>0. Thus, in a purely Newtonian turbulent flow there is a strong negative correlation between Q and R, and the two most common states are vortex stretching ω_i ω_j S_ij>0, and biaxial strain αβγ <0 <cit.>. Also, the points in the Q-R map are distributed around the origin, since the mean values of Q and R are zero in homogeneous flow <cit.>. The presence of the particles enlarges the range of possible Q and R values, in agreement with the increase of the probability of events with intense velocity gradients associated with the boundary conditions at the particles' surface. In particular, Kolmogorov-size particles mainly favour events that lie in the first and third quadrants, resulting in a joint distribution that is more symmetric with respect to an inversion of the R axis (see the bottom right panel in figure <ref>). Compared to the unladen case, particles mainly promote events with axial strain αβγ > 0 (R<0 and large negative Q) and with vortex compression ω_i ω_j S_ij<0 (R>0 and Q>0). The probability of events with R<0 and Q<0 is particularly enhanced, as visualised in the top panel of figure <ref> by the occurrence of a point at the left-Vieillefosse tail. This is consistent with the increase of the probability of events with s^*=-1 shown in figure <ref>.
A visual investigation of the Q and R fields around the particles (figure <ref>) helps to explain this effect by highlighting the local contribution of the particles. By comparing the two fields in the surroundings of the particles, we observe that they are both almost axisymmetric along the axis aligned with their travelling direction. However, across the velocity-normal median plane, Q is symmetric, while R is antisymmetric. This relation between the two fields implies contributions across all four quadrants of the Q-R distribution. The effect, however, is particularly apparent in the third quadrant, as this region is otherwise not explored by the single-phase flow. In particular, regions of Q<0 and R<0 are associated with axial strain and appear to be found at one of the two ends of the particles along their travelling direction. Consistently with the above-mentioned symmetries, at the opposite end of the particles, a region of biaxial strain (Q<0 and R>0) is found.
This scenario only partially agrees with the results of <cit.> for decaying homogeneous isotropic turbulence laden with Kolmogorov-size particles. They also found that particles favour events with axial strain (Q<0 and R<0) and biaxial strain (Q<0 and R>0), as shown by the occurrence of a point at the left-Vieillefosse tail and by the more pronounced right-Vieillefosse tail in figure 20 of their paper. However, they did not report an increase of the probability of events with Q>0 like in the present case. It is worth mentioning, however, that they do report a global increase of the vortex stretching. Compared to larger particles with size in the inertial range of scales, the scenario is completely different: in fact, <cit.> found that when large particles are added both Q and R are reduced. Besides energising the small scales, large particles indeed behave also as obstacles for the large flow structures, largely weakening thus the energy content at the large scales. <cit.> only observed an increase of the probability in the strain-dominated region (that we also observe), which is an indication of the intense dissipation regions that arise around the particles.
Additional insights are provided by looking separately at the invariants of S_ij and W_ij; see figure <ref>. In particular, we consider their second invariants, i.e.
Q_S = - 1/2( α^2 + β^2 + γ^2 ) and Q_W = 1/4ω^2.
These invariants are related to the fluid dissipation ϵ = -4 ν Q_S and to the fluid enstrophy ω^2 = 4Q_W. Therefore, the Q_S-Q_W joint distribution determines whether the flow is dominated by dissipation (extensional dominated regions with -Q_S>Q_W) or by enstrophy (rigid rotation regions with Q_W>-Q_S). In shear dominated regions, dissipation and enstrophy balance and -Q_S = Q_W <cit.>. For simplicity, we follow <cit.> and introduce 𝒦 = (-Q_W/Q_S)^1/2; when 𝒦 = 0 the flow is extension dominated, when 𝒦 = ∞ the flow is dominated by rigid rotation events, and when 𝒦 = 1 the rotation and the stretching are equal, as typical of vortex sheets and shear layers. In the unladen case, events with Q_W>-Q_S (𝒦>1) are more frequent, meaning that for purely Newtonian turbulence the flow is mainly dominated by rigid rotations; see also the distributions of Q_S and Q_W in the bottom panels of figure <ref>. In the presence of the particles the scenario slightly changes. A first observation is that the probability of events with large Q_S (or 𝒦 = 0) increases, indicating that the perturbation field induced by these small particles is extensional dominated. A second observation is that the solid phase also favours events with -Q_S ≈ Q_W (𝒦≈ 1), which is consistent with the shear layers induced by the particles.
§ PARTICLE DYNAMICS
This section is devoted to the dynamics of the particles; we compare the results of the PR-DNS with those of the PP-DNS. Besides characterising the motion of the particles, the objective is indeed to assess the reliability of the one-way-coupled PP-DNS at the present parameters.
§.§ Lagrangian velocity increments
The Lagrangian statistics of the particles motion are of fundamental importance in the understanding of transport and mixing.
In order to investigate this, we study the Lagrangian velocity increments, defined here as δ_τ u_p,i = u_p,i (t + τ) - u_p,i(t), with u_p,i(t) being the instantaneous velocity of a particle along direction i at time t. The symmetries of the present problem make the statistics of δ_τ u_p,i independent on both t and i; for simplicity hereafter we drop the i index. Figures <ref> and <ref> describe the particle dynamics at different time scales, by plotting δ_τ u_p for different values of the time scale τ in the 0.2 τ_η≤τ≤ 30 τ_η range, where τ_η = (ν/ϵ)^1/2 is the Kolmogorov time scale. For small time scales, the velocity increment provides information about the particle acceleration, i.e. δ_τ u_p ∼ a_pτ. A first observation is that the distributions are symmetric, in agreement with the symmetries of the flow. The probability density function of δ_τ u_p continuously deforms from the Gaussian at large time scales (see τ≈ 30 τ_η) to the development of stretched exponential tails at dissipative time scales (see τ≈ 0.2 τ_η), which are the statistical signature of an intermittent Lagrangian dynamics; see <cit.>, <cit.> and <cit.> for small tracers and <cit.> for finite-size neutrally buoyant particles. The wide stretched exponential tails for the smallest τ show that the finite-size particles with D_p ≈η experience very high acceleration events, with a probability which is higher than Gaussian, similarly to what was found for small tracers by <cit.> and for finite-size particles by <cit.>.
We start looking at the influence of ρ_p/ρ_f and Φ_V on the Lagrangian intermittency of Kolmogorov-size particles.
The left panels of figure <ref> show that heavier particles (ρ_p/ρ_f=100) are less likely to experience intermediate values of the acceleration compared to lighter particles (ρ_p/ρ_f=5). In contrast, they are more likely to exhibit very low or very intense accelerations. On one side, the larger inertia of these particles opposes to large accelerations and favours small values of a_p. On the other side, heavy particles enhance the flow intermittency (see <ref>), promoting extreme events in the flow that are in turn responsible for rare but large particle accelerations. It is worth noticing that for Φ_V=10^-5 the latter effect is barely visible (see figure <ref>), in agreement with the weak flow modulation shown in <ref>. For light particles ρ_p/ρ_f=5 figure <ref> shows that the distribution of δ_τ u_p obtained for Φ_V=10^-5 and 10^-3 overlap almost perfectly, in agreement with the low level of backreaction.
The left panels of figure <ref> show that for ρ_p/ρ_f=5 the distributions obtained by means of PP-DNS and PR-DNS almost perfectly overlap for all time scales τ: for light particles the complete MRG equation properly predicts the Lagrangian intermittency of the particles dynamics. For heavier particles the match between the PP-DNS and the PR-DNS is rather good at large time scales, but differences are observed at small τ, where the PP-DNS does not predict the large tails for τ⪅ 2 τ_η. As discussed above, these extreme events are associated with the flow modulation which is not modelled in our PP-DNS.
The comparison between the PP-DNS and PR-DNS results is further detailed in figure <ref>, where the evolution of the δ_τ u_p distribution with τ is quantified by means of the excess kurtosis 𝒦_δ_τ u_p(τ) = ⟨δ_τ u_p^4 ⟩/⟨δ_τ u_p^2 ⟩^2 -3. At large scales 𝒦_δ_τ u_p≈ 0, in agreement with the Gaussian-like shape of the distribution, while it steeply increases at small scales. For ρ_p/ρ_f=5 the good agreement between the PP-DNS and the PR-DNS is again clear, with a small deviation for the Φ_V=10^-3 case, which is due to the non-zero flow modulation. For ρ_p/ρ_f=100 the agreement is good at large scales, while the three curves substantially deviate for small τ, in agreement with the larger tails of δ_τ u_p found in the PR-DNS.
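A minimal sketch of how the increment statistics and this excess kurtosis can be computed from stored particle trajectories is given below; the trajectory array layout, the constant sampling interval and the pooling over components and particles are illustrative assumptions.

```python
import numpy as np

def increment_excess_kurtosis(u_p, lags):
    """Excess kurtosis of Lagrangian velocity increments.
    u_p: array of shape (n_steps, n_particles, 3) with particle velocities
    sampled at a constant time step; lags: iterable of integer lags tau/dt."""
    kurtosis = []
    for lag in lags:
        du = (u_p[lag:] - u_p[:-lag]).reshape(-1)   # pool particles and components
        kurtosis.append(np.mean(du**4) / np.mean(du**2) ** 2 - 3.0)
    return np.array(kurtosis)

# usage sketch: a Gaussian signal returns values close to zero at all lags
rng = np.random.default_rng(1)
u_p = rng.standard_normal((2000, 100, 3))
print(increment_excess_kurtosis(u_p, lags=[1, 10, 100]))
```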
§.§ The particle-velocity structure function
We now consider the statistics of the particle-particle relative velocity δu_p = u_p(x_p,j(t),t) - u_p(x_p,i(t),t), where x_p,i(t) and x_p,j(t) denote the position of any two particles i and j at time t. The distribution of δu_p across all particle couples plays a key role in several theories regarding the tendency of particles to form clusters; see for example <cit.>, <cit.> and <cit.>.
Figure <ref> plots the second-order structure function of the particle velocity, i.e.
S_2,p(r) = ⟨( δu_p(r) ·r/r)^2 ⟩,
where r is the separation vector between particle i and j, and r = |r|. The top panel is for ρ_p/ρ_f=5, while the bottom panel is for ρ_p/ρ_f=100.
For ρ_p/ρ_f=5, S_2,p resembles the fluid second-order structure function S_2 (see the top panel in figure <ref>): light particles have small inertia and closely follow the fluid motion. S_2,p exhibits the r^2 scaling at small scales and the r^2/3 scaling predicted by the Kolmogorov theory in the inertial range of scales. In agreement with the negligible flow modulation, the results of the PP-DNS match almost perfectly those of the PR-DNS at these parameters.
The bottom panel of figure <ref> deals with the ρ_p/ρ_f=100 cases. A first observation is that the results from the PR-DNS with Φ_V=10^-5 and Φ_V=10^-3 do not collapse; this is consistent with the larger flow modulation observed for the larger volume fraction, and agrees with the above discussed results for the single-particle statistics. Notably, for heavy particles S_2,p differs from the fluid structure function S_2 at the small scales. According to both the PP-DNS and the PR-DNS, S_2,p does not exhibit a r^2 scaling at the smallest scales, being substantially flat at small r. The relative motion between couples of heavy particles placed at small distances r is substantially uncorrelated as well as decoupled from the small-scale fluid motion due to their large inertia. Note that the absence of the S_2,p∼ r^2 scaling indicates that at small scales the Eulerian particle velocity field cannot be described with a Taylor expansion. Notably, figure <ref> shows that S_2,p recovers the S_2-slope at larger r, exhibiting the classical Kolmogorov r^2/3 scaling in the inertial range of scales. This suggests that, despite the large inertia, the relative particle-particle velocity δu_p between two particles is driven by turbulent eddies having size comparable to r, provided that r is large enough.
When comparing the results of the PP-DNS with those of the PR-DNS, we note that the slope of S_2,p matches for small (r/η⪅ 5) and large (r/η⪆ 20) scales. For intermediate scales 5 ⪅ r/η⪅ 20, instead, the PR-DNS predict a steeper slope for both Φ_V=10^-5 and Φ_V=10^-3. The finite size of the particles does not influence S_2,p for large and small scales where r/D_p = 𝒪(100) and r/D_p =𝒪(1), but it does for intermediate scales r/D_p = 𝒪(10).
Figure <ref> sheds further light on the relative particle-particle velocity by plotting the distribution of δu_p ·r/r, i.e. the component of δu_p projected along the vector separating the two particles, for different values of r. When δu_p ·r/r>0, the two particles depart, whereas they get closer when δu_p ·r/r<0. We consider the case with ρ_p/ρ_f = 100 to provide further insights into the distribution of S_2,p, shown in the right panels of figure <ref>. A first observation is that, similarly to what was found for larger particles by <cit.>, the distribution of δu_p ·r/r is left-skewed, with a slightly positive mode and a long negative tail. The distribution becomes progressively flatter as r increases, in agreement with a lower level of correlation between the velocities of the two particles. When comparing the distributions for Φ_V = 10^-5 and Φ_V = 10^-3, figure <ref> shows that for all r the tails are shorter for the larger Φ_V, with the difference decreasing as r increases. This is consistent with the stronger flow modulation that globally leads to a weaker level of the flow fluctuations; see table <ref>. Also, in agreement with the evolution of S_2,p with r (see figure <ref>), for Φ_V=10^-5 the distribution obtained with the PP-DNS collapses nicely with that obtained with the PR-DNS at small scales (see the top panels), with some substantial differences arising for intermediate scales where the finite-size effects are relevant.
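For completeness, a sketch of how the particle-pair statistics of this subsection can be accumulated from a single snapshot of particle positions and velocities is reported below; the minimum-image treatment of periodicity, the binning and the array shapes are illustrative assumptions.

```python
import numpy as np

def pair_velocity_structure_function(x_p, u_p, box, r_bins):
    """S_2,p(r) = <((u_p,j - u_p,i) . r_ij/|r_ij|)^2> binned over pair
    separations. x_p, u_p: (n_particles, 3) positions and velocities in a
    triply periodic box of size box; r_bins: bin edges for |r_ij|."""
    n = x_p.shape[0]
    sums = np.zeros(r_bins.size - 1)
    counts = np.zeros(r_bins.size - 1)
    for i in range(n - 1):
        dr = x_p[i + 1:] - x_p[i]
        dr -= box * np.round(dr / box)            # minimum-image convention
        r = np.linalg.norm(dr, axis=1)
        du_par = np.einsum("ij,ij->i", u_p[i + 1:] - u_p[i], dr) / r
        idx = np.digitize(r, r_bins) - 1
        valid = (idx >= 0) & (idx < r_bins.size - 1)
        np.add.at(sums, idx[valid], du_par[valid] ** 2)
        np.add.at(counts, idx[valid], 1)
    return sums / np.maximum(counts, 1)
```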
§ THE COLLECTIVE MOTION OF THE PARTICLES
In this section we focus on the collective motion of the particles. First, we investigate whether Kolmogorov-size particles agglomerate and form clusters. Then, we relate the presence of the clusters with the tendency of particles to preferentially sample particular regions of the flow.
§.§ Clustering
Over the years, several tools have been used to characterise the spatial arrangement of the particles in the flow <cit.>. We use the Voronoï tessellation, which has been extensively used by several authors <cit.>. The position of each particle is identified with its centre and the computational domain is divided in subdomains, such that each grid cell is associated with the closest particle. The Voronoï volume V_V of each particle is thus defined as the collective volume of grid cells that are closer to it than to other particles. The inverse of the Voronoï volumes provides a measure of the local concentration: particles placed in void regions possess a large Voronoï volume, while particles that are part of a cluster have a small Voronoï volume. Based on these observations, the intensity of clustering of a suspension can be measured by comparing the distribution of its Voronoï volumes to that of a control consisting of an equivalent, uniformly random, non-overlapping suspension of particles. More intense clustering leads to a variance of the distribution of V_V that is larger than that of the control.
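A sketch of how the Voronoï volumes can be obtained for a periodic arrangement of particle centres, and compared with a uniformly random control, is shown below; it relies on SciPy's Voronoi and ConvexHull routines, handles periodicity by replicating the particles with their 26 images, and the control arrangement (which here does not enforce non-overlapping) as well as the array shapes are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes_periodic(x_p, box):
    """Voronoi volume of each particle in a triply periodic box, obtained by
    tessellating the particles together with their 26 periodic images and
    keeping only the (bounded) cells of the original points."""
    n = x_p.shape[0]
    shifts = np.array([(i, j, k) for i in (-1, 0, 1)
                                 for j in (-1, 0, 1)
                                 for k in (-1, 0, 1)], dtype=float)
    shifts = np.concatenate([[[0.0, 0.0, 0.0]],          # original copy first
                             shifts[np.any(shifts != 0.0, axis=1)]])
    points = np.concatenate([x_p + s * box for s in shifts])
    vor = Voronoi(points)
    volumes = np.empty(n)
    for i in range(n):
        region = vor.regions[vor.point_region[i]]
        volumes[i] = ConvexHull(vor.vertices[region]).volume
    return volumes

# clustering indicator: std of normalised volumes vs. a uniform random control
rng = np.random.default_rng(2)
box, n = 2.0 * np.pi, 500
sigma_rand = np.std(voronoi_volumes_periodic(rng.uniform(0.0, box, (n, 3)), box)
                    * n / box**3)
```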
Figure <ref> presents the clustering intensity for the different values of Φ_V and ρ_p/ρ_f considered. A first observation is that the PP-DNS underestimate the level of clustering for all cases. Our computations show that the discrepancy between the PP-DNS and the PR-DNS increases with ρ_p/ρ_f and/or Φ_V (see <ref> for further details).
We now move to the effect of the volume fraction Φ_V and of the particle density ρ_p/ρ_f. As expected, the level of clustering increases with Φ_V. When fixing Φ_V, instead, figure <ref> shows that heavier particles with ρ_p/ρ_f = 100 cluster more than lighter particles with ρ_p/ρ_f =5. For light particles, the low level of clustering σ/σ_rand≈ 1 agrees with the previous results of <cit.>, <cit.> and <cit.>, who considered larger particles 5 ≤ D_p/η≤ 123 over a wide range of Reynolds numbers 105 ≤ Re_λ≤ 430. Light particles have small inertia and are less likely to drift from the trajectories of the fluid elements. In contrast, the larger level of clustering observed when increasing the particle density from ρ_p/ρ_f = 5 to ρ_p/ρ_f=100 is not consistent with what is found for larger particles. For suspensions of particles with size in the inertial range of scales, indeed, <cit.> found that the level of clustering exhibits a non-monotonic dependence on ρ_p/ρ_f, with the maximum occurring at intermediate densities (see figure 24 of their paper), and the minimum being at the largest density ratio they considered, i.e. ρ_p/ρ_f = 100. However, they found that in the D_p-ρ_p space of parameters the maximum level of clustering moves towards larger ρ_p as D_p decreases, suggesting that the tendency of particles to cluster is driven by the Stokes number of the particles, rather than by their density. Accordingly, their data show that the level of clustering is maximum when St = 𝒪(1-10). This agrees with the early works of <cit.>, <cit.> and <cit.>, and it is consistent with our results.
The complete distributions of the Voronoï volumes are provided in figure <ref>. Compared to the corresponding random arrangement of particles, the tails of the V_V distribution are longer, and grow ever more so with increasing Φ_V and/or ρ_p/ρ_f. This is in agreement with the above discussion, since stronger clustering corresponds to a larger number of small and large Voronoï volumes. Note that the PP-DNS underestimation of the level of clustering is visualised in figure <ref> with the shorter tails. Figure <ref> can be used to determine which particles are part of clusters and which are part of void regions <cit.>. This information is used in <ref>, when discussing the particle preferential sampling. In the presence of clusters, two cross-over points arise between the V_V distributions of the actual suspension and that of the corresponding random arrangement of particles. Particles with a Voronoï volume smaller than the left cross-over point V_th,l are part of a cluster, while those with a Voronoï volume larger than the right cross-over point V_th,r are part of void regions. Particles that are part of a cluster and have Voronoï volumes that share at least one vertex are part of the same cluster. Note that, as the level of clustering increases, the threshold of the Voronoï volume that delimits the particles entrapped in clusters decreases.
A different type of information regarding the spatial arrangement of the particles can be provided by means of the radial distribution function g(r), also referred to as pair correlation function (see figure <ref>). It describes how the particle density varies as a function of the distance away from a reference particle. In other words, it is a measure of the probability of finding particles at a distance r relative to that of a homogeneous distribution. Following <cit.> and <cit.>, the radial distribution function is defined as
g(r) = ( N_s(r)/Δ V(r) ) / ( N_pa/V ),
where N_s(r) is the number of particle pairs separated by a distance between r-Δ r and r+Δ r, Δ V(r) is the volume of the spherical shell of inner and outer radius r-Δ r and r+Δ r respectively, N_pa is the total number of particle pairs present in the system N_pa = N(N-1)/2, and V is the volume of the computational domain. In a uniform distribution where overlapping between particles is allowed g(r)=1 for all r.
The radial distribution function (see figure <ref>) shows that for all cases the accumulation is maximum at the smallest distances. Note that the maximum of g(r) occurs at r ≈ D_p for the PR-DNS, as overlap between particles is not allowed. In agreement with the above discussion, the heavy particles with ρ_p/ρ_f = 100 show a larger level of clustering compared to the lighter ones with ρ_p/ρ_f = 5. The level of accumulation is also slightly larger for Φ_V = 10^-3 at all r. Similarly to what is observed with the Voronoï tessellation, figure <ref> shows that the PP-DNS underpredict the level of particle accumulation. As clearly visible for Φ_V = 10^-3 and ρ_p/ρ_f = 100, the discrepancy between the PP-DNS and PR-DNS is maximum at the smallest separations.
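The pair-counting definition above translates directly into a short routine; the sketch below assumes a triply periodic box, minimum-image distances and uniform radial bins, all of which are illustrative choices.

```python
import numpy as np

def radial_distribution_function(x_p, box, r_bins):
    """g(r) for particle positions x_p of shape (n, 3) in a periodic box,
    following the pair-counting definition given in the text."""
    n = x_p.shape[0]
    n_pairs_total = n * (n - 1) / 2
    counts = np.zeros(r_bins.size - 1)
    for i in range(n - 1):
        dr = x_p[i + 1:] - x_p[i]
        dr -= box * np.round(dr / box)               # minimum-image convention
        counts += np.histogram(np.linalg.norm(dr, axis=1), bins=r_bins)[0]
    shell_volumes = 4.0 / 3.0 * np.pi * (r_bins[1:] ** 3 - r_bins[:-1] ** 3)
    return (counts / shell_volumes) / (n_pairs_total / box**3)
```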
§.§ Preferential sampling
In the previous section we have shown that the solid phase is not homogeneously distributed in space, and that the particles exhibit a mild level of clustering. In this section we relate the presence of the clusters with the tendency of the particles to preferentially sample certain regions of the flow. In doing this, we also provide a possible explanation of the different level of clustering predicted by the PR-DNS and PP-DNS for the Φ_V = 10^-3 and ρ_p/ρ_f = 100 case. Over the years several mechanisms have been proposed as governing the particles' preferential sampling of the flow, most of them justified using the MRG equation and thus, strictly speaking, valid only in the context of sub-Kolmogorov particles.
In the following we use the centrifuge mechanism <cit.> to explain the tendency of Kolmogorov-size particles to form clusters.
In the limit where the point-particle approximation holds, <cit.> has shown that particles with large density tend to collect in regions of high strain rate and low vorticity. In the presence of a vortex, indeed, heavy particles cannot follow the flow streamlines because of their large inertia, and tend to drift from the vortex core. Similarly, in the case of a pure straining flow, heavy particles drift towards the stagnation point at the centre.
We quantify the tendency of particles to sample regions of high strain by using the second invariant of the deformation tensor (see <ref>), i.e. Q = - S_ijS_ij/2 + ω^2/4; recall that regions where Q is large and positive are regions of high vorticity (Q ∼ω^2/4), while regions where Q is large and negative are regions of high strain (Q ∼ - S_ijS_ij/2). We investigate the particle preferential sampling by computing the probability density function of Q at the particles' position <cit.>. For the sake of brevity, in this section we limit the investigation to the largest volume fraction Φ_V = 10^-3. For the PP-DNS the value of Q at each particle position is obtained after linear interpolation. For the PR-DNS, instead, the value of Q seen by each particle is estimated as the average value within a spherical shell centred on the particle and having a radius R_sh>R_p, where R_p=D_p/2 is the radius of the particles. It is important to note, however, that due to the particles' backreaction, in the case of PR-DNS the value of Q seen by each particle is actually the result of three different effects: (i) the larger scale flow properties of the region that the particle is sampling, (ii) the smaller scale influence of the particle on the surrounding flow, and (iii) the effect of nearby particles on the flow. Also, a suitable choice of R_sh should be made: when R_sh is too small, only the influence of the particle on the surrounding flow is considered <cit.>, while when R_sh is too large, spurious contributions that do not affect the particle location are instead captured. In order to obtain a more complete picture, we have tested different values of R_sh.
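A sketch of the shell-average estimate used for the PR-DNS is shown below; interpreting the shell as the region R_p < d ≤ R_sh, using the nearest grid cells without interpolation and the periodic distance computation are assumptions made for illustration.

```python
import numpy as np

def shell_averaged_Q(Q_field, x_p, R_p, R_sh, box):
    """Average the Eulerian Q field over the region R_p < d <= R_sh around
    each particle centre, with periodic (minimum-image) distances.
    Q_field: (N, N, N) array; x_p: (n_particles, 3) particle centres."""
    N = Q_field.shape[0]
    grid = (np.arange(N) + 0.5) * box / N
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    Q_l = np.empty(x_p.shape[0])
    for p, xp in enumerate(x_p):
        d = np.sqrt(((X - xp[0] + box / 2) % box - box / 2) ** 2
                    + ((Y - xp[1] + box / 2) % box - box / 2) ** 2
                    + ((Z - xp[2] + box / 2) % box - box / 2) ** 2)
        mask = (d > R_p) & (d <= R_sh)
        Q_l[p] = Q_field[mask].mean()
    return Q_l
```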
Figure <ref> shows the probability density function of Q_ℓ, i.e. Q computed at the particle position. We start by looking at the dependence of the PR-DNS results on the radius of the shell R_sh. For ρ_p/ρ_f = 5, the curves computed for values of R_sh between 2 ≤ R_sh/R_p ≤ 7 show an almost perfect overlap. This is consistent with the low level of modulation discussed in <ref>, and indicates that for these light particles the main contribution to Q_ℓ comes from the larger scale properties of the flow region sampled by the particles (note that due to the low volume fraction the influence of the particle-particle interaction is negligible). For heavy particles with ρ_p/ρ_f = 100, instead, the distribution of Q_ℓ largely varies with R_sh: as R_sh decreases, the left tail of the distribution becomes longer, meaning that particles are more likely to see large negative values of Q. These large negative values of Q are, at least partially, the result of the influence of the particles on the neighbouring flow; see the Q-R map in figures <ref>. Note that, for R_sh/R_p ⪆ 5 the distribution of Q_ℓ shows only marginal variations, as for these R_sh the large scale flow contribution dominates. This suggests that for ρ_p/ρ_f = 100 the influence of particles on the surrounding flow extends for less than 5R_p. Overall, for both light and heavy particles the distribution of Q_ℓ is left skewed and shows an almost null probability of positive Q_ℓ. At the present parameters, both PP-DNS and PR-DNS give evidence that Kolmogorov-size particles preferentially sample regions of high strain rate. This is also visible in the instantaneous snapshot shown in figure <ref>, with particles sampling regions with low ω^2. In other words, in the context of Kolmogorov-size particles the formation of clusters is, at least partially, governed by the centrifuge mechanism.
Let us now focus on the differences between the PR-DNS and PP-DNS results. For particles with ρ_p/ρ_f=5, the Q_ℓ distribution obtained with PP-DNS overlaps almost perfectly with that obtained with PR-DNS; the point-particle approximation predicts fairly well the tendency of light particles to sample the Q<0 regions of the flow. Note that this is consistent with the good agreement found in figure <ref> when discussing the distribution of the Voronoï volumes. For ρ_p/ρ_f = 100, instead, the Q_ℓ distribution obtained with the PP-DNS significantly deviates from that obtained with the PR-DNS. For all R_sh, the Q_ℓ distribution obtained with PR-DNS shows a shorter right tail and predicts a higher probability of negative Q: finite-size heavy particles are less/more likely to see positive/negative values of Q. Based on this, one may conclude that, at the present parameters, the PP-DNS underestimates the tendency of the particles to preferentially sample regions of high-strain, and this may explain the larger level of clustering predicted by the PR-DNS for the Φ_V = 10^-3 and ρ_p/ρ_f = 100 case (see figure <ref>).
A last comment regards the influence of ρ_p/ρ_f on the distribution of Q_ℓ. According to both PP-DNS and PR-DNS, heavier particles exhibit a somewhat larger tendency to sample regions with more negative Q, as shown by the left tail being longer for the ρ_p/ρ_f=100 case. Due to their larger inertia, indeed, heavier particles enhance the centrifuge mechanism, being more likely to drift from the high-vorticity regions of the flow.
To provide additional insights regarding the relation between the presence of clusters and the particle preferential sampling, figure <ref> shows the joint probability density function of Q_ℓ and V_V for Φ_V = 10^-3. Based on the above discussion, here we set R_sh/R_p = 3 for the computation of Q_ℓ, since it is large enough to account for the particle preferential location and small enough to avoid spurious contributions. We recall that according to <cit.>, particles with V_V ≤ V_th,l are part of a cluster, while particles with V_V ≥ V_th,r are in void regions of the flow. For all cases, the most negative values of Q_ℓ well correlate with small and intermediate Voronoï volumes, with V_V ⪅ V_th,r. Particles that are in void regions and are not part of a cluster are less likely to see large negative values of Q. This agrees with the above observation that the tendency of Kolmogorov-size particles to form clusters is governed by the centrifuge mechanism. We now focus on ρ_p/ρ_f=100 (see the bottom panels). The joint distribution shows that the larger probability of negative Q_ℓ predicted by the PR-DNS is concentrated at the smallest Voronoï volumes with V_V ⪅ V_th,l. Again, this shows that the higher level of clustering detected in this case with the PR-DNS well correlates with the larger tendency of finite-size particles to sample regions of the flow with intense strain.
§ CONCLUSION
We have investigated by direct numerical simulations the fluid-solid interaction of suspensions of Kolmogorov-size spherical particles moving in homogeneous isotropic turbulence. The work is based on both interface-resolved (PR-DNS) and one-way-coupled point-particle (PP-DNS) direct numerical simulations. In PR-DNS the presence of the particles is handled with the immersed boundary method introduced by <cit.>. In PP-DNS the motion of the particles is described by solving the complete Maxey-Riley-Gatignol equation <cit.>, including the time-history Basset term and the Faxén correction. The objective of the study is twofold. On one side, we aim to shed light on how Kolmogorov-size particles influence the organisation of the velocity fluctuations. Few works have indeed considered particles with D_p/η≈ 1 due to the intrinsic complexity of the problem: experiments require access to sub-Kolmogorov measurements, and simulations require an extremely fine grid with a resulting prohibitive computational cost. On the other side, we aim to assess the limits of the one-way-coupled PP-DNS that, despite the large number of works present in the literature, have not been completely addressed yet. For this reason, we consider a portion of the parameter space that is on the edge of the range of validity of the one-way coupled point-particle models <cit.>. The micro-scale Reynolds number is Re_λ≈ 140, being large enough to ensure a proper separation of scales. The volume fraction of the suspension has been set to the small values of Φ_V=10^-5 and Φ_V = 10^-3 to guarantee that the backreaction of the solid phase on the carrier flow is low. Two solid-to-fluid density ratios are considered, i.e. ρ_p/ρ_f = 5 and ρ_p/ρ_f = 100, to investigate the role of inertia.
The PR-DNS shows that at the present parameters the modulation of the flow is rather low and mainly involves the smallest scales. The modulated energy spectrum ℰ(κ) shows a multi-scaling behaviour: the classical -5/3 scaling in the inertial range of scales is indeed followed by a steeper -4 scaling, that resembles what has been observed by several authors in the context of bubbly flows <cit.>. Accordingly, the scale-by-scale energy budget shows two different regimes: in the inertial range of scales the fluid-structure interaction term is negligible and the non linear term equals the dissipation rate, i.e. Π(κ) ∼ϵ; at these scales energy is transferred from larger to smaller scales by means of the classical energy cascade described by Richardson and Kolmogorov. At small scales, where ℰ(κ) ∼κ^-4, the non linear term is negligible and the fluid-structure interaction term is balanced by the viscous dissipation, i.e. Π_fs(κ) ∼ D_v(κ); at these scales the energy injected into the flow by the particles is directly dissipated by viscosity.
The small-scale topology of the flow has been investigated by inspecting the influence of the particles on the invariants of the velocity gradient tensor A_ij = ∂ u_i/∂ x_j <cit.>. The effect of the solid phase on the eigenvalues of the strain-rate tensor shows that the presence of the particles favours axisymmetric compression rather than axisymmetric extension. The joint p.d.f. of the second and third invariants of A_ij reveals that particles mainly enhance events with axial strain and vortex compression. Accordingly, the inspection of the joint p.d.f. of the second invariants of the symmetric and antisymmetric parts of A_ij indicates that the presence of the particles favours dissipation events dominated by extensional motions rather than rotational ones.
The limits of the one-way-coupled point-particle models have been addressed by looking at the dynamics of the particles and at their collective motion. We find that the PP-DNS predicts fairly well the Lagrangian and Eulerian statistics of the particles' velocity field for the low-density case. For heavy particles, however, some discrepancies are observed, particularly for the larger volume fraction. These differences are due to a combination of the finite size of the particles and of the flow modulation, which are not accounted for in the PP-DNS. By using the Voronoï tessellation method and the radial distribution function, we find that the PP-DNS underpredicts the level of clustering; the discrepancy with the PR-DNS results increases with the volume fraction and the particle density. In an attempt to obtain a clearer picture, we have investigated the tendency of the particles to preferentially sample particular regions of the flow. By plotting the distribution of the second invariant Q of the fluid velocity gradient tensor at the particle position, we find that, according to both PP-DNS and PR-DNS, the particles preferentially sample regions of high strain rate. This suggests that the presence of the clusters is driven by the centrifuge mechanism introduced by <cit.>. In accordance with the larger level of clustering, we find that the PR-DNS shows a larger tendency of the particles to sample these regions of the flow compared to the PP-DNS. Note, however, that some care is needed when dealing with these results. In PR-DNS, indeed, the value of Q seen by each particle is the result of three different contributions that cannot be easily isolated, i.e. (i) the larger scale flow properties of the region that the particle is sampling, (ii) the smaller scale influence of the particle on the surrounding flow, and (iii) the effect of nearby particles on the flow.
By characterising the fluid-solid interaction of Kolmogorov-size particles in homogeneous isotropic turbulence, the present study aims to serve as a stepping stone for further investigations. A natural extension of this work is to use the present PR-DNS database to assess the limits of two-way-coupled PP-DNS, which account for the backreaction of the solid phase on the carrier flow; see for instance the models introduced by <cit.> and <cit.>. In addition, the present results may be used as a ground truth for studies in the spirit of <cit.>, which investigate the relevance of each term at the right-hand-side of the MRG equation in predicting the different statistics of the particles. This knowledge will help guide the choice of suitable models for engineering applications. Eventually, it would be of interest to investigate whether Kolmogorov-size particles modulate the energy spectrum also in the inertial range of scales at larger volume fractions, influencing thus the -5/3 scaling range and the classical energy cascade as observed for larger particles <cit.>. Despite the computational challenge that such a study would pose, the field would benefit from the investigation. Overall, the present results can be exploited for the development of improved point-particle models for the one- and two-way-coupling regimes.
§ ACKNOWLEDGMENTS
The authors acknowledge the computer time provided by the Scientific Computing and Data Analysis section of the Core Facilities at OIST and the computational resources on Fugaku provided by HPCI (project ID: hp230536). A.C. acknowledges Marildo Kola for discussions and suggestions.
§ FUNDING
The research was supported by the Okinawa Institute of Science and Technology Graduate University (OIST) with subsidy funding to M.E.R. from the Cabinet Office, Government of Japan.
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
§ THE BASSET TIME HISTORY FORCE
In the PP-DNS we rely on the second-order, memory-efficient algorithm developed by <cit.> to deal with the Basset time-history force.
The Basset force is split into two parts, denoted as window and tail. In particular, at time t̃ the first part consists of a numerical integration over the t̃-t_win≤ t ≤t̃ interval, thus considering N_w=t_win/Δ t previous steps. The second integral, instead, considers the -∞≤ t ≤t̃ - t_win interval and is approximated using recursive exponential functions.
The kernel K_B(t) in equation <ref> is thus replaced with an approximated kernel K(t) such that
K(t) = K_B(t) if t < t_win, and K(t) = K_tail(t) if t ≥ t_win,
with
lim_t → +∞ K_tail(t) = 0.
The Basset force, therefore, reads
𝐅_B(t) = 𝐅_B-win(t) + 𝐅_B-tail(t) = c_B ∫_t-t_win^t K_B(t-τ) g(τ) dτ + c_B ∫_-∞^t-t_win K_tail(t-τ) g(τ) dτ,
where c_B = 3/2 D_p^2 ρ_f √(πν) and g(t) = d(𝐮-𝐮_p + (1/6) (D_p/2)^2 ∇^2 𝐮) / dt. The window term is integrated in time using the diffusive Basset kernel. The integration exploits a modified trapezoidal method, which allows the kernel's singularity to be taken into account. Thus, following the work of <cit.>, the window contribution reads:
𝐅_B-win = 4/3 c_B √(Δ t)𝐠_0 + 4/3 c_B √(Δ t)∑_n = 1^N_w-1[ (n-1)√(n-1) - 2n√(n) + (n+1)√(n+1)] 𝐠_n
+ c_B √(Δ t)[ 4/3(N_w - 1)√(N_w-1) + (2-4/3N_w)√(N_w)]𝐠_N_w,
where g_n = g(t - n Δ t ) with n = 0,1,..., N_w. Here Δ t = t_win/N_w and N_w is the number of intervals into which the window is discretised.
As stated above, the tail term is integrated in a recursive manner and, exploiting exponential kernels, it reads:
𝐅_B-tail(t) = ∑_i=1^m a_i 𝐅_i(t) = ∑_i=1^m a_i ( 𝐅_i-di(t) + 𝐅_i-re(t) ),
where 𝐅_i-di is computed directly as
𝐅_i-di(t) = 2 c_B √(e t_i)exp(-t_win/2t_i) {𝐠_N [ 1 - ϕ( Δ t/2t_i) ] + 𝐠_N+1[ϕ(- Δ t/2t_i) - 1 ] },
and 𝐅_i-re is computed recursively as
𝐅_i-re(t) = exp( -Δ t/2 t_i) 𝐅_i(t-Δ t ).
Here, ϕ(z) = (exp(z)-1)/z, and for a given value of m the coefficients {a_i,t_i }_i=1^m are chosen to minimize the error. For a detailed explanation, the reader is referred to <cit.> and <cit.>. We choose m=10 and set the values of the a_i and t_i parameters to the ones proposed in the work of <cit.>. As suggested by <cit.> the point-particle equation is written in a semi-implicit manner to guarantee numerical stability when integrating in time.
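To make the scheme explicit, a schematic Python transcription of the window contribution and of one recursive tail update is given below. The helper names and array layout are hypothetical, and the fitted coefficients a_i and t_i are not reproduced here; this is only a sketch of the formulas written above, not the solver implementation.

```python
import numpy as np

def basset_window_force(g_hist, c_B, dt):
    """Window part of the Basset force using the modified trapezoidal rule
    written above. g_hist has shape (N_w + 1, 3) and stores g(t - n*dt)
    for n = 0..N_w (most recent sample first)."""
    N_w = g_hist.shape[0] - 1
    coef = np.empty(N_w + 1)
    coef[0] = 4.0 / 3.0
    n = np.arange(1, N_w)
    coef[1:N_w] = 4.0 / 3.0 * ((n - 1) * np.sqrt(n - 1)
                               - 2 * n * np.sqrt(n)
                               + (n + 1) * np.sqrt(n + 1))
    coef[N_w] = (4.0 / 3.0 * (N_w - 1) * np.sqrt(N_w - 1)
                 + (2.0 - 4.0 / 3.0 * N_w) * np.sqrt(N_w))
    return c_B * np.sqrt(dt) * (coef[:, None] * g_hist).sum(axis=0)

def basset_tail_kernel_update(F_i_prev, g_N, g_N1, c_B, dt, t_win, t_i):
    """One recursive update of the i-th exponential-kernel contribution F_i;
    the total tail force is then sum_i a_i * F_i, with the fitted (a_i, t_i)
    pairs taken from the literature."""
    phi = lambda z: np.expm1(z) / z
    F_di = (2.0 * c_B * np.sqrt(np.e * t_i) * np.exp(-t_win / (2.0 * t_i))
            * (g_N * (1.0 - phi(dt / (2.0 * t_i)))
               + g_N1 * (phi(-dt / (2.0 * t_i)) - 1.0)))
    F_re = np.exp(-dt / (2.0 * t_i)) * F_i_prev
    return F_di + F_re
```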
[Alised et al.(2002)Alised, Cartellier, Hainaux &
Lasheras]aliseda-etal-2002
Alised, A., Cartellier, A., Hainaux, F. & Lasheras,
J.C. 2002 Effect of preferential concentration on the settling
velocity of heavy particles in homogeneous isotropic turbulence. J.
Fluid Mech. 468, 77–105.
[Alméras et al.(2017)Alméras, Mathai, Lohse &
Sun]almeras-etal-2017
Alméras, E., Mathai, V., Lohse, D. & Sun, C.
2017 Experimental investigation of the turbulence induced by a
bubble swarm rising within incident turbulence. J. Fluid Mech.
825, 1091–1112.
[Auton et al.(1988)Auton, Hunt &
Prud’Homme]auton-hunt-1988
Auton, T. R., Hunt, J. C. R. & Prud’Homme, M. 1988
The force exerted on a body in inviscid unsteady non-uniform rotational
flow. J. Fluid Mech. 197, 241–257.
[Balachandar(2009)]balachandar-2009
Balachandar, S. 2009 A scaling analysis for
point–particle approaches to turbulent multiphase flows. Int. J.
Multiph. Flow 35, 801–810.
[Balachandar & Eaton(2010)]balachandar-eaton-2010
Balachandar, S. & Eaton, John K. 2010 Turbulent
Dispersed Multiphase Flow. Annu. Rev. Fluid Mech.
42 (1), 111–133.
[Benzi et al.(1993)Benzi, Ciliberto, Tripiccione, Baudet,
Massaioli & Succi]benzi-1993
Benzi, R., Ciliberto, S., Tripiccione, R., Baudet, C.,
Massaioli, F. & Succi, S. 1993 Extended self-similarity
in turbulent flows. Phys. Rev. E 48 (1), R29–R32.
[Betchov(1956)]betchov-1956
Betchov, R. 1956 An inequality concerning the production
of vorticity in isotropic turbulence. J. Fluid Mech. 1 (5),
497–504.
[Boivin et al.(1998)Boivin, Simonin &
Squires]boivin-simonin-squires-1998
Boivin, M., Simonin, O. & Squires, K.D. 1998
Direct numerical simulation of turbulence modulation by particles in
isotropic turbulence. J. Fluid Mech. 375, 235–263.
[Bragg & Collins(2014)]bragg-collins-2014
Bragg, A. D. & Collins, L. R. 2014 New insights from
comparing statistical theories for inertial particles in turbulence: I.
Spatial distribution of particles. New J. Phys. 16 (5),
055013, publisher: IOP Publishing.
[Bragg et al.(2015)Bragg, Ireland &
Collins]bragg-ireland-collins-2015
Bragg, A. D., Ireland, P. J. & Collins, L. R. 2015
Mechanisms for the clustering of inertial particles in the inertial range
of isotropic turbulence. Phys. Rev. E. 92 (2), 023029,
publisher: American Physical Society.
[Brandt & Coletti(2022)]brandt-coletti-2022
Brandt, L. & Coletti, F. 2022 Particle-Laden
Turbulence: Progress and Perspectives. Annu. Rev. Fluid
Mech. 54 (1), 159–189.
[Breugem(2012)]breugem-2012
Breugem, W.P. 2012 A second-order accurate immersed
boundary method for fully resolved simulations of particle-laden flows.
J. Comput. Phys. 231, 4469–4498.
[Burton & Eaton(2005)]burton-eaton-2005
Burton, T.M. & Eaton, J.K. 2005 Fully resolved
simulations of particle-turbulence interaction. J. Fluid Mech.
545, 67–111.
[Cannon et al.(2024)Cannon, Olivieri &
Rosti]cannon-olivieri-rosti-2024
Cannon, I., Olivieri, S. & Rosti, M.E. 2024
Spheres and fibers in turbulent flows at various reynolds numbers.
Phys. Rev. Fluids 9, 064301.
[Casas et al.(2018)Casas, Ferrer &
Oñate]casas-ferrer-onate-2017
Casas, G., Ferrer, A. & Oñate, E. 2018
Approximating the basset force by optimizing the method of van hinsberg
et al. J. Comput. Phys. 352, 142–171.
[Chevillard et al.(2003)Chevillard, Roux, Levêque, Mordant,
Pinton & Arneodo]chevillard-etal-2003
Chevillard, L., Roux, S. G., Levêque, E., Mordant,
N., Pinton, J.-F. & Arneodo, A. 2003 Lagrangian
Velocity Statistics in Turbulent Flows: Effects of
Dissipation. Phys. Rev. Lett. 91 (21), 214502.
[Chiarini et al.(2024)Chiarini, Cannon &
Rosti]chiarini-etal-2023
Chiarini, A., Cannon, I. & Rosti, M. E. 2024
Anisotropic mean flow enhancement and anomalous transport of finite-size
spherical particles in turbulent flows. Phys. Rev. Lett. 132,
054005.
[Chiarini & Rosti(2024)]chiarini-rosti-2024
Chiarini, A. & Rosti, M.E. 2024 Finite-size inertial
spherical particles in turbulence. J. Fluid Mech. 988,
A17.
[Coleman & Vassilicos(2009)]coleman-vassilicos-2009
Coleman, S. W. & Vassilicos, J. C. 2009 A unified
sweep-stick mechanism to explain particle clustering in two- and
three-dimensional homogeneous, isotropic turbulence. Phys. Fluids
21 (11), 113301.
[Costa et al.(2020)Costa, Brandt &
Picano]costa-brandt-picano-2020
Costa, P., Brandt, L. & Picano, F. 2020
Interface-resolved simulations of small inertial particles in turbulent
channel flow. J. Fluid Mech. 883, A54.
[Cundall & Strack(1979)]cundall-strack-1979
Cundall, P. A. & Strack, O. D.L. 1979 A discrete
numerical model for granular assemblies. Geotechnique 29 (1),
47–65.
[Davidson(2004)]davidson-2004
Davidson, P.A. 2004 Turbulence: An Introduction for
Scientists and Engineers. Oxford University Press.
[Davidson & Pearson(2005)]davidson-pearson-2005
Davidson, P. A. & Pearson, B. R. 2005 Identifying
Turbulent Energy Distributions in Real, Rather than Fourier,
Space. Phys. Rev. Lett. 95 (21), 214501, publisher:
American Physical Society.
[De Lillo et al.(2014)De Lillo, Cencini, Durham, Barry,
Stocker, Climent & Boffetta]delillo-etal-2014
De Lillo, F., Cencini, M., Durham, W. M., Barry, M.,
Stocker, R., Climent, E. & Boffetta, G. 2014
Turbulent Fluid Acceleration Generates Clusters of Gyrotactic
Microorganisms. Phys. Rev. Lett. 112 (4), 044502.
[Dorgan & Loth(2007)]dorgan-loth-2007
Dorgan, A.J. & Loth, E. 2007 Efficient calculation
of the history force at finite reynolds numbers. Int. J. Multiphase
Flow 33, 833–848.
[Druzhinin(2001)]druzhinin-2001
Druzhinin, O. A. 2001 The influence of particle inertia on
the two-way coupling and modification of isotropic turbulence by
microparticles. Phys. Fluids 13 (12), 3738–3755.
[Dung et al.(2023)Dung, Waasdorp, Sun, Lohse &
Huisman]dung-etal-2022
Dung, O.-Y., Waasdorp, P., Sun, C., Lohse, D. &
Huisman, S.G. 2023 The emergence of bubble-induced scaling in
thermal spectra in turbulence. J. Fluid Mech. 958, A5.
[Elghobashi & Truesdell(1993)]elghobashi-truesdell-1993
|
http://arxiv.org/abs/2409.03203v1 | 20240905025128 | An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification | [
"Zhuowei Chen",
"Lianxi Wang",
"Yuben Wu",
"Xinfeng Liao",
"Yujia Tian",
"Junyang Zhong"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification
Zhuowei Chen, Lianxi Wang, Yuben Wu, Xinfeng Liao, Yujia Tian, Junyang Zhong
=======================================================================================================
§ ABSTRACT
Sentiment classification (SC) often suffers from low-resource challenges such as domain-specific contexts, imbalanced label distributions, and few-shot scenarios. The potential of the diffusion language model (LM) for textual data augmentation (DA) remains unexplored, moreover, textual DA methods struggle to balance the diversity and consistency of new samples. Most DA methods either perform logical modifications or rephrase less important tokens in the original sequence with the language model. In the context of SC, strong emotional tokens could act critically on the sentiment of the whole sequence. Therefore, contrary to rephrasing less important context, we propose DiffusionCLS to leverage a diffusion LM to capture in-domain knowledge and generate pseudo samples by reconstructing strong label-related tokens. This approach ensures a balance between consistency and diversity, avoiding the introduction of noise and augmenting crucial features of datasets. DiffusionCLS also comprises a Noise-Resistant Training objective to help the model generalize. Experiments demonstrate the effectiveness of our method in various low-resource scenarios including domain-specific and domain-general problems. Ablation studies confirm the effectiveness of our framework's modules, and visualization studies highlight optimal deployment conditions, reinforcing our conclusions.
§ INTRODUCTION
Sentiment classification is a crucial application of text classification (TC) in Natural Language Processing (NLP) and can play an important role in multiple areas. However, NLP applications in domain-specific scenarios, such as disasters and pandemics, often face low-resource conditions, especially domain-specific problems, imbalanced data distribution, and data deficiency <cit.>. Recently, the advent of pre-trained language models (PLMs) and large language models (LLMs) has advanced the NLP field, giving birth to numerous downstream models based on them. On the one hand, these PLMs take downstream models to new heights of performance; on the other hand, since these models are highly data-hungry, they struggle to perform satisfactorily on most tasks under noisy, data-sparse, and low-resource conditions <cit.>.
To address these challenges, one effective approach is data augmentation (DA), which enriches the diversity of the dataset without explicitly collecting new data <cit.>. Classic rule-based DA methods employ logical modifications to obtain pseudo samples, such as EDA <cit.> and AEDA <cit.>. Model-based DA methods have developed rapidly as the transformer architecture has come to dominate the NLP field; most of these methods execute DA through corrupt-then-reconstruct (CTR), as shown by the examples in Table <ref>, namely the masked language model (MLM) <cit.> and GENIUS <cit.>, which applies BART as the sample generator. Also, <cit.> fine-tune GPT-2 to generate pseudo samples from label prompts, a method called LAMBADA.
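To make the CTR family concrete, the snippet below sketches how an MLM can be used to corrupt and then reconstruct a sentence; the model name, the single hand-placed mask, and the example sentence are illustrative assumptions rather than the exact recipe of any cited method.

```python
# Minimal sketch of MLM-style corrupt-then-reconstruct (CTR) augmentation.
# Assumptions: bert-base-uncased as the reconstruction model and one manually
# placed [MASK]; real CTR methods choose which tokens to corrupt automatically.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

corrupted = "the service was slow and the staff were [MASK]"

# Each candidate keeps the surrounding context but swaps in a new token,
# yielding one pseudo sample per completion.
for candidate in unmasker(corrupted, top_k=3):
    print(candidate["sequence"], round(candidate["score"], 3))
```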
However, these methods often struggle with domain-specific tasks and uneven label distributions. Some methods, like GENIUS, generate samples relying solely on pre-trained knowledge. Others, such as LAMBADA, though fine-tuned on the downstream dataset, generate samples conditioned only on the label itself, leading to strong label inconsistency, especially in data-sparse settings. Also, most CTR methods focus on replacing minor tokens in sequences while keeping the crucial tokens stationary to generate high-quality pseudo samples.
In contrast, we corrupt the most label-related tokens first and reconstruct the whole sentence conditioned on the context and a label prompt, as shown in Table <ref>, to diversify the key label-related tokens rather than the less important context. This approach not only augments sample diversity but also upholds consistency through selective masking. Inspired by DiffusionBERT <cit.>, which is designed to recover the most informative tokens from less informative ones, we propose DiffusionCLS. Additionally, building upon the findings of <cit.>, we further introduce consistency and diversity as crucial criteria for the quality of pseudo samples. High-quality pseudo samples must align with their labels and domain contexts, minimizing the introduction of noise. Integrating such samples enhances dataset diversity, thereby positively impacting model performance.
DiffusionCLS first fine-tunes a PLM with a diffusion objective to serve as a sample generator, and then trains the TC model in a noise-resistant manner. With the fine-tuned diffusion LM, we can feed in original samples whose crucial tokens have been corrupted and use the label as a generation prompt to obtain new samples. This method diversifies the original dataset by replacing strong label-related tokens and also steers the model towards producing high-quality pseudo samples that comply with the diversity-consistency rule.
The major contributions of this paper can be summarized as follows:
* We propose DiffusionCLS, which comprises a diffusion LM-based data augmentation module for SC, generating diverse but consistent pseudo samples by substituting diverse strong label-related contexts.
* We design and integrate a noise-resistant training method within the proposed DiffusionCLS, which significantly improves the SC model's performance when training with pseudo samples.
* Comprehensive experiments on domain-specific and multilingual datasets validate DiffusionCLS's superior performance in SC tasks. Detailed ablation studies highlight the effectiveness of its individual modules.
* A visualization study is conducted to discuss the diversity-consistency trade-off, which further validates the effectiveness of DiffusionCLS.
§ RELATED WORK
§.§ Low-Resource Text Classification
Motivated by the observation that data is often scarce in specific domains or emergent application scenarios, low-resource TC <cit.> has recently attracted considerable attention. Low-resource TC involves effectively categorizing text in scenarios where data is scarce or limited.
<cit.> and <cit.> have explored several methods for low-resource TC, which mainly involve traditional machine learning techniques to increase data quantity and diversity.
Recently, since the studies by <cit.> and <cit.> demonstrated the impressive performance of PLMs across various NLP tasks, a significant amount of work has leaned towards using PLMs to address low-resource TC problems <cit.>. However, PLMs require large amounts of annotated samples for fine-tuning; data sparsity significantly impacts model performance, and DA can mitigate such problems.
§.§ Textual Data Augmentation
To address low-resource challenges, various data augmentation methods have been proposed, including Easy-Data-Augmentation (EDA) <cit.>, Back-Translation (BT) <cit.>, and CBERT <cit.>. However, these methods, relying on logical replacements and external knowledge, often introduce out-of-domain knowledge and domain inconsistency. Moreover, these methods focus only on a specific original input, resulting in limited diversity.
Another type of data augmentation method includes representation augmentation approaches. These methods generate pseudo-representation vectors by interpolating or perturbing the representations of original samples. For instance, <cit.> proposed the groundbreaking technique known as mixup, and <cit.> recently proposed AWD, an advanced approach in textual DA.
Recent advancements in generative models have led to research on GPT-based paraphrasing data augmentation methods, such as LAMBADA <cit.>, which fine-tunes the GPT-2 model to generate new samples. However, LAMBADA generates new samples based solely on specific labels, neglecting information from the original samples. Another research direction involves not fine-tuning PLMs but combining the language modeling capability of pretrained models with the generative diversity of diffusion models <cit.>, which significantly improves the capability of the generative encoder, i.e., the MLM.
Diffusion LMs can generate new sequences from masked original sequences, which matches the goal of retaining key information while rephrasing secondary information in generative data augmentation. Therefore, on top of a diffusion LM, we propose DiffusionCLS, which simultaneously considers label and domain consistency and generates pseudo samples by partially paraphrasing strong label-related tokens. Extensive experiments verify the effectiveness of our method, which we hope can be extended to numerous NLP tasks.
§ METHODOLOGY
Sentiment classification models often overfit and lack generalization due to sample deficiency. To address this, we propose DiffusionCLS, consisting of Label-Aware Noise Schedule, Label-Aware Prompting, Conditional Sample Generation, and Noise-Resistant Training. A diffusion LM-based sample generator is integrated to generate new samples from the original dataset, enhancing TC model performance.
Figure <ref> illustrates DiffusionCLS. The diffusion LM-based sample generator generates new samples for data augmentation, while the TC model is trained for the specific TC task. Label-Aware Prompting and Label-Aware Noise Schedule are crucial for training the sample generator, and Conditional Sample Generation and Noise-Resistant Training contribute to the training of the TC model.
§.§ Sample Generator
To generate usable samples for further TC model training, two crucial criteria must be satisfied: diversity and consistency. We therefore expect the generated samples to be as diverse as possible while simultaneously remaining consistent with the TC label and the original domain. However, higher diversity also makes consistency harder to maintain.
As <cit.> demonstrated the potential of combining diffusion models with LMs for sequence generation, we build the sample generator on a discrete diffusion model from scratch. Precisely, we design the Label-Aware Noise Schedule for the diffusion LM, which helps the model generate diverse and consistent samples. Additionally, we integrate Label-Aware Prompting into the training regime, enabling the model to grasp label-specific knowledge, which subsequently serves as the guiding condition for sample generation. These two modules help the generator overcome the diversity-consistency challenge and excel in performance.
§.§.§ Label-Aware Noise Schedule
A proper noise schedule algorithm can guide the diffusion LM to capture more accurate semantic relations. Moreover, the effectiveness of time-agnostic decoding has been demonstrated, indicating that incorporating implicit time information in the noise schedule process is effective <cit.>. Since the generated samples are also expected to stay consistent with the TC label and the original domain, we propose the Label-Aware Noise Schedule.
The Label-Aware Noise Schedule begins by integrating a proxy model that has been fine-tuned for the TC task. This proxy model allows us to determine the importance of each token for the TC objective, quantified through the attention scores between the [CLS] token and the other tokens, which are derived from the last layer of the proxy model and calculated as follows.
w_i = (1/H) ∑_{h=1}^{H} s_i^h,
where s_i^h denotes the attention score of the i-th token in the h-th attention head, and w_i denotes the weight measuring the importance of the i-th token.
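A minimal sketch of how these token weights might be extracted with a BERT-style encoder; the model name, the use of the last attention layer, and the [CLS]-at-position-0 convention are illustrative choices, not necessarily the authors' exact implementation.

```python
# Sketch: token importance w_i from the [CLS] attention row, averaged over H heads.
# Assumptions: a BERT-style proxy model, last-layer attentions, [CLS] at position 0.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("the vaccine rollout made everyone hopeful", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions      # tuple of (batch, heads, seq, seq)

last_layer = attentions[-1][0]                   # (heads, seq, seq)
w = last_layer[:, 0, :].mean(dim=0)              # [CLS] row averaged over heads -> w_i
for token, weight in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), w):
    print(f"{token:12s} {weight.item():.3f}")
```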
Motivated by <cit.>'s DiffusionBERT, we incorporate an absorbing state into the LM noise schedule. In our method, during the masking transition procedure, each token in the sequence either remains unchanged or transitions to [MASK] with a certain probability. The transition probability of token i at step t can be denoted as:
q_t^i = 1 - t/T - λ · S(t) · w_i,
S(t) = sin(tπ/T),
where q_t^i represents the probability that token i remains unmasked at step t (so its masking probability is 1 - q_t^i), T denotes the total number of steps, and λ is a hyper-parameter introduced to control the impact of w_i. Tokens with larger label-related weights w_i therefore receive a smaller q_t^i and are corrupted earlier in the forward process.
By introducing the strong label-related weights w_i, the diffusion model is guided to recover the tokens with lower weights first and to recover the tokens that are strongly related to the classification task later.
The probability of a token being masked is tied to its attention score relative to the [CLS] token, reflecting its contribution to the TC objective. Figure <ref> shows that masking probabilities depend on the token's label-related information. Label-Aware Noise Scheduling guides the model to recover the most label-related key tokens from those less crucial to the classification task.
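A sketch of the schedule above, reusing the weights w from the previous snippet; reading q_t^i as the probability of staying unmasked, and the values of T and λ, are assumptions made for illustration.

```python
# Sketch: label-aware forward masking following q_t^i = 1 - t/T - λ·sin(tπ/T)·w_i.
# Assumptions: q_t^i is the keep (unmasked) probability; T=32 and lam=0.3 are
# illustrative; w is the attention-based weight vector from the previous sketch.
import math
import torch

def keep_probability(t: int, T: int, w: torch.Tensor, lam: float = 0.3) -> torch.Tensor:
    s_t = math.sin(t * math.pi / T)
    return (1.0 - t / T - lam * s_t * w).clamp(0.0, 1.0)

def corrupt(input_ids: torch.Tensor, t: int, T: int, w: torch.Tensor,
            mask_token_id: int, lam: float = 0.3) -> torch.Tensor:
    q = keep_probability(t, T, w, lam)        # larger w_i -> smaller q -> masked earlier
    keep = torch.bernoulli(q).bool()
    corrupted = input_ids.clone()
    corrupted[~keep] = mask_token_id
    return corrupted

# Example (building on the previous sketch):
# corrupted = corrupt(inputs["input_ids"][0], t=8, T=32, w=w,
#                     mask_token_id=tokenizer.mask_token_id)
```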
§.§.§ Label-Aware Prompting
However, such a noise schedule still poses a challenge to the conditional generation process. The diversity-consistency trade-off becomes more intense when important tokens are masked. With fewer unmasked tokens provided, the model naturally has a higher possibility of generating tokens that would break the label consistency.
To address this challenge, we propose Label-Aware Prompting, a method that offers supplementary conditional information during both the training and inference phases. This additional information aids the model in generating samples that uphold label consistency.
As Figure <ref> illustrates, after samples are masked in the noise schedule process, the labels of these samples are concatenated with their respective masked sequences.
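A small sketch of how the label prompt could be concatenated with the masked sequence; the template and separator token are assumptions, since the text only specifies that the label and the masked sequence are concatenated.

```python
# Sketch: Label-Aware Prompting -- prepend the class label to the masked text.
# Assumption: a simple "label [SEP] sequence" template; the authors' template may differ.
def build_prompted_input(label: str, masked_text: str, sep_token: str = "[SEP]") -> str:
    return f"{label} {sep_token} {masked_text}"

print(build_prompted_input("angry", "the [MASK] response to the lockdown was [MASK]"))
```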
§.§ Text Classification Model
In this work, we adopt encoder-based PLMs as our backbone models and fine-tune them for the TC task. Though the diffusion LM is strong enough to maintain consistency and diversity at the same time, the introduction of pseudo samples unavoidably introduces noise into the training of the TC model. To mitigate this problem, we design a contrastive learning-based noise-resistant training method, further improving the scalability of the proposed DiffusionCLS.
§.§.§ Reflective Conditional Sample Generation
We implement label prompting as a prior for the sample generator, akin to Label-Aware Prompting. Additionally, we introduce a novel reflective conditional sample generation module within the training loop of the TC model. This module dynamically generates masked sequences for the sample generator, simultaneously integrating insights from the label annotations and the attention scores derived from the TC model, calculating weights for each token with Eq. <ref>.
However, generating pseudo samples from varying degrees of masking results in varying degrees of flexibility for context replacement, thus impacting the consistency and diversity of the pseudo samples. Essentially, providing a proper amount of conditional information leads to plausible samples. We therefore perform multiple experiments to search for the best setting, which is further discussed in Section <ref>.
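The following sketch ties the previous pieces together to produce B pseudo samples for one training example; collapsing the generator's iterative denoising into a single masked-LM filling pass is a deliberate simplification, and the reuse of corrupt() and build_prompted_input() from the earlier snippets is an assumption of this illustration.

```python
# Sketch: reflective conditional sample generation (simplified).
# Assumptions: corrupt() and build_prompted_input() come from the earlier sketches,
# w matches the tokenization of `text`, and a single MLM pass stands in for the
# diffusion LM's iterative reverse process; special-token handling is omitted.
import torch
from transformers import AutoModelForMaskedLM

mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def generate_pseudo_samples(text, label, w, tokenizer, B=4, t=8, T=32):
    samples = []
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    for _ in range(B):
        corrupted = corrupt(ids, t, T, w, tokenizer.mask_token_id)
        prompted = build_prompted_input(label, tokenizer.decode(corrupted))
        enc = tokenizer(prompted, return_tensors="pt")
        with torch.no_grad():
            logits = mlm(**enc).logits
        filled = enc["input_ids"].clone()
        mask_positions = filled == tokenizer.mask_token_id
        filled[mask_positions] = logits.argmax(dim=-1)[mask_positions]
        samples.append(tokenizer.decode(filled[0], skip_special_tokens=True))
    return samples
```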
§.§.§ Noise-Resistant Training
The introduction of pseudo samples unavoidably introduces noise into the training of the TC model. To mitigate this problem, we design a contrastive learning-based noise-resistant training method, further improving the scalability of the proposed DiffusionCLS.
Figure <ref> demonstrates the Noise-resistant Training. Specifically, besides including supervision signals from labels of original and generated samples, we also guide the model to enlarge the gap between samples with different labels.
Consider a dataset comprising m distinct categories C = {c_1, c_2, ..., c_m}. From the original training set we obtain a batch of k samples with index set I = {1, 2, ..., k}: a batch of sentences S = {s_1, s_2, ..., s_k}, their corresponding labels L = [l_1, l_2, ..., l_k] with l_i ∈ C, and a negative set for each sample N_i = {j ∈ I | l_j ≠ l_i}. From this, we derive semantic representations H = {h_1, h_2, ..., h_k} from the TC model. Furthermore, employing the sample generator yields B new samples for each original sample s_i, denoted as G_i = {g_0^s_i, g_1^s_i, ..., g_B^s_i}, where g_0^s_i = s_i.
Contrastive Loss. To avoid amplifying the impact of noisy samples, we calculate the contrastive loss from the original samples only. With the aim of enlarging the gap between samples from different categories, the contrastive loss can be calculated as:
L_c = (1/K) log ∑_{i ∈ I} ∑_{j ∈ N_i} exp(sim(h_i, h_j)/τ),
where sim(·) denotes the cosine similarity function and τ is a hyper-parameter serving as a scaling term.
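A PyTorch sketch of the contrastive term; the batch layout (h as a K×d matrix of embeddings for the original samples) and the value of τ are assumptions for illustration.

```python
# Sketch of L_c: enlarge the gap between original samples with different labels.
# Assumptions: h is (K, d) embeddings of the K original samples, labels is (K,),
# tau is the scaling term from the equation above.
import torch
import torch.nn.functional as F

def contrastive_loss(h: torch.Tensor, labels: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    K = h.size(0)
    sim = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1)   # (K, K) pairwise sim
    negatives = labels.unsqueeze(1) != labels.unsqueeze(0)              # N_i: different-label pairs
    exp_sim = torch.exp(sim / tau) * negatives                          # keep only negative pairs
    return torch.log(exp_sim.sum() + 1e-12) / K                         # (1/K) log Σ_i Σ_j exp(sim/τ)
```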
Classification Loss. We also allow supervision signals to directly affect the training of the TC model through the cross-entropy loss, which can be denoted as:
L_e = -1/(K(B+1)) ∑_{i ∈ I} ∑_{b=0}^{B} ∑_{c ∈ C} y^i_{b,c} log(ŷ^i_{b,c}),
where y^i_{b,c} is the label indicator, and ŷ^i_{b,c} is the predicted probability of the b-th pseudo sample of original sample i belonging to class c.
Training Objective. From the two losses mentioned above, we formulate the overall training objective for the TC model, which can be denoted as:
L = L_c + L_e.
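A short sketch of how the two terms could be combined in a training step; the argument layout (separate tensors for the full augmented batch and for the original-only subset) is an assumption consistent with the description above.

```python
# Sketch of the overall noise-resistant objective L = L_c + L_e.
# Assumptions: logits/labels_all cover originals plus pseudo samples (for L_e),
# while h_orig/labels_orig cover only the original samples (for L_c);
# contrastive_loss() is the function from the previous sketch.
import torch.nn.functional as F

def noise_resistant_loss(logits, labels_all, h_orig, labels_orig, tau=0.5):
    ce = F.cross_entropy(logits, labels_all)           # L_e over originals + pseudo samples
    con = contrastive_loss(h_orig, labels_orig, tau)   # L_c over original samples only
    return ce + con
```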
§ EXPERIMENTS
§.§ Datasets and Baselines
To measure the efficiency of the proposed DiffusionCLS, we utilize both domain-specific and domain-general datasets comprising samples in Chinese, English, Arabic, French, and Spanish: the domain-specific SMP2020-EWECT[https://smp2020ewect.github.io], India-COVID-X[https://www.kaggle.com/datasets/surajkum1198/twitterdata], and SenWave <cit.>, and the domain-general SST-2 <cit.>. Additionally, to compare with the most cutting-edge low-resource TC methods, we utilize the SST-2 dataset to evaluate our method in the few-shot setting. Dataset statistics and descriptions are provided in Appendix <ref>.
To thoroughly explore and validate the capabilities of DiffusionCLS, we compare our method with a range of data augmentation techniques, from classic approaches to the latest advancements for low-resource TC. Specifically, we take Resample, Back Translation <cit.>, Easy Data Augmentation (EDA) <cit.>, SFT GPT-2 referenced to LAMBADA <cit.>, AEDA <cit.>, and GENIUS <cit.> as our baselines. Also, we compare our method in the few-shot setting with a couple of cutting-edge methods, namely, SSMBA <cit.>, ALP <cit.>, and SE <cit.>. More details of our baselines are demonstrated in Appendix <ref>.
§.§ Experiment Setup
We set up two experimental modes, entire data mode and partial data mode, to reveal the effectiveness of our method in different scenarios. Since severe imbalanced-distribution challenges exist, we adopt macro-F1 and accuracy as our major evaluation metrics.
Also, we conduct 5-shot and 10-shot experiments on SST-2 to investigate the performance of DiffusionCLS in extreme low-resource conditions. For evaluation, we use accuracy as the metric and report the average results over three random seeds to minimize the effects of stochasticity.
Additionally, we set up comparisons between different augmentation policies: generating new samples until the dataset distribution is balanced, and generating n pseudo samples for each original sample (n-samples-each), denoted as B/D and G/E in Table <ref>, respectively; we use n=4 in our experiments.
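A sketch of the two augmentation policies; the dataset layout and the generate() wrapper around the sample generator are placeholders for illustration.

```python
# Sketch of the two augmentation policies compared in the experiments.
# Assumptions: dataset is a list of (text, label) pairs and generate(text, label, n)
# returns n pseudo texts; B/D balances the label distribution, G/E adds n per sample.
from collections import Counter

def augment_balance(dataset, generate):
    counts = Counter(label for _, label in dataset)
    target = max(counts.values())
    new = []
    for label, count in counts.items():
        pool = [text for text, lab in dataset if lab == label]
        for i in range(target - count):
            new.append((generate(pool[i % len(pool)], label, 1)[0], label))
    return dataset + new

def augment_n_each(dataset, generate, n=4):
    return dataset + [(pseudo, label)
                      for text, label in dataset
                      for pseudo in generate(text, label, n)]
```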
Other related implementation details are described in Appendix <ref>.
§.§ Results and Analysis
The results of the entire-data-setting experiments on the SMP2020-EWECT and India-COVID-X datasets are mainly presented in Table <ref>, where we compare DiffusionCLS with other strong DA baselines. For experiments with partial-data and few-shot settings, results are mainly shown in Figure <ref> and Table <ref>.
Results in Entire Data Mode. As shown in Table <ref>, in general, the proposed DiffusionCLS outperforms most DA methods on the domain-specific datasets SMP2020-EWECT and India-COVID-X, especially under the G/E policy. Notably, DiffusionCLS positively impacts the TC model across all policies and datasets, which most baselines fail to do.
Our method excels in dealing with the challenge of uneven datasets. Under a severely uneven distribution in a domain-specific scenario, i.e., the SMP2020-EWECT dataset, most DA baselines fail to impact the classification model positively, except DiffusionCLS, which achieves the best performance. Our method also achieves competitive performance under data-sparse and domain-specific scenarios: on the India-COVID-X dataset, most DA methods bring improvements to the classification model, and DiffusionCLS ranks second.
Rule-based DA methods such as EDA either lack diversity, bringing overfitting problems, or rely solely on out-of-domain knowledge, thereby breaking consistency and impacting the task model negatively. As for model-based methods, although most of them significantly increase the diversity of the generated samples, they either generate samples depending solely on pre-training knowledge and in-context-learning techniques or generate samples conditioned only on the label itself, posing a challenge for maintaining consistency.
Results in Partial Data Mode and Few-shot Settings. As shown in Figure <ref> and Table <ref> in Appendix <ref>, the proposed DiffusionCLS method consistently improves the classification model. Notably, DiffusionCLS matches the PLM baseline performance on the Arabic SenWave dataset using only 50% of the data samples.
We also compare DiffusionCLS with the most cutting-edge few-shot methods on SST-2 dataset under 5-shot and 10-shot setting, the results are shown in Table <ref>. Though our method fails to surpass all few-shot baselines, it still achieves competitive performance with those designed for the few-shot task.
Since DiffusionCLS requires diffusion training to adapt to domain-specific tasks, extreme sample insufficiency may introduce noise, negatively impacting the model. However, our method positively impacts the TC model in most low-resource cases by effectively utilizing pre-trained and in-domain knowledge, from severe imbalanced label distribution to severe sample insufficiency.
§.§ Ablation Study
To validate the effectiveness of the modules in the proposed DiffusionCLS, we conduct ablation studies to examine the impact of each module. Table <ref> presents the results of the ablation experiments. In each row, one of the modules in DiffusionCLS has been removed for discussion, except for D.A., which removes all modules related to the generator and only applies noise-resistant training.
Overall, all modules in the proposed DiffusionCLS contribute positively to the TC model. Compared with the pure PLM model, applying DiffusionCLS leads to rises of 2.11% and 3.66% in F1 on the SMP2020-EWECT and India-COVID-X datasets, respectively.
The results of the ablation studies further validate that Label-Aware Prompting effectively improves the quality of the pseudo samples, and that Noise-Resistant Training reduces the impact of noisy pseudo samples.
§.§ Discussions and Visualizations
Generating pseudo samples from more masked tokens provides more flexibility for generation and tends to result in more diverse samples; however, it also increases the possibility of breaking consistency, since less information is provided.
To analyze the optimal number of masked tokens for generating new pseudo samples, we conduct experiments on the India-COVID-X dataset. During conditional sample generation, we gather masked sequences from 32 noise-adding steps, group them into sets of eight, and evaluate how varying masking levels impact the model's performance.
As shown in Figure <ref>, our observations indicate a unimodal trend. The model's performance improves with increased masking, peaks at the 4th group, and then declines with further masking. This reflects the diversity-consistency trade-off, more masked tokens create more diverse samples, but overly diverse samples may be inconsistent with original labels or domain.
To explore the relationship between generated pseudo samples and original samples, we conduct 2D t-SNE visualization. Figure <ref> shows that as masking increases, pseudo samples gradually diverge from the original samples, indicating increased diversity.
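A sketch of the kind of visualization used here; the embedding source, perplexity, and plotting details are illustrative choices.

```python
# Sketch: 2D t-SNE of original vs. pseudo sample embeddings.
# Assumptions: emb_orig and emb_pseudo are sentence embeddings (e.g., [CLS] vectors)
# collected from the TC model; perplexity=30 is an illustrative setting.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(emb_orig: np.ndarray, emb_pseudo: np.ndarray):
    points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
        np.vstack([emb_orig, emb_pseudo]))
    n = len(emb_orig)
    plt.scatter(points[:n, 0], points[:n, 1], label="original", alpha=0.6)
    plt.scatter(points[n:, 0], points[n:, 1], label="pseudo", alpha=0.6)
    plt.legend()
    plt.show()
```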
§ CONCLUSION
In this work, we propose DiffusionCLS, a novel approach tackling SC challenges under low-resource conditions, especially in domain-specific and uneven distribution scenarios. Utilizing a diffusion LM, DiffusionCLS captures in-domain knowledge to generate high-quality pseudo samples maintaining both diversity and consistency. This method surpasses various kinds of data augmentation techniques. Our experiments demonstrate that DiffusionCLS significantly enhances SC performance across various domain-specific and multilingual datasets. Ablation and visualization studies further validate our approach, emphasizing the importance of balancing diversity and consistency in pseudo samples. DiffusionCLS presents a robust solution for data augmentation in low-resource NLP applications, paving a promising path for future research.
§ LIMITATIONS
Like most model-based data augmentation methods, the performance of the data generator is also limited in extreme low-resource scenarios. This limitation persists because the generator still needs to be trained on the available data; even if the dataset could be expanded by including unlabeled data, data deficiency negatively impacts the data generator.
§ EXPERIMENT SETUP, IMPLEMENTATION, AND DATASET STATISTICS
§.§ Experiment Setup
The low-resource challenge in TC includes problems like insufficient annotated samples, domain-specific adaptation problems, and imbalanced distribution. To measure the capability of the proposed DiffusionCLS to mitigate these problems, we conducted experiments on three domain-specific datasets with respect to the problems mentioned above, as shown in Table <ref>.
§.§ Implementation
For implementation, we take bert-base-uncased[https://huggingface.co./google-bert/bert-base-uncased] and chinese-roberta-wwm[https://huggingface.co./hfl/chinese-roberta-wwm-ext] from the huggingface platform respectively for English and Chinese dataset training.
Also, the hyper-parameter settings of our work are presented in Table <ref> and Table <ref>.
§.§ Datasets
For our experiments, we utilize multilingual datasets, both domain-specific and domain-general, to evaluate the proposed DiffusionCLS. Data statistics and their challenges are demonstrated in Table <ref> and Table <ref>.
* SMP2020-EWECT[https://smp2020ewect.github.io]. This Chinese dataset includes 8,606 pandemic-related posts, categorized into neutral, happy, angry, sad, fear, and surprise, with highly imbalanced label distribution.
* India-COVID-X[https://www.kaggle.com/datasets/surajkum1198/twitterdata]. This dataset contains cleaned English tweets from India X platform on topics such as coronavirus, COVID-19, and lockdown. The tweets have been labeled into four sentiment categories with relatively balanced label distribution.
* SenWave<cit.>. This dataset includes about 5,000 English tweets and approximately 3,000 Arabic tweets in the specific domain of the pandemic and lockdown, which are annotated with sentiment labels. English-translated French and Spanish annotated samples are also included. We extract all single-label samples for our experiments.
* SST-2<cit.>. It includes 11,855 movie review sentences parsed by the Stanford parser, with 215,154 unique phrases annotated by three human judges.
§ BASELINES
* Non-Generative Methods
* SSMBA<cit.>. Uses a corruption and reconstruction function to augment data by filling in masked portions.
* ALP<cit.>. Employs Lexicalized Probabilistic Context-Free Grammars to generate syntactically diverse augmented samples.
* SE<cit.>. Utilizes a self-evolution learning-based mixup technique to create adaptive pseudo samples for training.
* AEDA<cit.>. Randomly inserts punctuation marks into the original sentences to produce new samples.
* Generative Methods
* GPT-2<cit.>. Fine-tunes GPT-2 with prompt-based SFT, prompting labels to generate pseudo samples.
* GENIUS<cit.>.
A conditional text generation model using sketches as input, which can fill in the missing context for a given sketch.
* Representation Augmentation Methods
* mixup<cit.>. Mixup is a representational DA technique that creates new training samples by linearly interpolating between pairs of examples and their labels.
* AWD<cit.>. AWD generates challenging positive examples for low-resource text classification by diluting strong positive word embeddings with unknown-word embeddings.
§ EXPERIMENT RESULTS WITH PARTIAL DATA MODE
The proposed DiffusionCLS method consistently enhances the classification model, achieving higher accuracy than the raw PLM on the SMP2020-EWECT dataset while using only 50% of the training data. Detailed results are shown in Table <ref>.
|
http://arxiv.org/abs/2409.02433v1 | 20240904042308 | From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption | [
"Thanh Nguyen",
"Luiz Fernando de Lima",
"Maria Teresa Badassarre",
"Ronnie de Souza Santos"
] | cs.SE | [
"cs.SE"
] |
University of Calgary
Calgary
AB
Canada
[email protected]
Università di Bari
Bari
BA
Italy
[email protected]
CESAR
Recife
PE
Brazil
[email protected]
University of Calgary
Calgary
AB
Canada
[email protected]
§ ABSTRACT
Context: The increasing integration of artificial intelligence and machine learning into software systems has highlighted the critical importance of ensuring fairness in these technologies. Bias in software can lead to inequitable outcomes, making fairness testing essential. However, the current landscape of fairness testing tools remains underexplored, particularly regarding their practical applicability and usability for software development practitioners. Goal: This study aimed to evaluate the practical applicability of existing fairness testing tools for software development practitioners, assessing their usability, documentation, and overall effectiveness in real-world industry settings. Method: We identified 41 fairness testing tools from the literature and conducted a heuristic evaluation and documentary analysis of their installation processes, user interfaces, supporting documentation, and update frequencies. Technical analysis included assessing configurability for diverse datasets. The analysis focused on identifying strengths and deficiencies to determine their suitability for industry use. Findings: Our findings revealed that most fairness testing tools show significant deficiencies, particularly in user-friendliness, detailed documentation, and configurability. These limitations restrict their practical use in industry settings. The tools also lack regular updates and possess a narrow focus on specific datasets, which constrains their versatility and scalability. Despite some strengths, such as cost-effectiveness and compatibility with several environments, the overall landscape of fairness testing tools requires substantial improvements to meet industry needs. Conclusion: There is a pressing need to develop fairness testing tools that align more closely with industry requirements, offering enhanced usability, comprehensive documentation, and greater configurability to effectively support software development practitioners.
LAY ABSTRACT. In today's world, we need to ensure that AI systems are fair and unbiased. Our study looked at tools designed to test the fairness of software to see if they are practical and easy for software developers to use. We found that while some tools are cost-effective and compatible with various programming environments, many are hard to use and lack detailed instructions. They also tend to focus on specific types of data, which limits their usefulness in real-world situations. Overall, current fairness testing tools need significant improvements to better support software developers in creating fair and equitable technology. We suggest that new tools should be user-friendly, well-documented, and flexible enough to handle different kinds of data, helping developers identify and fix biases early in the development process. This will lead to more trustworthy and fair software for everyone.
CCS Concepts: Social and professional topics → Computing profession.
From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption
Ronnie de Souza Santos
September 9, 2024
================================================================================================
§ INTRODUCTION
In recent years, the rapid growth of artificial intelligence has highlighted the importance of fairness in software engineering with respect to all phases of the development life-cycle. Software fairness is the ethical practice of ensuring that software systems, algorithms, and their outcomes are just, equitable, and unbiased across different groups of people, regardless of characteristics such as race, gender, ethnicity, or socioeconomic status <cit.>. In software engineering, this involves preventing discrimination, promoting inclusivity, and mitigating potential biases in the design, development, deployment, and usage of software applications and systems <cit.>.
Software fairness is essential across various societal aspects, especially as many organizations integrate machine learning into their processes, such as job interviewing, calculating credit scores, and assessing recidivism risk <cit.>. The goal of software fairness is to eliminate biases so that when given a set of inputs that differ only on sensitive attributes (e.g., race, sex, age), the outcomes should be similar without targeting specific individuals or groups <cit.>.
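To make this definition concrete, the snippet below sketches the kind of individual-fairness probe that fairness testing tools automate: flip only a sensitive attribute and compare outcomes. The predict() interface, feature names, and attribute values are hypothetical and do not describe any specific tool evaluated in this study.

```python
# Sketch: a simple individual-fairness probe derived from the definition above.
# Assumptions: predict() is any trained model's scoring function; the feature
# names and sensitive-attribute values are placeholders.
def fairness_probe(predict, example: dict, sensitive: str = "sex",
                   values=("male", "female")):
    outcomes = {}
    for value in values:
        variant = dict(example, **{sensitive: value})
        outcomes[value] = predict(variant)
    return outcomes  # a fair model should return (near-)identical outcomes

# Hypothetical usage with a credit-scoring model:
# fairness_probe(model.predict, {"income": 52000, "age": 37, "sex": "male"})
```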
Many researchers advocate treating software fairness as a first-class entity throughout the entire software engineering process <cit.>. However, despite its importance, fairness testing remains under-explored and under-studied. For instance, it plays an important role in ensuring software systems meet fairness requirements by detecting and reporting bugs or faults resulting from system biases for future improvement <cit.>. The challenge lies in testing software fairness, as biases can arise at any stage of the development process. For example, biases can originate from early stages, such as training data and biased algorithms, to later stages, where user interactions feed data back into the system, perpetuating further biases <cit.>.
As with any other type of testing in software development, the success of fairness testing is based on techniques, processes, and tools designed to support professionals in the industry <cit.>. Testing tools are important because they provide the necessary infrastructure to identify, analyze, and fix issues within software systems. Therefore, given the limited number of studies focused on software fairness testing, this paper explores 41 fairness testing tools documented in a previous literature review <cit.>. We used document analysis and heuristic evaluation to investigate how these tools could support software professionals in the industry who deal with fairness testing on a regular basis. Our goal is to answer the following research question: RQ. How can existing software testing tools available in the literature support practitioners in the industry in conducting fairness testing?
This study makes key contributions to practitioners regarding software development and software fairness. First, by exploring fairness testing tools documented in the literature, we provide an overview of the current landscape of these tools. Second, through document analysis and heuristic evaluation, we explore several characteristics of these tools in supporting fairness testing in practice, including their strengths and potential limitations. Therefore, our study has the potential to equip practitioners with valuable insights into selecting and utilizing the appropriate tools for fairness testing, thereby enhancing their ability to identify, analyze, and mitigate biases in software systems. Additionally, our findings contribute to the broader understanding of how existing tools can be adapted or improved to better support fairness practices in the software industry.
§ BACKGROUND
With the advancement of technology, computers and their applications have become smaller, more portable, affordable, and integrated into many parts of our lives <cit.>. For example, previously, we cared about the sound quality when buying a speaker, but now we also consider whether it supports virtual assistants. As the importance of computers and their applications has grown, so has the emphasis on the quality of these applications <cit.>.
Software testing is an essential element in the software development process, ensuring that products are bug-free before being released to the market. In this process, initially, software professionals used ad-hoc methods to manually detect errors, but today, advanced programs and testing tools have made the testing process easier and less time-consuming <cit.>. Testing tools have become increasingly popular compared to manual testing because they can integrate smoothly into the automated software testing process, reduce labor, and eliminate human errors <cit.>.
Today, testing tools are widely available, varying in capabilities and features, which can make it challenging to choose the right tool for a specific testing purpose. When adequate tools are not used during testing activities, the susceptibility to errors in the software increases, often reducing the potential success of the software <cit.>. To select the appropriate tools, testing professionals rely on several characteristics and metrics, including compatibility with existing systems, ease of integration, the range of testing features provided (such as functional, performance, and security testing), user-friendliness, cost, and the level of support and documentation available <cit.>.
Previous research has explored and compared testing tools across various contexts. These studies have focused on defining metrics to evaluate these tools <cit.> and have drawn comparisons among tools in specific testing scenarios <cit.>. However, the discussion on fairness testing is still in its early stages, and while several tools are being proposed, there are few studies focused on evaluating and comparing them based on defined features and metrics required by practitioners <cit.>. This gap highlights the need for systematic evaluations to ensure that the tools meet the practical needs of those working in the industry and effectively support fairness testing efforts.
§ METHOD
Our methodology builds upon established strategies from previous research that have explored and analyzed testing tools in various contexts <cit.>. We incorporated general characteristics commonly required by testing practitioners and adapted standard metrics to develop a tailored set of criteria for analyzing fairness testing tools available in the literature.
§.§ Tool Selection
In the initial step, we searched for fairness testing tools that are open-source and publicly available. The fairness testing tools were collected from <cit.> (Table <ref>). We began with 41 tools and then narrowed down the selection by evaluating each tool against a set of inclusion and exclusion criteria.
§.§ Inclusion and Exclusion Criteria
Our inclusion and exclusion criteria were designed to ensure a practical evaluation of fairness testing tools, while also maximizing the number of tools analyzed. The criteria allowed us to simulate real-world conditions faced by testing professionals, focusing on the relevance and applicability of our findings.
Inclusion Criteria. We included tools that could be successfully installed and provided adequate instructions for use within a 2-hour evaluation timeframe. This criterion was chosen to simulate the behavior of a testing professional who is exploring a fairness testing tool available for their work in the industry. We also included tools that, although lacking comprehensive instructions, could be successfully installed and utilized within the 2-hour timeframe with the help of additional guidance from associated research papers. This approach allowed us to include tools that might otherwise be excluded due to minor documentation deficiencies, thereby exploring a broader range of available tools.
Exclusion Criteria. The exclusion criteria were applied to filter out tools that would not be practical for industry use. Tools were excluded if they could not be installed due to outdated programming languages, dependency conflicts, or broken source links. This ensured that only tools compatible with current technology standards were considered. We also excluded tools that lacked sufficient instructions for proper usage, as inadequate documentation could hinder the tool's usability and effectiveness in real-world projects.
By applying these criteria, we expected that our analysis focused on tools that were not only available but also feasible for use in a professional setting. This approach enabled us to explore the maximum number of tools possible, providing a comprehensive overview of the current landscape of fairness testing tools.
§.§ Data Analysis
We employed two data analysis methods on tools that passed our predefined inclusion and exclusion criteria: a heuristic evaluation focused on usability and documentary analysis. These methods ensured a practical evaluation of the selected fairness testing tools.
Heuristic Evaluation.
Heuristic evaluation is a usability inspection method used to assess a system against predefined characteristics <cit.>. In our study, these characteristics were identified from previous studies as essential for evaluating testing tools <cit.>. Our heuristic evaluation was designed to mirror the initial tool selection process performed by professional testers when considering tools for potential use in a project.
Often, heuristic evaluations use practical guidelines to assess the usability of interfaces through walkthroughs and issue reporting. This approach is grounded in established rules. In this paper, we evaluated several key characteristics of testing tools to ensure they are practical and effective for professionals in real-world settings <cit.>. We looked at the ease of installation, including package requirements and compatibility. We explored the necessity for programming knowledge and script access, specifically, whether professionals need to modify the tool's scripts to run it effectively. The user-friendliness of the interface was evaluated for intuitiveness and accessibility. We checked the quality of documentation, particularly the presence and comprehensiveness of tutorial files like Readme.md instructions. The frequency of software updates was verified to ensure the tools remain current with technological advancements. Finally, we evaluated the versatility of each tool by examining its ability to handle various types of datasets and its adaptability to different scenarios. These characteristics are important for ensuring the tools are practical and effective for professionals in real-world settings.
Document Analysis.
Documentary analysis is a qualitative method used to review and interpret documents to explore and discuss a research problem. This method involves locating, interpreting, integrating, and drawing conclusions from valid documents such as guidelines, official reports, and academic papers <cit.>. For this study, we conducted document analysis on the tool documentation, including user manuals, Readme files, and other instructional materials, guidelines associated with the tools, and the research papers in which the tools were initially proposed or evaluated. This approach enabled us to extract and synthesize relevant information, providing a comprehensive understanding of each tool's capabilities and limitations.
To support our documentary analysis, we employed thematic analysis <cit.>, a method used to identify and analyze patterns (themes) within qualitative data. Thematic analysis is widely used in software engineering research, helping to identify cross-references among different data sources <cit.>. By systematically reviewing the documentation for each tool, we were able to highlight relevant characteristics and summarize our findings. This structured approach allowed us to gather detailed information, synthesize data from multiple sources, and draw conclusions about the tools' applicability and effectiveness in fairness testing, providing actionable insights for practitioners.
Agreement Process. Two researchers independently analyzed the 41 tools to ensure a thorough and unbiased evaluation. Each researcher conducted their assessment separately to avoid any influence from the other's findings. Following their independent analyses, a third researcher compiled and summarized the findings. Agreements between the two initial analyses were combined, and complementary findings were integrated to provide a comprehensive overview. Discrepancies or disagreements between the analyses were addressed in a consensus meeting, which could include the participation of a fourth researcher. This meeting facilitated a collaborative discussion to resolve differences and ensure a unified interpretation of the results. The process was straightforward, particularly because many tools could not be installed and, therefore, did not undergo heuristic evaluation and document analysis, which reduced the volume of information requiring agreement among researchers.
§ FINDINGS
After applying our inclusion and exclusion criteria, only five tools identified in the literature met the requirements for our heuristic and documentary analysis: LTDD, which focuses on identifying and excluding unfair features in binary classification models; Fairea, which uses mutation to balance fairness and accuracy in binary classification models; Scikit-fairness, a versatile toolkit integrated into Python for evaluating fairness in both classification and regression tasks; FairRepair, designed to transform decision trees and random forests into fairer models while maintaining accuracy; and RULER, which improves fairness in deep neural networks through a phased training process. Each tool has been tested on various datasets, demonstrating different strengths and capabilities in enhancing fairness in machine learning models.
An essential factor for these tools passing our criteria was successful installation and ease of use within the allocated two-hour evaluation timeframe. Many tools available in the literature failed to meet our criteria due to insufficient documentation or instructions, leading to unsuccessful installations or requiring extensive modifications to operate effectively. This lack of detailed guidance, together with the high modification effort required, led to the exclusion of these tools from further analysis, as testing professionals typically prioritize simplicity and ease of use when initially engaging with a tool. Below, our findings include both fundamental and specific characteristics of these tools, which are summarized in Table <ref>.
§.§ Basic Requirement Assessment of Fairness Testing Tools
By comparing five tools using the defined characteristics, we explored the basic characteristics of each one, highlighting the differences and commonalities among the tools in terms of installation, programming knowledge, and access to the code. For instance, among the five tools, only Scikit-fairness does not require downloading the source repository. For LTDD, Fairea, FairRepair, and RULER, users need to download the repository to install and use the tools. This involves setting up a virtual environment and installing necessary packages like NumPy and pandas.
Additionally, we noticed that all the tools necessitate some programming and machine learning knowledge, particularly in Python, to be used effectively. This requirement underscores the need for testing professionals to be familiar with coding and understanding machine learning principles to utilize these tools properly. Regarding access to the code, except for Scikit-fairness, which does not require users to access its source code, the other tools provide their source code for users. This means that users of LTDD, Fairea, FairRepair, and RULER can directly access and modify the underlying code if needed. However, to run Scikit-fairness, users must write their program to leverage its capabilities.
§.§ Usability Characteristics and Limitations of Fairness Testing Tools
We explored the general usability characteristics of each tool, focusing on ease of installation, the presence of a user-friendly interface, the quality and availability of tutorials or documentation, and the frequency of software updates. This analysis allowed us to understand the practical aspects and limitations of each tool and provide insights into its usability for testing professionals.
When assessing the installation process, we observed that all tools are easy to install, although they require virtual environments for some external libraries such as NumPy and Pandas. We also found that running Scikit-fairness requires writing a complete program to assess fairness, although it works with unrestricted classification datasets, whereas the other tools do not require setting up a new program but support only certain datasets.
In terms of user-friendly interfaces, none of the tools were designed with a user interface as part of their development, nor were they created to support professionals in designing and running fairness test cases or in testing fairness in general software. Instead, these tools are more focused on research or specific development contexts. This focus limits their practical application for professionals who require more accessible and streamlined tools to integrate fairness testing into their workflow.
For instructions or tutorials on how to use the tools, we conducted a review of the README files, installation guides, and associated research papers. Our findings indicate significant variability in the quality and comprehensiveness of the documentation provided for these tools. Specifically, LTDD lacks detailed information on how to execute the tool, making it challenging for users to get started. Similarly, RULER provides some details on usage but fails to offer comprehensive instructions, which can hinder effective utilization. In contrast, tools such as Scikit-fairness, Fairea, and FairRepair include well-structured and detailed documentation with clear instructions and examples that facilitate both installation and operation.
The frequency of updating the tools was another aspect of our heuristic evaluation. Regular updates are essential for maintaining the relevance and functionality of software tools. Upon examining the sources of each tool, we found that only Scikit-fairness receives regular updates that reflect ongoing improvements. This regular maintenance ensures that Scikit-fairness remains robust and adaptable to new challenges in fairness assessment. Conversely, other tools like LTDD, Fairea, FairRepair, and RULER have not been updated since the conclusion of their respective research studies. This lack of updates limits their long-term usability and effectiveness in real-world applications.
Finally, we assessed the versatility of each tool in handling various datasets. We observed that, with the exception of Scikit-fairness, all the tools are primarily designed for binary datasets, such as the commonly used Adult and COMPAS datasets. This limitation restricts their applicability in diverse real-world scenarios. Specifically, LTDD is not suitable for datasets containing multiple sensitive attributes, as it can only assess one sensitive attribute at a time. Fairea has a rigid requirement for dataset placement, necessitating that datasets be stored in a specific folder, which can be cumbersome. FairRepair's tree-based methodology demands significant time for training models, particularly with large datasets, making it less efficient. Similarly, RULER also generally requires a longer evaluation time, which can be a hindrance in time-sensitive applications.
§ DISCUSSIONS
In this study, we attempted to install and use 41 fairness testing tools available in the literature but were only successful with 5. These 5 tools were assessed based on their capabilities in supporting testing practitioners in industry tasks. Among them, Scikit-fairness emerged as the most effective tool.
Scikit-fairness offers several advantages: it is easy to install as a built-in package within the program, provides an instructive webpage to guide users, and receives frequent updates with refined versions. Additionally, while other tools are limited to specific binary classification datasets, Scikit-fairness can be used as a general tool to test most binary classification datasets. Regarding other tools, FairRepair performs well with small datasets due to its reliance on decision trees and random forests but needs enhancement to handle larger datasets efficiently. Similarly, RULER, which is based on a deep learning model, also needs optimization for faster execution times.
Looking at limitations, we observed that clearer instructions or tutorials that explain how to use the tools would significantly benefit practitioners. Only a few tools provide direct guidance from the source, while the majority often necessitates the reading of related research papers, which might not be efficient for practitioners. Regarding versatility, we noted that most tools are designed to handle only binary classification datasets, which limits their applicability. Scikit-fairness stands out as a more versatile option that is capable of handling a wider range of binary classification datasets. However, other tools have specific requirements or limitations that reduce their versatility.
§.§ What Practitioners Need from Software Fairness Tools
According to the literature, practitioners require testing tools with several key characteristics, including applicability, compatibility, configurability, cost-effectiveness, cross-platform support, easy deployment, ease of use, expandability, maintenance of test cases and test data, performance, popularity, and reporting features <cit.>. These characteristics ensure that tools can be integrated seamlessly into various development workflows, cater to diverse datasets and user requirements, and support continuous improvement and adaptation to new challenges, which now include fairness requirements.
The five tools we analyzed exhibit some of these desirable characteristics but also show significant gaps. Most tools are open-source, making them cost-effective, but they often lack user-friendly interfaces and detailed documentation, which impedes ease of use. Compatibility is generally high with Python environments, yet the tools tend to be narrowly focused, handling only specific types of datasets and scenarios. Configurability and expandability are limited, as many tools do not offer sufficient options for customization or scaling to larger and more complex datasets. Additionally, regular updates and maintenance are lacking in many of these tools, which raises concerns about their long-term viability and support.
Testing tools that support daily activities in the industry are essential for elevating fairness to a first-class entity in software development. By incorporating fairness tools into standard development workflows, practitioners can identify and address bias issues early in the development cycle, preventing them from being embedded into deployed systems. This approach helps ensure that software products are fair and equitable, reducing the risk of negative societal impacts and enhancing the credibility and trustworthiness of the technology. Moreover, tools that are easy to use, well-documented, and integrated into existing systems empower practitioners to consistently apply fairness principles, making fairness an integral part of software development rather than an afterthought.
However, our study demonstrates that this is not the current scenario. Currently, tools are primarily research-focused, with limited applicability to real-world industry settings. They often lack user-friendly interfaces, detailed documentation, configurability, and regular updates, which limits their usability and effectiveness for practitioners. To better support industry needs, fairness testing tools should be developed to integrate seamlessly into development workflows and provide comprehensive reporting features to help identify and mitigate bias early in the development process, making fairness a fundamental aspect of software development and leading to more equitable and trustworthy technology solutions.
§.§ Threats to Validity
Our analysis is inherently limited by the authors' specific expertise in software testing and software fairness. The first author is a junior practitioner working in the software development process for the government, including quality activities. The second author is a researcher specializing in empirical software engineering with a strong background in the human aspects of software development, including the perspectives of software practitioners such as testing professionals. The third author is a senior data scientist who is experienced in working on several machine learning projects. The fourth author has over eight years of professional experience in software testing, specializing in mobile testing, with over ten years of research in software quality and approximately three years focused on software fairness. This diverse combination of experience was leveraged to mitigate potential biases and provide a comprehensive evaluation of the tools.
Additionally, as a qualitative study that relies on heuristics based on previous studies and documentary analysis, our findings are subject to limitations inherent to these methods. The study focused exclusively on the official documentation of the tools, which may not capture all relevant information, and did not incorporate other data sources, such as the experiences of testers who used these tools. Nevertheless, this paper is designed for practitioners; hence, we chose a method that is straightforward and effective in producing actionable insights. Our goal was to inform practitioners about the available tools and highlight current needs for further research to enhance these tools in the context of software fairness.
§ CONCLUSIONS
The growing importance of fairness in software systems, particularly those powered by artificial intelligence and machine learning, has created a pressing need for effective fairness testing tools in the software development process. Motivated by this problem, this study aimed to evaluate the potential of existing fairness testing tools in the literature to be used by testing practitioners, to provide them with insights into the current landscape of tools, and to identify areas where research is needed to better support fairness in software development.
While we identified 41 fairness testing tools in the literature, only 5 could be evaluated. These tools demonstrated some strengths, such as cost-effectiveness and compatibility with Python environments, but they also exhibited notable deficiencies. Many tools lacked user-friendly interfaces, detailed documentation, and configurability, which limited their applicability in real-world industry settings. Additionally, the tools were generally not maintained regularly and focused narrowly on specific datasets, which constrained their versatility and scalability. Among the tools analyzed, Scikit-fairness emerged as the most robust, offering regular updates and broader applicability, but even it had room for improvement, particularly in ease of use and documentation.
The insights from this study suggest several opportunities for future research. Currently, there is a clear need for the development of fairness testing tools that are more aligned with industry requirements, particularly in terms of user-friendliness, comprehensive documentation, and configurability. Future research could focus on creating tools that assist testing professionals in designing test plans and implementing test cases, and that offer robust reporting features to help them identify and mitigate bias early in the development process. Additionally, exploring ways to expand the scope of these tools to handle more complex and diverse datasets could significantly enhance their utility. By addressing these gaps, researchers could contribute to making fairness a fundamental aspect of software development, ultimately leading to more equitable and trustworthy technology solutions.
http://arxiv.org/abs/2409.03349v1 | 20240905085119 | Spectral signatures of structural change in financial networks | Valentina Macchiati, Emiliano Marchese, Piero Mazzarisi, Diego Garlaschelli, Tiziano Squartini | physics.soc-ph | physics.soc-ph, q-fin.RM
Scuola Normale Superiore, P.zza dei Cavalieri 7, 56126 Pisa (Italy)
IMT School for Advanced Studies, P.zza San Francesco 19, 55100 Lucca (Italy)
Università degli Studi di Siena, P.zza S.Francesco 7-8, 53100 Siena (Italy)
IMT School for Advanced Studies, P.zza San Francesco 19, 55100 Lucca (Italy)
Lorentz Institute for Theoretical Physics, University of Leiden, Niels Bohrweg, 2, Leiden, NL-2333 CA, The Netherlands
INdAM-GNAMPA Istituto Nazionale di Alta Matematica `Francesco Severi', P.le Aldo Moro 5, 00185 Rome (Italy)
[email protected]
IMT School for Advanced Studies, P.zza San Francesco 19, 55100 Lucca (Italy)
Scuola Normale Superiore, P.zza dei Cavalieri 7, 56126 Pisa (Italy)
INdAM-GNAMPA Istituto Nazionale di Alta Matematica `Francesco Severi', P.le Aldo Moro 5, 00185 Rome (Italy)
§ ABSTRACT
The level of systemic risk in economic and financial systems is strongly determined by the structure of the underlying networks of interdependent entities that can propagate shocks and stresses. Since changes in network structure imply changes in risk levels, it is important to identify structural transitions potentially leading to system-wide crises. Methods have been proposed to assess whether a real-world network is in a (quasi-)stationary state by checking the consistency of its structural evolution with appropriate maximum-entropy ensembles of graphs. While previous analyses of this kind have focused on dyadic and triadic motifs, hence disregarding higher-order structures, here we consider closed walks of any length. Specifically, we study the ensemble properties of the spectral radius of random graph models calibrated on real-world evolving networks. Our approach is shown to work remarkably well for directed networks, both binary and weighted. As illustrative examples, we consider the Electronic Market for Interbank Deposit (e-MID), the Dutch Interbank Network (DIN) and the International Trade Network (ITN) in their evolution across the 2008 crisis. By monitoring the deviation of the spectral radius from its ensemble expectation, we find that the ITN remains in a (quasi-)equilibrium state throughout the period considered, while both the DIN and e-MID exhibit a clear out-of-equilibrium behaviour. The spectral deviation therefore captures ongoing topological changes, extending over all length scales, to provide a compact proxy of the resilience of economic and financial networks.
89.75.Fb; 89.65.-s; 02.50.Tt
Spectral signatures of structural change in financial networks
Tiziano Squartini
September 9, 2024
==============================================================
§ INTRODUCTION
As witnessed by two major recent crises (i.e. the global financial one in 2008 and the Covid-19 pandemic in 2020), having a clear understanding of the intricate structure of economic and financial systems - be they interbank <cit.>, interfirm <cit.> or trade networks <cit.> - is crucial, especially under stress conditions. The interconnectedness of economic and financial agents is, in fact, known to play a major role both during the phase of distress accumulation and after a crisis outbreak in sustaining and reinforcing shock propagation <cit.>. Back in 2008, banks sought to minimise individual risk by diversifying their portfolios: the simultaneous character of such diversification, however, led to an unexpected level of mutual dependency whose net consequence was that of amplifying the effects of individual defaults <cit.>.
A particularly relevant question addresses the (quasi-)stationarity of the temporal evolution of a given, real-world, economic or financial network, i.e. does the system undergo smooth, structural changes controlled by few driving parameters? Should this be the case, the behaviour of the network under analysis would be predictable solely in terms of the dynamics of those parameters; otherwise, the lack of stationarity may lead to abrupt - hence, uncontrollable - regime shifts.
The problem of the (non) stationarity of real-world, economic and financial networks has been addressed by studying whether they can be considered typical members of an evolving, (quasi-)equilibrium ensemble of graphs with given properties <cit.>: while such properties are treated as constraints - hence, assumed to be the `independent variables' undergoing an autonomous evolution - the other network properties are treated as `dependent variables' - hence, assumed to vary only as a consequence of the former ones. Broadly speaking, three different situations can occur:
∙ The observed network properties are systematically found to agree with what is expected from the evolution of the enforced constraints. In this case, one can conclude that the real-world network is (quasi-)stationary - and its evolution is driven by the dynamics of the constraints;
∙ The observed network properties slightly deviate from equilibrium expectations, but the deviating patterns remain coherent. In this case, the network can still be considered (quasi-)stationary - even if its evolution cannot be claimed to be completely driven by the chosen constraints (very likely, with the addition of other appropriate constraints, one would go back to the first situation);
∙ The observed network properties significantly deviate from the (quasi-)equilibrium expectations, showing different deviating patterns at different times. In this case, the network can be considered non-stationary.
Analyses of this kind have indeed led to individuate early-warning signals of upcoming, critical events, although the indicators considered so far have just involved dyadic and triadic `debt loops' with different levels of reciprocity <cit.>. The present paper aims to extend the study of early-warnings' emergence by considering closed walks of any length at once. Such a request can be handled by exploiting the theorem stating that a_ij^(n), i.e. the generic entry of the n-th power of the adjacency matrix 𝐀, counts the total number of walks of length n connecting node i with node j: closed walks correspond to the diagonal entries, hence all `debt loops' can be compactly accounted for by carrying out a double sum, over n and over the diagonal entries of 𝐀^n.
From a computational perspective, such a calculation can be greatly sped up by proxying the trace of the adjacency matrix with its principal eigenvalue λ_1, which, then, becomes the only relevant statistics whose z-score needs to be explicitly calculated. Such an appealing simplification, however, comes at a price: the expressions of ⟨λ_1⟩ and Var[λ_1], i.e. of the expected value of λ_1 and its variance, are explicitly known in few cases only, i.e. i) when the random network model is the binary, undirected version of the Erdös-Rényi (ER) model <cit.>; ii) when the random network model is the Chung-Lu (CL) model, either in its binary, undirected version <cit.> or in its binary, directed version <cit.>; iii) if the edges are treated as i.n.i.d. (independent, non-identically distributed) random variables, each one obeying a different Poisson distribution <cit.>; iv) if the considered graphs are infinitely large, locally tree-like and directed <cit.>.
Let us remark that the existing estimations are obtained under hypotheses that are rarely satisfied by empirical configurations. For instance, the presence of cycles contradicts the assumption of observing locally tree-like structures, and the heterogeneity of the (in- and out-) degree distributions severely limits the applicability of the CL model. On a more general ground, the vast majority of the approaches above requires the knowledge of the (in- and out-) degree sequences, i.e. of a kind of information that data confidentiality issues make often (if not always) unavailable; moreover, none provides estimations of a network's spectral properties taking its weighted marginals (i.e. in-strengths and out-strengths) as the sole input.
Motivated by the evidence that general results about the statistical properties of a network principal eigenvalue are currently missing, we propose an approach to their study that is applicable under any random network model. The generality of our approach comes at a price: our results rest upon the validity of several approximations that need to be explicitly verified whenever a particular configuration is studied. Still, although our assumptions may appear quite drastic, our approach works remarkably well for directed networks, be they binary (BDN) or weighted (WDN).
A BDN is described by an adjacency matrix 𝐀 whose generic entry satisfies the relationships a_ij=1 if a link points from node i towards node j and a_ij=0 otherwise. Moreover, a_ij will, in general, differ from a_ji. A WDN is described by an adjacency matrix 𝐖 whose generic entry satisfies the relationships w_ij>0 if a weighted link points from node i towards node j and w_ij=0 otherwise. Moreover, w_ij will, in general, differ from w_ji.
§ DETECTING STRUCTURAL CHANGES
Structural changes can be spotted by comparing the empirical abundance of a quantity of interest with the corresponding expected value, calculated under a properly defined benchmark model[Hereby, the expressions `random network model', `benchmark model' and `null model' will be used interchangeably.]. To this aim, a very useful indicator is represented by the z-score
z[X]=(X-⟨ X⟩)/σ[X]
where X is the empirical abundance of the quantity X, ⟨ X⟩ is its expected occurrence under the chosen null model and σ[X]=√(⟨ X^2⟩-⟨ X⟩^2) is the standard deviation of X under the same null model. In words, z[X] quantifies the number of standard deviations by which the empirical abundance of X differs from the expected one. After checking for the Gaussianity of X under the null model - often ensured by the fact that X is the sum of several random variables - a result |z[X]|≤2 (|z[X]|≤3) indicates that the empirical abundance of X is compatible with the expected one, at the 5% (1%) level of statistical significance; on the other hand, a value |z[X]|>2 (|z[X]|>3) indicates that the empirical abundance of X is not compatible with the expected one, at the same significance level. In the latter case, a value z[X]>0 (z[X]<0) indicates the tendency of the pattern to be over-represented (under-represented) in the data with respect to the chosen benchmark.
§.§ Dyadic signature of structural changes
Moving from the observation that
∑_j=1^Na_ija_jk=[𝐀^2]_ik
we will pose
X =∑_i=1^N∑_j=1^Na_ija_ji=∑_i=1^N[𝐀^2]_ii=Tr[𝐀^2],
noticing that the total number of links having a partner pointing in the opposite direction coincides with the trace of the second power of the adjacency matrix. The position above leads to
⟨ X⟩=∑_i=1^N∑_j=1^Np_ijp_ji=∑_i=1^N[𝐏^2]_ii=Tr[𝐏^2]
and to
σ[X] =√(Var[∑_i=1^N∑_j=1^Na_ija_ji])
=√(Var[2·∑_i=1^N∑_j(>i)a_ija_ji])
=√(4·∑_i=1^N∑_j(>i)Var[a_ija_ji])
=2·√(∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji))
where 𝐏≡{p_ij}_i,j=1^N is the matrix of probability coefficients induced by the chosen null model, and the third passage follows from the evidence that the dyads induce independent random variables (see also Appendix A).
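As an illustration, the dyadic z-score can be computed directly from the observed adjacency matrix and the null-model probability matrix. The following minimal Python sketch (not part of the original analyses; the function name and the toy homogeneous null model are purely illustrative) assumes dense, zero-diagonal numpy arrays A and P.

```python
import numpy as np

def dyadic_zscore(A, P):
    X = np.trace(A @ A)                    # observed Tr[A^2], i.e. twice the number of reciprocated dyads
    X_exp = np.trace(P @ P)                # expected value under the null model
    q = P * P.T                            # q_ij = p_ij * p_ji
    iu = np.triu_indices_from(q, k=1)      # dyads i < j are independent
    var = 4.0 * np.sum(q[iu] * (1.0 - q[iu]))
    return (X - X_exp) / np.sqrt(var)

# toy usage with a homogeneous null model
rng = np.random.default_rng(0)
N, p = 50, 0.1
A = (rng.random((N, N)) < p).astype(float); np.fill_diagonal(A, 0.0)
P = np.full((N, N), p); np.fill_diagonal(P, 0.0)
print(dyadic_zscore(A, P))
```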
§.§ Triadic signature of structural changes
Analogously to the dyadic case, let us move from the observation that
∑_j=1^N∑_k=1^Na_ija_jka_kl=[𝐀^3]_il
and pose
X =∑_i=1^N∑_j=1^N∑_k=1^Na_ija_jka_ki=∑_i=1^N[𝐀^3]_ii=Tr[𝐀^3],
noticing that the total number of triangles is proportional to the trace of the third power of the adjacency matrix. The position above leads to
⟨ X⟩=∑_i=1^N∑_j=1^N∑_k=1^Np_ijp_jkp_ki=∑_i=1^N[𝐏^3]_ii=Tr[𝐏^3]
and to
σ[X] =√(Var[∑_i=1^N∑_j=1^N∑_k=1^Na_ija_jka_ki])
=√(Var[3·∑_i=1^N∑_j(>i)∑_k(>j)(a_ija_jka_ki+a_ika_kja_ji)])
=√(9·Var[∑_i=1^N∑_j(>i)∑_k(>j)(a_ija_jka_ki+a_ika_kja_ji)])
=3·√(Var[∑_i=1^N∑_j(>i)∑_k(>j)(a_ija_jka_ki+a_ika_kja_ji)])
where 𝐏≡{p_ij}_i,j=1^N is the matrix of probability coefficients induced by the chosen null model. Since triads do not induce independent random variables, the explicit expression of σ[X] is derived in Appendix B. Let us notice that, in case the considered networks are sparse, one can simplify the expression above upon posing
σ[X] ≃3·√(∑_i=1^N∑_j(>i)∑_k(>j)Var[a_ija_jka_ki+a_ika_kja_ji])
with
Var[a_ija_jka_ki+a_ika_kja_ji] ≃Var[a_ija_jka_ki]+Var[a_ika_kja_ji]
and
Var[a_ija_jka_ki] =p_ijp_jkp_ki(1-p_ijp_jkp_ki),
Var[a_ika_kja_ji] =p_ikp_kjp_ji(1-p_ikp_kjp_ji).
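A direct implementation of the sparse-case triadic z-score is sketched below; as above, A and P are assumed to be dense, zero-diagonal numpy arrays, and the brute-force triple loop (cubic in N) is meant for illustration only.

```python
import numpy as np

def triadic_zscore_sparse(A, P):
    X = np.trace(A @ A @ A)                # observed Tr[A^3]
    X_exp = np.trace(P @ P @ P)            # expected Tr[P^3]
    var, N = 0.0, A.shape[0]
    for i in range(N):
        for j in range(i + 1, N):
            for k in range(j + 1, N):
                t1 = P[i, j] * P[j, k] * P[k, i]   # cycle i -> j -> k -> i
                t2 = P[i, k] * P[k, j] * P[j, i]   # cycle i -> k -> j -> i
                var += t1 * (1.0 - t1) + t2 * (1.0 - t2)
    return (X - X_exp) / (3.0 * np.sqrt(var))
```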
§.§ Spectral signature of structural changes
Let us now enlarge the set of patterns to be considered for detecting structural changes by accounting for closed walks of any length.
§.§.§ The trace of the matrix exponential
Let us start by considering the N× N adjacency matrix 𝐀 of a BDN, with a_ii=0, ∀ i: the following relationship
𝐈+𝐀+𝐀^2/2!+𝐀^3/3!+…+𝐀^n/n!+…=∑_k=0^∞𝐀^k/k!≡ e^𝐀,
where 𝐀^0≡𝐈, defines the exponential of 𝐀 <cit.>. Let us, now, calculate the trace of such a matrix exponential: since it is invariant under diagonalisation, one obtains that
Tr[e^𝐀]=∑_k=0^∞Tr[𝐀^k]/k!=∑_k=0^∞Tr[Λ^k]/k!=Tr[e^Λ]
where Λ is the matrix obtained upon diagonalising 𝐀 (see also Appendix AppCC). As the number of walks of length k starting from and ending at the same vertex can be counted by computing the trace of the k-th power of the adjacency matrix, i.e. Tr[𝐀^k]=∑_i=1^N[𝐀^k]_ii, eq. (<ref>) relates the number of walks of any length characterising a binary network 𝐀 with its spectral properties. Such a quantity, named Estrada index, represents a graph invariant quantifying the communicability of a given network, i.e. the `participation' of each node to the walks present in the network itself <cit.>.
Analogously, given the N× N adjacency matrix 𝐖 of a WDN with w_ii=0, ∀ i, the relationships
𝐈+𝐖+𝐖^2/2!+𝐖^3/3!+…+𝐖^n/n!+…=∑_k=0^∞𝐖^k/k!≡ e^𝐖
and
Tr[e^𝐖]=∑_k=0^∞Tr[𝐖^k]/k!=∑_k=0^∞Tr[Ω^k]/k!=Tr[e^Ω],
where Ω is the matrix obtained upon diagonalising 𝐖, hold true. As a result, concerning the number of walks of length k starting and ending at the same vertex can be extended to weighted networks, eq. (<ref>) generalises the Estrada index to weighted configurations.
Let us explicitly notice that
∙ the absence of self-loops, i.e. Tr[𝐀]=Tr[𝐖]=0, implies that, whenever present, complex eigenvalues must appear in conjugate pairs;
∙ eq. (<ref>) implies that Tr[e^𝐀]≥0, i.e. that the trace of the exponential of 𝐀 is a real, non-negative number. Analogously, eq. (<ref>) implies that Tr[e^𝐖]≥0, i.e. that the trace of the exponential of 𝐖 is a real, non-negative number;
∙ When computing the number of closed walks of a certain length, edges must be counted repeatedly. For example, the closed walks of length 4 in a binary, directed network are i) the proper cycles like i→ j→ k→ l→ i; ii) the pairs of dyads like i→ j→ k→ j→ i; iii) the single dyads like i→ j→ i. Equations (<ref>) and (<ref>) compactly account for all of them.
The third observation has relevant implications for economic and financial applications: when studying the propagation of a shock, in fact, it is extremely important to account for all possible patterns along which distress can propagate, including the ones leading to multiple reverberations among the same nodes <cit.>. As (combinations of) cycles are supposed to lower the resilience of financial networks by amplifying external shocks <cit.>, eqs. (<ref>) and (<ref>) suggest the trace of the exponential matrix to represent a compact proxy of the stability of the network itself.
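For illustrative purposes, the (binary or weighted) Estrada index can be evaluated either via the matrix exponential or via the spectrum; the short Python sketch below (function names and parameters are illustrative) makes the equivalence explicit on a toy random digraph.

```python
import numpy as np
from scipy.linalg import expm, eigvals

def estrada_index(M):
    # Tr[e^M], computed from the matrix exponential
    return float(np.trace(expm(M)).real)

def estrada_index_spectral(M):
    # Tr[e^M], computed as the sum of exp(eigenvalues);
    # complex eigenvalues come in conjugate pairs, so the sum is real
    return float(np.sum(np.exp(eigvals(M))).real)

rng = np.random.default_rng(1)
N = 30
A = (rng.random((N, N)) < 0.1).astype(float); np.fill_diagonal(A, 0.0)
print(estrada_index(A), estrada_index_spectral(A))   # the two values coincide
```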
§.§.§ Expected value of the trace of the matrix exponential
Let us now move to analyse the expected value of the quantity Tr[e^𝐀]=Tr[e^Λ], under a properly-defined benchmark model. We will suppose the latter one to be described by an N× N matrix 𝐏 whose generic entry p_ij, with i≠ j, indicates the probability that nodes i and j are connected via a directed link. Following the same steps as above, we find
Tr[e^𝐏]=∑_k=0^∞Tr[𝐏^k]/k!=∑_k=0^∞Tr[Π^k]/k!=Tr[e^Π]
where Π is the matrix obtained upon diagonalising 𝐏.
Let us now inspect the relationship between eq. (<ref>) and eq. (<ref>). Since we are considering binary adjacency matrices, the matrix 𝐏 satisfies the relationship ⟨𝐀⟩=𝐏, a compact notation standing for ⟨ a_ij⟩=p_ij, ∀ i≠ j. To extend this result to higher powers of the adjacency matrix, an explicit expression for the quantity ⟨𝐀^n⟩=f(𝐏), ∀ n is needed. Here, we adopt the recipe of the so-called delta method <cit.>, which prescribes identifying f(𝐏) with 𝐏^n. According to it, the expected value of the number of closed walks of any length satisfies the chain of inequalities
⟨Tr[e^𝐀]⟩=∑_k=0^∞⟨Tr[𝐀^k]⟩/k!=∑_k=0^∞Tr[⟨𝐀^k⟩]/k!≥∑_k=0^∞Tr[⟨𝐀⟩^k]/k!=∑_k=0^∞Tr[𝐏^k]/k!=Tr[e^𝐏];
a relationship leading to ⟨Tr[e^Λ]⟩≥Tr[e^Π].
The inequality can be understood upon considering a reciprocated dyad and noticing that relationships like ⟨[𝐀^4]_ii⟩=⟨ a_ija_jia_ija_ji⟩=⟨ a_ija_ji⟩=⟨ a_ij⟩⟨ a_ji⟩=p_ijp_ji≥ p_ij^2p_ji^2=[𝐏^4]_ii hold true; in other words, estimating the number of closed walks of a certain length via the delta method implies overweighing the edges constituting them, whence the mismatch between the correct and the approximated expression. Such a mismatch is absent in case no link is reciprocated: given a square loop, in fact, ⟨[𝐀^4]_ii⟩=⟨ a_ija_jka_kla_li⟩=⟨ a_ij⟩⟨ a_jk⟩⟨ a_kl⟩⟨ a_li⟩=p_ijp_jkp_klp_li=[𝐏^4]_ii. In other words, the larger the number of reciprocal links[Let us remind that L^↔=∑_i=1^N∑_j(≠ i)a_ija_ji.], the less accurate the approximation provided by the delta method. Hereby, we will assume that the symbol ≳ can replace the symbol ≥.
Analogously, upon posing ⟨𝐖⟩=𝐐, the expected value of the quantity Tr[e^𝐖]=Tr[e^Ω] can be approximated as follows
⟨Tr[e^𝐖]⟩=∑_k=0^∞⟨Tr[𝐖^k]⟩/k!=∑_k=0^∞Tr[⟨𝐖^k⟩]/k!≥∑_k=0^∞Tr[⟨𝐖⟩^k]/k!=∑_k=0^∞Tr[𝐐^k]/k!=Tr[e^𝐐],
a relationship leading to ⟨Tr[e^Ω]⟩≥Tr[e^Ψ], where Ψ is the matrix obtained upon diagonalising 𝐐.
The inequality can be understood upon considering a weighted, reciprocated dyad and noticing that relationships like ⟨[𝐖^4]_ii⟩=⟨ w_ijw_jiw_ijw_ji⟩=⟨ (w_ijw_ji)^2⟩=⟨ w_ijw_ji⟩^2+Var[w_ijw_ji]=⟨ w_ij⟩^2⟨ w_ji⟩^2+Var[w_ijw_ji]≥⟨ w_ij⟩^2⟨ w_ji⟩^2=[𝐐]^4_ii hold true; as in the binary case, estimating the total weight of closed walks of a certain length via the delta method implies overweighing the edges constituting them. Such a mismatch is absent if no link is reciprocated, as evident upon considering a weighted, square loop. Hereby, we will assume that the symbol ≳ can replace the symbol ≥.
§.§.§ Expected value of the spectral radius
Let us now recall the statement of the generalised Perron-Frobenius (GPF) theorem <cit.>.
GPF Theorem. Whenever non-negative, irreducible matrices are considered, a unique, real, positive eigenvalue exists whose modulus is maximum and (only) the corresponding left and right eigenvectors have positive components.
Requiring irreducibility, sometimes stated as regularity, implies requiring the existence of a natural number n such that [𝐀^n]_ij>0, ∀ i,j. In other words, when directed networks are considered, requiring irreducibility is equivalent to requiring strongly connectedness. In case such a requirement is not satisfied, the Perron-Frobenius theorem must be weakened as follows.
WPF Theorem. Whenever non-negative matrices are considered, a real, non-negative eigenvalue exists whose modulus is maximum and with associated, non-negative left and right eigenvectors.
The eigenvalue mentioned in any variant of the Perron-Frobenius theorem will be referred to as the principal eigenvalue or spectral radius. The relationship between the matrices 𝐀 and Λ encoded into eq. (<ref>) can be further simplified upon noticing that, in case the spectral radius exists, is unique[A reducible square matrix 𝐌 can be written in a block triangular form <cit.>, each matrix 𝐁_ii on the diagonal being either irreducible or zero. As the spectrum of such a matrix is the union of the spectra of the 𝐁_iis, the GPF Theorem can be applied to each 𝐁_ii: the Perron–Frobenius eigenvalue of 𝐌 is, thus, the largest of the Perron–Frobenius eigenvalues of the 𝐁_iis, hence coinciding with the one of the maximal strongly-connected component of the network under study.] and the spectral gap is (much) larger than zero[Although the condition λ_1-λ_2≫0 can be relaxed, the formulas provided in the present paper hold for this case.], the sum Tr[e^Λ]=∑_i=1^Ne^λ_i is exponentially dominated by the addendum e^λ_1, an observation allowing us to write
Tr[e^𝐀]≳ e^λ_1;
analogously,
Tr[e^𝐖]≳ e^ω_1.
Let us now inspect the relationships between eqs. (<ref>) and (<ref>) and between eqs. (<ref>) and (<ref>). Putting everything together, we obtain
⟨ e^λ_1⟩≲⟨Tr[e^𝐀]⟩=∑_k=0^∞⟨Tr[𝐀^k]⟩/k! =∑_k=0^∞Tr[⟨𝐀^k⟩]/k!≳∑_k=0^∞Tr[𝐏^k]/k!=Tr[e^𝐏]≳ e^π_1,
⟨ e^ω_1⟩≲⟨Tr[e^𝐖]⟩=∑_k=0^∞⟨Tr[𝐖^k]⟩/k! =∑_k=0^∞Tr[⟨𝐖^k⟩]/k!≳∑_k=0^∞Tr[𝐐^k]/k!=Tr[e^𝐐]≳ e^ϕ_1.
The two chains of (in-)equalities above motivate us to explore the possibility of deriving an (approximated) expression for the expected value of the spectral radius. According to the delta method, the expected value of a function, f, of a random variable, x, can be computed by Taylor-expanding f(x) around ⟨ x⟩=μ, taking the expected value of the resulting expression and retaining only the lowest order of the expansion. Such a prescription allows us to write ⟨ e^λ_1⟩≃ e^⟨λ_1⟩ and ⟨ e^ω_1⟩≃ e^⟨ω_1⟩, two positions further leading to the results
⟨λ_1⟩≃π_1
and
⟨ω_1⟩≃ϕ_1.
Equations (<ref>) and (<ref>) are the main result of our paper, as they establish a (fundamental, although approximated) relationship between the empirical value of the spectral radius of a directed network, be it binary or weighted, and its expected counterpart: in words, the delta method suggests us to identify the latter with the spectral radius of the matrix defining the chosen random network model. Since the calculation of the expected number, or of the expected weight, of walks boils down to calculate the spectral radius of a single matrix, i.e. 𝐏 or 𝐐, eqs. (<ref>) and (<ref>) have deep implications from a purely computational point of view as well: in fact, they prevent the network ensemble induced by 𝐏 or 𝐐 from being explicitly sampled.
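In practice, the delta-method estimate can be checked numerically by comparing the spectral radius of 𝐏 with the ensemble average of the spectral radius of matrices sampled from 𝐏. The sketch below (sample size and names are illustrative) does so for a binary, directed null model.

```python
import numpy as np

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def sampled_radius(P, n_samples=200, seed=0):
    # P is assumed to have a zero diagonal, so sampled matrices have no self-loops
    rng = np.random.default_rng(seed)
    vals = [spectral_radius((rng.random(P.shape) < P).astype(float))
            for _ in range(n_samples)]
    return np.mean(vals), np.std(vals)

# delta-method estimate vs brute-force ensemble average
N, p = 100, 0.05
P = np.full((N, N), p); np.fill_diagonal(P, 0.0)
print(spectral_radius(P), sampled_radius(P))
```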
§.§.§ Variance of the spectral radius
Now, let us focus on the variance of the spectral radius calculation. To this aim, we will move from the known expressions of ⟨λ_1⟩, treating them as subject to statistical variability. For instance, let us recall that
π_1≃⟨k||k⟩/2L=∑_i=1^Nk_i^2/2L
for binary, undirected networks under the Chung-Lu model, according to which p_ij=k_ik_j/2L, ∀ i,j; upon considering that all quantities defining such an expression are random variables themselves, one is led to write
Var[λ_1]=Var[∑_i=1^Nk_i^2/2L]
and evaluate such an expression either analytically or numerically. In what follows, we will numerically evaluate the spectral radius variance of our random network models.
§.§.§ Statistical significance of the spectral radius
Let us now define the quantity to be inspected for spotting the presence of a spectral signature of structural changes: it reads
z[λ_1]=(λ_1-⟨λ_1⟩)/σ[λ_1]≃(λ_1-π_1)/σ[λ_1]
and is nothing but the z-score of the spectral radius λ_1. As already stressed, the statistical meaning of such a quantity is guaranteed by the Gaussianity of the quantity whose z-score is to be calculated. Such a property of the spectral radius is guaranteed by the analytical results obtained in <cit.> and by the numerical checks carried out in Appendix D and depicted in fig. <ref>.
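The quantity z[λ_1] and the Gaussianity check can be implemented along the following lines (a sketch under the assumption that the ensemble standard deviation is estimated by sampling; scipy's normality test is used here only as a convenient proxy for the checks of Appendix D).

```python
import numpy as np
from scipy import stats

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def zscore_lambda1(A, P, n_samples=1000, seed=0):
    # A: observed adjacency matrix; P: null-model probability matrix (zero diagonal)
    rng = np.random.default_rng(seed)
    lam = np.array([spectral_radius((rng.random(P.shape) < P).astype(float))
                    for _ in range(n_samples)])
    print("Gaussianity p-value:", stats.normaltest(lam).pvalue)
    return (spectral_radius(A) - spectral_radius(P)) / lam.std()
```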
§ RANDOM NETWORK MODELS
Let us now discuss a set of null models to be employed for the subsequent steps of our analysis. To this aim, we will consider some members of the family of Exponential Random Graph Models (ERGMs), i.e. the entropy-based benchmarks that preserve a given set of constraints, otherwise being maximally random. More specifically, we follow the approach introduced in <cit.> and further developed in <cit.>, which prescribes to carry out a constrained maximisation of Shannon entropy
S=-∑_𝐆P(𝐆)ln P(𝐆),
the sum running over the ensemble 𝔾 of N× N directed networks, be they binary (in which case 𝐆≡𝐀) or weighted (in which case 𝐆≡𝐖).
§.§ Erdös-Rényi Model
The Erdös-Rényi Model <cit.> is induced by the Hamiltonian
H(𝐀)=α L(𝐀),
where L(𝐀)=∑_i=1^N∑_j(≠ i)a_ij represents the total number of directed edges, and α is the Lagrange multiplier associated with such a global constraint. The probability of the generic configuration 𝐀 reads
P_ER(𝐀)=p^L(𝐀)(1-p)^N(N-1)-L(𝐀)
where p=e^-α/(1+e^-α) is the probability that a link points from node i towards node j.
In order to tune the unknown parameter defining the Erdös-Rényi Model to ensure that ⟨ L⟩_ER=L(𝐀^*), we maximise the likelihood function ℒ_ER=ln P_ER(𝐀^*) with respect to it. Such a recipe leads us to find
p=L(𝐀^*)/N(N-1), ∀ i≠ j
with obvious meaning of the symbols.
§.§.§ Expected value of the spectral radius
Although eq. (<ref>) provides a general recipe for estimating the expected value of the spectral radius of any random network model, a more explicit expression can be derived for the Erdös-Rényi Model. Specifically, let us consider the following equation
∑_k=0^∞Tr[𝐏^k]/k!=N+∑_k=2^∞(Np)^k/k!
where 𝐏≡𝐏_ER={p_ij}_i,j=1^N, p_ij≡ p, ∀ i≠ j and each addendum encodes the information about the order of magnitude of the specific contribution to the total number of cycles - to see this explicitly, let us consider that Tr[𝐀^2]=∑_i=1^N[𝐀^2]_ii=∑_i=1^N∑_j(≠ i)a_ija_ji whose expected value reads ⟨Tr[𝐀^2]⟩=∑_i=1^N∑_j(≠ i)⟨ a_ija_ji⟩=∑_i=1^N∑_j(≠ i)p^2≃(Np)^2 and analogously for the higher orders of the expansion. As adding and subtracting 1 and Np leads to
∑_k=0^∞Tr[𝐏^k]/k! =∑_k=0^∞(Np)^k/k!+N(1-p)-1
=e^Np+N(1-p)-1≳ e^Np≃ e^⟨ k⟩,
eq. (<ref>) can be employed to derive the chain of relationships
π_1≃ Np≃⟨ k⟩,
stating that the spectral radius, π_1, of the N× N matrix of i.i.d. Bernoulli random variables 𝐏≡𝐏_ER can be accurately approximated by their sum along any row or any column; in network terms, this can be rephrased by saying that the expected value of the spectral radius under the Erdös-Rényi Model coincides with the expected value of the degree of each node.
A second way of identifying π_1 rests upon the following relationship:
𝐏·1=(N-1)p·1=⟨ k⟩·1;
since 𝐏 obeys the GPF Theorem, the equation above allows us to identify the value of its spectral radius quite straightforwardly by posing
π_1=(N-1)p=⟨ k⟩≡λ_1^ER.
Such a result is consistent with the one stating that the spectral radius of the deterministic matrix a_ii≡ν, ∀ i=j and a_ij≡μ, ∀ i≠ j is equal to λ_1=(N-1)μ+ν.
A third way of identifying the expected value of λ_1 rests upon the results from <cit.>, i.e.
λ_1=∑_i=1^N∑_ja_ij/N+σ^2/μ,
where a_ii≡ν, ∀ i=j, ⟨ a_ij⟩=μ and Var[a_ij]=σ^2, ∀ i≠ j. Since, in our case, ν=0, μ=p and σ^2=p(1-p), ∀ i≠ j, such an expression leads to
λ_1=∑_i=1^N∑_jp/N+σ^2/μ=(N-1)p+(1-p).
§.§.§ Variance of the spectral radius
Equation (<ref>) offers a straightforward way to calculate the variance of the spectral radius. It is, in fact, enough to evaluate the expression
Var[λ_1]=∑_i=1^N∑_j(≠ i)Var[a_ij]/N^2 ≃ p(1-p)≡Var[λ_1^ER]
with the symbol ≃ replacing the more correct expression lim_N→∞ N(N-1)p(1-p)/N^2=p(1-p), indicating that Var[λ_1] tends to p(1-p) in the (asymptotic) regime N→∞.
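Both estimates can be verified numerically with a few lines of Python (parameters are illustrative): the sampled mean is well approximated by (N-1)p - and even better by (N-1)p+(1-p) - while the sampled variance approaches p(1-p).

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, n_samples = 200, 0.05, 300
radii = []
for _ in range(n_samples):
    A = (rng.random((N, N)) < p).astype(float); np.fill_diagonal(A, 0.0)
    radii.append(np.max(np.abs(np.linalg.eigvals(A))))
print(np.mean(radii), (N - 1) * p, (N - 1) * p + (1 - p))   # expected value
print(np.var(radii), p * (1 - p))                           # variance
```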
§.§ Binary Configuration Model
The Binary Configuration Model <cit.> is induced by the Hamiltonian
H(𝐀)=∑_i=1^N[α_ik_i(𝐀)+β_ih_i(𝐀)]
where k_i(𝐀)=∑_j(≠ i)a_ij represents the out-degree of node i, i.e. the number of nodes pointed by it and h_i(𝐀)=∑_j(≠ i)a_ji represents the in-degree of node i, i.e. the number of nodes it is pointed by; the vectors {α_i}_i=1^N and {β_i}_i=1^N represent the Lagrange multipliers associated with those above, local constraints. The probability of the generic configuration 𝐀 reads
P_BCM(𝐀)=∏_i=1^N∏_j(≠ i)p_ij^a_ij(1-p_ij)^1-a_ij
where p_ij=e^-α_i-β_j/(1+e^-α_i-β_j) is the probability that a link points from node i towards node j.
To tune the unknown parameters defining the Binary Configuration Model to ensure that ⟨ k_i⟩_BCM=k_i(𝐀^*), ∀ i and ⟨ h_i⟩_BCM=h_i(𝐀^*), ∀ i, we maximise the likelihood function ℒ_BCM=ln P_BCM(𝐀^*) with respect to them. Such a recipe leads us to solve
k_i(𝐀^*) =∑_j(≠ i)e^-α_i-β_j/1+e^-α_i-β_j, ∀ i
h_i(𝐀^*) =∑_j(≠ i)e^-α_j-β_i/1+e^-α_j-β_i, ∀ i
with obvious meaning of the symbols.
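These coupled equations can be solved numerically; a possible fixed-point scheme, in the reparametrisation x_i=e^{-α_i}, y_i=e^{-β_i}, is sketched below (the initialisation, tolerance and function name are illustrative, and the routine is not the one used for the analyses of this paper).

```python
import numpy as np

def fit_bcm(kout, kin, n_iter=5000, tol=1e-10):
    """Return the matrix P of the Binary Configuration Model given the
    observed out-degree (kout) and in-degree (kin) sequences."""
    kout, kin = np.asarray(kout, float), np.asarray(kin, float)
    x = kout / np.sqrt(kout.sum() + 1.0)
    y = kin / np.sqrt(kin.sum() + 1.0)
    for _ in range(n_iter):
        X, Y = np.meshgrid(x, y, indexing="ij")     # X[i,j] = x_i, Y[i,j] = y_j
        D = 1.0 + X * Y
        np.fill_diagonal(D, np.inf)                 # exclude self-loops
        x_new = kout / (Y / D).sum(axis=1)          # k_i = sum_j x_i y_j / (1 + x_i y_j)
        y_new = kin / (X / D).sum(axis=0)           # h_i = sum_j x_j y_i / (1 + x_j y_i)
        delta = max(np.abs(x_new - x).max(), np.abs(y_new - y).max())
        x, y = x_new, y_new
        if delta < tol:
            break
    P = np.outer(x, y) / (1.0 + np.outer(x, y))
    np.fill_diagonal(P, 0.0)
    return P
```

The resulting matrix 𝐏 can then be used both to compute π_1 and to sample the ensemble needed to estimate σ[λ_1].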
§.§.§ Expected value of the spectral radius
According to eq. (<ref>), π_1 is the spectral radius of the N× N matrix of i.n.i.d. random variables 𝐏≡𝐏_BCM={p_ij}_i,j=1^N, with p_ij=e^-α_i-β_j/(1+e^-α_i-β_j), ∀ i≠ j.
As for the Erdös-Rényi Model, a more explicit expression can also be derived for the Binary Configuration Model. To this aim, let us consider that a way to identify π_1 in case p_ij=k_ih_j/L, ∀ i,j rests upon the relationship
𝐏=𝐤⊗h/L=|k⟩⟨h|/L,
indicating that the matrix 𝐏 characterising the Binary Configuration Model can be obtained as the direct product of the vector of out-degrees, 𝐤, and the vector of in-degrees, 𝐡. Employing the bra-ket formalism allows the calculations to be carried out quite easily, as
𝐏|k⟩=|k⟩⟨h|/L|k⟩=⟨h||k⟩/L|k⟩
where ⟨h||k⟩=∑_i=1^Nk_ih_i. Since 𝐏 obeys the GPF Theorem, the equation above allows us to identify the value of its spectral radius[Notice that ⟨h|𝐏=⟨h||k⟩⟨h|/L=⟨h|⟨h||k⟩/L as well.] quite straightforwardly as π_1=⟨h||k⟩/L=∑_i=1^Nk_ih_i/L. The sparse-case approximation of the Binary Configuration Model is, however, defined by the position p_ij=k_ih_j/L, ∀ i≠ j, a piece of evidence leading us to write
π_1≃⟨h||k⟩/L=∑_i=1^Nk_ih_i/L≡λ_1^CL.
§.§.§ Variance of the spectral radius
The expression π_1=⟨h||k⟩/L=∑_i=1^Nk_ih_i/L offers a straightforward way to calculate the variance of the spectral radius. Upon considering that all quantities defining such an expression are random variables themselves, one is led to write
Var[λ_1]=Var[∑_i=1^Nk_ih_i/L]≡Var[λ_1^CL]
and evaluate such an expression either analytically or numerically. In what follows, we will proceed by evaluating it numerically.
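A possible numerical evaluation is sketched below: adjacency matrices are drawn from the null model encoded in 𝐏 (assumed to have a zero diagonal and to produce non-empty configurations) and the statistic ∑_i k_ih_i/L is recomputed on each draw; names and the sample size are illustrative.

```python
import numpy as np

def var_lambda1_CL(P, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        A = (rng.random(P.shape) < P).astype(float)
        kout, kin, L = A.sum(axis=1), A.sum(axis=0), A.sum()
        vals.append((kout * kin).sum() / L)
    return np.mean(vals), np.var(vals)
```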
§.§ Reciprocal Configuration Model
The Reciprocal Configuration Model <cit.> is induced by the Hamiltonian
H(𝐀)=∑_i=1^N[α_ik_i^→(𝐀)+β_ik_i^←(𝐀)+γ_ik_i^↔(𝐀)]
where k_i^→(𝐀)=∑_j(≠ i)a_ij^→ represents the non-reciprocated out-degree of node i, k_i^←(𝐀)=∑_j(≠ i)a_ij^← represents the non-reciprocated in-degree of node i and k_i^↔(𝐀)=∑_j(≠ i)a_ij^↔ represents the reciprocated degree of node i; the vectors {α_i}_i=1^N, {β_i}_i=1^N and {γ_i}_i=1^N represent the Lagrange multipliers associated with those above, local constraints. The probability of the generic configuration 𝐀 reads
P_RCM(𝐀)=∏_i=1^N∏_j(>i)(p_ij^→)^a_ij^→(p_ij^←)^a_ij^←(p_ij^↔)^a_ij^↔(p_ij^×)^a_ij^×
where
p_ij^→=e^-α_i-β_j/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j
is the probability that a non-reciprocated link points from node i towards node j,
p_ij^←=e^-α_j-β_i/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j
is the probability that a non-reciprocated link points from node j towards node i,
p_ij^↔=e^-γ_i-γ_i/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j
is the probability that nodes i and j are connected by a reciprocated link and p_ij^×=1-p_ij^→-p_ij^←-p_ij^↔ is the probability that i and j are disconnected.
To tune the unknown parameters defining the Reciprocal Configuration Model to ensure that ⟨ k_i^→⟩_RCM=k_i^→(𝐀^*), ∀ i, ⟨ k_i^←⟩_RCM=k_i^←(𝐀^*), ∀ i and ⟨ k_i^↔⟩_RCM=k_i^↔(𝐀^*), ∀ i, we maximise the likelihood function ℒ_RCM=ln P_RCM(𝐀^*) with respect to them. Such a recipe leads us to solve
k_i^→(𝐀^*) =∑_j(≠ i)e^-α_i-β_j/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j, ∀ i
k_i^←(𝐀^*) =∑_j(≠ i)e^-α_j-β_i/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j, ∀ i
k_i^↔(𝐀^*) =∑_j(≠ i)e^-γ_i-γ_i/1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j, ∀ i
with obvious meaning of the symbols.
§.§.§ Expected value of the spectral radius
According to eq. (<ref>), π_1 is the spectral radius of the N× N matrix of i.n.i.d. random variables 𝐏≡𝐏_RCM={p_ij}_i,j=1^N, p_ij=p_ij^→+p_ij^↔=(e^-α_i-β_j+e^-γ_i-γ_i)/(1+e^-α_i-β_j+e^-α_j-β_i+e^-γ_i-γ_j), ∀ i≠ j.
As for the Binary Configuration Model, more explicit expressions can also be derived for the Reciprocal Configuration Model. To this aim, let us consider that, in the sparse case, one can write
𝐏^→|k^→⟩ =|k^→⟩⟨k^←|/L^→|k^→⟩=⟨k^←||k^→⟩/L^→|k^→⟩,
𝐏^↔|k^↔⟩ =|k^↔⟩⟨k^↔|/2L^↔|k^↔⟩=⟨k^↔||k^↔⟩/2L^↔|k^↔⟩
where ⟨k^←||k^→⟩=∑_i=1^Nk^←_ik^→_i
and ⟨k^↔||k^↔⟩=∑_i=1^Nk^↔_ik^↔_i. Since 𝐏^→
and 𝐏^↔ obey the GPF Theorem, the equations above allow us to identify the values of their spectral radius[An analogous observation to the one in the previous footnote can be made.] quite straightforwardly as
π_1^→ ≃⟨k^←||k^→⟩/L^→=∑_i=1^Nk^←_ik^→_i/L^→≡λ_1^CL^→,
π_1^↔ ≃⟨k^↔||k^↔⟩/2L^↔=∑_i=1^Nk^↔_ik^↔_i/2L^↔≡λ_1^CL^↔
(because of the definition of the sparse-case approximation of the Reciprocal Configuration Model, valid ∀ i≠ j).
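Both sparse-case estimates can be computed directly from an observed adjacency matrix, as in the following sketch (A is assumed to be a dense, zero-diagonal numpy array with at least one reciprocated and one non-reciprocated link; names are illustrative).

```python
import numpy as np

def rcm_radius_estimates(A):
    R = A * A.T                                   # reciprocated part: a_ij a_ji
    U = A - R                                     # non-reciprocated part: a_ij (1 - a_ji)
    k_out_nr, k_in_nr = U.sum(axis=1), U.sum(axis=0)
    k_rec = R.sum(axis=1)
    lam_nr = (k_in_nr * k_out_nr).sum() / U.sum()     # sum_i k_i^<- k_i^-> / L^->
    lam_rec = (k_rec * k_rec).sum() / R.sum()         # sum_i (k_i^<->)^2 / (2 L^<->)
    return lam_nr, lam_rec
```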
§.§.§ Variance of the spectral radius
The expressions above offer a straightforward way to calculate the corresponding variances. In fact, one is led to write
Var[λ_1^→] =Var[∑_i=1^Nk^←_ik^→_i/L^→]≡Var[λ_1^CL^→],
Var[λ_1^↔] =Var[∑_i=1^Nk^↔_ik^↔_i/2L^↔]≡Var[λ_1^CL^↔]
and evaluate such expressions either analytically or numerically. In what follows, we will proceed by evaluating them numerically.
§.§ Global Reciprocity Model
The Global Reciprocity Model <cit.> is a special case of the Reciprocal Configuration Model, induced by the Hamiltonian
H(𝐀)=∑_i=1^N[α_ik_i(𝐀)+β_ih_i(𝐀)]+γ L^↔(𝐀)
where L^↔(𝐀)=∑_i=1^N∑_j(≠ i)a_ij^↔ represents the total number of reciprocated links; the parameters {α_i}_i=1^N, {β_i}_i=1^N and γ represent the Lagrange multipliers associated with the aforementioned constraints. The probability of the generic configuration 𝐀 reads
P_GRM(𝐀)=∏_i=1^N∏_j(>i)(p_ij^→)^a_ij^→(p_ij^←)^a_ij^←(p_ij^↔)^a_ij^↔(p_ij^×)^a_ij^×
where
p_ij^→=e^-α_i-β_j/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ
is the probability that a non-reciprocated link points from node i towards j,
p_ij^←=e^-α_j-β_i/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ
is the probability that a non-reciprocated link points from node j towards node i,
p_ij^↔=e^-α_i-β_j-β_i-α_j-γ/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ
is the probability that nodes i and j are connected by a reciprocated link and p_ij^×=1-p_ij^→-p_ij^←-p_ij^↔ is the probability that i and j are disconnected.
To tune the unknown parameters defining the Global Reciprocity Model to ensure that ⟨ k_i⟩_GRM=k_i(𝐀^*), ∀ i, ⟨ h_i⟩_GRM=h_i(𝐀^*), ∀ i and ⟨ L^↔⟩_GRM=L^↔(𝐀^*), ∀ i, we maximise the likelihood function ℒ_GRM=ln P_GRM(𝐀^*) with respect to them. Such a recipe leads us to solve
k_i(𝐀^*) =∑_j(≠ i)e^-α_i-β_j+e^-α_i-β_j-β_i-α_j-γ/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ, ∀ i
h_i(𝐀^*) =∑_j(≠ i)e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ, ∀ i
L^↔(𝐀^*) =∑_i=1^N∑_j(≠ i)e^-α_i-β_j-β_i-α_j-γ/1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ
with obvious meaning of the symbols.
In the case of the Global Reciprocity Model, π_1 is the spectral radius of the N× N matrix of i.n.i.d. random variables 𝐏≡𝐏_GRM={p_ij}_i,j=1^N, p_ij=p_ij^→+p_ij^↔=(e^-α_i-β_j+e^-α_i-β_j-β_i-α_j-γ)/(1+e^-α_i-β_j+e^-α_j-β_i+e^-α_i-β_j-β_i-α_j-γ), ∀ i≠ j.
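Assuming the Lagrange multipliers have already been calibrated, the matrix 𝐏_GRM and its spectral radius can be assembled as in the following illustrative helper (names are not the paper's own).

```python
import numpy as np

def grm_probability_matrix(alpha, beta, gamma):
    """Build P = P^-> + P^<-> of the Global Reciprocity Model from
    already-calibrated Lagrange multipliers (illustrative helper)."""
    xa, xb = np.exp(-np.asarray(alpha, float)), np.exp(-np.asarray(beta, float))
    A_ = np.outer(xa, xb)                 # e^{-alpha_i - beta_j}
    R_ = A_ * A_.T * np.exp(-gamma)       # e^{-alpha_i - beta_j - alpha_j - beta_i - gamma}
    P = (A_ + R_) / (1.0 + A_ + A_.T + R_)
    np.fill_diagonal(P, 0.0)
    return P

# pi_1 under the GRM:
# pi_1 = np.max(np.abs(np.linalg.eigvals(grm_probability_matrix(alpha, beta, gamma))))
```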
§.§ Density-Corrected Gravity Model
The density-corrected Gravity Model <cit.> is a two-step model inducing a probability for the generic configuration 𝐀 reading
P_dcGM(𝐀)=∏_i=1^N∏_j(≠ i)p_ij^a_ij(1-p_ij)^1-a_ij
where
p_ij=za_il_j/1+za_il_j
is the probability that a link points from node i towards node j and a_i=∑_j(≠ i)w_ij is the out-strength of node i, l_i=∑_j(≠ i)w_ji is the in-strength of node i and z is a free parameter, determined by fixing the value of the total number of links[Analogously, one could have fixed the connectance, or link density, defined as c=L/N(N-1).], i.e. by solving the equation
L(𝐀^*)=∑_i=1^N∑_j(≠ i)za_il_j/1+za_il_j.
The second step of the density-corrected Gravity Model, instead, is a conditional one, prescribing loading the link a_ij=1 with the value
w_ij=a_il_j/Wp_ij,
where W=∑_i=1^N∑_j(≠ i)w_ij=∑_i=1^Na_i=∑_i=1^Nl_i is the total network volume. As a consequence of such a prescription, one recovers the result
⟨ w_ij⟩=a_il_j/W;
in other words, the dcGM ensures that the (financial equivalent of the) Gravity Model prescription is recovered on average.
§.§.§ Expected value of the spectral radius
According to eq. (<ref>), ϕ_1 is the spectral radius of the N× N matrix of i.n.i.d. random variables 𝐐≡𝐐_dcGM={⟨ w_ij⟩}_i,j=1^N, ⟨ w_ij⟩=a_il_j/W, ∀ i≠ j.
As for the Binary Configuration Model, a more explicit expression can also be derived for the density-corrected Gravity Model. To this aim, let us consider that a way to identify ϕ_1 in case ⟨ w_ij⟩=a_il_j/W, ∀ i,j rests upon the relationship
𝐐=𝐚⊗l/W=|a⟩⟨l|/W,
indicating that the matrix 𝐐 characterising the dcGM can be obtained as the direct product of the vector of out-strengths, 𝐚, and the vector of in-strengths, 𝐥. Employing the bra-ket formalism allows the calculations to be carried out quite easily, as
𝐐|a⟩=|a⟩⟨l|/W|a⟩=⟨a||l⟩/W|a⟩
where ⟨a||l⟩=∑_i=1^Na_il_i. Since 𝐐 obeys the GPF Theorem, the equation above allows us to identify the value of its spectral radius[Notice that ⟨l|𝐐=⟨l||a⟩⟨l|/W=⟨l|⟨a||l⟩/W as well.] quite straightforwardly as ϕ_1=⟨a||l⟩/W=∑_i=1^Na_il_i/W. The density-corrected Gravity Model is, however, defined by the position ⟨ w_ij⟩=a_il_j/W, ∀ i≠ j, a piece of evidence leading us to write
ϕ_1≃⟨a||l⟩/W=∑_i=1^Na_il_i/W≡ω_1^CL.
As the considered matrix is deterministic, the variance of its spectral radius is, by definition, zero.
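The whole two-step calibration can be condensed in a few lines: z is fixed by matching the observed number of links, after which the connection probabilities, the expected weights and ϕ_1 follow directly (a sketch; the bracketing interval of the root finder is illustrative and may need adjustment for strongly heterogeneous strengths).

```python
import numpy as np
from scipy.optimize import brentq

def fit_dcgm(a, l, L):
    """a: out-strengths, l: in-strengths, L: observed number of links."""
    a, l = np.asarray(a, float), np.asarray(l, float)
    W = a.sum()                                   # total network volume
    G = np.outer(a, l); np.fill_diagonal(G, 0.0)
    f = lambda z: (z * G / (1.0 + z * G)).sum() - L
    z = brentq(f, 1e-15, 1e9)                     # illustrative bracket for the root
    P = z * G / (1.0 + z * G)                     # connection probabilities
    Q = G / W                                     # expected weights <w_ij> = a_i l_j / W
    phi_1 = (a * l).sum() / W                     # expected spectral radius
    return P, Q, phi_1
```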
§ DATA DESCRIPTION
§.§ Dutch Interbank Network
The Dutch Interbank Network (DIN) is represented as a binary, directed network whose nodes are anonymised Dutch banks and whose links represent exposures (from contractual obligations to swaps) with a maturity of up to one year and larger than 1.5 million euros. Data are reported quarterly from 1998Q1 to 2008Q4, hence consisting of 44 snapshots. Notice that the last four end-of-quarter snapshots correspond to 2008, i.e. the first year of the global financial crisis <cit.>. Given the nature of the available data, a link pointing from bank i to bank j at time t indicates the existence of a total exposure of more than 1.5 million euros, directed from i to j, registered at the end of the particular quarter t.
§.§ Electronic Market for Interbank Deposit
The Electronic Market for Interbank Deposit (e-MID) is represented as a weighted, directed network whose nodes are anonymised, Italian banks and weights represent exposures in million euros[e-MID is a centralised interbank market for trading unsecured deposits, working as follows: a bank quotes an offer to lend or borrow money (minimum quote: 1.5 million euros) at a certain maturity and interest rate; a second bank chooses (at least a part of) the quoted order (minimum quote: 50.000 euros), and the trade is registered if and only if both counterparties have agreed on it. The following information is available for each active bank during the period: an anonymous ID identifying the bank and the country where it is legally settled. In <cit.>, Fricke and Lux have highlighted i) how the number of active, foreign banks largely varies over the considered period, experiencing a dramatic drop in correspondence of the Lehman-Brothers bailout; ii) how the number of active Italian banks is quite stable over the period, although it decreases after the global financial crisis.]. Reported data cover the period January 1999-December 2014, on a daily frequency: a link with weight w_ij, pointing from bank i to bank j at time t indicates the existence of the total exposure w_ij≥ 50.000 euros, directed from i to j, registered at the end of the particular period t. Considering that ≃98% of banks are Italian and that the volume of their transactions covers ≃85% of the total volume (as of 2011), our analysis solely focuses on the subgraph induced by such a subset of nodes. We also examine all aggregation periods ranging from daily to yearly - although the figures will depict e-MID on a quarterly basis.
§.§ International Trade Network
The International Trade Network (ITN) is represented as a weighted, directed network whose nodes are countries and weights represent imports/exports in million euros. Data on yearly trade flows during the period 2000-2020 have been downloaded from the UN-COMTRADE website[https://comtradeplus.un.org/https://comtradeplus.un.org/]. To consistently compare data, a panel of 112 countries for which trade information was available for the entire period has been selected <cit.>. Given the nature of the available data, a link whose weight is w_ij, pointing from country i to country j during the year y indicates the existence of an exported amount of commodities whose value matches w_ij, directed from i to j, during that year.
§ RESULTS
§.§ Inspecting the accuracy of our approximations
The derivation of our results rests upon several approximations whose accuracy must be explicitly checked case by case.
The first one concerns the expected value of the trace of the exponential of 𝐀 - which has been proven to satisfy the relationship ⟨Tr[e^𝐀]⟩≥Tr[e^𝐏], hence being strictly larger than the trace of the exponential of 𝐏 for any network with positive reciprocity, i.e. having r=L^↔/L>0. In order to check how close the two terms above are, we have explicitly computed the ratio Tr[e^𝐏]/⟨Tr[e^𝐀]⟩ for all the snapshots of our systems. The results are reported in the seventh column of tables <ref> and <ref> in Appendix E. As evident, Tr[e^𝐏]/⟨Tr[e^𝐀]⟩≲1 irrespective of the structural details of our configurations - in particular, even for configurations with a non-negligible level of reciprocity such as those constituting the DIN, for which r≃ 0.3. In other words, the trace of the matrix 𝐏 describing a random network model provides a quite accurate approximation of the expected value of the trace of the adjacency matrix 𝐀 under the same model. As the 2008Q1, 2008Q2, 2008Q3 and 2008Q4 snapshots of the DIN confirm, the accuracy of the approximation above increases as r decreases.
Analogously, Tr[e^𝐐]/⟨Tr[e^𝐖]⟩≲1, as the seventh column of table <ref> in Appendix E shows.
The second one concerns the hypothesis that the trace of the exponential of 𝐀 and the trace of the exponential of 𝐏 are both dominated by their largest summand, i.e. Tr[e^𝐀]≳ e^λ_1 and Tr[e^𝐏]≳ e^π_1. In order to check how close the two pairs of terms above are, we have explicitly computed the ratios e^λ_1/Tr[e^𝐀] and e^π_1/Tr[e^𝐏] for all the snapshots of our systems. The results are reported in the fifth and sixth columns of tables <ref> and <ref> in Appendix E. As evident, e^λ_1/Tr[e^𝐀]≲1 and e^π_1/Tr[e^𝐏]≲1 irrespective of the structural details of our configurations. In words, the trace of e^𝐀 is dominated by the summand e^λ_1 and the trace of e^𝐏 is dominated by the summand e^π_1. The accuracy of the approximation remains steadily high.
Analogously, e^ω_1/Tr[e^𝐖]≲1 and e^ϕ_1/Tr[e^𝐐]≲1, as the fifth and sixth columns of table <ref> in Appendix E show.
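These checks are straightforward to reproduce numerically. A minimal sketch (assuming scipy is available and that sampled_adjacencies holds binary matrices drawn from the model defined by 𝐏) reads:

import numpy as np
from scipy.linalg import expm

def approximation_ratios(P, sampled_adjacencies):
    # Tr[e^P] / <Tr[e^A]> and the exponential-dominance ratios.
    trace_expA = np.array([np.trace(expm(A)) for A in sampled_adjacencies])
    lam1 = np.array([np.max(np.linalg.eigvals(A).real) for A in sampled_adjacencies])
    pi1 = np.max(np.linalg.eigvals(P).real)
    ratio_traces = np.trace(expm(P)) / trace_expA.mean()
    ratio_lambda = (np.exp(lam1) / trace_expA).mean()     # <e^{lambda_1}/Tr[e^A]>
    ratio_pi = np.exp(pi1) / np.trace(expm(P))            # e^{pi_1}/Tr[e^P]
    return ratio_traces, ratio_lambda, ratio_pi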
§.§ Expected value and variance of the spectral radius
After having checked the goodness of our approximations, let us investigate the accuracy of the estimations of the expected value and variance of the spectral radius of our random network models.
*Erdös-Rényi Model. As the last column of tables <ref> and <ref> shows, the expected value of the spectral radius of 𝐀, evaluated numerically as the average over |𝔸|=10^3 configurations reading
⟨λ_1⟩=∑_𝐀∈𝔸λ_1(𝐀)/|𝔸|,
is always very well approximated by the spectral radius of 𝐏, i.e. π_1. The accuracy of such an estimation is pictorially confirmed by the left panels of fig. <ref>, showing the related scatter plot for each of the 44 snapshots constituting the DIN and for each of the 64 snapshots constituting the quarterly e-MID.
The central panels of fig. <ref>, instead, provide information about the explicit functional form of π_1, that matches the estimation reading λ_1^ER=(N-1)p=L/N.
The right panels of fig. <ref> provide information about the explicit functional form of the variance of the spectral radius by comparing
Var[λ_1]=∑_𝐀∈𝔸[λ_1(𝐀)-⟨λ_1⟩]^2/|𝔸|
with Var[λ_1^ER]=p(1-p): as can be appreciated, such an expression slightly underestimates the ensemble variance of the spectral radius.
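The numerical procedure used throughout this subsection amounts to sampling |𝔸|=10^3 matrices from the calibrated model and computing the spectral radius of each. For the Erdös-Rényi case, a minimal sketch (variable names are ours) is:

import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_spectral_radius(N, L, n_samples=1000):
    # Directed ER model calibrated on the observed number of links L.
    p = L / (N * (N - 1))
    radii = np.empty(n_samples)
    for s in range(n_samples):
        A = (rng.random((N, N)) < p).astype(float)
        np.fill_diagonal(A, 0.0)
        radii[s] = np.max(np.linalg.eigvals(A).real)
    # ensemble mean and variance vs the closed-form estimate (N-1)p = L/N
    return radii.mean(), radii.var(), (N - 1) * p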
*Binary Configuration Model. As the last column of tables <ref> and <ref> shows, the expected value of the spectral radius of 𝐀, evaluated numerically as the average over |𝔸|=10^3 configurations reading ⟨λ_1⟩=∑_𝐀∈𝔸λ_1(𝐀)/|𝔸|, is always very well approximated by the spectral radius of 𝐏, i.e. π_1. The accuracy of such an estimation is pictorially confirmed by the left panels of fig. <ref>, showing the related scatter plot for each of the 44 snapshots constituting the DIN and for each of the 64 snapshots constituting the quarterly e-MID.
The central panels of fig. <ref>, instead, provide information about the explicit functional form of π_1, which is (overall) well approximated by the Chung-Lu estimation λ_1^CL=∑_i=1^Nk_ih_i/L in the case of the e-MID and overestimated by the same expression in the case of the DIN.
The right panels of fig. <ref> provide information about the explicit functional form of the variance of the spectral radius, by comparing Var[λ_1]=∑_𝐀∈𝔸[λ_1(𝐀)-⟨λ_1⟩]^2/|𝔸| with
Var[λ_1^CL]=∑_𝐀∈𝔸[λ_1^CL(𝐀)-⟨λ_1^CL⟩]^2/|𝔸|;
as can be appreciated, such an expression either underestimates (in the case of the e-MID) or overestimates (in the case of the DIN) the ensemble variance of the spectral radius. Notice also that such an expression calculates the variance of the spectral radius by evaluating λ_1^CL(𝐀), i.e. the numerical value of the Chung-Lu approximation, for each matrix in the sampled ensemble. As fig. <ref> shows, these discrepancies seem to be due to a systematic mismatch caused by the configuration-specific values of the spectral radius - the DIN, for instance, obeys the relationship λ_1^CL(𝐀)>λ_1(𝐀), ∀ 𝐀, a result potentially explaining the differences between λ_1^CL and π_1 and between Var[λ_1^CL] and Var[λ_1] - in words, the values λ_1^CL are not only larger than their ensemble counterparts but are also more dispersed (see also fig. <ref> in Appendix F).
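For completeness, the Chung-Lu estimate used above can be computed directly from an observed adjacency matrix; a one-function sketch (our notation: k_i is the out-degree and h_i the in-degree of node i) is:

import numpy as np

def chung_lu_estimate(A):
    # lambda_1^CL = sum_i k_i h_i / L for a binary, directed network.
    k_out = A.sum(axis=1)
    k_in = A.sum(axis=0)
    L = A.sum()
    return (k_out * k_in).sum() / L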
*Reciprocal Configuration Model. The Reciprocal Configuration Model performs similarly to the Binary Configuration Model. The last column of tables <ref> and <ref> shows that the expected value of the spectral radius of 𝐀, evaluated numerically as the average over |𝔸|=10^3 configurations reading ⟨λ_1⟩=∑_𝐀∈𝔸λ_1(𝐀)/|𝔸|, is always very well approximated by the spectral radius of 𝐏, i.e. π_1; the left panels of fig. <ref> show the scatter plot concerning the two sets of quantities ⟨λ_1^↔⟩ and π_1^↔ for each of the 44 snapshots constituting the DIN and for each of the 64 snapshots constituting the quarterly e-MID.
The central panels of fig. <ref>, instead, provide information about the explicit functional form of π_1^↔, which is (overall) well approximated by the Chung-Lu estimation λ_1^CL^↔=∑_i=1^Nk_i^↔ k_i^↔/2L in the case of the e-MID and overestimated by the same expression in the case of the DIN.
The right panels of fig. <ref> provide information about the explicit functional form of the variance of the spectral radius by comparing Var[λ_1^↔]=∑_𝐀∈𝔸[λ_1^↔(𝐀)-⟨λ_1^↔⟩]^2/|𝔸| with
Var[λ_1^CL^↔]=∑_𝐀∈𝔸[λ_1^CL^↔(𝐀)-⟨λ_1^CL^↔⟩]^2/|𝔸|;
as can be appreciated, such an expression overestimates the ensemble variance of the spectral radius. As for the Binary Configuration Model, such an expression calculates the variance of the spectral radius by evaluating λ_1^CL^↔(𝐀), i.e. the numerical value of the Chung-Lu approximation, for each matrix in the sampled ensemble. These discrepancies may, thus, be attributable to a systematic mismatch caused by the configuration-specific values of the spectral radius.
*Density-Corrected Gravity Model. The last column of tables <ref> and <ref> shows that the expected value of the spectral radius of 𝐖, evaluated numerically as the average over |𝕎|=10^3 configurations reading ⟨ω_1⟩=∑_𝐖∈𝕎ω_1(𝐖)/|𝕎|, is always very well approximated by the spectral radius of 𝐐, i.e. ϕ_1, as the left panels of fig. <ref> pictorially confirm. Besides, the right panels of the same figure provide information about the explicit functional form of ϕ_1 which is (overall) well approximated by the Chung-Lu estimation reading ω_1^CL=∑_i=1^Na_il_i/W for each of the 16 snapshots constituting the yearly e-MID and for each of the 21 snapshots constituting the yearly ITN.
§.§ Spectral signature of structural changes in financial networks
Now, let us inspect the presence of structural changes affecting our networked configurations. To this aim, we will plot the evolution of z[λ_1] across the periods covered by our datasets; we will proceed numerically, explicitly sampling, for each snapshot, the network ensemble induced by each of the benchmarks considered here.
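Concretely, the z-score for one snapshot is obtained by comparing the empirical spectral radius with the sample mean and standard deviation over the calibrated ensemble. A sketch (sampler() is assumed to return one adjacency matrix drawn from the chosen benchmark) is:

import numpy as np

def z_score_spectral_radius(A_emp, sampler, n_samples=1000):
    lam_emp = np.max(np.linalg.eigvals(A_emp).real)
    lam_null = np.array([np.max(np.linalg.eigvals(sampler()).real)
                         for _ in range(n_samples)])
    return (lam_emp - lam_null.mean()) / lam_null.std()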
§.§.§ Dutch Interbank Network
As fig. <ref> clearly shows, the structural change undergone by the system in 2008 is signalled by several quantities: the total number of active Dutch banks sharply decreases, as does the total number of links, which diminishes in correspondence with the last year covered by our dataset; this, in turn, causes the connectance to rise. As already discussed in <cit.>, one of the most evident signals of the global financial crisis is provided by reciprocity: for most of the period, it is characterised by an essentially constant trend, with small fluctuations around an average value of ≃0.26; the last four snapshots are, then, characterised by a drop of ≃40%, causing the empirical values to lie almost three sigmas away from the sample average - a trend indicating that the reciprocity of the DIN is anomalously low during the critical period and attributable to a decrease in the level of trust characterising the Dutch system.
An additional signal of the global financial crisis is provided by the empirical value of the spectral radius itself, which decreases in correspondence with 2008Q1 and remains constant across the last four snapshots of our dataset. As it is related to the number of closed walks in a network, its decrease may be related to the decline of reciprocity. The latter's trend, however, appears (much) less affected by the statistical fluctuations characterising the evolution of the DIN throughout its entire history.
Let us now comment on the signal provided by z[λ_1]. Even if the Erdös-Rényi Model is, from a merely financial perspective, an unlikely benchmark (its homogeneous nature forces the banks to be similar in size), employing it still allows us to conclude that the DIN is characterised by two structural changes - the first one taking place across 2005 and the second one taking place across 2008. More specifically, after a (more or less) stationary trend characterising the evolution of the DIN from 1998 to 2005 - during which the number of closed walks is significantly large - a smooth trend characterising the pre-crisis phase is recovered; afterwards, an abrupt drop connecting the last quarter of 2007 with the first quarter of 2008 emerges. Such a result complements the ones presented in <cit.>, in which such behaviour could have been revealed only by employing a heterogeneous benchmark (specifically, the Binary Configuration Model).
Employing the heterogeneous benchmarks - preserving the heterogeneity of banks by constraining the observed (reciprocal) degrees - leads to the same qualitative result. More quantitatively, instead, all such null models reveal that the number of closed walks is perfectly compatible with their predictions during the stationary phase of the system. Such a consistency confirms that, in the absence of distress, the topology of the DIN can be reconstructed quite accurately, solely employing the information provided by the number of (inward, outward and reciprocated) partners of each bank. Notice that the explanatory power of the Reciprocal Configuration Model is larger than that of the Global Reciprocity Model, which, in turn, is (only slightly) larger than that of the Binary Configuration Model.
As the build-up phase of the crisis began, a decreasing trend leading to 2008 emerges, indicating that the local connectivity of banks became less and less informative about the network as a whole. Under the same benchmarks, the second regime shift is preceded by a short, rising trend. As already noticed in <cit.>, maximum-entropy techniques yield a realistic guess of the real network only in tranquil times: when the network is under stress, instead, these models provide a sort of distorted picture of it, whose differences from the empirical situation constitute the structural changes we are looking for.
Apart from model-specific differences, however, the degree of informativeness about the changes affecting the DIN carried by the spectral radius seems quite independent of the model employed to spot the differences above.
§.§.§ Electronic Market for Interbank Deposit
As far as the e-MID is concerned, instead, the total number of active Italian banks steadily decreases, hence not providing any clear indication about the presence of structural changes. On the contrary, the evolution of the total number of links provides a quite clear indication of the presence of two regime shifts, as L drops in correspondence with 2008 and 2012. Overall, the connectance and the reciprocity provide a very similar indication - the global financial crisis being characterised by a stronger signal than the one characterising the long-term refinancing operation (LTRO) promoted by the European Central Bank at the end of 2011[The two LTRO measures are dated December the 22nd, 2011 and February the 29th, 2012.].
The evolution of the empirical value of the spectral radius is characterised by a drop in correspondence with the first crisis, originating a slightly fluctuating trend that lasts until 2012, the year in which a second, decreasing trend can also be observed.
Let us now comment on the signal provided by z[λ_1]. Employing a homogeneous benchmark such as the Erdös-Rényi Model allows us to conclude that the e-MID is characterised by three structural changes, the first one taking place across 2000, the second one taking place between 2007 and 2008 and the third one taking place across 2012.
More specifically, the evolution of the e-MID starts with a drop of the z-score of the spectral radius, indicating that the number of closed walks became significantly smaller than expected during 2001. Afterwards, an increasing trend becomes visible, leading to a phase during which the number of closed walks is compatible with the prediction of the Erdös-Rényi Model. Such a period is interrupted by the so-called pre-crisis phase, during which the trend of λ_1 reverts and becomes again significantly smaller than expected. From 2009 on, a second, increasing trend lasting until 2012 becomes visible: afterwards, the system stabilises.
Employing the heterogeneous benchmarks leads to quite different results: more quantitatively, the first regime shift disappears, replaced by a stationary trend lasting until 2003; afterwards, a rising trend leading the system to its (pre-)critical phase appears. From 2009 on, a decreasing trend lasting a couple of years emerges, followed once more by an increasing one. From this perspective, the DIN and the e-MID behave, somehow, oppositely: while the global financial crisis induces a statistically significant signal in the case of the DIN, it does not in the case of the e-MID. In a sense, maximum-entropy techniques can be used to reconstruct the e-MID when the system is under stress, while this should be avoided in tranquil times - e.g. the first years of the dataset - when the picture of it inferred from local constraints departs the most from the empirical one.
Differently from the DIN, the explanatory power of the Reciprocal Configuration Model (still larger than that of the Global Reciprocity Model, which, however, performs similarly to the Binary Configuration Model) is so large that the measurements carried out on the e-MID are (practically) always compatible with the predictions. Although such a piece of evidence speaks against the use of the Reciprocal Configuration Model to detect deviations from the average behaviour, statistical tendencies can still be revealed, confirming once more that a dichotomous yes/no answer to the question is this pattern statistically significant? may be quite unsatisfactory to gain a sufficiently deep insight into system behaviour.
§ DISCUSSION
The so-called stability analysis represents an application of particular interest in the study of financial networks, a topic whose popularity has steadily increased since the turmoil due to the mortgage crisis <cit.>. The objective of this kind of analysis is to understand the relationship(s) between the topological structure of financial networks and their resilience to events like shocks, cascading failures, etc., by employing real data <cit.>, reconstructed configurations <cit.> or (simple) toy models <cit.>. A direct way to explore this connection is by running stress tests on several different topological structures by measuring the effects of a simulated shock and the subsequent propagation of losses ex post <cit.>: later works have related these results to the magnitude of the spectral radius of the so-called leverage matrix <cit.> although no algorithm has been devised to estimate its magnitude from the (partial) information that is usually available in financial contexts.
With the present contribution, we have tackled a more general challenge, i.e. that of estimating the spectral radius of random network models calibrated on real-world evolving networks. To this aim, we have adopted several approximations that have led to the surprisingly simple recipe ⟨λ_1⟩≃π_1 for estimating the expected value of λ_1, with π_1 representing the spectral radius of the probabilistic matrix describing the chosen model. Although our result is based on an approximation[We have explicitly verified that the properties of existence, reality, positivity, maximality and uniqueness of the spectral radius hold for each considered configuration.], it turns out to be extremely accurate for any directed (binary or weighted) random network model considered.
Besides the theoretical relevance of such a result, its usefulness lies in spotting the structural changes separating a (financial) regime from another by exploiting the interplay between distress and topological changes. As the case studies of the DIN and the e-MID illustrate, deviations from the average behaviour can happen in both directions, either moving away from a less structured configuration (hence becoming a less typical member of an equilibrium ensemble of graphs) or moving towards a less structured configuration (hence becoming a more typical member of an equilibrium ensemble of graphs): from this perspective, each quantity characterising the original network can be straightforwardly assigned a level of significance - which is sensitive to the direction - by computing the related z-score, i.e. an index comparing the measured value with the one expected under a null model preserving some properties of the observed network but, otherwise, being maximally random.
Although our results become exact in the case of a perfectly non-reciprocal network, future research calls for a more accurate evaluation of our approximations - hopefully, in terms of the reciprocity itself. Besides, extending the results of the present analysis to undirected, binary or weighted networks would enlarge their applicability beyond the economic and financial domains.
§ ACKNOWLEDGMENTS
VM acknowledges support from the project NetRes - `Network analysis of economic and financial resilience', Italian DM n. 289, 25-03-2021 (PRO3 2021-2023 University joint program `Le Scuole Superiori ad Ordinamento Speciale: istituzioni a servizio del Paese'), CUP D67G22000130001 (<https://netres.imtlucca.it>) funded by the Italian Ministry of University and Research (MUR). This work is also supported by the European Union - NextGenerationEU - National Recovery and Resilience Plan (PNRR Research Infrastructures), project `SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics' - Grant IR0000013 (Avviso MUR D.D. n. 3264, 28/12/2021) (<https://pnrr.sobigdata.it/>) and the project “Reconstruction, Resilience and Recovery of Socio-Economic Networks” RECON-NET EP_FAIR_005 - PE0000013 “FAIR” - PNRR M4C2 Investment 1.3, financed by the European Union – NextGenerationEU.
We thank Anna Gallo for useful discussions.
§ APPENDIX A. DYADIC EARLY-WARNING SIGNALS
Upon defining
X =∑_i=1^N∑_j=1^Na_ija_ji=∑_i=1^N[𝐀^2]_ii=Tr[𝐀^2],
we are left with the task of calculating its expected value and variance. The evidence that the expected value is a linear operator (i.e. ⟨ aX+bY⟩=a⟨ X⟩+b⟨ Y⟩) and that the entries of a binary, directed network are treated as independent random variables under any of the random network models considered here, makes such a calculation straightforward. In fact,
⟨ X⟩=⟨∑_i=1^N∑_j=1^Na_ija_ji⟩=∑_i=1^N∑_j=1^N⟨ a_ija_ji⟩=∑_i=1^N∑_j=1^N⟨ a_ij⟩⟨ a_ji⟩=∑_i=1^N∑_j=1^Np_ijp_ji.
In order to calculate the variance of X, let us consider that X can be re-written as
X=∑_i=1^N∑_j=1^Na_ija_ji=2∑_i=1^N∑_j(>i)a_ija_ji
i.e. as a sum over dyads, treated as independent random variables under any random network models considered here. Since the variance of a sum of independent random variables coincides with the sum of their variances, one can write
Var[X] =Var[2∑_i=1^N∑_j(>i)a_ija_ji]=4·∑_i=1^N∑_j(>i)Var[a_ija_ji];
then, since a_ija_ji∼Ber[p_ijp_ji], one finds that
Var[X] =4·∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji).
It is nevertheless instructive to follow an alternative road and consider that
Var[X]=Var[∑_i=1^N∑_j=1^Na_ija_ji] =∑_i=1^N∑_j=1^NVar[a_ija_ji]+2·∑_i=1^N∑_j(>i)Cov[a_ija_ji,a_ija_ji]
=∑_i=1^N∑_j=1^Np_ijp_ji(1-p_ijp_ji)+2·∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji)
=2·∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji)+2·∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji)
=4·∑_i=1^N∑_j(>i)p_ijp_ji(1-p_ijp_ji).
The comparison between the analytical estimations of the expected value and the variance of the number of dyads and the numerical counterparts, obtained by explicitly sampling the ensembles induced by the Erdös-Rényi Model and the Binary Configuration Model is illustrated in figs. <ref> and <ref>.
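The two moments above can also be evaluated directly from the matrix of link probabilities; a compact numpy sketch (P is assumed to have a zero diagonal) reads:

import numpy as np

def dyadic_moments(P):
    # <X> = sum_ij p_ij p_ji and Var[X] = 4 sum_{i<j} p_ij p_ji (1 - p_ij p_ji)
    mean_X = (P * P.T).sum()
    iu = np.triu_indices_from(P, k=1)
    q = (P * P.T)[iu]
    var_X = 4.0 * (q * (1.0 - q)).sum()
    return mean_X, var_X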
§ APPENDIX B. TRIADIC EARLY-WARNING SIGNALS
Upon defining
X =∑_i=1^N∑_j=1^N∑_k=1^Na_ija_jka_ki=∑_i=1^N[𝐀^3]_ii=Tr[𝐀^3],
we are left with the task of calculating its expected value and variance. Analogously to the dyadic case, calculating the expected value is straightforward. In fact,
⟨ X⟩=⟨∑_i=1^N∑_j=1^N∑_k=1^Na_ija_jka_ki⟩=∑_i=1^N∑_j=1^N∑_k=1^N⟨ a_ija_jka_ki⟩=∑_i=1^N∑_j=1^N∑_k=1^N⟨ a_ij⟩⟨ a_jk⟩⟨ a_ki⟩=∑_i=1^N∑_j=1^N∑_k=1^Np_ijp_jkp_ki.
In order to calculate the variance of X, let us, first, consider that X can be re-written as
X =∑_i=1^N∑_j=1^N∑_k=1^Na_ija_jka_ki=3·∑_i=1^N∑_j(>i)∑_k(>j)(a_ija_jka_ki+a_ika_kja_ji)≡3·∑_i<j<k(a_ija_jka_ki+a_ika_kja_ji)
i.e. as a sum over triads. Then, let us notice that
Var[X]=3^2·[∑_𝐈Var[a_𝐈]+2·∑_𝐈<𝐉Cov[a_𝐈,a_𝐉]]
where we have employed the multi-index notation, i.e. 𝐈≡(i,j,k) and 𝐉≡(l,m,n). More explicitly,
Var[a_𝐈] =Var[a_ija_jka_ki]+Var[a_ika_kja_ji]+2 Cov[a_ija_jka_ki,a_ika_kja_ji]
=p_ijp_jkp_ki(1-p_ijp_jkp_ki)+p_ikp_kjp_ji(1-p_ikp_kjp_ji)+2 Cov[a_ija_jka_ki,a_ika_kja_ji]
with Cov[a_ija_jka_ki,a_ika_kja_ji] depending on the adopted benchmark: under both the Erdös-Rényi Model and the Binary Configuration Model, it amounts to zero. Overall, thus,
∑_𝐈Var[a_𝐈]=∑_i<j<k[p_ijp_jkp_ki(1-p_ijp_jkp_ki)+p_ikp_kjp_ji(1-p_ikp_kjp_ji)].
Moreover,
Cov[a_𝐈,a_𝐉] =⟨(a_ija_jka_ki+a_ika_kja_ji)·(a_lma_mna_nl+a_lna_nma_ml)⟩-⟨ a_ija_jka_ki+a_ika_kja_ji⟩·⟨ a_lma_mna_nl+a_lna_nma_ml⟩
=⟨(a_ija_jka_ki+a_ika_kja_ji)·(a_lma_mna_nl+a_lna_nma_ml)⟩-(p_ijp_jkp_ki+p_ikp_kjp_ji)·(p_lmp_mnp_nl+p_lnp_nmp_ml)
is different from zero, i.e. any two triads co-variate as long as they share an edge. In this case, they form a diamond whose vertices can be labelled either as i≡ l, j≡ m, k, n or as i≡ m, j≡ l, k, n and induce the expression
Cov[a_𝐈,a_𝐉] =p_ijp_jkp_kip_jnp_ni-(p_ij)^2p_jkp_kip_jnp_ni+p_jip_ikp_kjp_inp_nj-(p_ji)^2p_ikp_kjp_inp_nj
=p_ij(1-p_ij)p_jkp_kip_jnp_ni+p_ji(1-p_ji)p_ikp_kjp_inp_nj.
Let us now calculate the number of times such an expression appears, i.e. the number of pairs of triads sharing an edge: since we need to first choose the pair of nodes individuating the common edge and, then, the pair of nodes individuating the `free' vertices of the two triads, such a number amounts to \binom{N}{2}\binom{N-2}{2}=N(N-1)(N-2)(N-3)/4; in case N=4, it amounts to 3!=6 - indeed, let us concretely focus on the triads (1,2,3), (1,2,4), (1,3,4), (2,3,4): (1,2,3) co-variates with (1,2,4), (1,3,4), (2,3,4); (1,2,4) co-variates with (1,3,4), (2,3,4); (1,3,4) co-variates with (2,3,4). Overall, then,
∑_𝐈<𝐉Cov[a_𝐈,a_𝐉]=3!·∑_i<j<k<n[p_ij(1-p_ij)p_jkp_kip_jnp_ni+p_ji(1-p_ji)p_ikp_kjp_inp_nj].
The comparison between the analytical estimations of the expected value and the variance of the number of triads and the numerical counterparts, obtained by explicitly sampling the ensembles induced by the Erdös-Rényi Model and the Binary Configuration Model is illustrated in figs. <ref> and <ref>.
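The expected value above reduces to Tr[𝐏^3] when the diagonal of 𝐏 is zero, and both moments can be validated by direct sampling. A minimal Monte Carlo sketch is:

import numpy as np

rng = np.random.default_rng(0)

def triadic_expected_value(P):
    # <X> = sum_{i,j,k} p_ij p_jk p_ki = Tr[P^3] (zero-diagonal P)
    return np.trace(P @ P @ P)

def triadic_monte_carlo(P, n_samples=1000):
    # empirical mean and variance of X = Tr[A^3] under independent entries
    X = np.empty(n_samples)
    for s in range(n_samples):
        A = (rng.random(P.shape) < P).astype(float)
        np.fill_diagonal(A, 0.0)
        X[s] = np.trace(A @ A @ A)
    return X.mean(), X.var()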
§ APPENDIX C. DIAGONALISATION AND TRACE OF THE MATRIX EXPONENTIAL
In this Appendix, we will provide a sketch of the proof that
f(𝐀)=𝐅f(Λ)𝐅^-1
and that
Tr[f(𝐀)]=Tr[𝐅f(Λ)𝐅^-1]=Tr[f(Λ)𝐅^-1𝐅]=Tr[f(Λ)],
i.e. that the trace is invariant under a cyclic permutation of matrices, in the special case f(·)≡ e^(·) and where 𝐅 is the matrix that diagonalises 𝐀, i.e. the one ensuring that 𝐅^-1𝐀𝐅=Λ.
Since the function of a matrix is formally identical to its series expansion, one can write that
e^𝐀≡𝐈+𝐀+𝐀^2/2!+𝐀^3/3!+…+𝐀^n/n!+…;
let us now diagonalise it:
𝐅^-1e^𝐀𝐅 ≡ 𝐅^-1𝐈𝐅+𝐅^-1𝐀𝐅+𝐅^-1𝐀^2𝐅/2!+𝐅^-1𝐀^3𝐅/3!+…+𝐅^-1𝐀^n𝐅/n!+…
= 𝐈+Λ+(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)/2!+(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)/3!+…
= 𝐈+Λ+(𝐅^-1𝐀𝐅)^2/2!+(𝐅^-1𝐀𝐅)^3/3!+…+(𝐅^-1𝐀𝐅)^n/n!+…
= 𝐈+Λ+Λ^2/2!+Λ^3/3!+…+Λ^n/n!+…≡ e^Λ.
Since all matrices appearing in the last row are diagonal, e^Λ is diagonal as well. As a consequence,
Tr[e^Λ] = ∑_i=1^N(e^Λ)_ii=∑_i=1^Ne^λ_i=∑_i=1^N1+∑_i=1^Nλ_i+∑_i=1^Nλ_i^2/2!+∑_i=1^Nλ_i^3/3!+…+∑_i=1^Nλ_i^n/n!+…
= Tr[𝐈]+Tr[Λ]+Tr[Λ^2]/2!+Tr[Λ^3]/3!+…+Tr[Λ^n]/n!+…
= Tr[𝐈]+Tr[𝐅^-1𝐀𝐅]+Tr[(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)]/2!+Tr[(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)(𝐅^-1𝐀𝐅)]/3!+…
= Tr[𝐈]+Tr[𝐅^-1𝐀𝐅]+Tr[𝐅^-1𝐀^2𝐅]/2!+Tr[𝐅^-1𝐀^3𝐅]/3!+…+Tr[𝐅^-1𝐀^n𝐅]/n!+…
= Tr[𝐈]+Tr[𝐀𝐅𝐅^-1]+Tr[𝐀^2𝐅𝐅^-1]/2!+Tr[𝐀^3𝐅𝐅^-1]/3!+…+Tr[𝐀^n𝐅𝐅^-1]/n!+…
= Tr[𝐈]+Tr[𝐀]+Tr[𝐀^2]/2!+Tr[𝐀^3]/3!+…+Tr[𝐀^n]/n!+…=Tr[e^𝐀],
where we have exploited the property of the trace of being invariant under circular shifts.
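The identity can also be verified numerically in a couple of lines; for instance (a random binary matrix is used purely for illustration):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 50
A = (rng.random((N, N)) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)

lhs = np.trace(expm(A))                         # Tr[e^A]
rhs = np.exp(np.linalg.eigvals(A)).sum().real   # sum_i e^{lambda_i}
print(lhs, rhs)                                 # agree up to numerical precision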
§ APPENDIX D. ENSEMBLE DISTRIBUTION OF THE SPECTRAL RADIUS
§ APPENDIX E. DUTCH INTERBANK NETWORK
§ ELECTRONIC MARKET FOR INTERBANK DEPOSIT
§ INTERNATIONAL TRADE NETWORK
§ APPENDIX F. INSPECTING THE ACCURACY OF THE CHUNG-LU APPROXIMATION
http://arxiv.org/abs/2409.02327v1 | 20240903223855 | Generative Principal Component Regression via Variational Inference | ["Austin Talbot", "Corey J Keller", "David E Carlson", "Alex V Kotlar"] | stat.ML | ["stat.ML", "cs.LG"]
Generative Principal Component Regression via Variational Inference
Austin Talbot, Corey J. Keller, David E. Carlson, Alex V. Kotlar
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
This research was supported in part by the National Institute of Mental Health under award number R01MH126639 (AT, CJK), and a Burroughs Wellcome Fund Career Award for Medical Scientists (CJK). This work was also funded via a donation from Gates Ventures to the Goizueta ADRC at Emory University for the support of innovative work in the areas of brain imaging, genomics, and proteomics (AK). (Corresponding author: Austin Talbot).
Austin Talbot is with Pillar Diagnostics Inc, Natick, MA 01760 USA (e-mail: [email protected]).
Corey J. Keller is with 1) the Department of Psychiatry and Behavioral Sciences, Stanford University, Palo Alto, CA 94301 USA (e-mail: [email protected]); 2) The Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; 3) Veterans Affairs Palo Alto Healthcare System.
David E. Carlson is with the Department of Civil and Environmental Engineering, Duke University, Durham, NC 27708 USA (e-mail: [email protected]).
Alex V. Kotlar is with the Department of Biomedical Informatics, Emory University, Atlanta, GA 30322 USA (e-mail: [email protected]).
§ ABSTRACT
The ability to manipulate complex systems, such as the brain, to modify specific outcomes has far-reaching implications, particularly in the treatment of psychiatric disorders. One approach to designing appropriate manipulations is to target key features of predictive models. While generative latent variable models, such as probabilistic principal component analysis (PPCA), are powerful tools for identifying targets, they struggle to incorporate information relevant to low-variance outcomes into the latent space. When stimulation targets are designed on the latent space in such a scenario, the intervention can be suboptimal with minimal efficacy. To address this problem, we develop a novel objective based on supervised variational autoencoders (SVAEs) that ensures such information is represented in the latent space. The novel objective can be used with linear models, such as PPCA, which we refer to as generative principal component regression (gPCR). We show in simulations that gPCR dramatically improves target selection in manipulation as compared to standard PCR and SVAEs. As part of these simulations, we develop a metric for detecting when relevant information is not properly incorporated into the loadings. We then show, in two neural datasets related to stress and social behavior, that gPCR dramatically outperforms PCR in predictive performance and that SVAEs exhibit low incorporation of relevant information into the loadings. Overall, this work suggests that our method significantly improves target selection for manipulation using latent variable models over competitor inference schemes.
Dimensionality reduction; Maximum likelihood estimation; Neuroscience; Principal component analysis
§ INTRODUCTION
Latent variable models, particularly factor models, serve as a foundational tool across a broad spectrum of scientific disciplines. This ubiquity is unsurprising due to their ability to distill complex, high-dimensional data into a more manageable, low-dimensional form and their quick parameter convergence, which allows the models to be inferred with relatively small sample sizes. They are used in astronomy to classify celestial bodies <cit.> and in genomics to aid in the visualization and analysis of single-cell data <cit.>. In the realm of social sciences, factor models uncover latent structures that can inform policy and planning decisions <cit.>. They are also heavily used in neuroscience <cit.>, as their structure aligns with the idea of “networks” of relevant brain activity giving rise to the observed covariates <cit.>. In this field, the strong correlations between the covariates make sparsity in dimensionality a highly desirable model feature <cit.>.
Beyond their use in exploratory data analysis, factor models are also used to develop hypotheses and targets for manipulations to modify an outcome or behavior associated with the data <cit.>. Beyond the scientific goal of using manipulation to provide evidence of causality <cit.>, manipulations are critical in many clinical applications <cit.>. Once the relationship between the factors and the outcome is known, targets can be chosen as influential covariates of the critical factors, as measured by the loadings. This approach has been used successfully to modify a diverse set of behaviors such as social activity <cit.>, aggression <cit.>, and anxiety <cit.>. Unfortunately, while factor models excel in scientific interpretability, practical application of factor models in designing manipulations is quite difficult. The outcomes considered are commonly low variance signals and are easily overshadowed by more dominant high-variance components <cit.>. Standard likelihood-based techniques may miss these subtle signals as they, by design, focus on explaining maximal variance. Because of this, fitting a predictive model subsequently to the generative model, as in principal component regression (PCR) <cit.> and, more broadly, in cutting the feedback <cit.>, has performed poorly on prediction in comparison to solely predictive models.
Addressing this problem often requires supervision, the incorporation of additional guiding signals—typically expressed as a loss function—that help steer the model towards learning representations that are specifically aligned with desired outcomes <cit.>. Supervised variational autoencoders (SVAEs) are a notable example of this approach <cit.>. They employ an encoder-decoder structure to both compress the data into a latent space and reconstruct it, with the added supervision ensuring the encoded representations are pertinent to the outcome of interest. This approach ostensibly combines the best aspects of both generative and predictive models; the generative component adds to scientific interpretability and regularizes the supervision loss while the supervision ensures that the learned space is relevant to the outcome <cit.>.
However, recent work has shown that SVAEs possess a critical flaw when the loadings are used to design manipulations; the supervision loss “drags” the encoder away from the generative posterior, the distribution of the latent variables conditioned on the covariates, as defined by the loadings <cit.>. In other words, the latent variables implied solely by the generative model are different than the latent variables inferred by the full SVAE, and the generative latent variables can be dramatically worse for predicting the outcomes of interest. The discrepancy between the encoder and generative model is highly undesirable, as it means that manipulations based on the loadings may not modify the predictive space as desired or with dramatically reduced efficacy. This property had escaped detection as the use of the generative arm of the SVAE for target selection is a more recent application and less frequent.
In this paper, we develop a novel inference algorithm to address the issue of incorporating predictive information in generative models. This algorithm is straightforward to implement in linear models; we term the resulting method generative principal component regression (gPCR), and it yields dramatically improved predictive performance from the latent variables implied by the generative model. This is accomplished by using the SVAE objective but replacing the encoder with the generative posterior. This objective can be viewed as a solution to three separate problems: (1) inferring a linear predictive model with sparsity in dimensionality as opposed to covariates, (2) inferring a factor model relevant to an outcome of interest, and (3) eliminating the discrepancy between the encoder and decoder in SVAEs to improve experimental design, in this case for manipulation target selection. In addition, we also empirically demonstrate the problems caused by the encoder/decoder discrepancy in SVAEs, as we are able to directly compare the SVAE encoder with the generative posterior rather than relying on indirect evidence for the discrepancy. We evaluate our method on two neuroscience applications, one detecting the electrophysiology associated with stress and the other associated with social behavior, and show that our method dramatically improves upon PCR and can match or exceed the performance of traditional predictive models. Finally, we show in synthetic data that our model provides superior identification of manipulation targets. Furthermore, we show that SVAEs exhibit similar behavior in the two neuroscience datasets, suggesting that this limitation is a real phenomenon rather than a theoretical concern and that our approach is a major advance in addressing this problem.
The contents of this paper are as follows: in Section <ref> we summarize relevant work that either inspired our method or seeks to address this problem. In Section <ref> we derive our novel inference method and discuss its properties. In Section <ref>, we provide an illustrative example using synthetic data demonstrating how gPCR improves upon PCR for predictive ability and SVAEs for target selection. In Section <ref> we demonstrate our inference algorithm's efficacy on multiple neuroscience datasets, along with illustrating the deficiencies of the commonly used SVAE. Finally, in Section <ref> we provide some brief remarks and potential future directions of this work. All models are implemented in the publicly available Bystro github repository <https://github.com/bystrogenomics/bystro> and all code required to reproduce the figures is located at <https://github.com/bystrogenomics/bystro-science>.
§ RELATED WORK
There are several areas of active research related to this work. The first is work on improving the predictive ability of latent variable models. One of the initial methods used thresholding to select the most predictive covariates <cit.> and then used these features for principal component regression. While effective, this has the drawback of not including all covariates in the generative model, which is often undesirable scientifically. Other alternatives focus on making the generative model more flexible to reduce the impact of misspecification, introducing extra latent variables in partial least squares <cit.>, canonical correlation analysis <cit.>, or Bayesian nonparametric models <cit.>. However, this additional flexibility often fails to improve predictive performance, which has led to methods for explicitly incorporating the auxiliary information <cit.>, <cit.>. However, these methods have also struggled to properly incorporate information into the latent space <cit.>.
Another relevant area of research concerns the use of pseudolikelihoods, commonly employed in Markov random fields, in place of traditional likelihoods <cit.>. These methods replace the joint likelihood of the observed covariates with conditional likelihoods of each of the covariates conditioned on the remaining values, which avoids evaluating a computationally intractable normalization constant. While superficially similar to gPCR, there are two critical differences. First, gPCR includes a joint likelihood of the remaining covariates, making guarantees for likelihood-based inference still applicable. Second, and more importantly, gPCR upweights a specific conditional distribution of interest to improve predictive performance on the auxiliary variable.
Finally, recent developments in variational inference are also relevant to our work. Variational autoencoders allow for tractable inference on a wide variety of models <cit.> by optimizing a lower bound on the likelihood <cit.>. This can be done by using a neural network “encoder” to approximate the generative posterior and then using sampled values from the encoder to evaluate the generative model. This objective can be easily optimized using stochastic methods <cit.>, allowing for usage with large datasets and complex models. Furthermore, automatic differentiation in modern packages such as PyTorch allows such models to be easily implemented.
§ DERIVING THE GENERATIVE PCR OBJECTIVE
We start by defining notation. We are given demeaned samples {x_i}_i=1:N∈ℝ^p and associated outcomes {y_i}_i=1:N∈𝒴. Our objective is two-fold: we wish to develop a generative model with parameters θ to model x and we would like this generative model to encode information about y. After specifying a prior p_θ(z) on the latent variables and the distribution of x conditioned on z, p_θ(x|z), we obtain a model for x as p_θ(x)=∫ p_θ(x|z)p_θ(z)dz. A natural and common way to model y in this framework is to specify p_θ(y|z) and assume conditional independence between x and y given z <cit.>.
In this work, when developing practical inference methods, we will limit ourselves to linear models. That is, we assume that
p_θ(z) =N(0,I_L),
p_θ(x | z) =N(Wz,Λ),
where W∈ℝ^p× L and Λ is a diagonal matrix. This formulation corresponds to probabilistic PCA in the special case that Λ=σ^2I. However, we do not place any limitations on p_θ(y|z). Given the widespread use of linear models in a variety of scientific disciplines, this work yields a widely-applicable model <cit.>.
§.§ Emphasizing the Desired Predictive Distribution
Many of the difficulties in modern predictive tasks are due to the high dimensionality of x resulting in difficult inference for θ. One might assume that latent variable models would inherently possess superior performance and would be optimal under a perfectly specified model, as p_θ(y|z) is a low-dimensional distribution. However, when the number of samples dramatically exceeds the number of parameters, solely predictive models tend to predict better in practice. The reason for the performance gap is simple: in the classical regime, the parameters of a generative model that are effective at regularization are incredibly restrictive. This, combined with the fact that the total variance in the high-dimensional x is substantially larger than the variance in y, means that even minor misspecification in the generative model encourages the model to sacrifice p_θ(y|x) in favor of p_θ(x) under likelihood-based inference <cit.>. To restate, even simple types of model misspecification, such as underestimating the true latent dimensionality, will dramatically degrade performance if y is correlated with the lower variance variables.
A natural method to address this issue is to simply upweight the desired conditional distribution and maximize the modified objective. Such an objective (suppressing penalization terms or priors on θ for clarity) is
max_θ∑_i=1^N log p_θ(x_i) + μlog p_θ(y_i|x_i),
where μ is the tuning parameter controlling the emphasis on the predictive distribution of y. A value of μ=1 corresponds to the standard maximum likelihood objective of the joint distribution, while larger values of μ correspond to an increasing emphasis on the specific conditional distribution. We can see from this that in most practical applications, μ will have to be very large, as log p_θ(x) will be very large relative to log p_θ(y|x). The objective above can be obtained rigorously as a Lagrangian relaxation <cit.> of maximizing the generative log likelihood with a constraint on the predictive distribution. Alternatively, this approach can be viewed as tempering the predictive distribution <cit.> to increase its relative importance.
§.§ Introducing a Targeted Variational Lower Bound
Unfortunately, while the term log p_θ(x) in (<ref>) has an analytic form in linear models, the term log p_θ(y|x)=∫ p_θ(y|z)p_θ(z|x)dz does not unless y is also Gaussian. This is suboptimal for many classification applications, where logistic <cit.> or probit losses <cit.> are desirable for both theoretical and practical reasons. However, in this work we develop a second novel objective that eliminates this constraint using the same methods as variational autoencoders. This allows the use of any predictive loss or distribution to ensure a phenotypically relevant latent space.
To do this, we will make use of the following decomposition of the log likelihood,
log p_θ(x)=-D_KL(p_θ(z|x)|p_θ(z)) + E_p_θ(z|x)[log p_θ(x|z)]
When p_θ(z|x) is replaced by a density q_ϕ(z|x) with parameters ϕ, Equation (<ref>) becomes the classic evidence lower bound used in variational inference <cit.>. We can use this decomposition, combined with the conditional independence of x and y given z to rewrite the maximum likelihood objective as
max_θ∑_i=1^N log p_θ(x_i,y_i) =
max_θ∑_i=1^N -D_KL(p_θ(z|x_i,y_i)|p_θ(z)) +
E_p_θ(z|x_i,y_i)[log p_θ(x_i|z) + log p_θ(y_i|z)]
This looks similar to (<ref>), as we now have separated the joint distribution into a reconstruction term and a predictive term, with an additional term quantifying the divergence between the posterior on the latent variables and the prior.
At this point we introduce a variational approximation and replace p_θ(z|x_i,y_i) with p_θ(z|x_i); the resulting objective functions as a lower bound on the likelihood. While we will elaborate further below, the reason we use p_θ(z|x_i) rather than a more flexible q_ϕ(z|x_i) with new parameters ϕ is to ensure that any predictive information about y in the latent space is, by necessity, contained in the loadings. This variational approximation yields a lower bound on the likelihood, as we are omitting the information that y provides about the latent space. With this substitution we can recombine the first two terms and weight the third term to obtain our robust variational objective
max_θ∑_i=1^N log p_θ(x_i) + μ E_p_θ(z|x_i)[ log p_θ(y_i|z)]
This variational objective is almost identical to (<ref>), as it leaves the generative likelihood unaltered. However, it does not require integrating out z, which makes it compatible with the reparameterization trick used in variational autoencoders. This form also provides an intuitive justification for the variational lower bound. If the supervision term were E_p_θ(z|x_i,y_i)[log p_θ(y_i|z)], the model would simply rely on y_i to infer a relevant latent space rather than ensuring that the latent space is relevant even absent knowledge of the outcome, resulting in a large discrepancy between p_θ (z|x,y) and p_θ (z|x).
This formulation ensures that p_θ (z|x), and by extension p_θ (y|x)=∫ p_θ (y|z) p_θ (z|x)dz, is prioritized.
§.§ Inference Via Gradient Descent
Nothing in the previous section requires that p_θ (x) be a linear model. However, we have limited our consideration to linear models, as our practical inference scheme requires both p_θ (z|x) and -D_KL(p_θ (z|x)|p_θ (z)) to be analytic in (<ref>).
If these quantities are available, inference can be performed using the reparameterization trick from standard VAEs <cit.>, with the parameters of the encoder now defined by the generative model. Because of this, we restrict the model in (5) to be linear, yielding the gPCR objective.
This novel objective is straightforward to optimize with gradient descent-based methods, employing the same techniques used for variational autoencoders. However, unlike traditional variational autoencoders, we found that batch training yielded superior performance to stochastic methods. A potential explanation for this behavior is that the combination of a simple architecture, an analytic generative likelihood, and the lack of a separate encoder makes the objective substantially better behaved. Thus, rather than providing necessary regularization, stochastic methods instead result in a slower convergence rate to a good local optimum. We also found that gradient descent with momentum substantially outperformed more modern methods such as Adam <cit.>.
A final benefit of our formulation in linear models is that inference has the same computational complexity as standard linear regression. Matrix inversion (required for evaluating the Gaussian likelihood) generally scales as 𝒪(p^3); by exploiting the Sherman-Morrison-Woodbury matrix identity, we can reduce the computational cost to 𝒪(L^2 p) while maintaining the ability to propagate gradients. When p is substantially larger than L, as is commonly the case in latent variable models, the L^2 term is nearly insignificant. Thus, from a computational point of view, the model described here has no drawbacks compared to any other version of regression lacking a closed-form solution (such as LASSO).
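The following PyTorch sketch illustrates one way to evaluate the (negative) gPCR objective with the 𝒪(L^2 p) costs described above; it assumes a Bernoulli (logistic) outcome model, all names are ours, and it is not the implementation provided in the Bystro repository.

import torch

def gpcr_loss(X, y, W, log_lmbda, beta, mu=100.0, n_mc=1):
    # X: (n, p) demeaned data; y: (n,) float 0/1 outcomes;
    # W: (p, L) loadings; log_lmbda: (p,) log of the diagonal of Lambda;
    # beta: (L,) predictive coefficients; mu: supervision strength.
    n, p = X.shape
    L = W.shape[1]
    lmbda_inv = torch.exp(-log_lmbda)

    # Posterior p(z|x) = N(Sigma_z W^T Lambda^{-1} x, Sigma_z),
    # with Sigma_z = (I_L + W^T Lambda^{-1} W)^{-1}; forming M costs O(L^2 p).
    M = torch.eye(L) + W.T @ (lmbda_inv[:, None] * W)
    Sigma_z = torch.linalg.inv(M)
    mu_z = X @ (lmbda_inv[:, None] * W) @ Sigma_z

    # log p(x) under N(0, W W^T + Lambda), via Woodbury and the determinant lemma.
    logdet = torch.logdet(M) + log_lmbda.sum()
    Xl = X * lmbda_inv
    quad = (Xl * X).sum() - ((Xl @ W) @ Sigma_z * (Xl @ W)).sum()
    loglik_x = -0.5 * (quad + n * logdet
                       + n * p * torch.log(torch.tensor(2.0 * torch.pi)))

    # E_{p(z|x)}[log p(y|z)] estimated with the reparameterization trick.
    chol = torch.linalg.cholesky(Sigma_z)
    supervision = 0.0
    for _ in range(n_mc):
        z = mu_z + torch.randn(n, L) @ chol.T
        logits = z @ beta
        supervision = supervision - torch.nn.functional.binary_cross_entropy_with_logits(
            logits, y, reduction="sum") / n_mc

    return -(loglik_x + mu * supervision)

Optimizing W, log_lmbda, and beta with stochastic-free full-batch gradient descent with momentum (e.g. torch.optim.SGD) then corresponds to the batch training described above.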
§ SYNTHETIC RESULTS
We now provide an in-silico demonstration of how gPCR dramatically improves on PCR and potentially improves experimental manipulation efficacy by eliminating the encoder/decoder discrepancy present in SVAEs. Let the data generation mechanism be
p(z) =N(0,Λ),
p(x|z) =N(Wz,σ^2 I),
p(y^*|z) =N(z_1,τ),
y =1_y^*>0,
where Λ is a diagonal matrix of ones except in the first entry which is substantially smaller than 1 and z_1 denotes the first element in z. In other words, information about y is encoded in the lowest variance component. In this simulation, we set p=440, L=10, and σ^2=1 with a sample size of 2000. For ease of visualization, W_1 was 1 for the first 40 covariates and 0 for the remainder. The remaining factors were generated as W_ij∼ N(0,1), with no constraint on orthogonality. This lack of orthogonality was chosen as it allows for the encoder to perform “double-duty” due to the overlap between a high-variance component with no predictive ability and a low-variance highly predictive component. We then fit three models, PCR with logistic regression, an SVAE, and a model using our novel objective, all with 5 latent variables representing the common situation where the number of estimated components is fewer than the true number of components.
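A data-generating sketch along these lines (the values of the low-variance entry of Λ and of τ are not stated above and are illustrative assumptions) is:

import numpy as np

rng = np.random.default_rng(0)
N, p, L_true = 2000, 440, 10
sigma2, tau, low_var = 1.0, 0.1, 0.1      # low_var and tau are assumed values

Lambda = np.ones(L_true)
Lambda[0] = low_var                        # predictive factor has low variance
W = rng.normal(size=(p, L_true))
W[:, 0] = 0.0
W[:40, 0] = 1.0                            # supervised network: first 40 covariates

Z = rng.normal(size=(N, L_true)) * np.sqrt(Lambda)
X = Z @ W.T + np.sqrt(sigma2) * rng.normal(size=(N, p))
y_star = Z[:, 0] + np.sqrt(tau) * rng.normal(size=N)
y = (y_star > 0).astype(int)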
In the SVAE, we used an affine encoder q_ϕ(z|x)=N(Ax,D), corresponding to the standard VAE setup of a separate parameterization for the mean and a diagonal covariance D. This simple encoder is not restrictive in this situation, as the true posterior p_θ(z|x)=N((σ^2Λ^-1+W^TW)^-1W^Tx,σ^2(σ^2Λ^-1+W^TW)^-1) is also Gaussian. Furthermore, we induce sparsity by supervising only the first latent variable, aligning with previous work <cit.>. This choice of sparsity enhances interpretability, as the loadings of the supervised factor are a scaled version of the predictive coefficients. This synthetic formulation has two enormously beneficial properties: (1) we can directly evaluate the impact of separating the encoder from the decoder without concerns about encoder capacity that would occur in deeper models and (2) we can directly evaluate the discrepancy between q_ϕ(z|x) and p_θ(z|x). We choose the correlation between the posterior mean and the encoder mean as our measure of similarity.
The different supervised components for all models are shown in the first row of figure <ref>. On the left we plot both the encoder and decoder of the SVAE, the middle shows the coefficients of the learned linear model using PCR, which is a composition of the linear transformation and the subsequent regression coefficients, and on the right we show the loadings of gPCR, which are a scaled version of the predictive coefficients. We can see that the SVAE encoder and decoder differ dramatically. The encoder clearly detects that the first 40 covariates are relevant to the outcome, while the picture given by the decoder is less clear. Certainly, the first 40 covariates are highly influential, but there are a substantial number of nonzero loadings among the remaining irrelevant coefficients. While the encoder almost exclusively focuses on the predictive information, the decoder becomes a superposition of several networks: it explains the variance in both the supervised network and in some of the remaining, non-orthogonal networks. The PCR objective has captured minimal information relevant to the outcome, which is unsurprising, as the largest networks have minimal overlap with the supervised network. As a result, a large quantity of irrelevant high-variance networks becomes incorporated in the resulting predictive model. Meanwhile, only gPCR is able to clearly separate the relevant coefficients from the irrelevant coefficients in the learned network.
We then show some of the signs that there is significant divergence between the encoder and decoder in an SVAE. First, as we visualize in the bottom left figure, the correlation between the posterior mean and encoder mean of the SVAE is dramatically reduced in the supervised factor as compared to an unsupervised factor. Given that supervision is isolated to a single factor, the encoder for the unsupervised networks is free to learn the optimal encoding for reconstruction loss. We can further detect this problem through the dramatic drop in the predictive ability of the generative posterior as compared to the encoder, as visualized in the middle plot. The encoder obtains almost perfect predictive ability with an AUC of 0.995. However, the predictions made by the latent variables inferred from the decoder (generative model) drop to 0.83. While this is an improvement over standard PCR (AUC of 0.77), it is dramatically degraded from the performance we would expect from the encoder. On the other hand, gPCR achieves an AUC of 0.96, which is close to the predictive performance achievable by regression-based models.
Where gPCR truly shines is when the generative parameters are used to design stimulation procedures. We assume a causal relationship between x and y. We then create 100 distinct synthetic “stimulations” by shifting the mean of 10 randomly selected covariates by 1, chosen from among the 50 largest covariates as measured by the generative parameters. We then examine the shift in E[y^*] given each of the different stimulation techniques. This reflects the common biological situation where there are multiple candidates for stimulation given a network and the final protocol is chosen based on secondary criteria such as ease of access. The distribution of these stimulation procedures is shown in the bottom right. The protocols developed via PCR are minimally effective, which is unsurprising given that many of the influential covariates are independent under the true model. The SVAE is more effective, which we could see given that the supervision did alter the decoder to weight the initial 40 covariates higher. However, target selection via gPCR is by far the most effective, with the average shift being 0.89, as opposed to 0.18 for PCR and 0.41 for the SVAE. As a result, we expect that stimulation targets in real datasets using gPCR should dramatically outperform SVAE and standard PCR.
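One plausible way to score such protocols in simulation, under the true generative model, is to propagate a unit mean shift of the selected covariates to E[z_1|x] and hence to E[y^*]; the sketch below reflects our reading of the procedure and is not taken verbatim from the paper.

import numpy as np

rng = np.random.default_rng(0)

def average_stimulation_shift(w_hat, W_true, Lambda, sigma2,
                              n_protocols=100, n_targets=10, n_candidates=50):
    # w_hat: (p,) loadings/coefficients used to rank candidate covariates.
    p = len(w_hat)
    candidates = np.argsort(-np.abs(w_hat))[:n_candidates]
    # Sensitivity of E[z_1|x] (and hence E[y*]) to a unit shift of each covariate,
    # under the true model: e_1^T Lambda W^T (W Lambda W^T + sigma2 I)^{-1}.
    Sigma_x = W_true @ np.diag(Lambda) @ W_true.T + sigma2 * np.eye(p)
    sensitivity = Lambda[0] * W_true[:, 0] @ np.linalg.inv(Sigma_x)
    shifts = []
    for _ in range(n_protocols):
        targets = rng.choice(candidates, size=n_targets, replace=False)
        delta = np.zeros(p)
        delta[targets] = 1.0
        shifts.append(float(sensitivity @ delta))
    return np.mean(shifts)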
§ IMPUTATION AND PREDICTION IN NEURAL DATASETS
We demonstrate the advantages of our novel inference algorithm on two neuroscience datasets. The first dataset is publicly available <cit.> and contains electrophysiological measurements of mice in a tail suspension experimental paradigm (TST). The objective of this experiment was to characterize electrophysiology in an animal model relevant to bipolar disorder. The recordings came from 26 mice, which were observed under various conditions, ranging from non-stressful (home cage) to highly stressful (tail suspension), over a 20-minute period while continuously recording local field potentials (LFPs) in 11 distinct brain regions. We segmented these recordings into 1-second intervals and estimated the spectral power in 1 Hz intervals from 1 to 56 Hz after performing preprocessing steps described in <cit.>, generating a total of 616 covariates. In this work we use the standardized log-transformed features, which is a common approach from signal processing <cit.>.
The second dataset (social) included electrophysiology from 28 mice recorded in 8 brain regions on multiple days. In each recording session, the mice were placed in a two-chambered social assay for 10 minutes. The mice were allowed to wander freely; in one chamber they were able to interact with another mouse (social interaction), while the other contained an inanimate object (non-social interaction). The location of the mouse was tracked during the entire recording and was used as a proxy for social or non-social interaction. The initial objective of the experiment was to uncover the brain activity relevant to social interactions, with the ultimate goal of developing stimulation targets to enhance social behavior, as medication currently struggles to treat social deficiencies in some disorders <cit.>. We used the feature extraction steps described above to obtain 448 spectral power covariates.
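For concreteness, a feature-extraction sketch in this spirit (Welch's method at 1 Hz resolution; the windowing and normalization choices are illustrative assumptions, not the preprocessing of the cited pipeline) is:

import numpy as np
from scipy.signal import welch

def lfp_spectral_features(lfp, fs, fmin=1, fmax=56):
    # lfp: (n_regions, n_timepoints) array; fs: integer sampling rate in Hz.
    n_regions, T = lfp.shape
    n_seconds = T // fs
    feats = []
    for s in range(n_seconds):
        seg = lfp[:, s * fs:(s + 1) * fs]
        f, pxx = welch(seg, fs=fs, nperseg=fs, axis=-1)   # 1 Hz resolution
        band = (f >= fmin) & (f <= fmax)
        feats.append(np.log(pxx[:, band]).ravel())
    X = np.array(feats)
    return (X - X.mean(axis=0)) / X.std(axis=0)           # standardized log power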
§.§ Regression: Imputing Unobserved Brain Activity
The first application of our method is imputing the dynamics of a missing brain region using the remaining regions in the TST dataset. This task is useful in its own right, as missing data occur for two common reasons. First, electrode failure is often observed, and while multiple electrodes are placed in each region, occasionally all electrodes fail or yield low-quality recordings, resulting in no usable data from the specific region. Often, the data from these mice are not used, resulting in weeks of wasted effort. Second, data from multiple experiments are often used in a single study, for example, using mice from a different behavioral paradigm as a validation set for a specific hypothesis <cit.>. Depending on the priorities of the separate experiments, the recorded regions may not align, resulting in the need to infer the missing dynamics. For the purposes of this work, another advantage of a regression-based task is that it allows us to make direct comparisons with multiple alternative methods beyond principal component regression (PCR), namely partial least squares (PLS) and canonical correlation analysis (CCA). In this specific application, we are not limiting supervision to a single factor and instead use all latent factors for prediction. This reflects a difference in goal: rather than selecting stimulation targets, we simply want to monitor activity in a potentially unmeasured region.
In this experiment, we divided training and test sets by mouse to evaluate performance on new animals <cit.> and repeated each experiment 50 times to obtain confidence intervals. In all dimension reduction models, 20 components were used. The results for several representative brain regions are shown in Table <ref>. We can see that Elastic Net outperforms the traditional methods PCR, PLS, and CCA universally. In some brain regions, such as accumbens or mSNC, the difference in performance is dramatic, with the MSE in accumbens being a third of the PCR MSE and a quarter of the CCA MSE. Surprisingly, canonical correlation analysis, which is meant to address the issues outlined previously, underperforms standard PCR in many regions. Our novel objective, in contrast, dramatically improves on the competitor methods, being close to linear regression in performance in all regions.
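For reference, a minimal sketch of the baseline comparison on this imputation task is given below; the arrays, the 400/100 split, and the regularization strength are placeholders (in the actual experiment the split is by mouse and regularization is cross-validated), and the gPCR model itself is not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression, CCA
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# X_other: features from the observed regions; Y_target: the held-out region's features.
X_other, Y_target = rng.normal(size=(500, 560)), rng.normal(size=(500, 56))
train, test = np.arange(400), np.arange(400, 500)

def pcr(Xtr, Ytr, Xte, n_components=20):
    # Principal component regression: regress the target on the top PCA scores.
    pca = PCA(n_components=n_components).fit(Xtr)
    return LinearRegression().fit(pca.transform(Xtr), Ytr).predict(pca.transform(Xte))

preds = {
    "PCR": pcr(X_other[train], Y_target[train], X_other[test]),
    "PLS": PLSRegression(n_components=20).fit(X_other[train], Y_target[train]).predict(X_other[test]),
    "CCA": CCA(n_components=20).fit(X_other[train], Y_target[train]).predict(X_other[test]),
    "EN": ElasticNet(alpha=0.1).fit(X_other[train], Y_target[train]).predict(X_other[test]),
}
for name, pred in preds.items():
    print(name, mean_squared_error(Y_target[test], pred))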
There are two important takeaways from these results. First, as previously mentioned, the capacity of generative models to make predictions is not the issue behind their underperformance. Rather, it is a failure of likelihood-based inference methods to emphasize the desired characteristics of the model, namely good prediction. Second, the variational approximation does not impede predictive ability, as we nearly match the performance achievable with linear models. It is important to emphasize that unlike the predictive model, whose sole purpose is to impute unobserved dynamics, this is a full generative model that characterizes p_θ(x), and as such can be used for clustering <cit.>, anomaly detection <cit.>, and other tasks that a purely predictive model cannot perform. Together, these results give us confidence that our inferred models achieve excellent predictive performance even where a consistent estimator is not available, as in the subsequent classification tasks.
§.§ Prediction: Stress Versus Nonstress Conditions
We switch to classification tasks based on the original experimental justification. We no longer have an analytic form for p_θ(y|x), meaning that we cannot compare to PLS or CCA without changing from a logistic loss. However, PCR and logistic regression are still viable competitor methods. We start with the TST dataset and predict stress vs non-stress using the log spectral power features previously described. We compare our performance to PCR, L_1, L_2, and Elastic Net regression cross-validating over regularization strengths. We impose a sparseness penalty on the predictive coefficients of p_θ(y|z) to supervise only the first factor, similar to <cit.>. In addition to improved biological interpretability (one network responsible for one behavior), it allows us to evaluate the effect that supervision has on the latent space.
We find that our supervised model almost matches the predictive performance of regression-based methods, with an AUC of 0.91±0.003, as opposed to 0.94±0.003 for L_1, L_2, and Elastic Net (EN) regression. However, this is a dramatic improvement over the performance of PCR, which has an AUC of 0.82±0.001. The predictive ability of this particular task is abnormally high, due to the dramatic differences between stressful and non-stressful conditions in mice. Because of this, even generative models are able to yield respectable predictive performance. However, even in these trivial tasks, predictive models yield superior performance.
While gPCR is unable to quite match the performance of regression-based models, it has dramatically more interpretable predictive coefficients, which are plotted in figure <ref>. The figure shows the coefficients as a function of frequency in four representative brain regions. Positive coefficients indicate that spectral power is amplified in that band under stress, while negative coefficients indicate that power is suppressed. These features largely align between the different models, with an increase in power in NAC between 10 and 20 Hz and suppression of power in mSNC at 10 Hz. However, the gPCR coefficients show dramatically smoother trajectories, as we would expect based on the data. In any particular region, the effect of 10 Hz power should be largely similar to the effect of 11 Hz power. The jagged coefficients seen in the regression models are unrealistic and highlight the advantages of the latent variable viewpoint over a shrinkage viewpoint.
§.§ Prediction: Social Versus Nonsocial Interactions
We now move on to an application that was a primary motivation for developing these algorithms: distinguishing social from non-social interactions. Unsurprisingly, the differences between stress and non-stress conditions are substantially stronger than the differences between social and non-social interaction, which is reflected in the weaker predictive performance observed in the latter experiment. This provides an ideal demonstration of the utility of gPCR, as we are now searching for relevant dynamics that are very weak. It is important to emphasize, however, that while the predictive relationships are certainly weaker, an AUC of 0.57 was sufficient to design a stimulation protocol that successfully modified behavior <cit.>.
We found that L_1 regression yielded an AUC of 0.554±0.005, EN regression had an AUC of 0.572±0.004, and L_2 regression had an AUC of 0.575±0.004. Remarkably, gPCR outperformed LASSO regression with an AUC of 0.57±0.005 while matching the performance of L_2 and EN regression. Given that gPCR must perform the additional task of reconstructing the data under a strong constraint on the predictive parameters, this was quite surprising. We found the origin of this discrepancy to be overfitting on the part of the pure regression models: when the AUCs on the training set were examined, L_1 regression outperformed gPCR (AUC of 0.63 and 0.61, respectively). Meanwhile, the PCR model had no predictive information, with an AUC of 0.51±0.001, even though the chosen dimensionality is large by the standards of neuroscience. Supervision in gPCR makes the difference between having no predictive ability and outperforming predictive models. This suggests several important conclusions. First, the posited latent network hypothesis is biologically realistic, as quantified in an objective comparison with predictive models that do not share this assumption. Second, it provides strong evidence of the efficacy of generative models as regularizers for predictive models. While all sparsity regularization was cross-validated, for the latent variable models only a single set of hyperparameters, chosen for strong empirical performance, was used, purely due to computational constraints.
This dataset also provides an opportunity to evaluate the claim of improved parameter interpretability of a generative model as opposed to predictive models such as LASSO. While the previous task was sufficiently predictive that the penalization term was inconsequential, this task is sufficiently difficult that the penalization scheme makes a dramatic difference in the resulting coefficients, plotted in figure <ref>. LASSO and Elastic Net perform as they were designed, with the inherent sparsity assumption shrinking most of the coefficients to 0. Ridge regression did not shrink the coefficients to 0 and captured the expected smooth variation; however, the large amount of shrinkage required resulted in most of the coefficients being infinitesimal. The gPCR model, on the other hand, had the large coefficients with relatively smooth variation that we would expect. This is unsurprising: the requirement that the factor perform double duty of prediction and variance explanation in the electrophysiology forces these coefficients to be non-trivial and relatively smooth. As a result, some dynamics missed by the other regression models are captured by gPCR, which increases the variety of potential targets for stimulation. Given that some regions are more accessible than others, it is highly desirable that the model not run the risk of eliminating targets that are correlated but slightly less predictive in favor of a covariate that is difficult to modify.
§.§ Exposing the Deficiencies of SVAE Loadings for Target Selection
As our last contribution, we compare the results of fitting an SVAE as opposed to gPCR on the two neuroscience datasets. We use the same methodology as in the second synthetic example to compare the posterior with the encoder and to quantify any discrepancies in the latent space. Unfortunately, due to the expense and time required to collect the data, performing a second stimulation protocol based on an SVAE that is hypothesized to perform worse is simply not viable. However, we can compare the other characteristics from the synthetic example that would suggest suboptimal loadings in an SVAE, namely lower correlations between the encoder and posterior means along with a drop in predictive accuracy when using the posterior mean for prediction.
We show the relevant results from the model for the TST task in figure <ref>. We can see dramatic differences between the learned encoder and the true posterior, as shown in the top left and top middle plots respectively. There is substantial jaggedness in the encoder that is not present in the decoder, which in part stems from the additional regularization required to prevent overfitting on the predictive task. However, this is not the critical issue; instead, the critical flaw is that although the decoder shows power amplification in all regions over a wide range around 10 and 50 Hz, the encoder certainly does not support that conclusion. Furthermore, we can see a dramatic drop in predictive ability as shown by the ROC curves in the top right panel, with the encoder achieving an AUC of 0.93 while the posterior has an AUC of 0.83. We can see this discrepancy in the latent space, as shown in the bottom right panel, where the correlation between the two scores is only ρ=0.88. While there are some visual discrepancies between the encoder and decoder in the generative factors, as shown by the bottom left and center panels, the latent states determined by the two methods correlate very strongly with ρ=1.0. In aggregate, these results are similar to those seen in the synthetic example. The discrepancies are even stronger in the social preference task, as visualized in figure <ref>. Here, the generative posterior has no predictive ability (AUC of 0.51) and the estimates of the latent variables via the encoder have a substantially lower correlation with those provided by the generative model, ρ=0.39. Thus, while we are unable to perform the experiment in vivo, these results strongly suggest that stimulation techniques based on gPCR would dramatically outperform those based on an SVAE, particularly in the social/non-social task.
§ CONCLUSION
Generative models, such as factor analysis, have many desirable properties, such as allowing for easy covariate imputation, a desirable scientific interpretation, and quick parameter convergence in terms of sample size. Unfortunately, they have been ignored in many predictive applications, as even mild model misspecification often results in poor predictive performance unless the predictive task aligns with the high-variance components. Here, we develop a novel inference objective that allows researchers to maintain all desirable properties of generative modeling while ensuring that the latent variables are relevant to scientific questions. This is done by emphasizing a specific predictive distribution using a variational objective, which encourages the model to be predictive in terms of the generative parameters. We show that it is critical that this variational lower bound be obtained in terms of the generative posterior and that such an approach is competitive with traditional linear models in multiple applications. Furthermore, by avoiding the incorporation of a separate decoder, this approach forces the relevant information to be incorporated into the generative features, which is critical in many stimulation-based applications.
This work also leaves several promising avenues for extension. The most prominent is relaxing the requirement that p_θ(z|x) and D_KL(p_θ(z|x)|p_θ(z)) be analytic, allowing this technique to be used in a broader class of latent variable models. The second is further exploring why the SVAE approach struggles to incorporate the phenotypically relevant information into the generative parameters. Under the current model assumptions, the posterior mean can be represented exactly by the linear encoder used in the variational lower bound, making such large discrepancies surprising. Finally, it would be helpful to demonstrate experimentally that stimulation based on gPCR outperforms competitor methods.
§ ACKNOWLEDGMENT
ChatGPT-4o was used for editing and grammar enhancement throughout the document.
|
http://arxiv.org/abs/2409.02654v1 | 20240904122914 | On the critical group of the k-partite graph | [
"Xinyu Dong",
"Guangfeng Jiang",
"Weili Guo"
] | math.CO | [
"math.CO",
"05C50, 20K01"
] |
College of Mathematics and Physics, Beijing University of Chemical Technology, 15 North Third Ring East Road, Beijing 100029, China
§ ABSTRACT
The critical group of a connected graph is closely related to the graph Laplacian and is of high research value in combinatorics, algebraic geometry, statistical physics, and several other areas of mathematics. In this paper, we study the k-partite graphs and introduce an algorithm to obtain the structure of their critical groups by calculating the Smith normal forms of their graph Laplacians. For k from 2 to 6, we characterize the structure of the critical groups completely, generalizing the known results for complete bipartite graphs.
critical group, graph Laplacian, k-partite graphs, Smith normal form
[2020] 05C50 20K01
§ INTRODUCTION
The chip-firing game is a discrete dynamical model studied by physicists in the context of self-organized criticality. The basic rule of the model is that chips (sand, dollars) are exchanged between the sites of a network. When the model system reaches a particular state, even a very small perturbation can lead to collapse. For instance, the addition of a grain of sand can cause a massive avalanche. In nature, there are a huge number of similar phenomena, such as fires, earthquakes, extinction of species, and many others. The study of such phenomena is of vital significance, and a large number of scientists have obtained numerous results, including a group structure that is an important algebraic invariant associated with the chip-firing process.
In 1990, Lorenzini <cit.> introduced the group of components Φ(G), approaching the chip-firing game from the viewpoint of arithmetic geometry. Dhar <cit.> defined the sandpile group from the perspective of physics. In 1997, Bacher <cit.> named this group the Jacobian and the Picard group while working on the various lattices formed by graphs. In 1999, Biggs <cit.> defined this group as the critical group while studying an economic process based on the theory of chip firing.
Furthermore, the critical group of a graph is closely tied to the structure of the graph. From Kirchhoff's Matrix Tree Theorem <cit.>, we get the following two formulas.
(i) The order κ(G) of the critical group of a graph G is equal to the number of spanning trees in the graph,
κ(G)=(-1)^{i+j} det L̃(G),
where L̃(G) is the reduced Laplacian matrix obtained from L(G) by striking out any row i and column j.
(ii) If the eigenvalues of L(G) are indexed λ_1, …, λ_n-1, λ_n, where n is the number of vertices of G and λ_n=0, then
κ(G)=λ_1⋯λ_n-1/n .
From (i), we note that the critical group can be used to study the corresponding graph. Moreover, the critical group of a connected graph is a finite Abelian group. In 1990, Rushanan <cit.> studied the analogous group associated with the Smith normal form of adjacency matrices, known as the Smith group. The algebraic structure of the critical group of a graph can therefore be read off from the Smith normal form of its Laplacian (or adjacency) matrix; a small computational illustration is given after the following list. For an integer matrix, we can obtain the Smith normal form by the following row and column operations:
* Add a non-zero integer multiple of one row (resp. column) to another row (resp. column),
* Permute rows or colums,
* Multiply a row or column by - 1.
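These operations are implemented in standard computer-algebra systems. As a quick illustration (not part of the original text), the invariant factors of the critical group of a small graph can be read off from the Smith normal form of a reduced Laplacian computed with SymPy; the complete bipartite graph K_{3,4} is used here as a test case.

from sympy import Matrix, eye, ones, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Laplacian of the complete bipartite graph K_{3,4}.
n1, n2 = 3, 4
L = Matrix.vstack(
    Matrix.hstack(n2 * eye(n1), -ones(n1, n2)),
    Matrix.hstack(-ones(n2, n1), n1 * eye(n2)),
)

# Reduced Laplacian: delete one row and the corresponding column.
L_red = L[1:, 1:]

snf = smith_normal_form(L_red, domain=ZZ)
invariants = [snf[i, i] for i in range(snf.rows)]
print(invariants)   # diagonal entries greater than 1 are the orders of the cyclic factors of K(G)

The product of the diagonal entries equals the number of spanning trees, consistent with formula (i) above.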
The critical group structures of some special graphs are presently fully characterized, such as the cycle graphs C_n <cit.>, the complete graphs K_n <cit.>, the wheel graphs W_n <cit.>, the bipartite graphs K_n_1, n_2 <cit.>, the complete
multipartite graphs K_n_1, ⋯ , n_k <cit.>, the de Bruijn graphs DB(n,d) <cit.>, the Möbius ladders M(n) <cit.>, the square cycles C_n^2 <cit.>, the threshold graphs <cit.>, the 3 × n twisted bracelets <cit.>, the n-cubes Q_n <cit.>, the tree graphs <cit.>, the polygon flowers <cit.> and so on.
Moreover, there are also composite graphs such as the cartesian products of complete graphs <cit.>, P_4× C_n <cit.>, K_3× C_n <cit.>, K_m× P_n <cit.>, P_m∨ P_n <cit.> and so on.
Based on the existing research, we study the critical groups of a class of incomplete multipartite graphs, which are introduced after Definition <ref>; our work includes the results for the bipartite graphs <cit.> as a special case. For the k-partite graph G_n_1, …, n_k, we supply an algorithm for computing the critical group. Furthermore, the specific abelian groups of k-partite graphs isomorphic to the critical groups are computed and listed for k=2, 3, ⋯ , 6.
This paper is organized as follows. In the second section, we give the definitions and the invertible matrices associated with the row and column operations that yield the simpler matrices L_3, L_4, which simplify the calculation of the invariant factors of the critical groups K(G_n_1, …, n_k). Using these algorithms, we obtain the structures of the critical groups for k=2, 3, ⋯ , 6 in the subsequent sections.
§ THE CRITICAL GROUP OF THE K-PARTITE GRAPH G_N_1, …, N_K
Let G=(V, E) be a graph on n vertices. The graph Laplacian L(G) is the n × n matrix given by
L(G)_{ij} = -1 if i ≠ j and {v_i, v_j} ∈ E;  L(G)_{ij} = deg(v_i) if i = j;  and L(G)_{ij} = 0 otherwise.
Let A be the n × n adjacency matrix of G and let D be the n × n diagonal matrix with diagonal given by the degree sequence of G. Then the above definition can be written as
L(G)=D-A .
When G is connected, the kernel of L(G) is spanned by the vectors in ℝ^|V| which are constant on the vertices.
Thinking of L(G) as a map ℤ^|V|→ℤ^|V|, its cokernel has the form
ℤ^|V| / im L(G) ≅ℤ⊕ K(G),
where K(G) is defined to be the critical group.
For more details, please refer to <cit.>.
A k-partite graph is one whose vertex set can be partitioned into k subsets, or parts, in such a way that no edge has both ends in the same part.
In this article, we consider one kind of k-partite graph G with parts of sizes n_1 , n_2,
⋯ , n_k. Moreover, G is an incomplete graph in which the vertices in the i-th subset are adjacent only to all vertices in the (i-1)-th and (i+1)-th subsets (i=2, 3, ⋯ , k-1). In particular, the vertices in the first subset are adjacent only to all vertices in the second subset, and the vertices in the k-th subset are adjacent only to all vertices in the (k-1)-th subset. For example, when k=5, n_1=6, n_2=4, n_3=5, n_4=3, n_5=4, the graph G_n_1, n_2, … , n_5 is shown in Figure <ref>.
For the sake of notation, let I_n denote an n × n identity matrix, O a zero matrix, and J_m × n an m × n matrix with all entries equal to 1. Then it is easily seen that by ordering the vertices of G_n_1, …, n_k in their groups of size n_1, n_2, …, n_k, one has
L(G_n_1, …, n_k)=[[ n_2 I_n_1 -J_n_1× n_2 O ⋯ O O; -J_n_2× n_1 (n_1+n_3)I_n_2 -J_n_2× n_3 ⋯ O O; O -J_n_3× n_2 (n_2+n_4)I_n_3 ⋯ O O; ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; O O ⋯ -J_n_k-1× n_k-2 (n_k-2+n_k)I_n_k-1 -J_n_k-1× n_k; O O ⋯ O -J_n_k× n_k-1 n_k-1I_n_k ]].
In the first stage of reduction, one can perform row and column operations on L(G_n_1, …, n_k) to make
P_1 L(G_n_1, …, n_k) Q_1=[[ L_1,1 L_2,2 O ⋯ O O; L_2,1 L_1,2 L_2,3 ⋯ O O; O L_2,2 L_1,3 ⋯ O O; ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; O O ⋯ L_2,k-2 L_1,k-1 L_2,k; O O ⋯ O L_2,k-1 L_1,k ]],
where
L_1,i= [[ N_i 0 0 ⋯ 0 0; 0 N_i 0 ⋯ 0 0; 0 0 N_i ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ 0 0; 0 0 ⋯ 0 N_i 0; 0 0 ⋯ 0 0 N_i ]],
L_2,i=[[ -n_i 0 0 ⋯ 0 -1; 0 0 0 ⋯ 0 0; 0 0 0 ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ 0 0; 0 0 ⋯ 0 0 0; 0 0 ⋯ 0 0 0 ]].
N_i = n_{i-1}+n_{i+1} for i=2,3,⋯,k-1;  N_1 = n_2;  and N_k = n_{k-1}.
The matrices P_1 and Q_1 are block diagonal P_1=diag(P_1,1, …, P_1,k),
Q_1=diag(Q_1,1, …, Q_1,k), where P_1,i and Q_1,i are n_i× n_i matrices given as:
P_1,i=[[ 1 0 0 ⋯ 0 0; -1 1 0 ⋯ 0 0; 0 -1 1 ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ 0 0; 0 0 ⋯ -1 1 0; -n_i+1 1 ⋯ 1 1 1 ]], Q_1,i=[[ 1 0 0 ⋯ 0 0; 1 1 0 ⋯ 0 0; 1 1 1 ⋯ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ 0 0; 1 1 ⋯ 1 1 0; 1 -n_i+2 ⋯ -2 -1 1 ]].
According to the above row and column operations on L(G_n_1, …, n_k), we can get the following proposition.
The critical group of the graph G_n_1, …, n_k has the following isomorphism,
ℤ⊕ K(G_n_1, …, n_k)≅(⊕_i=1^kℤ / (N_iℤ)^⊕(n_i-2)) ⊕ coker L_3,
where L_3 is the 2k × 2k matrix obtained by removing some rows and columns
L_3=[[ N_1 0 -n_2 -1 … … 0 0; 0 N_1 0 0 ⋯ … 0 0; -n_1 -1 N_2 0 ⋱ ⋱ ⋮ ⋮; 0 0 0 N_2 ⋱ ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ -n_k -1; ⋮ ⋮ ⋱ ⋱ ⋱ 0 0; 0 0 ⋯ ⋯ -n_k-1 -1 N_k 0; 0 0 ⋯ ⋯ 0 0 0 N_k ]].
After the operation P_1 L(G_n_1, …, n_k) Q_1, the resulting matrix is as follows:
L =[[ N_1 0 … 0 -n_2 0 … -1 … … 0 0 … 0; 0 N_1 … 0 0 0 ⋯ 0 ⋯ … 0 0 … 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋯ ⋯ ⋮ ⋮ ⋱ ⋮; 0 0 … N_1 0 0 ⋯ 0 ⋯ … 0 0 … 0; -n_1 0 ⋯ -1 N_2 0 ⋯ 0 ⋯ ⋯ 0 0 ⋯ 0; 0 0 ⋯ 0 0 N_2 ⋯ 0 ⋯ ⋯ 0 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋯ ⋯ ⋮ ⋮ ⋱ ⋮; 0 0 … 0 0 0 ⋯ N_2 ⋯ … 0 0 … 0; 0 0 … 0 -n_2 0 … -1 … … 0 0 … 0; 0 0 … 0 0 0 ⋯ 0 ⋯ … 0 0 … 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋯ ⋯ ⋮ ⋮ ⋱ ⋮; 0 0 … 0 0 0 ⋯ 0 ⋯ … 0 0 … 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 … 0 0 0 … 0 … … N_k 0 … 0; 0 0 … 0 0 0 ⋯ 0 ⋯ … 0 N_k … 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋯ ⋯ ⋮ ⋮ ⋱ ⋮; 0 0 … 0 0 0 ⋯ 0 ⋯ … 0 0 … N_k; ]].
Consider the rows and columns containing the diagonal entries N_1: from the second to the (n_1-1)-th such entry N_1, all other entries in the corresponding rows and columns are zero. The same holds for the entries N_2 through N_k. Hence, we obtain n_i-2 invariant factors equal to N_i and, by removing these rows and columns, the 2k × 2k matrix L_3.
▪
By calculation, we can obtain L_4=P_2L_3Q_2, where P_2, Q_2∈ G L_2 k(ℤ) are as follows,
[ P_2=[[ B A O O O ⋯ ⋯ O; n_1B-D n_2B A O O ⋯ ⋯ O; n_1B-D n_2B-D n_3B A O ⋯ ⋯ O; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮; ⋮ ⋮ ⋱ ⋱ ⋱ O; n_1B-D n_2B-D ⋯ ⋯ ⋯ n_k-2B-D n_k-1B A; n_1R+S n_2R+S ⋯ ⋯ ⋯ n_k-2R+S n_k-1R+S n_kR+T ]], ]
[ Q_2= [[ -n_1A+I_2 O O ⋯ O O; O -n_2A+I_2 O ⋯ O O; O O -n_3A+I_2 ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ O O; O O ⋯ O -n_k-1A+I_2 O; O O ⋯ O O -n_kA+I_2 ]], ]
where
A= [[ 0 0; 1 0 ]], B= [[ 1 0; 0 0 ]], C= [[ 1 0; 0 -1 ]], D= [[ 0 -1; 0 0 ]],
R= [[ 1 0; 1 0 ]], S= [[ 0 1; 0 1 ]], T= [[ 0 0; 0 1 ]].
Further reduction of L_3 can be achieved by re-ordering rows and columns to obtain
L_4=[[ -n_2B-T N_2A+D C O O O ⋯ ⋯ O; O n_2N_2B+n_1D-T N_3A+n_2D C O O ⋯ ⋯ O; O O n_3N_3B+n_2D-T N_4A+n_3D C O ⋯ ⋯ O; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮ ⋮; ⋮ ⋮ ⋱ ⋱ ⋱ ⋱ ⋮; O O ⋯ ⋯ ⋯ O n_k-2N_k-2B+n_k-3D-T N_k-1A+n_k-2D C; O O ⋯ ⋯ ⋯ ⋯ O n_k-1N_k-1B+n_k-2D-T N_kA+n_k-1D; O O ⋯ ⋯ ⋯ ⋯ O O n_kN_kB+n_k-1D; ]] .
Now, L_4 is an upper triangular, upper 5-banded matrix: the entry in the i-th row and j-th column is zero for j<i and for j≥ i+5. Following the algorithms in <cit.>, we can reduce L_4 to an upper 2-banded matrix, and then obtain its Smith normal form by the algorithms in <cit.>.
By the above steps, for k≥ 4, the critical groups can be decomposed as
ℤ⊕ K(G_n_1, …, n_k)≅ ( ⊕_i=1^kℤ / (N_iℤ)^⊕(n_i-2)) ⊕ℤ / (n_2(n_1+n_3) ℤ)
⊕ℤ / (n_k-2(n_k-1+n_k) ℤ)
⊕𝒢,
where 𝒢 is a finite Abelian group determined by the numbers n_1, n_2, ⋯, n_k. Moreover, the determinant of the reduced graph Laplacian of G_n_1, …, n_k is
det(L̃(G_n_1, …, n_k))= (∏_i=1^k N_i^n_i-1) ·(∏_i=2^k-1 n_i)
which is the number of spanning trees in the graph G_n_1, …, n_k.
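As a sanity check (not part of the derivation above), one can build the Laplacian of G_{n_1,…,n_k} numerically and compare the determinant of a reduced Laplacian with this closed-form count; the part sizes below correspond to the k=5 example mentioned earlier, and the helper names are our own.

import numpy as np
from sympy import Matrix

def layered_laplacian(sizes):
    # Laplacian of the k-partite graph in which only consecutive parts are completely joined.
    n = sum(sizes)
    A = np.zeros((n, n), dtype=int)
    starts = np.cumsum([0] + list(sizes))
    for i in range(len(sizes) - 1):
        A[starts[i]:starts[i+1], starts[i+1]:starts[i+2]] = 1
        A[starts[i+1]:starts[i+2], starts[i]:starts[i+1]] = 1
    return np.diag(A.sum(axis=1)) - A

sizes = [6, 4, 5, 3, 4]                      # the k = 5 example from the text
L = layered_laplacian(sizes)

# Matrix-tree theorem: the spanning-tree count equals det of any reduced Laplacian (exact arithmetic).
trees_exact = Matrix(L[1:, 1:].tolist()).det()

# Closed-form count: prod_i N_i^(n_i-1) * prod_{i=2}^{k-1} n_i.
N = [sizes[1]] + [sizes[i-1] + sizes[i+1] for i in range(1, len(sizes)-1)] + [sizes[-2]]
formula = 1
for Ni, ni in zip(N, sizes):
    formula *= Ni ** (ni - 1)
for ni in sizes[1:-1]:
    formula *= ni

print(trees_exact, formula)                  # the two counts should coincide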
For the 2-partite graph G_n_1,n_2, we can get the following from Equation (<ref>) to Equation (<ref>),
ℤ⊕ K(G_n_1, n_2)≅ℤ / (n_1ℤ)^⊕(n_2-2)⊕ℤ / (n_2ℤ)^⊕(n_1-2)⊕ coker L_3(G_n_1, n_2),
L_3(G_n_1,n_2) =[[ n_2 0 -n_2 -1; 0 n_2 0 0; -n_1 -1 n_1 0; 0 0 0 n_1 ]].
By some row and column operations, we get L_4(G_n_1,n_2) = P_3(G_n_1,n_2)L_3(G_n_1,n_2)Q_3(G_n_1,n_2),
where
[ P_3(G_n_1, n_2)=[[ 1 0 0 0; 0 1 0 0; -n_1 0 1 0; 0 0 0 1 ]] ],
[ Q_3(G_n_1, n_2) =[[ 0 0 0 1; 0 1 n_1 n_1; 0 0 1 1; 1 0 0 n_2; ]], ]
[ L_4(G_n_1, n_2)=[[ -1 0 0 0; 0 -1 0 0; 0 0 n_1n_2 0; 0 0 0 0 ]] ].
Then we find
ℤ⊕ K(G_n_1, n_2)≅ℤ / (n_1ℤ)^⊕(n_2-2)⊕ℤ / (n_2ℤ)^⊕(n_1-2)⊕ℤ / (n_1n_2ℤ).
In this case, the result is identical to the one in <cit.>. It is straightforward to work out the critical group structures of complete bipartite graphs with our method.
For k=3, G_n_1,n_2,n_3 is also a complete bipartite graph. In other words, consider the n_1+n_3 vertices of the first and third parts as one part of the bipartite graph, and the remaining n_2 vertices as the other part.
§ THE CRITICAL GROUP OF THE 4-PARTITE GRAPH
In this section, we obtain the critical group of the 4-partite graph G_n_1, …, n_4. Following the above calculation steps from Equation (<ref>) to Equation (<ref>), we have
ℤ⊕ K(G_n_1, …, n_4)≅(⊕_i=1^4ℤ / (N_iℤ)^⊕(n_i-2)) ⊕ coker L_4(G_n_1, …, n_4),
where
L_4(G_n_1, …, n_4):=[[ n_2 0 0 -1 0 0 0 0; 0 -1 n_1+n_3 0 0 -1 0 0; 0 0 n_2(n_1+n_3) -n_1 0 n_2 0 0; 0 0 0 -1 n_2+n_4 0 0 -1; 0 0 0 0 n_3(n_2+n_4) -n_2 0 -n_3; 0 0 0 0 0 -1 n_3 0; 0 0 0 0 0 0 n_3n_4 -n_3; 0 0 0 0 0 0 0 0; ]] .
By calculation, we can obtain L_5(G_n_1, …, n_4)=P_3(G_n_1, …, n_4)L_4(G_n_1, …, n_4)Q_3(G_n_1, …, n_4), where P_3(G_n_1, …, n_4), Q_3(G_n_1, …, n_4) ∈ G L_8(ℤ) are
[ P_3(G_n_1, …, n_4)=[[ 1 0 0 -1 0 0 0 0; 0 1 0 0 0 0 0 0; -n_1-n_3 0 1 n_3 -1 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 -n_2 -1 0; 0 0 0 0 0 1 0 0; n_3 0 0 -n_3 1 -n_2 0 0; 0 0 0 0 0 0 0 1; ]] ],
[ Q_3(G_n_1, …, n_4) =[[ 0 0 1 0 0 0 0 1; 0 1 -n_3 0 0 -1 -n_3 n_1; 0 0 0 0 0 0 0 1; -1 0 n_2 1 0 0 0 n_2; 0 0 1 0 1 0 1 1; 0 0 n_3 0 0 1 n_3 n_3; 0 0 1 0 0 0 1 1; 1 0 n_4 0 n_2+n_4 0 n_2+n_4 n_4; ]], ]
and
L_5(G_n_1, …, n_4):=[[ 1 0 0 0 0 0 0 0; 0 -1 0 0 0 0 0 0; 0 0 n_2(n_1+n_3) 0 0 0 0 0; 0 0 0 -1 0 0 0 0; 0 0 0 0 n_3(n_2+n_4) 0 0 0; 0 0 0 0 0 -1 0 0; 0 0 0 0 0 0 -n_2n_3 0; 0 0 0 0 0 0 0 0; ]] .
Hence, we obtain the following theorem.
The critical group of G_n_1,n_2,n_3,n_4 has the following structure
ℤ⊕ K(G_n_1,n_2,n_3,n_4)≅ ℤ / (n_2ℤ)^⊕(n_1-2)⊕ℤ / ((n_1+n_3) ℤ)^⊕(n_2-2)
⊕ℤ / ((n_2+n_4) ℤ)^⊕(n_3-2)⊕ℤ / (n_3ℤ)^⊕(n_4-2)
⊕ℤ / (n_2n_3) ℤ) ⊕ℤ / (n_2(n_1+n_3) ℤ) ⊕ℤ / (n_3(n_2+n_4) ℤ).
§ THE CRITICAL GROUP OF THE 5-PARTITE GRAPH
In this section, we continue to calculate the critical group of the 5-partite graph with the same method as before. Then we can get
ℤ⊕ K(G_n_1, …, n_5)≅(⊕_i=1^5ℤ / (N_iℤ)^⊕(n_i-2)) ⊕ coker L_4(G_n_1, …, n_5),
where
L_4(G_n_1, …, n_5):=[[ n_2 0 0 -1 0 0 0 0 0 0; 0 -1 n_1+n_3 0 0 -1 0 0 0 0; 0 0 n_2(n_1+n_3) -n_1 0 -n_2 0 0 0 0; 0 0 0 -1 n_2+n_4 0 0 -1 0 0; 0 0 0 0 n_3(n_2+n_4) -n_2 0 -n_3 0 0; 0 0 0 0 0 -1 n_4 0 0 -1; 0 0 0 0 0 0 n_4(n_3+n_5) -n_3 0 -n_4; 0 0 0 0 0 0 0 -1 n_4 0; 0 0 0 0 0 0 0 0 n_4n_5 -n_4; 0 0 0 0 0 0 0 0 0 0; ]] .
By calculation, we can obtain L_5(G_n_1, …, n_5)=P_3(G_n_1, …, n_5)L_4(G_n_1, …, n_5)Q_3(G_n_1, …, n_5), where P_3(G_n_1, …, n_5), Q_3(G_n_1, …, n_5) ∈ G L_10(ℤ) are
[ P_3(G_n_1, …, n_5)=[[ 1 0 0 -1 0 0 0 1 0 0; 0 1 0 0 0 0 0 1 0 0; -n_1-n_3 0 1 n_3 -1 0 0 0 0 0; 0 0 0 1 0 0 0 1 0 0; n_3 0 0 -n_3 1 -n_2 0 0 0 0; 0 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 1 -n_3 -1 0; 0 0 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 -n_3 0 0; 0 0 0 0 0 0 0 0 0 1; ]] ],
[ Q_3(G_n_1, …, n_5)=[[ 0 0 0 0 1 0 0 0 0 1; 0 1 n_1+n_3 0 n_1+n_3 -1 0 0 1 n_1; 0 0 1 0 1 0 0 0 0 1; n_2+n_4 0 0 1 0 0 0 -1 0 n_2; 1 0 0 0 0 0 0 0 0 1; 0 0 0 0 0 1 0 0 -1 n_3; 0 0 0 0 0 0 1 0 0 1; 0 0 0 0 0 0 0 1 0 n_4; 0 0 0 0 0 0 0 0 0 1; 0 0 0 0 0 0 n_4 0 1 n_5; ]], ]
and
L_5(G_n_1, …, n_5)=[[ -n_2-n_4 0 0 0 n_2 0 0 0 0 0; 0 -1 0 0 0 0 0 0 0 0; 0 0 n_2(n_1+n_3) 0 0 0 0 0 0 0; 0 0 0 -1 0 0 0 0 0 0; 0 0 0 0 n_2n_3 0 0 0 n_2 0; 0 0 0 0 0 -1 0 0 0 0; 0 0 0 0 0 0 n_4(n_3+n_5) 0 0 0; 0 0 0 0 0 0 0 -1 0 0; 0 0 0 0 0 0 0 0 -n_4 0; 0 0 0 0 0 0 0 0 0 0 ]] .
By the row and column operations, we can further reduce L_5(G_n_1, …, n_5) to obtain [[ L_6 O; O L_7 ]], where L_6 is a diagonal matrix, and L_7=[[ -n_2-n_4 n_2 0; 0 n_2n_4 n_2; 0 0 -n_4 ]]. By calculating the Smith normal form of L_7, we get the invariant factors σ_1, σ_2/σ_1, det(L_7)/σ_2. Then we obtain the following theorem.
For the graph G_n_1,n_2,n_3,n_4,n_5, its critical group can be decomposed as following
ℤ⊕ K(G_n_1,n_2,n_3,n_4,n_5)≅ ℤ / (n_2ℤ)^⊕(n_1-2)⊕ℤ / ((n_1+n_3) ℤ)^⊕(n_2-2)
⊕ℤ / ((n_2+n_4) ℤ)^⊕(n_3-2)⊕ℤ / ((n_3+n_5) ℤ)^⊕(n_4-2)
⊕ℤ / (n_4ℤ)^⊕(n_5-2)⊕ℤ / (n_2(n_1+n_3) ℤ)⊕ℤ / (n_4(n_3+n_5)) ℤ)
⊕ℤ / (σ_1ℤ)⊕ℤ / ((σ_2/σ_1) ℤ) ⊕ℤ / ((n_2n_3n_4(n_2+n_4) /σ_2) ℤ),
where
σ_1=gcd(n_2,n_4,n_2+n_4,n_2n_3),
σ_2=gcd(n_2^2 , n_2n_4 , n_2n_3n_4 , n_2(n_2+n_4) , n_4(n_2+n_4) , n_2n_3(n_2+n_4) ).
§ DISCUSSION
With the above method, for k=6,
ℤ⊕ K(G_n_1,⋯ ,n_6)≅ ℤ / (n_2ℤ)^⊕(n_1-2)⊕ℤ / ((n_1+n_3) ℤ)^⊕(n_2-2)
⊕ℤ / ((n_2+n_4) ℤ)^⊕(n_3-2)⊕ℤ / ((n_3+n_5) ℤ)^⊕(n_4-2)
⊕ℤ / ((n_4+n_6) ℤ)^⊕(n_5-2)⊕ℤ / (n_5ℤ)^⊕(n_6-2)
⊕ℤ / (n_2(n_1+n_3) ℤ) ⊕ℤ / (n_5(n_4+n_6)) ℤ) ⊕ℤ / (σ_1ℤ)
⊕ℤ / ((σ_2/σ_1) ℤ)⊕ℤ / ((n_2n_3n_4n_5(n_2+n_4)(n_3+n_5) /σ_2) ℤ).
where
σ_1= gcd(n_2n_3, n_2n_5, n_3(n_2+n_4), n_5(n_2+n_4), n_2(n_3+n_5), n_4(n_3+n_5)),
σ_2= gcd(n_2n_3^2(n_2+n_4) , n_2n_3n_5(n_2+n_4), n_2n_3(n_2+n_4)(n_3+n_5), n_2^2n_5(n_3+n_5),
n_5(n_2+n_4)^2(n_3+n_5) ) .
In this paper, we study the critical group of the k-partite graph G_n_1, …, n_k. First, we obtain an algorithm for the critical group K(G_n_1, …, n_k) for arbitrary k. When k = 2, G_n_1, n_2 is a complete bipartite graph, and our conclusion is consistent with the result in <cit.>. Then the decompositions of the critical groups of k-partite graphs are given for the cases k = 3, 4, 5, and 6.
For further research, we have two questions.
Question I: Starting from the k-partite graphs in this paper and randomly deleting some edges, how can one calculate the critical groups of the modified graphs?
Question II: How can one compute the critical groups of arbitrary incomplete multipartite graphs?
§ ACKNOWLEDGMENTS
We would like to show our great gratitude to the anonymous referees for carefully reading this manuscript and improving its presentation and accuracy. The corresponding author is supported by National Science Fund for Distinguished Young Scholars 12201029.
|
http://arxiv.org/abs/2409.02186v1 | 20240903180011 | State Dependent Spread Complexity Dynamics in Many-Body Localization Transition | [
"Maitri Ganguli",
"Aneek Jana"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn",
"cond-mat.str-el",
"hep-th",
"quant-ph"
] |
Department of Physics, Indian Institute of Science, Bangalore
Center for High Energy Physics, Indian Institute of Science, Bangalore
§ ABSTRACT
We characterize the Many-Body Localization (MBL) phase transition using the dynamics of spread complexity and inverse participation ratio in the Krylov space starting from different initial states. Our analysis of the disordered Heisenberg spin-1/2 chain unravels that the ergodic-to-MBL transition can be determined from the transition of the pre-saturation peak in the thermofield double state (TFD) spread complexity. On the other hand, if an initially ordered state or a superposition of a small number of such states is chosen, then the saturation value of spread complexity and Krylov inverse participation ratio (KIPR) can distinguish the ergodic phase from the integrable phases, with no sharp difference between the integrable phases. Interestingly, the distinction between the disorder-free integrable and the MBL integrable phase is established by the spread complexity study of random states chosen from unitary and orthogonal Haar ensembles. We also study the complexity dynamics by coupling the system to a bath, which shows distinctive profiles in different phases. A stretched exponential decay of KIPR is observed when the MBL system is connected to the bath, with the decay starting at an earlier time for a greater value of environmental dephasing. Our work sheds light on the efficacy of Krylov space dynamics in understanding phase transitions in quantum many-body systems.
State Dependent Spread Complexity Dynamics in Many-Body Localization Transition
Aneek Jana 0009-0001-1097-4250
September 9, 2024
===============================================================================
*Introduction.—
In recent years, the study of quantum complexity quantified on the Krylov basis has gained significant interest for its usefulness in understanding the various
aspects of quantum many-body systems, quantum field theories, holographic models, quantum circuits etc <cit.>. The basic notion of complexity captures the difficulty of preparing a certain quantum state starting from a given initial state <cit.>. In the context of quantum dynamics, Krylov Complexity measures the average position of a time-evolved state/operator in the Krylov basis formed by the action of the generator of time-evolution using the Lanczos algorithm or some modified version, such as the bi-Lanczos algorithm in the cases of non-unitary dynamics <cit.>. The Hamiltonian is rendered into a tridiagonal form in the Krylov space spanned by the Krylov basis vectors. Therefore, the complex quantum mechanical state/operator dynamics problem is effectively reduced to an equivalent single particle hopping problem in a semi-infinite lattice numbered by the index of Krylov basis vectors, where the hopping amplitudes are given by the Lanczos coefficients, which are the outputs of the Lanczos/bi-Lanczos algorithm.
While operator complexity in the Krylov space has been studied extensively, in a plethora of both closed and open quantum systems <cit.>, starting from the seminal paper by Parker et al.<cit.>, the study of the complexity of spread of states, also known as Krylov spread complexity, is relatively new <cit.> and its relevance has been investigated in various contexts including integrability to chaos transitions <cit.> and 𝒫𝒯-symmetric non-Hermitian Hamiltonians <cit.>, etc very recently. This work aims to contribute to this endeavor by studying state-dependent spread complexity dynamics in the systems that exhibit Many-Body Localization Transition (MBLT) [Krylov operator complexity in MBL is studied in <cit.>].
The discovery of many-body localization (MBL), where strong disorder and interaction lead to emergent integrability, is a prime example of the violation of the Eigenstate Thermalization Hypothesis (ETH) <cit.> beyond integrable systems. In interacting systems, the presence of disorder <cit.> or quasiperiodicity <cit.> can generically give rise to MBL (which is a generalization of Anderson localization <cit.> to interacting systems); however, in the thermodynamic limit and in dimensions greater than one, its stability is a subject of active debate <cit.>. Recent experimental findings have provided direct evidence of this breakdown of ergodicity in interacting many-body systems. Specifically, these studies have observed such behavior in various systems involving ultracold atomic fermions <cit.>, a chain of trapped ions <cit.>, and also in superconducting circuits <cit.>. In these systems, strong disorder caused them to become localized, thereby preventing them from reaching the thermal equilibrium expected in the absence of such disorder.
Both MBL systems and disorder-free integrable systems possess an extensive number of conserved quantities, which gives rise to the absence of level repulsion (hence a Poisson level-spacing distribution in the energy spectrum <cit.>); from the point of view of entanglement entropy, however, there is a difference. MBL systems possess eigenstates showing area-law entanglement entropy, whereas, with few exceptions, disorder-free integrable systems generally show volume-law entangled eigenstates. So, from this perspective, distinguishing an MBL system from thermal eigenstates is easier than identifying integrable systems. We are interested in exploring this behavior of MBL systems through the spread complexity and in differentiating the MBL emergent integrability from disorder-free integrable systems from this perspective.
Recent studies have focused on the spread complexity of the Thermo-Field Double (TFD) state, which is a canonical purification of the Gibbs density matrix, to distinguish chaotic and integrable phases <cit.>, as well as to observe the integrable-to-chaotic transition by treating the peak in the spread complexity as an order parameter<cit.>. Our study verifies that the peak in the TFD spread complexity indeed shows a transition in the ergodic-to-MBL crossover.
In the present work, however, we focus on the ergodic-to-MBL transition in a more elaborate way, by studying the initial-state dependence of the spread complexity in various physically relevant scenarios. We comment on how the distinguishability among the phases can be seen in the dynamics of initial states chosen from random Haar ensembles, making a clear distinction between the strong-disorder MBL integrable phase and the integrable phase that exists at no or weak disorder in finite-size systems [we should mention that in the strict thermodynamic limit, even an infinitesimal disorder can break integrability, see <cit.>].
Finally, coupling the system to a bath weakly, we have demonstrated that the MBL system shows stretched exponential decay of the Krylov space localization measure (similar to the decay profile of an initial density pattern <cit.>).
Apart from the Krylov spread complexity, we have also looked at the Krylov Inverse Participation Ratio (KIPR) to understand the dynamics of various states in the Krylov space clearly. The KIPR gives a good dynamic measure of how a state is localized on the Krylov basis.
All these complexity measures are ultimately dependent upon the wave-function coefficients of the state in the Krylov basis; nevertheless, they allow us to shed light on different aspects of the dynamics in the Krylov space.
*The Model and the Method.—
In this work, we have considered the paradigmatic model of Many-body Localization <cit.><cit.>, the spin-1/2 Heisenberg model with random-field disorder,
H = 1/2∑_i (σ_i^xσ_i+1^x+σ_i^yσ_i+1^y + σ_i^zσ_i+1^z) + ∑_i h_i σ_i^z
where the random fields h_i are sampled from a uniform distribution on [-W,W]. It is known that by increasing the disorder strength, the system goes through an MBL transition around W ≈ 3.5 <cit.><cit.>. This is an instance of a chaotic-to-integrable transition. Another, disorder-free, integrable phase exists at W=0 and is sustained at very small values of W for finite-size systems <cit.>. We characterize both of these integrable phases and the ergodic phase through the analysis of Krylov complexity, starting from various initial states. We work in the zero magnetization sector (∑_i σ^z_i=0) for definiteness and use the periodic boundary condition.
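For concreteness, a minimal sketch (not the authors' code) of how the sector Hamiltonian can be constructed is given below; the chain length L and the disorder realization are chosen arbitrarily for illustration.

import numpy as np
from itertools import combinations

def heisenberg_disordered(L, W, seed=0):
    # H = (1/2) sum_i (XX + YY + ZZ) + sum_i h_i Z_i in the zero-magnetization sector, PBC.
    rng = np.random.default_rng(seed)
    h = rng.uniform(-W, W, size=L)
    # Basis: bit strings with L/2 up spins (bit value 1 = sigma^z eigenvalue +1).
    states = [sum(1 << i for i in occ) for occ in combinations(range(L), L // 2)]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        for i in range(L):
            j = (i + 1) % L                          # periodic boundary condition
            si, sj = (s >> i) & 1, (s >> j) & 1
            zi, zj = 2 * si - 1, 2 * sj - 1
            H[k, k] += 0.5 * zi * zj + h[i] * zi     # ZZ/2 term and the random field
            if si != sj:                             # (XX + YY)/2 flips antiparallel pairs
                s2 = s ^ (1 << i) ^ (1 << j)
                H[index[s2], k] += 1.0
    return H, states

H, states = heisenberg_disordered(L=10, W=3.5)
print(H.shape)   # (252, 252): the zero-magnetization sector of a 10-site chain

The off-diagonal matrix element equals 1 because (1/2)(σ^xσ^x+σ^yσ^y) = σ^+σ^- + σ^-σ^+ exchanges antiparallel neighboring spins.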
The first complexity measure we use is the Krylov Spread Complexity (KSC). Starting from an initial state |ψ_i⟩, we calculate the orthonormal Krylov basis vectors {|K_n⟩} (using Lanczos algorithm for Gram-Schmidt orthogonalization on the set {|ψ_i⟩,H|ψ_i⟩,H^2|ψ_i⟩,…}) and then expand the time-evolved state in this basis,
|ψ(t)⟩ = e^-i H t|ψ_i⟩ = ∑_n ϕ_n (t) |K_n⟩
The ϕ_n(t)'s can be calculated numerically or, using the Lanczos coefficients {a_n} and {b_n} (see Supplemental Material <ref>), by solving the following recursive differential equation,
i d/dtϕ_n(t)= a_n ϕ_n(t)+b_n ϕ_n-1(t)+b_n+1ϕ_n+1(t),
with the boundary condition given by ϕ_n(0)=δ_n,0.
If we denote the probability of being in the n-th basis vector by p_n then,
p_n(t) = | ϕ_n (t) |^2, ∑_n p_n(t) = 1
The Krylov spread complexity is given by the average position in the Krylov basis,
𝒞_𝒦 (t) = ∑_n n p_n (t)
While the above complexity measures average position, we need something that measures the typical number of basis elements needed for describing the time-evolved state by an entropic notion. For this purpose we use the Krylov entropy,
𝒮_𝒦(t) = - ∑_n p_n(t) log p_n (t)
and associated Krylov Entropic Complexity (KEC),
𝒞_𝒮 (t) = e^𝒮_𝒦 (t)
As a measure of localization in the Krylov space, one can define the Krylov Inverse Participation Ratio (KIPR),
ℐ_𝒦(t) = ∑_n p_n^2(t) ≤ 1
Where larger values of ℐ_𝒦 will imply localization in Krylov space. ℐ_𝒦 also satisfies the lower bound ℐ_𝒦≥ 1/d_𝒦≥ 1/d, where d is the dimension of Hilbert space under consideration and d_𝒦 is the dimension of Krylov space with d_𝒦≤ d.
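A minimal numerical sketch of these closed-system measures (using the Lanczos construction with full re-orthogonalization for stability; again not the authors' code) is given below. The random Hermitian matrix at the end is only a stand-in; for the model above one would instead pass the sector Hamiltonian and, for example, the Néel state or the β=0 state discussed later.

import numpy as np

def krylov_basis(H, psi0, tol=1e-10):
    # Lanczos with full re-orthogonalization; returns the Krylov vectors as columns.
    vecs = [psi0 / np.linalg.norm(psi0)]
    v = H @ vecs[0]
    while True:
        for u in vecs:                      # full re-orthogonalization for numerical stability
            v = v - (u.conj() @ v) * u
        b = np.linalg.norm(v)
        if b < tol or len(vecs) == H.shape[0]:
            return np.column_stack(vecs)
        vecs.append(v / b)
        v = H @ vecs[-1]

def spread_measures(H, psi0, times):
    # Spread complexity C_K(t), entropic complexity C_S(t), and KIPR I_K(t).
    K = krylov_basis(H, psi0)
    n = np.arange(K.shape[1])
    evals, evecs = np.linalg.eigh(H)
    coeffs = evecs.conj().T @ psi0
    C_K, C_S, ipr = [], [], []
    for t in times:
        psi_t = evecs @ (np.exp(-1j * evals * t) * coeffs)   # exact time evolution
        p = np.abs(K.conj().T @ psi_t) ** 2
        p = p / p.sum()
        C_K.append(n @ p)                                    # average position in the Krylov chain
        C_S.append(np.exp(-np.sum(p * np.log(p + 1e-30))))   # exp of the Krylov entropy
        ipr.append(np.sum(p ** 2))                           # Krylov inverse participation ratio
    return np.array(C_K), np.array(C_S), np.array(ipr)

# Self-contained demo with a random Hermitian matrix standing in for H.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H_demo = (A + A.T) / 2
psi0 = np.zeros(200); psi0[0] = 1.0
C_K, C_S, ipr = spread_measures(H_demo, psi0, np.linspace(0.0, 10.0, 50))
print(C_K[-1] / 200, ipr[-1])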
We also consider the effect of dissipation in this system due to its coupling to the environment via a thermal bath or some measurement apparatus. To do so, we have used an effective non-unitary evolution of the state of the system, specifically, the evolution of a single quantum trajectory, which corresponds to the no-click/no-jump limit in a suitable post-selection procedure.
For simplicity, we consider only two jump operators that couple to the system with a coupling strength or dephasing α. The jump operators are chosen to commute with the total magnetization operator to keep the state in the same magnetization sector <cit.>,
L_1 = σ_0^xσ_1^x + σ_0^yσ_1^y
L_2 = σ_L-2^xσ_L-1^x + σ_L-2^yσ_L-1^y
The non-Hermitian Hamiltonian is,
H' = H - iα(L_1^† L_1+L_2^† L_2)
To elevate the notion of spread complexity to non-Hermitian evolution, one needs to use the bi-Lanczos algorithm, which reduces to the usual Lanczos algorithm in the Hermitian limit <cit.>. Here we have two sets of Krylov basis vectors which are bi-orthogonal, {|P_n⟩} and {|Q_n⟩}, and we can expand the time-evolved state in these two sets as follows,
|ψ(t)⟩ = ∑_n ϕ^q_n(t) |P_n⟩ = ∑_n ϕ^p_n(t) |Q_n⟩
For this scenario, one can modify the notion of probability by introducing additional necessary normalization,
p_n (t) = |(ϕ^p_n(t))^*ϕ^q_n(t)|/∑_m |(ϕ^p_m(t))^*ϕ^q_m(t)|
Once the probability of being in the n-th Krylov basis is defined, the definitions of the various complexity measures are kept unchanged.
In the following sections, we emphasize the complexity dynamics for various initial states and the effect of dissipation in the presence of different disorder strengths W. Further, we study the distinctions between the phases that exist for different ranges of W.
*Thermofield Dynamics.— To understand the complexity dynamics, at first, we have chosen the Thermofield Double (TFD) state, which is an entangled state in the product Hilbert space of the two copies of the same system. If the system has a spectrum {E_n}, and corresponding eigenvectors |Ψ_n⟩, then the TFD state at inverse temperature β is defined by,
|TFD(β)⟩ = 1/√(Z_β)∑_n e^-β E_n/2|Ψ_n⟩_L ⊗|Ψ_n⟩_R
where Z_β is the thermal partition function and L and R denotes left and right copies in the product Hilbert space, respectively. For this state's time evolution, one can consider only the time evolution of the left copy by ℋ, and the right copy does not evolve. So effectively, we do the dynamics of the following Gibbs state,
|ψ_β⟩=1/√(Z_β)∑_n e^-β E_n/2|Ψ_n⟩
By choosing β=0, we have done the exact time evolution of this state and the corresponding complexity 𝒞_𝒦, with different choices of W, which gives a characteristic peak in the 𝒞_𝒦 evolution for the chaotic regime, and the peak disappears in the MBL regime.
The plot of peak height (here we define peak height by (max(𝒞_𝒦(t)/d)-0.5), which we take as an order parameter) against disorder strength for various system sizes (see Fig. <ref>) helps us to determine the range of disorder where the transition occurs, which agrees with earlier estimates from level-statistics transition <cit.>.
From Fig. <ref>, one important point to note is that the early growth of complexity is controlled monotonically by W (this can be related to the fact that the minimum energy difference in the spectrum increases with W; <cit.> shows that the time scale of spread complexity growth depends on the minimum energy difference in the spectrum), even though for larger W the complexity fails to reach the peak.
The other two complexity measures shed light on a different aspect of the dynamics in the Krylov space. We observe (see Fig. <ref>) that both integrable phases, disorder-free integrable and MBL integrable, delocalize faster in the Krylov space at early times than the deep ergodic phase (W≈ 1). However, the KIPR and the entropic complexity, which measure localization in Krylov space, are the same in the ergodic and MBL phases at late times, whereas in the disorder-free integrable phase the final state is significantly localized in comparison. Understanding the early-time localization behavior of the TFD state in Krylov space can be an important direction for further research.
We have successfully probed the chaotic-to-integrable transition in the context of MBL transitions using the complexity dynamics of the TFD state. However, one should remember that the dynamics of the TFD state does not use the information of the associated eigenvectors; it depends solely upon the distribution of eigenvalues. So, studying the dynamics of the TFD state alone leaves us with an incomplete picture of the actual MBL transition. This motivated us to consider the complexity dynamics of other states, which we discuss in the following sections.
*Relaxation Dynamics.—
It is known that MBL systems break ergodicity and can retain some local information about the initial state <cit.>. Since our complexity measures essentially tell us how distant the time-evolved state is from the initial state and how it spreads in the Krylov basis, it makes sense to use the spread complexity of states as a probe of this memory-retaining property of the MBL phase and of its initial-state dependence [In <cit.>, the authors studied the memory-retaining property of the MBL phase for different ordered initial states using the statistics of Lanczos coefficients.].
For the model under consideration, we prepare the initial state in the computational basis, which has a Néel-like order [The authors in <cit.> considered the domain-wall-like state as the initial state and found different complexity behavior in ergodic and MBL phases. Our results are more general and explain their observations.],
|ψ_i⟩ = |1010⋯10⟩
where 1 (0) at the i-th position denotes the eigenstate of the local σ_i^z operator with eigenvalue +1 (-1). The results discussed below hold whenever the number of computational basis elements on which the initial state has support is much less than the total Hilbert space dimension. We show the complexity dynamics for the initial state in Eq.(<ref>) in Fig. <ref>.
Fig. <ref> shows that the late-time averaged values of all three complexity measures point towards the fact that the MBL system retains significant memory of the initial state (the robustness of this memory retention will be probed by coupling the system to the environment in a later section). On the other hand, the ergodic phase makes the state more complex and delocalized than its integrable neighbors, as is clear from its higher complexity and lower KIPR.
The above observations can be explained by the existence of pseudo-spin-like quasi-local integrals of motion (LIOMs) <cit.> or local bits (ℓ-bits) τ_i^z, which have finite overlap with the local spin operators σ_i^z in the presence of strong disorder. Since LIOMs are conserved quantities, information encoded in their initial values remains intact unless the system is coupled to a bath. Therefore, the computational basis elements, eigenstates of the local spin operators σ_i^z, show less complex and more localized dynamics in the MBL phase. This is the same reason why, in the MBL phase, the operator Krylov complexity of local σ_i^z operators shows more localized behavior (in Krylov space) <cit.>.
Now, if we consider extensive superpositions of computational basis elements as our initial state, then the characterization of different phases through complexity becomes more involved. To capture the typical behavior of complexity dynamics in the different phases, we need to be more careful in choosing initial states. To probe typical behavior, we choose Haar random states from two different ensembles, unitary and orthogonal, as our initial states, which we discuss next.
*Complexity of Typical States.— We have been choosing various initial states from the angle of different physical motivations. We have observed that some states, for example, the TFD state and initially ordered states, carry the direct signatures of integrability, be it the disorder-free integrable or MBL integrable phases. Despite these successes, one should try to understand the complexity dynamics of typical states in different phases to see whether the distinctions among the phases via complexity dynamics are generically present.
To be completely generic at first, we choose states distributed randomly in the N-dimensional complex projective space ℂℙ^N according to the Haar measure (where (N+1) is the dimension of the Hilbert space). Such states can be sampled by the action of random (N+1)× (N+1) unitary matrices on some arbitrarily chosen state. We then compute the complexity dynamics of such random Haar states evolving under the specified Hamiltonian.
We find that the MBL phase can be distinguished from the ergodic phase by the absence of a peak in the complexity profile and the absence of a dip in the KIPR profile. On the other hand, the disorder-free integrable phase can be distinguished by its lower complexity and higher KIPR, see Fig. <ref>.
Another choice is to sample random states uniformly from the real projective space ℝℙ^N, which can be physically important while being completely random. Such states can be sampled by acting with random orthogonal matrices on some arbitrarily chosen state. It is again observed that a peak in the complexity can distinguish the ergodic phase from the integrable ones, even though at late times all of them have similar saturation values, unlike the ℂℙ^N case. From Fig. <ref>, it is clear that the KIPR value can still set apart the disorder-free integrable phase from the MBL phase.
From these observations, we infer that the distinctions among the chaotic and the integrable phases are not special to states like TFD; rather, they occur in more generic states also. Therefore, it is worth understanding the physical origin of such distinctive complexity behavior of different kinds of random states under the evolution of different Hamiltonians.
*Dissipative Dynamics.—
MBL is recognized as a robust dynamical phase of matter that, when strictly decoupled from the environment, does not thermalize <cit.>. However, coupling to the environment necessarily leads to a delocalization transition, with a time scale governed by the value of the coupling parameter. The goal is to understand this transition from a Krylov complexity perspective and to point out the robustness of the MBL integrability.
We model the coupling to a bath or a measurement apparatus by an effective non-unitary evolution by the Hamiltonian in Eq.(<ref>), which describes a specific quantum trajectory. Under the assumption of weak dephasing (α<<1), this effective description is justified, and as we will demonstrate, it well captures the essential physics even within this minimal setup. To be specific, we start with the Néel state, whose complexity dynamics in a closed system can distinguish the ergodic from the MBL, to see how its evolution is affected by the environmental coupling.
Spread complexity and KIPR for different disorder strengths W are affected by non-zero α, as shown in Fig. <ref>. It is found that coupling to the environment certainly causes delocalization in the Krylov space and an increase of spread complexity. We have observed that, upon increasing the disorder W, the early-time growth rate of spread complexity first increases and then decreases with further increase of W. In particular, in the large-W phase, the early-time growth rate is comparatively small [However, in the late-time spread complexity behavior, we have found that in the large-W phase, the complexity is more than in the chaotic and disorder-free integrable phases. To understand the precise reason, further investigation is needed. We would like to address this point in a future work.]. Therefore, the MBL phase is less susceptible to environmental coupling from the Krylov complexity perspective, at least at early times.
Now, focusing on the delocalization transition in MBL systems for different values of the environmental coupling α, we have observed that the Krylov IPR has a stretched exponential decay profile (see Fig. <ref>), similar to the decay profile of an initially imposed particle density imbalance found earlier in <cit.> and to the decay of the third Rényi negativity found in <cit.> [In this context, one should note that the KIPR is a more robust probe of the localization property of the system: we do not need a Néel-like order to observe this kind of characteristic decay of KIPR in open quantum systems; in fact, any computational basis state would capture this feature.].
However, the complexity always increases for non-zero α (see Fig. <ref>), with a possible saturation above 0.5 at very late times. This indicates that the time-evolved state indeed moves far from the initial state in the Krylov space at late times. These observations show that the quasi-local integrals of motion (LIOMs) are destroyed when the MBL system is coupled to a bath, which is efficiently captured by Krylov space dynamics.
*Discussion and Future Directions.—
In our work, we have studied the state-dependent spread complexity dynamics in the disordered spin-1/2 Heisenberg chain to understand whether complexity dynamics in the Krylov space can make distinctions among the disorder-free integrable, ergodic, and emergent integrable MBL phases.
* Through our analysis we show that the pre-saturation peak height in TFD state complexity, which we use as an order parameter, can significantly capture the ergodic to MBL transition.
* If we start with an initially ordered state like Néel state we can also infer different phases from the saturation values of Krylov Complexity and Krylov Inverse Participation Ratio (KIPR).
* Our work has established that not only special states like TFD or initially ordered states but also randomly chosen typical states carry important information about the phases in their complexity dynamics.
* However, for the open system, if we focus on the early-time growth of the complexity of the initial Néel state, in the MBL phase it is slower than in the other two phases. Interestingly, for the MBL phase coupled to a bath, we observe dissipative delocalization in the Krylov space with a stretched exponential decay profile.
So, our analysis highlights the effectiveness of Krylov space methods in understanding the different phases for both closed and open quantum dynamics.
One important future direction is to study the complexity dynamics in quasiperiodic systems that show the MBL transition, such as the interacting Aubry-André model <cit.>, to understand if there is any qualitative difference between the complexity dynamics of disordered systems and of quasiperiodic-potential systems. Investigating the role of interaction in the complexity dynamics is also interesting. In the spin model, this amounts to tuning a parameter that multiplies the term σ_i^zσ_i+1^z. One should also try to understand if time-reversal symmetry (TRS) has any role in this context. For that, a term that breaks time-reversal symmetry <cit.> is to be added to the Hamiltonian, and the complexity dynamics should then be studied for this TRS-breaking Hamiltonian.
Finally, we leave it for future work to analyze the spread complexity dynamics in the MBL phenomenological model in terms of the LIOMs <cit.>. That can potentially provide a more model-independent way of characterizing MBL transition through the spread complexity dynamics.
M.G. would like to thank Sumilan Banerjee and Subroto Mukerjee for useful discussions. The authors also thank Aranya Bhattacharya for comments on the draft and suggestions. M.G. is supported by the Integrated PhD fellowship of Indian Institute of Science, Bengaluru, and A.J. is supported by INSPIRE fellowship by DST, Govt. of India.
Note added.– After the completion of this work, we became aware that the authors of <cit.> are investigating complexity in random unitary circuits, which also show signatures of MBL-like behavior from a Krylov complexity perspective.
§ SUPPLEMENTAL MATERIAL
§ BI-LANCZOS ALGORITHM AND COMPLEXITY
The bi-Lanczos algorithm is suitable for tri-diagonalizing a non-Hermitian matrix (H^†≠ H). This can be applied to non-Hermitian state evolution (in non-unitary dynamics, the no-click limit in MIPT, or the no-jump limit in open quantum system evolution) or to operator evolution (in the context of open quantum system dynamics and general measurement settings). In the context of state evolution, the Hamiltonian H will be non-Hermitian, and in the context of operator evolution, the Lindbladian ℒ_o (ℒ_o^†≠ℒ_o) will be non-Hermitian. Once the inner product is specified, the following bi-Lanczos algorithm can be used for both non-Hermitian state and non-Hermitian operator evolution. For Hermitian generators, this reduces to the original Lanczos algorithm. This algorithm is based on <cit.>.
Suppose we are trying to tri-diagonalize an operator M that is non-Hermitian, that is, M^†≠ M. In this case, we have to construct two sets of Krylov basis vectors, {|P_n⟩} and {|Q_n⟩}, which are bi-orthogonal,
⟨ Q_m | P_n ⟩ = δ_mn
The two sets are thus mutually orthogonal, but within each set the vectors are not orthogonal among themselves; that is, ⟨ Q_m | Q_n ⟩≠δ_mn and ⟨ P_m | P_n ⟩≠δ_mn.
As we will see, the algorithm generates three sets of Lanczos coefficients: the main diagonal {a_n}_n≥0, the upper diagonal {b_n}_n≥1, and the lower diagonal {c_n}_n≥1. In the Hermitian limit b_n = c_n, but in general they differ for a non-Hermitian generator.
Constructing the Krylov basis vectors. Start with |P_0⟩ = |Q_0⟩ = |ψ(0)⟩ or |𝒪_0⟩ for state and operator complexity respectively, and define the initial Lanczos coefficients a_0 = ⟨Q_0| M |P_0⟩, b_0 = 0, c_0 = 0. The following algorithm is similar to the usual Lanczos algorithm, where the |P_n⟩'s are constructed using M and the |Q_n⟩'s are constructed using M^†, but in a correlated way.
For n=0,
|A_1⟩ = M |P_0⟩ - a_0 |P_0⟩
|B_1⟩ = M^†|Q_0⟩ - a_0^* |Q_0⟩
w_1 = ⟨A_1|B_1⟩, c_1 = √(|w_1|), b_1 = w_1^*/c_1
|P_1⟩ = |A_1⟩/c_1, |Q_1⟩ = |B_1⟩/b_1^*
a_1 = ⟨Q_1| M |P_1⟩
For n≥ 1,
|A_n+1⟩ = M |P_n⟩ - a_n |P_n⟩ - b_n |P_n-1⟩
|B_n+1⟩ = M^†|Q_n⟩ - a_n^* |Q_n⟩ - c_n^* |Q_n-1⟩
w_n+1 = ⟨A_n+1|B_n+1⟩, c_n+1 = √(|w_n+1|), b_n+1 = w_n+1^*/c_n+1
|P_n+1⟩ = |A_n+1⟩/c_n+1, |Q_n+1⟩ = |B_n+1⟩/b_n+1^*
a_n+1 = ⟨Q_n+1| M |P_n+1⟩
It can be shown that the two sets of Krylov basis vectors formed by the above algorithm are indeed bi-orthogonal. In practice, one stops once c_n+1 falls below some cut-off, and one should fully orthogonalize |A_n+1⟩ against the |Q_m⟩ basis and |B_n+1⟩ against the |P_m⟩ basis for m≤ n to maintain numerical stability.
Observe that,
M |P_n⟩ = a_n |P_n⟩ + b_n |P_n-1⟩ + c_n+1|P_n+1⟩
M^†|Q_n⟩ = a_n^* |Q_n⟩ + c_n^* |Q_n-1⟩ + b_n+1^* |Q_n+1⟩
It shows that M is rendered to a tridiagonal form in the |P_n⟩⟨Q_m| basis.
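For concreteness, the recursion above can be summarized in the following illustrative NumPy sketch (not part of the original numerics); the re-bi-orthogonalization loop implements the full orthogonalization mentioned above for numerical stability, and the stopping criterion on c_n+1 is a tunable cut-off.

```python
import numpy as np

def bi_lanczos(M, v0, kmax, tol=1e-12):
    """Bi-Lanczos tri-diagonalization of a non-Hermitian matrix M.

    Returns the Lanczos coefficients (a, b, c) and the bi-orthogonal
    Krylov bases P, Q (row-stacked), following the recursion in the text.
    """
    v0 = np.asarray(v0, dtype=complex)
    v0 = v0 / np.linalg.norm(v0)
    P, Q = [v0], [v0.copy()]                     # |P_0> = |Q_0>
    a = [np.vdot(Q[0], M @ P[0])]                # a_0 = <Q_0|M|P_0>
    b, c = [0.0 + 0j], [0.0 + 0j]                # b_0 = c_0 = 0
    for n in range(kmax - 1):
        A = M @ P[n] - a[n] * P[n]
        B = M.conj().T @ Q[n] - np.conj(a[n]) * Q[n]
        if n > 0:
            A = A - b[n] * P[n - 1]
            B = B - np.conj(c[n]) * Q[n - 1]
        # full re-bi-orthogonalization for numerical stability
        for m in range(n + 1):
            A = A - np.vdot(Q[m], A) * P[m]
            B = B - np.vdot(P[m], B) * Q[m]
        w = np.vdot(A, B)                        # w_{n+1} = <A_{n+1}|B_{n+1}>
        c_new = np.sqrt(abs(w))
        if c_new < tol:                          # Krylov space exhausted
            break
        b_new = np.conj(w) / c_new
        P.append(A / c_new)                      # |P_{n+1}> = |A_{n+1}>/c_{n+1}
        Q.append(B / np.conj(b_new))             # |Q_{n+1}> = |B_{n+1}>/b*_{n+1}
        a.append(np.vdot(Q[-1], M @ P[-1]))      # a_{n+1} = <Q_{n+1}|M|P_{n+1}>
        b.append(b_new)
        c.append(c_new)
    return np.array(a), np.array(b), np.array(c), np.array(P), np.array(Q)
```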
Complexities.
Now expand |ψ(t)⟩ or |𝒪 (t)⟩ in both sets of Krylov basis vectors (let us call them the P-basis and Q-basis vectors).
|ψ(t)⟩ = ∑_n ϕ_n^q (t) |P_n⟩ = ∑_n ϕ_n^p (t) |Q_n⟩
where,
ϕ_n^q (t) = ⟨Q_n|ψ(t)⟩, ϕ_n^p (t) = ⟨P_n|ψ(t)⟩
For non-Hermitian evolution we define a probability P(t) by,
P(t) = ∑_n |(ϕ_n^p (t))^* ϕ_n^q (t)|
Let us define the Krylov complexity as the average position in the Krylov basis, as dictated by the wavefunction in this basis,
𝒞_𝒦(t) = ∑_n n |(ϕ_n^p (t))^* ϕ_n^q (t)|/∑_m |(ϕ_m^p (t))^* ϕ_m^q (t)| = ∑_n n p_n
where we have the diagonal probabilities,
p_n = |(ϕ_n^p (t))^* ϕ_n^q (t)|/∑_m |(ϕ_m^p (t))^* ϕ_m^q (t)|
The Krylov entropy can be defined as the following Shannon entropy of the diagonal probabilities,
𝒮_𝒦(t) = - ∑_n p_n log p_n
Then we define the entropic complexity as the exponential of the Krylov entropy,
𝒞_𝒮(t) = e^𝒮_𝒦(t)
We also define the Krylov Inverse Participation Ratio (KIPR),
ℐ_𝒦(t) = ∑_n p_n^2
The KIPR signifies the extent of localization of the wave function in the Krylov basis, and the Krylov entropic complexity signifies the effective number of Krylov basis vectors on which the wave function has typical support.
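As an illustration, the quantities defined above can be computed from the bi-orthogonal Krylov bases as in the following sketch; in the Hermitian limit P = Q and the expressions reduce to the standard definitions.

```python
import numpy as np

def krylov_measures(psi_t, P, Q):
    """Krylov complexity C_K, entropic complexity C_S and KIPR of |psi(t)>.

    P and Q are the row-stacked bi-orthogonal Krylov bases; for Hermitian
    evolution P == Q and the formulas reduce to the usual definitions.
    """
    phi_q = Q.conj() @ psi_t                      # phi_n^q = <Q_n|psi(t)>
    phi_p = P.conj() @ psi_t                      # phi_n^p = <P_n|psi(t)>
    weights = np.abs(np.conj(phi_p) * phi_q)
    p = weights / weights.sum()                   # diagonal probabilities p_n
    n = np.arange(len(p))
    C_K = np.sum(n * p)                           # Krylov complexity
    S_K = -np.sum(p[p > 0] * np.log(p[p > 0]))    # Krylov entropy
    return C_K, np.exp(S_K), np.sum(p ** 2)       # (C_K, C_S, KIPR)
```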
§ LEVEL STATISTICS AND STATISTICS OF LANCZOS COEFFICIENTS
The change of level statistics is a useful probe of the chaotic-to-integrable transition. For chaotic or ergodic systems we have Wigner-Dyson level statistics, and for integrable systems we have Poisson level statistics. During the chaotic-to-integrable transition, the level statistics change smoothly from Wigner-Dyson to semi-Poissonian to completely Poisson. This whole transition can be modeled by a flow in the space of random matrix ensembles, specifically the Gaussian β-ensembles, by changing the value of β (note that β=1 corresponds to the Gaussian Orthogonal Ensemble (GOE) and β=0 corresponds to the Poisson distribution) <cit.>; see Fig. <ref>.
The variance of the Lanczos coefficients also distinguishes chaotic from integrable behavior. In particular, integrable systems have a higher variance of the Lanczos coefficients, while chaotic systems have a comparatively lower variance; see Fig. <ref> and Fig. <ref>. The same is observed for operator growth. In the context of the spread complexity of the TFD state in random matrix theory (RMT) ensembles, the variance of the Lanczos coefficients is anti-correlated with the average level-spacing ratio <cit.>. This holds for our model as well; see Fig. <ref> for the initial TFD state and Fig. <ref> for the initial Néel state.
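For reference, the average level-spacing ratio and the variance of the Lanczos coefficients used in this comparison can be computed as in the following sketch; the quoted GOE and Poisson reference values for ⟨r⟩ are the standard ones.

```python
import numpy as np

def mean_gap_ratio(energies):
    """Average consecutive level-spacing ratio <r>.

    <r> is ~0.53 for GOE (ergodic) statistics and ~0.39 for Poisson
    (integrable/localized) statistics.
    """
    E = np.sort(np.asarray(energies))
    s = np.diff(E)
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

def lanczos_variance(b):
    """Variance of the (off-diagonal) Lanczos coefficients b_n."""
    return np.var(np.asarray(b, dtype=float))
```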
§ EIGENSTATE IPR AND SPECTRAL FORM FACTOR
One of the important characteristics of the many-body localized phase is the localization of many-body eigenstates in real space due to the presence of disorder. This can be quantified by evaluating the inverse participation ratio (IPR) of many-body eigenstates |Ψ_n⟩ in the computational basis elements |i⟩, IPR = ∑_i |⟨Ψ_n|i⟩|^4. A higher value of IPR will capture the localization of the eigenstates. In Fig. <ref> we have plotted the IPR of many-body eigenstates for different disorder strengths. It is observed that the disorder-free integrable phase has the most delocalized eigenstates; on the contrary, the MBL phase has the most localized eigenstates. Therefore, we observe that the localization of eigenstates increases monotonically with the disorder strength.
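The eigenstate IPR used here can be evaluated directly from exact diagonalization, as in the following minimal sketch.

```python
import numpy as np

def eigenstate_ipr(H):
    """IPR_n = sum_i |<i|Psi_n>|^4 for every many-body eigenstate of H.

    Larger values indicate eigenstates that are more localized in the
    computational basis.
    """
    _, vecs = np.linalg.eigh(H)          # columns of `vecs` are eigenstates
    return np.sum(np.abs(vecs) ** 4, axis=0)
```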
While the eigenstate IPR sheds light on the localization property of the MBL phase, it does not capture the emergent integrability aspect. Emergent integrability can instead be probed through the level-spacing ratio as well as the spectral form factor (SFF). The spectral form factor is defined by
SFF(t) = 1/d^2∑_n,m=1^de^i(E_n-E_m)t
For a chaotic eigenvalue distribution, the SFF shows a characteristic dip-ramp-plateau behavior as a function of time t, a feature that is absent in integrable models. From Fig. <ref>, we observe that the SFF in the ergodic phase indeed has the dip-ramp-plateau behavior, which is absent in both the disorder-free integrable phase and the MBL phase.
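Since the double sum over eigenvalues factorizes, SFF(t) = |∑_n e^{-iE_n t}|^2/d^2, it can be evaluated efficiently as in the following sketch.

```python
import numpy as np

def spectral_form_factor(energies, times):
    """SFF(t) = |sum_n exp(-i E_n t)|^2 / d^2 evaluated on an array of times."""
    E = np.asarray(energies)
    d = len(E)
    phases = np.exp(-1j * np.outer(times, E))     # shape (n_times, d)
    return np.abs(phases.sum(axis=1)) ** 2 / d ** 2
```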
§ QUENCH DYNAMICS
We consider some possible quench scenarios to understand the state-dependent spread complexity in the different phases. To do so, we first prepare the ground state of the disorder-free model and then suddenly switch on disorder of varying strength. We then track the complexity dynamics of this state under the evolution of the disordered Hamiltonian. Our observations indicate that both the complexity and the delocalization increase with increasing disorder strength, and the normalized spread complexity appears to saturate around 0.16 at high disorder (top panel in Fig. <ref>). In contrast, if one starts with the infinite-temperature TFD state of the disorder-free model and evolves it with the disordered Hamiltonian, then in the high-disorder limit the normalized complexity appears to saturate around 0.36 (bottom panel in Fig. <ref>).
To probe the reversed quench scenario, we take as the initial state the ground state of a Hamiltonian inside the MBL regime, say at W=6.5, and evolve it with a Hamiltonian of lower disorder strength. At W=6.5 the complexity remains zero, since the complexity evolution of an eigenstate is trivial. For lower disorder strengths, however, the state evolves non-trivially, and we observe that the complexity saturation value increases as W decreases. In our case, for system size L = 12, as we move away from W=6.5 the complexity increases, with a maximum saturation value of around 0.4; but as we enter the disorder-free phase, the complexity saturation decreases to around 0.29 at W=0 (middle panel in Fig. <ref>).
Taken together with the previous results on thermofield dynamics, relaxation dynamics, and quench dynamics, these observations indicate that the disorder-free phase generally exhibits lower complexity than the other two phases. The results for the spread complexity of typical states in the main text also support this observation.
http://arxiv.org/abs/2409.03247v1 | 20240905045118 | End User Authoring of Personalized Content Classifiers: Comparing Example Labeling, Rule Writing, and LLM Prompting | ["Leijie Wang", "Kathryn Yurechko", "Pranati Dani", "Quan Ze Chen", "Amy X. Zhang"] | cs.HC | ["cs.HC"]
End User Authoring of Personalized Content Classifiers: Comparing Example Labeling, Rule Writing, and LLM Prompting
Leijie Wang ([email protected]), University of Washington, Seattle, United States
Kathryn Yurechko ([email protected]), University of Oxford, Oxford, United Kingdom
Pranati Dani ([email protected]), University of Washington, Seattle, United States
Quan Ze Chen ([email protected]), University of Washington, Seattle, United States
Amy X. Zhang ([email protected]), University of Washington, Seattle, United States
§ ABSTRACT
Existing tools for laypeople to create personal classifiers often assume a motivated user working uninterrupted in a single, lengthy session.
However, users tend to engage with social media casually, with many short sessions on an ongoing, daily basis.
To make creating personal classifiers for content curation easier for such users, tools should support rapid initialization and iterative refinement.
In this work, we compare three strategies—(1) example labeling, (2) rule writing, and (3) large language model (LLM) prompting—for end users to build personal content classifiers.
From an experiment with 37 non-programmers tasked with creating personalized comment moderation filters, we found that with LLM prompting, participants reached 95% of peak performance in 5 minutes, beating other strategies due to higher recall, but all strategies struggled with iterative refinement.
Despite LLM prompting's better performance, participants preferred different strategies in different contexts and, even when prompting, provided examples or wrote rule-like prompts, suggesting hybrid approaches.
CCS Concepts: Human-centered computing → Collaborative and social computing systems and tools
§ INTRODUCTION
Today, internet users encounter more content than ever before, ranging from social media posts, blogs, and news articles to chat conversations and emails. To manage this overwhelming and sometimes unwanted information, online platforms provide automated curation systems to categorize, label, and moderate content. Some offer content moderation algorithms to remove harmful posts <cit.>, automatic classifiers to organize and prioritize emails <cit.>, or recommendation algorithms to discover interesting content <cit.>. But these systems are typically centralized and platform-wide, failing to accommodate the diverse preferences of individual users <cit.>.
To support end-user customization, researchers have examined and built a variety of specialized tools that enable users to author personalized content classifiers within social media <cit.>. They have also explored generic techniques for non-technical people to build their own text classifiers <cit.>.
However, many tools for authoring personal classifiers on social media are designed for users with heightened motivation, such as community moderators <cit.> and high-profile content creators <cit.>. In focusing on these users, such tools neglect the usage patterns of general social media users, who spend significant time on social media over the long term but in many relatively short, fragmented sessions. They also have limited attention and motivation for cognitively demanding tasks <cit.>.
As a result, for many users to realistically participate in curation, tools for authoring classifiers should support rapid initialization. As end users oftentimes engage with internet content as a leisure activity, a successful tool should allow them to quickly and intuitively build an initial classifier with decent performance. In contrast, existing systems often demand that users maintain a high level of concentration for extended periods in this process <cit.>. For instance, some treat users as oracles who can continuously provide high-quality input, whereas others overwhelm users with an excessive amount of information <cit.>.
In addition, curation tools should enable easy iteration to improve initial creations incrementally, as social media users may be more amenable to short tasks spread out over many sessions as opposed to s single lengthy task. Instead, existing systems often assume that users want to create highly performant custom classifiers in a single sitting <cit.>, for instance, requiring that users carefully debug their classifiers before their deployment. However, social media users naturally audit curation algorithms as they browse their feeds regularly <cit.>. Their preferences may also evolve gradually, necessitating continuous iteration <cit.>. Thus, a successful tool should enable users to make small adjustments over a period of time and ensure that each change results in the desired incremental improvement.
In this work, we use these system requirements to compare three prominent strategies for creating custom classifiers for content curation: (1) labeling examples for supervised learning, (2) writing and carrying out rules, and (3) prompting a large language model (LLM).
Each of these techniques has its respective benefits and drawbacks for end-user classifier creation.
In interactive machine learning (IML) research, labeling examples is a popular strategy to customize a classifier. It is promising in the social media context due to its simplicity and ability to be easily divided into smaller tasks. However, even though pretrained word-embeddings <cit.> and active learning <cit.> reduce the number of labels needed to train a classifier, labeling examples can still be tedious and inefficient <cit.>, so this technique is typically not available for social media users to create personal classifiers from scratch.
In contrast, the most common strategy deployed on social platforms today is writing rules. Most platforms allow users to curate content through simple keywords <cit.>. AutoMod, the most widely used content moderation tool on Reddit <cit.>, further enables community moderators to write regular expressions to remove inappropriate posts <cit.>. Despite the transparency that rules afford, users often struggle to write rules that account for nuanced contexts and to further refine them <cit.>.
Finally, with recent advances in LLMs, we additionally investigate the technique of customizing zero-shot or few-shot classifiers by prompting an LLM in natural language <cit.>. While LLMs have the potential to learn nuanced preferences quickly <cit.>, their effective use may require prompt engineering skills beyond the capabilities of many end users <cit.>.
We conducted a within-subjects lab experiment to evaluate these three strategies.
For each strategy, we implemented a content curation system with state-of-the-art features to ensure a fair comparison.
We then invited 37 non-technical social media users to build a personal classifier using each system.
Each participant first manually labeled 100 comments sourced from YouTube videos on political topics to create a personal ground truth dataset, which allowed us to evaluate the performance of their classifiers. Then, in a randomized order, they used each of the three systems to create personal classifiers for removing unwanted YouTube comments, with a 15-minute time limit per system.
We logged the actions participants took in each condition and tracked the performance of the classifiers as participants built them. After each condition, participants filled out a survey to report their subjective user experience.
Additionally, we conducted 13 semi-structured interviews with a subset of participants after all conditions were complete to better understand their challenges in communicating initial preferences and iterating on classifiers in different conditions.
We found that writing prompts generally allowed participants to create personal classifiers with higher performance more quickly, achieving a significantly higher recall and F_1 score than authoring rules and labeling examples throughout the 15-minute experiment period.
However, participants struggled to iteratively refine their prompts to align LLMs with their nuanced preferences, as both recall and precision plateaued within roughly five minutes of starting.
This difficulty was most pronounced when users tried to teach LLMs to categorize specific phrases within certain concepts, such as classifying “fool” as offensive or “hell” as obscene.
Even worse, the opaque nature and occasionally unpredictable behavior of LLMs left participants uncertain about which phrases LLMs might interpret differently.
As a result, one-third of participants preferred labeling examples over writing prompts, while another third preferred authoring rules.
Some found labeling examples to be the easiest method for expressing their intuitive preferences, while they considered rules better for curating content about specific topics or events.
Participants even tried to deploy these two strategies when iterating on their prompts: they often added incorrectly classified examples directly into their prompts as few-shot examples or wrote prompts resembling rules.
Our findings offer a roadmap for future end-user content curation systems and have broader implications for facilitating more efficient and effective communication between end users and classifiers.
First, LLMs are a promising tool for enabling personalized content classifiers, with only five minutes of work needed to initialize a classifier with a better F_1 score than 15 minutes using traditional methods of labeling examples or writing rules.
However, we find that labeling examples requires less cognitive effort from users, and writing rules provides users with more control and transparency over their classifiers.
Given the diversity of content curation scenarios and user preferences, future content curation systems should provide end users with more flexible strategies to create and iterate on their classifiers.
For instance, while writing prompts could be the default strategy given its superior overall performance, future hybrid systems could also suggest possible prompts based on example labels to help users who have difficulty articulating their intuitions, or allow rule authoring for preferences about specific topics or events.
Hybrid approaches such as these could bring together the benefits of all three strategies and facilitate more effective human-LLM collaboration.
§ RELATED WORK
§.§ End User Customization of Content Curation
Content curation is an important process for internet users who seek to classify desired or undesired content on social media <cit.>, filter out harassment or spam from their email folders <cit.>, manage online chats <cit.>, find relevant news articles <cit.>, or otherwise categorize information that they encounter online. However, content curation tools are often centralized and platform-wide, leaving limited room for end user customization <cit.>. For example, one-size-fits-all moderation algorithms often fail to account for the diverse perceptions of toxicity across countries <cit.> and communities <cit.>. Similarly, some users want to categorize their emails into more fine-grained folders other than the default “Spam” or “Promotions” folders <cit.>.
As a result, there have been growing calls to empower end users to customize content curation classifiers. For example, researchers have argued for personal content moderation tools that enable users to customize some aspects of their moderation preferences based on the content of posts uploaded by other users <cit.>.
Users may wish to remove unwanted content from their feeds on platforms like Instagram, TikTok, and X/Twitter <cit.> or to moderate the comment sections of their YouTube videos or the chat channels of their Twitch streams <cit.>. In both cases, end users act as moderators that remove content according to their preferences.
Researchers and practitioners have experimented with offering custom content curation tools. However, these tools often struggle to navigate the trade-off between flexibility of customization and ease of use.
Some tools offer extensive customization but are too complex for casual users. For example, Reddit’s AutoMod <cit.> allows community moderators to write regular expressions in YAML to remove inappropriate posts, but only users with technical knowledge of regular expressions and programming syntax can fully utilize AutoMod's flexibility <cit.>.
Conversely, other tools are more simplistic in design. Many of them involve pre-trained classifiers for platform-defined or researcher-defined concepts such as racism, misogyny, and political views, with users only able to adjust their sensitivity to each concept <cit.>. However, this approach fails when users have differing definitions of a concept or wish to classify content with respect to a concept that is not provided <cit.>.
Recently, research has shown that LLMs can outperform state-of-the-art classifiers in detecting toxic content with natural language prompts <cit.>. But LLMs are not exempt from the customization versus ease of use trade-off: when applied toward more complex and nuanced curation preferences that go beyond simple hate speech or toxicity filtering, LLMs sometimes perform even worse than a coin toss <cit.>. In this work, we compare common content curation techniques in terms of this trade-off through an experiment using the case of personal content moderation, with the eventual goal of designing a system that offers both customization and ease of use for a broad range of end users.
§.§ Strategies that Support End Users in Customizing Classifiers for Content Curation
Despite the ubiquitous use of machine learning (ML) algorithms in various domains, only a small group of people with technical expertise possess the skills to develop these algorithms. Interactive machine learning (IML) seeks to democratize ML training with humans-in-the-loop, enabling non-experts to participate through “rapid, focused, and incremental model updates” <cit.>. IML underlies many efforts at developing end user systems to build classifiers and operates in three critical phases <cit.>: (1) Users examine a model’s outputs across a set of examples; (2) Users then offer feedback to the model, guiding its learning in the desired direction; and (3) Users assess the model's overall performance and decide whether to conclude the training process or to offer the model more feedback.
At the heart of these phases are teaching vocabularies: the frameworks through which end users structure their feedback <cit.>, including assigning labels, selecting features, specifying parameters, or indicating error preferences <cit.>. Recently, prompts have emerged as another promising way for end users to communicate with algorithms (i.e., LLM-based models). Through the lens of teaching vocabularies, we characterize existing content curation systems by three primary strategies that they leverage to support end user customization: labels, features or rules, and prompts. In the following, we discuss each strategy and its relationship to content curation.
§.§.§ Labels
Many IML systems consider end users to be oracles who provide correct labels. Techniques like pre-trained word embeddings <cit.> and active learning <cit.> are often used to minimize the number of labels required to effectively train a ML algorithm <cit.>. An active learning algorithm iteratively picks examples for labeling from an unlabeled dataset through query sampling strategies, such as uncertainty sampling <cit.> and query-by-committee <cit.>. Previous studies have shown that people find labeling effective for communicating with opaque, “black-box’’ algorithms <cit.>. However, researchers have noted that end users might find the repetitive labeling of data tedious and non-transparent <cit.>. There is also a risk of users applying labels inconsistently due to their evolving preferences <cit.>.
§.§.§ Features or Rules
In response to the aforementioned criticisms of the labeling approach, a body of research focuses on feature-level human input <cit.>. Here, the term “feature’’ refers to an attribute of a data instance that ML algorithms use to predict its class label. In comparison, “rules” are often logical statements that directly map to a class label without relying on ML algorithms. In systems that support technical users to build a text classifier via feature-level input, features and rules often take the same form: the mentions of keywords. By incorporating features or rules into different algorithm architectures, these systems can offer varying degrees of transparency.
Rules can offer the highest transparency by directly classifying text without using any ML models.
Features, on the other hand, can be incorporated into transparent ML algorithms, such as Decision Trees, Naive Bayes algorithms <cit.>, and Support Vector Machines <cit.>.
In addition, because end users might produce noisy and inconsistent features <cit.>, user-generated features can also be used to label data. Weak supervision algorithms, for example, then learn from these “soft labels’’ <cit.>. However, these indirect approaches sacrifice the transparency and control that users often assume with feature-level input. For this reason, many systems designed for end users often integrate features with transparent algorithms to enhance users' mental models of such systems <cit.>.
Although empirical results regarding whether feature-level input from end users improves performance have been mixed <cit.>, researchers are generally in agreement that feature-level input requires fewer user actions to achieve comparable results to example labeling, and that it could produce models more closely aligned with an individual’s needs or domain knowledge. However, features can be seen as too granular and therefore not generalizable enough, posing a challenge for end users in creating effective classifiers <cit.>.
§.§.§ Prompts
LLMs have exploded in popularity due to their proficiency in a broad range of natural language processing tasks, such as sentiment analysis and machine translation. Compared to features, natural language prompts can convey richer user guidance to models and thus promote greater generalizability across various contexts <cit.>. Rather than having to train a new model for every custom task, users can simply customize LLMs by feeding them prompts at run time. Such an ability to recognize the desired task on-the-fly is called in-context learning (ICL) <cit.>.
Developing effective prompts for ICL is crucial to leveraging LLMs in content curation <cit.>. To date, the most common patterns for prompting are zero-shot or few-shot prompts. Zero-shot prompts give instructions for a task without any specific examples for training <cit.>. Research indicates that the performance of zero-shot prompts can be enhanced by iteratively refining task instructions <cit.> or breaking down tasks into simpler subtasks <cit.>. Alternatively, few-shot prompts incorporate a few input-output examples to showcase the desired pattern to which LLMs should adhere <cit.>. The quality of few-shot prompts relies heavily on the selected examples <cit.>. While few-shot prompts generally outperform zero-shot prompts, the simplicity of composing natural language instructions without the need for selecting examples makes zero-shot prompting an attractive option for end users. Even so, various challenges impede end users in creating and refining their prompts. For example, even a slight adjustment to a prompt's format or ordering of examples can greatly influence a model’s performance <cit.>. In addition, users often find it difficult to evaluate the clarity of their instructions to LLMs and to align their perspectives with those of LLMs, leading to incorrect or unexpected outputs <cit.>.
§.§ How Existing IML Systems Overlook End User Needs in Content Curation
While researchers and practitioners have built many systems using labels, features, and prompts, they often make assumptions about end users that do not translate well into the context of content curation.
First, these systems often require end users to spend hours of dedicated time creating their custom classifiers <cit.>. However, users typically engage with social media as a leisure activity <cit.>.
For example, a recent survey notes that the top two reasons why people use social media are to keep in touch with friends and to fill spare time <cit.>. Research also suggests that internet users often rely on default settings, despite acknowledging the benefits of customization controls <cit.>. Ultimately, users are only willing to invest effort in creating custom classifiers if there are proportional improvements to their content feeds <cit.>.
Additionally, existing IML systems require users to be highly dedicated to creating their custom classifiers.
For instance, some systems treat users as infallible experts who can continuously provide high-quality labels or rules <cit.>, while others present an overwhelming amount of information intended to facilitate the teaching process <cit.>.
Even though weak supervision algorithms can learn from noisy user annotations, they often require more annotations to compensate for their low quality <cit.>. Such an expectation of high commitment could thus intimidate users—who are likely to be casually using classifiers <cit.>, accessing them on mobile devices, or using them in between other activities and with distractions <cit.>—from using these systems at all.
Finally, IML systems often assume that users want to create highly performant classifiers at their inception <cit.>, but users of content curation tools prefer to iteratively refine their classifiers <cit.>.
For example, instead of carefully considering all possible mistakes when initially creating their classifiers, social media users naturally audit their deployed classifiers as they browse content feeds for entertainment <cit.>. Members of online communities also tend to help community moderators collect mistakes via user reports <cit.>.
This on-the-go monitoring signals the need for content curation classifiers to adapt to constant distribution shifts and fluid curation preferences: a new political event or video theme could spark community discussions that were unobserved or nonexistent when a user initially developed their classifier <cit.>. Users' curation preferences may also evolve over time, sometimes even depending on their moods <cit.>. However, iterating a classifier remains challenging in IML research <cit.>. Traditional ML algorithms often require users to provide sufficient new training data or relabel existing training data to reflect updated information. While several systems enable users to update a model via rules <cit.> or natural language descriptions <cit.> by adjusting the weights of training data, it remains to be explored how these methods could be effectively applied in the context of content curation.
§ METHODS
Drawing inspiration from research in interactive machine learning and in-context learning, we identified three primary strategies, or “teaching vocabularies,” through which users can convey their personal preferences to algorithms: labeling examples, authoring rules, and writing prompts.
Each teaching vocabulary can be leveraged most effectively by a corresponding backend architecture: black-box ML algorithms, transparent ML algorithms, and LLMs respectively. From the perspective of users, each teaching vocabulary provides different levels of control and requires varying degrees of effort to learn and master.
Corresponding to these three teaching vocabularies, we developed three representative systems enabling end users to create binary text classifiers, using the scenario of personalized content moderation for YouTube comments. Each system is either a popular or potentially popular tool for end users to customize moderation algorithms.
We conducted a within-subjects experiment with 37 non-programmer participants to comparatively evaluate these three personal moderation systems for everyday users. Participants used each of the systems—in a randomized order—to build a classifier within a fixed amount of time, resulting in three experimental conditions. Throughout the duration of each condition, we collected the performance of participants' classifiers every half-minute in order to understand how well each system enables rapid initialization and iterative improvement over time.
After each condition, we collected participants' self-reported usability ratings of the system in that condition to understand their experiences and challenges that they encountered. For 13 participants, we additionally conducted a semi-structured interview after the experiment was over to understand their challenges in creating and iterating personal classifiers.
We further describe our system implementations and experiment design below.
§.§ Experiment Systems
Through pilot studies (detailed in Appendix A), we iteratively implemented three systems corresponding to the three primary teaching vocabularies: Label System (labeling examples to train a supervised model), Rule System (writing executable rules), and Prompt System (writing prompts for an LLM). To ensure the validity of our experiment, we established the following criteria in our system implementations.
First, we restricted the form of user input in each system to its designated teaching vocabulary.
While we acknowledge that an ideal personal moderation system might accept multiple forms of user input, we note that allowing varied types of user input in a single system could obscure the comparison between the three teaching vocabularies in our analysis.
For instance, in a label-based system, allowing users to influence algorithm predictions by highlighting keywords in examples could blur the lines between teaching through labels and teaching through rules (since keywords act as rule indicators).
Similarly, if users are allowed to instruct an LLM to enumerate all possible offensive words in a rule-based system, then users would essentially use both rules and prompts to interact with the algorithm.
Second, we ensured that similar functionalities were integrated across all systems so that no condition had an unfair advantage, provided that such functionalities did not compromise the user experience. For instance, both the Rule System and the Prompt System enable users to interactively view how their systems label examples from the training dataset. However, we did not include this feature in the Label System because our pilot studies indicated that displaying system predictions on examples during the active learning process tended to confuse rather than assist end users.
Finally, we equipped each system with state-of-the-art features to assist end users in creating their moderation classifiers. For example, the Rule System suggests synonyms for phrases that users have already input, and the Prompt System helps rephrase users' prompts.
We chose to implement features that involve relatively trivial and already established improvements to the basic authoring interaction to improve performance or usability <cit.>.
Note that this approach does not conflict with our first criterion, as we only enhance user input and do not permit multiple forms of user input.
Thus, our three systems present our attempt to provide the best versions of each strategy while keeping each strategy distinct and functionalities similar for a fair comparison.
In Figure <ref>, we summarize the features we chose to implement for each system.
In the following, we discuss the detailed implementation of each system, highlighting how the systems are optimized to support users in the context of personal moderation.
§.§.§ Label System: Train a “Black-box” ML Algorithm by Labeling Examples.
With the Label System, users label example comments to create a personal moderation classifier. We experimented with various models using the Jigsaw toxicity dataset <cit.> and opted for a combination of Sentence Transformer + Naive Bayes, which had the highest overall performance.
We incorporated active learning to enhance our system, specifically employing uncertainty sampling <cit.>. After a user labels a set of examples, we train the algorithm on these labels and calculate the predicted positive probabilities for the remaining examples in the training dataset. We subsequently sample examples with the highest label uncertainty. In the context of Naive Bayes, examples with prediction scores near 0.5 are deemed the most uncertain. The frontend web interface, depicted in Figure <ref>, allows users to label examples selected by active learning and then train the backend algorithms.
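As a rough sketch of this pipeline, the retraining and uncertainty-sampling step could look like the following; the specific sentence-embedding model, Naive Bayes variant, label encoding, and batch size here are illustrative assumptions rather than our exact configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.naive_bayes import GaussianNB

# The embedding model name is an illustrative choice.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrain_and_sample(comments, labeled_idx, labels, batch_size=10):
    """Fit Naive Bayes on the labeled embeddings, then pick the most
    uncertain unlabeled comments (predicted P(remove) closest to 0.5)."""
    X = encoder.encode(comments)                         # (n_comments, dim)
    clf = GaussianNB().fit(X[labeled_idx], labels)       # labels: 1 = remove, 0 = keep
    remove_col = list(clf.classes_).index(1)
    unlabeled = [i for i in range(len(comments)) if i not in set(labeled_idx)]
    p_remove = clf.predict_proba(X[unlabeled])[:, remove_col]
    uncertainty = np.abs(p_remove - 0.5)                 # uncertainty sampling
    next_batch = [unlabeled[i] for i in np.argsort(uncertainty)[:batch_size]]
    return clf, next_batch
```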
§.§.§ Rule System: Create a Transparent Algorithm by Authoring Rules
In implementing the Rule System, we drew inspiration from AutoMod, which allows community moderators to create automated scripts for detecting rule violations, such as profanity or external links. Recognizing that AutoMod’s complexity may deter end users from creating personal moderation classifiers, we used our pilot studies to develop a more user-friendly Rule System tailored to individual users' moderation needs (see Appendix A). The system filters comments that match any constructed rule, which represents a category of texts that the user wants to remove from their content feed. Figure <ref> presents an example of rules that users can create with our Rule System. Authoring rules is typically an iterative process in which users review examples, refine their rules, and examine the effect of their rules on those examples <cit.>. Therefore, we present examples from the training dataset on the side, as seen in Figure <ref>, so that users can interactively review the effect of their rules and then adjust their rules accordingly.
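At its core, the matching logic of the Rule System can be sketched as follows; this simplified stand-in treats each rule as a named list of trigger phrases and omits interface features such as synonym suggestions.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A single user-authored rule: a named category of unwanted content
    defined by a list of trigger phrases (simplified stand-in)."""
    name: str
    phrases: list = field(default_factory=list)

    def matches(self, comment: str) -> bool:
        text = comment.lower()
        return any(p.lower() in text for p in self.phrases)

def classify(comment, rules):
    """Remove a comment if any rule matches; also report which rules fired,
    which is what makes the Rule System's decisions transparent to users."""
    fired = [r.name for r in rules if r.matches(comment)]
    return ("remove" if fired else "keep"), fired

# Hypothetical usage: a rule targeting personal insults.
insults = Rule(name="personal insults", phrases=["idiot", "moron", "fool"])
print(classify("What a fool.", [insults]))   # ('remove', ['personal insults'])
```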
§.§.§ Prompt System: Communicate with LLMs by Writing Prompts.
Like the Rule System, the Prompt System features a panel of instructions (prompts in this case) and a panel of examples. Figure <ref> provides an example of a prompt as a category of unwanted comments. We refrain from asking LLMs to generate explanations for their predictions on individual examples because prior studies indicate that such explanations can be inaccurate, potentially confusing users rather than helping them develop clear mental models <cit.>.
When developing the prediction algorithm, we needed to determine the optimal balance between the quality of LLM predictions and the time users must wait for predictions.
Through our pilot studies, we decided to prioritize low response times while maintaining sufficient prediction quality to reflect realistic deployment settings (see Appendix A).
Our prediction algorithm operates as follows. It combines each user-created prompt with a predefined system prompt and then requests predictions from LLMs in batches of 10 comments.
Since LLMs process each prompt independently, we can tell users which prompt leads to the removal of an example, thus offering more explainability than aggregating all prompts into a single query. We also introduced a caching mechanism—only re-querying the LLM for prompts that have been modified since their last evaluation—to minimize unnecessary requests and therefore reduce users' waiting times. Throughout our implementation, we used OpenAI's model.
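The prediction loop can be sketched as follows; the model name, system prompt wording, and response-parsing convention are illustrative assumptions, while the batching of 10 comments per request, the per-prompt attribution of removals, and the caching of unmodified prompts mirror the behavior described above.

```python
from openai import OpenAI

client = OpenAI()
_cache = {}   # (prompt, comment) -> bool; unmodified prompts hit the cache

SYSTEM_PROMPT = (  # illustrative wording, not the exact system prompt used
    "You are a content moderation assistant. For each numbered comment, "
    "answer YES if it matches the user's description of unwanted content, "
    "otherwise answer NO. Reply with one answer per line."
)

def label_batch(user_prompt, comments, model="gpt-3.5-turbo"):
    """Label one batch of comments against a single user-written prompt."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Unwanted content: {user_prompt}\n\nComments:\n{numbered}"},
        ],
    )
    answers = resp.choices[0].message.content.strip().splitlines()
    return [a.strip().upper().startswith("YES") for a in answers[:len(comments)]]

def classify(comments, prompts, batch_size=10):
    """A comment is removed if any prompt flags it; each prompt is queried
    separately so the interface can show which prompt caused a removal."""
    removed = [False] * len(comments)
    for prompt in prompts:
        for start in range(0, len(comments), batch_size):
            batch = comments[start:start + batch_size]
            missing = [c for c in batch if (prompt, c) not in _cache]
            if missing:
                for c, flag in zip(missing, label_batch(prompt, missing)):
                    _cache[(prompt, c)] = flag
            for i, c in enumerate(batch, start=start):
                removed[i] = removed[i] or _cache[(prompt, c)]
    return removed
```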
§.§ Experiment Datasets
For our experiment dataset, we crawled all comments of three videos about gun control policies from a YouTube channel called The Young Turks: an online news show with 5.78 million subscribers that covers political topics from a left-leaning perspective. We refrained from using existing toxicity datasets <cit.> because they were often sampled from various online platforms and communities to train more generic toxicity classifiers. Their examples relate to diverse contexts and are difficult for participants to understand. Hence, we concentrated on three videos regarding the same topic to simulate a real-world personal moderation setting more accurately.
We selected the channel The Young Turks for various reasons. First, given our plan to recruit university students, we chose the familiar and opinion-provoking topic of political news over niche interest group content. Additionally, to elicit more opinionated reactions, we opted for a channel known for its more polarizing and occasionally controversial content. Lastly, we selected a channel without active comment moderation to ensure the presence of potentially toxic ones in our dataset.
We gathered over 5,000 comments from the three selected videos. To simplify the moderation task, we excluded comments that were responses to others. We also filtered out comments that were too brief or excessively lengthy, as they tended to be either irrelevant or cumbersome to read. Next, we used the Perspective API to assess the toxicity level of each comment. We labeled a comment as toxic if its toxicity score exceeded 0.7, following recommendations from prior research <cit.>. Although participants' moderation preferences can differ greatly, we aimed to balance the dataset so that nearly half of the comments would be toxic as determined by Perspective API. This process resulted in a balanced dataset of 800 comments. We randomly divided our dataset into a training dataset and a test dataset of 100 examples for each participant. The training dataset was used to help participants create their classifiers, whereas the test dataset was labeled by participants and used to evaluate their created classifiers.
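The dataset construction can be summarized by the following sketch, where `toxicity_score` is a hypothetical helper wrapping the Perspective API and the comment-length bounds are illustrative placeholders rather than our exact thresholds.

```python
import random

def build_experiment_dataset(comments, toxicity_score,
                             target_size=800, threshold=0.7):
    """Filter crawled comments and sample a roughly balanced dataset.

    `toxicity_score` is a hypothetical helper that wraps the Perspective
    API; the comment-length bounds below are illustrative placeholders.
    """
    usable = [c for c in comments
              if not c["is_reply"] and 20 <= len(c["text"]) <= 300]
    toxic, clean = [], []
    for c in usable:
        (toxic if toxicity_score(c["text"]) > threshold else clean).append(c)
    half = target_size // 2
    dataset = random.sample(toxic, half) + random.sample(clean, half)
    random.shuffle(dataset)
    return dataset
```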
§.§.§ Recruitment and Participants
We recruited 37 non-programmer participants by advertising a call for participation on mailing lists of non-Computer Science departments at two major U.S.-based universities. We only selected participants who self-reported having little knowledge of programming and algorithms. There were 27 females, 9 males, and one participant who preferred not to disclose their gender. Most participants were pursuing their bachelor's degrees except four pursuing more advanced degrees.
Regarding political stance, 16 participants identified as liberal, 12 as moderate, and the remaining 9 were evenly distributed among “very liberal,” “conservative,” and “prefer not to disclose” groups. In terms of generative AI usage, 14 participants used it at least every few weeks, 14 participants every few months, and 9 participants rarely.
To gather qualitative data on participants' experiences with the systems, we conducted 13 individual user studies via Zoom. Additionally, we held 7 in-person workshops that did not include semi-structured interviews. Each workshop had between three to five participants, with all participants in a given workshop using the three systems in the same sequence. The individual user studies averaged 129 minutes in length, while the in-person workshops lasted about 100 minutes. Individuals were compensated with a $40 gift card for their participation. In our analysis, we denote participants who attended individual sessions as P1–P13 and those who did not as W1–W24.
§.§ Experimental Design
§.§.§ Study Design and Procedure
We employed a within-subjects design with the three experiment systems described above.
The final experiment protocol was designed iteratively through five pilot experiments to ensure its effectiveness.
This study was reviewed by our university IRB and deemed exempt.
Stage 1: Study Onboarding. We started the experiment by briefing the participants and warning them of the possibility of encountering profanity and hate speech. We emphasized that participants could stop the experiment whenever they wanted, and we gained their explicit consent before proceeding. We then asked participants to imagine themselves as YouTube content creators whose videos on gun control policies had gone viral and were flooded with comments. They were then invited to set up automated classifiers to remove unwanted content based on their personal preferences. To gather a wide range of user preferences in our study, we assured participants that we would not judge their moderation preferences but instead focus on how well the three systems could align with their preferences.
Stage 2: Ground Truth Labeling. Subsequently, participants were asked to label 100 comments as “Keep” or “Remove” to form their test dataset.
Prior research suggests that users might label data inconsistently, which harms the training of downstream ML algorithms <cit.>.
Hence, we emphasized that participants should make labeling decisions consistently and that, if they change their criteria, they should revise their previous decisions.
This test dataset would later be used to evaluate the performance of the classifiers that participants created.
We scheduled the ground truth labeling before the classifier creation so that participants could familiarize themselves with content moderation and this dataset of YouTube comments. While exposure to the test dataset beforehand might bias the classifier creation process, we argue that participants already had moderation preferences in mind and simply conveyed them when labeling their test datasets.
Stage 3: Creating Personalized Classifiers. We then asked participants to create classifiers using experiment systems in a randomized order. We used a counterbalanced design to counter any potential learning or fatigue effects.
The underlying process for all three experiment conditions remained consistent. Each condition lasted 25 minutes in total. Participants first spent five minutes engaging with tutorial slides and the corresponding system. They were asked to try all of the system's functionalities and were encouraged to ask any questions.
Then, in each condition, participants were given 15 minutes to create classifiers.
In particular, the waiting time for backend computations did not count toward the 15-minute duration. This affected two systems: the Prompt System supported by a generative language model, and the Label System, which takes time to calculate the most uncertain examples for the next batch.
After 15 minutes, participants spent three minutes examining the overall performance and individual predictions of their created classifiers on their test datasets. Following this, participants reported their subjective experiences in a survey, which we will discuss in detail in <ref>. Once participants completed all three conditions, we conducted a final survey to collect participants' preferred systems for content moderation and their rationales.
Stage 4: Semi-structured Interviews (for Individual User Studies Only).
At the end of the experiment, we asked participants a series of questions to understand the challenges that they encountered in creating and iterating on their classifiers. Examples of these questions include “What do you like and dislike about writing prompts?” and “Can you easily understand why a prompt classifier makes its decisions?”
We also inquired about their preferred systems for various content curation scenarios.
§.§.§ Evaluation Measures
We consider the following three measures to evaluate the systems in our experiment.
Classification Performance. To determine which system could learn user preferences with the highest performance, we evaluated the accuracy, precision, recall, and F_1 score of the final systems participants developed.
We adopted several performance metrics because individuals prioritize different metrics for content moderation algorithms <cit.>. Additionally, since participants' ground truth labels on the test dataset were not always balanced, accuracy might not reflect a system's true ability to distinguish between positive and negative examples.
Creation Speed. To assess which system could enable participants to develop a performant classifier most rapidly, we logged each participant's classifier every 30 seconds throughout the classifier creation period and calculated the performance of each snapshot.
The performance of each classifier at early intervals serves as an indicator of its creation speed. In particular, we selected 5 minutes and 10 minutes as two representative intervals.
Ease of Creation. We documented 23 different types of user interactions, including asking for synonym suggestions, loading additional examples, and applying classifiers to examples. These logged actions indicated the usefulness of the implemented features and how participants approached classifier creation with each system.
We also gathered participants' subjective perceptions using a five-point Likert scale (from Strongly Disagree to Strongly Agree) for each condition:
* Subjective Workload. We adopted four applicable items from the NASA-TLX survey <cit.> regarding mental demand, temporal demand, effort, and feelings of stress.
* System Usability. We adopted all four items from the Usability Metric for User Experience (UMUX) survey <cit.> to measure system usability. We chose the UMUX over the System Usability Scale (SUS) to reduce the overall number of questions that participants had to answer.
* Understanding. To understand whether participants had a clear mental model of each system, we evaluated both participants' global understanding of a system and their local understanding of individual predictions <cit.>. For global understandings, participants were asked to rate the statement “I felt that I had a good understanding of how my classifier works.” For local understanding, we asked participants to explain the predictions of one false positive and one false negative randomly selected from the test dataset.
§.§ Data Analysis
§.§.§ Quantitative Modeling
Quantitative results were analyzed using a linear mixed-effects (LME) model, where experiment systems were treated as a fixed effect and participants as a random effect. The dependent variables in our model included classification performance, creation speed, and various subjective measures of user experience. We did not include the order in which participants used the three systems as another fixed effect, as no significant differences were observed for this variable. For each dependent variable, we calculated pairwise differences between the three experiment systems. Finally, we conducted a sanity check to confirm that participants exhibited diverse moderation preferences. We found that half of the ground truth comments had, at most, a 75% majority consensus.
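For example, using the statsmodels formula interface, a model for one dependent variable could be fit as follows; the data frame, column names (f1, system, participant), and the values shown are hypothetical placeholders rather than our actual measurements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-system pair.
df = pd.DataFrame({
    "participant": ["P1", "P1", "P1", "P2", "P2", "P2"],
    "system":      ["label", "rule", "prompt"] * 2,
    "f1":          [0.62, 0.68, 0.79, 0.55, 0.66, 0.74],   # made-up placeholder values
})

model = smf.mixedlm("f1 ~ C(system)",           # experiment system as fixed effect
                    data=df,
                    groups=df["participant"])   # participant as random intercept
result = model.fit()
print(result.summary())
```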
§.§.§ Qualitative Coding
Our qualitative data comprised semi-structured interview data and responses to open-ended questions in surveys. We employed a reflexive thematic analysis approach <cit.> to explore participants' experiences and challenges in creating personalized classifiers with each system.
Reflexive thematic analysis has been widely used in HCI research to understand users’ experiences and views, as well as factors that influence particular phenomena or processes <cit.>. During data collection, the first author took detailed debriefing notes after each interview to document emerging themes. The authors then collectively reviewed the debrief notes and discussed themes in weekly group meetings. Recordings were automatically transcribed into text. The first author then open-coded the data on a line-by-line basis, and the remaining authors reviewed the transcripts and added codes. Over 300 codes were generated from the open-coding process. The authors clustered the open codes into high-level themes in a codebook and iteratively improved the codebook through discussion. Some examples of codes are Examples: Failed to understand preferences, Rules: Transparency, and Prompts: Idiosyncratic behaviors. Finally, the authors applied the codes to the data to complete the thematic analysis.
§ RESULTS
§.§ What strategies enabled participants to create high-performing classifiers quickly?
§.§.§ The Prompt System resulted in the best final performance after 15 minutes, although both the Prompt System and the Rule System had the highest precision
Figure <ref> compares classification performances across the three systems over 15 minutes, with each line's endpoint representing the final performance. Table <ref> presents the pairwise differences in the final performances among the three systems. We observe no significant differences in accuracy across systems. In terms of precision, the Rule System showed significantly higher precision than the Label System (Est.Diff = 0.097, p < 0.05) but did not differ significantly from the Prompt System. However, the Prompt System demonstrated significantly higher recall compared to both the Label System (Est.Diff = 0.205, p < 0.001) and the Rule System (Est.Diff = 0.232, p < 0.001). Consequently, the Prompt System also achieved a significantly higher F_1 score than the other two systems (Est.Diff compared to the Rule System = 0.105, p < 0.01; Est.Diff compared to the Label System = 0.141, p < 0.001).
We focus on F_1 as a performance metric of interest over accuracy due to the unbalanced nature of participants' labels.
Finally, when examining how the final performance of each system varied across participants, we found that the Prompt System showed the least variance, regardless of the performance metric used. This suggests that the Prompt System is less susceptible to individual variances than the other systems.
§.§.§ The Prompt System reached 95% of its peak performance within 5 minutes on average, with gains due to rapid initial improvements in recall
Table <ref> and Table <ref> present the pairwise differences in classifier performance at 5 and 10 minutes respectively. Consistent with our earlier findings, the Prompt System demonstrated significantly higher recall and F_1 scores than the other two systems. Notably, its recall at 5 minutes had already surpassed the final recall of the other two systems.
Meanwhile, the Prompt System's recall exhibited a distinct pattern: a rapid increase in the first five minutes, followed by a steady climb in the subsequent five minutes, and a plateau in the remaining time. In contrast, the Rule System showed a consistent increase in recall throughout the entire 15-minute period, whereas the Label System showed a less noticeably upward trend in recall over time.
On average, participants reached 95% of their peak recall by 7 minutes, and that of their peak F_1 score by 5 minutes. We also observed that participants made fewer changes to their prompts as time went on, while they continued actively labeling examples or curating keywords for their rules in the other two systems. These findings suggest that writing prompts can facilitate rapid initialization but may be less effective in supporting iterative improvements.
§.§.§ Users struggled to improve the precision of all three systems over time.
The precision of the Prompt System and the Rule System remained comparable throughout the experiment.
While the Prompt System showed significantly higher precision than the Label System at 5 minutes (Est.Diff = 0.131, p < 0.05), the difference became non-significant at 10 minutes (Est.Diff = 0.082, p = 0.090). In contrast, while initially comparable at 5 minutes (Est.Diff = 0.100, p = 0.06), the Rule System had a significantly higher precision than the Label System at 10 minutes (Est.Diff = 0.099, p < 0.05) and at the end (Est.Diff = 0.097, p < 0.05).
While both the Prompt System and the Rule System showed notable improvements in recall by the end of the experiment, their precision either plateaued or even decreased over time.
For instance, the Rule System experienced a gradual decrease in precision during the first 5 minutes, followed by a steady state for the remaining 10 minutes. The Prompt System, despite a slight increase in precision in the initial 2 minutes, subsequently experienced a minor decline before reaching a plateau.
These observations align with participant behaviors: participants tended to add more categories of unwanted content rather than increase the specificity of existing rules or prompts during the task because the latter was often considered more cognitively demanding.
Although the Label System demonstrated a rapid increase in precision during the first five minutes, it began with such low precision that its final precision was still below that of the other two systems.
§.§.§ Summary
In summary, the Prompt System enabled end users to create custom moderation classifiers with superior performance more rapidly compared to the other two systems.
In terms of the precision-recall trade-off, its higher performance was primarily driven by its highest recall, as its precision remained comparable to the Rule System.
Temporally, we observed that the Prompt System facilitated rapid classifier initialization during the early stage but faced challenges in supporting further iteration, as evidenced by its plateaued precision and recall afterward.
In contrast, both the Rule System and the Label System showed steadily increasing trends in their recall and F_1 scores throughout the experiment, despite being constrained by their significantly lower initial values.
§.§ What strategies did participants find easiest for communicating their preferences?
§.§.§ When participants had ill-defined but intuitive preferences, they found the Label System to be easiest
Unlike professional moderators, social media users often lack the time or opportunity to articulate their moderation preferences clearly <cit.>. Instead, their moderation decisions are often a result of their intuitive feelings about the content they encounter. P4 pointed out that “sometimes it might be the entire sentence that I don't want to see, or maybe it's just about a word...It could also be the feeling of the comment. You don't feel right about it.”
Some participants also lacked “a holistic view about what were the alarming comments” (P2), since new content continuously flooded their social media feeds.
Labeling examples thus provided them with a natural way to express their intuitive preferences to the algorithm, whereas writing prompts or authoring rules forced them to distill their intuitive preferences into high-level patterns.
Participants tended to trust algorithms to infer their preferences from their labels more than they trusted themselves to accurately convey their preferences, especially in time- and attention-constrained settings of personal moderation.
As P2 explained, “I would actually trust the algorithm finding trends more [than] me looking at everything and trying to build my own...especially with my lack of experience and lack of time.” This trust made participants more willing to offload the task of summarizing high-level criteria to the algorithm, despite this requiring them to label many examples.
In contrast, translating their intuitive preferences into rules or prompts required participants to iteratively refine their rules or prompts to better convey their intuitions, a process that participants largely considered to be mentally demanding. P10 described their process of writing prompts in detail: “In the beginning, I have an idea of what I do not want to see and then I would submit that. But [AI] wouldn't know what I was talking about, so I had to change my wording to match what AI would understand.” For some participants, this process nudged them to actively reflect on their intuitions about what kinds of content they wanted to moderate, but for others, it was simply too mentally taxing. P2 expressed the latter sentiment: “It was almost more tiring to write prompts [than label examples], as you need to think about each one but then apply it, and then see one little thing you could have fixed, and then reapply it.” This is part of the reason why participants rated the Label System as significantly less frustrating and demanding than the other two systems in Figure <ref>.
§.§.§ When participants had well-defined and general preferences, they found the Prompt System to be easiest
Some participants could clearly define their general preferences regarding themes like violence or hate speech before even engaging with a classifier.
Unlike authoring rules, they could directly translate those preferences into prompts without the need to curate a comprehensive list of keywords.
P6 explained this advantage of writing prompts: “For rules, if there are 1,000 words I don't want to see, I have to list all 1,000 words, [whereas for ChatGPT], I only need to think of 10 words among those 1,000 words, and then ChatGPT itself understands that this guy does not want this kind of words.”
This advantage is especially pronounced because some preferences are hard to capture with keywords. P10 highlighted their struggle to author rules for such preferences: “It is easier to think about specific words for removing profane content, but then for violence, I didn't know what words specifically to say [for such content].”
Even if participants could identify keywords to indicate their preferences, they sometimes hesitated to add them because these keywords could also be used in neutral or benign contexts and thus adding them could result in many false positives.
Additionally, writing prompts simplified the articulation of complex preferences, which would otherwise require complex rule structures.
For instance, P11 tried to create a rule to “remove personal attacks against an organization.” Even though they managed to compile two lists of phrases—one for “personal attacks” and another for “organizations”—they found that the conjunction of these two concepts in a rule structure did not accurately capture the intended relationship between them.
While rules provided participants with a structured, albeit somewhat cumbersome, way to map their pre-established preferences, labeling examples was considered the least straightforward method for expressing such preferences.
P13 communicated their frustration with the inability to explicitly state their preferences through labeling: “There are too many decisions to make. Yeah, they're small decisions of yes or no. But usually, I don't have to keep creating 50 small decisions to indicate my personal preferences. Why am I going through hundreds of these when I just want to focus on saying my preferences?”
Participants' senses of frustration further increased when they could not find sufficient relevant examples to label as a way of indicating their preferences, as described by P10: “I feel like I have this criterion [remove violence calls] in my head. But there is nowhere I can label these examples.”
This issue is especially severe when users try to develop a classifier for a new community with a limited number of examples available.
Finally, participants were concerned about whether algorithms would accurately learn their high-level preferences from labeled examples. P13 worried, “What if all the comments happened to have [the word] Texas, so the AI just picks up those all related to Texas?”
Despite the prompts' advantages over the other approaches, participants still found it challenging to describe concepts outside of LLMs' existing knowledge. For example, LLMs lacked awareness of the context, such as the video or post to which comments were attached. P7 wanted to “remove irrelevant comments” and wished for “ChatGPT to be familiar with the content of the original post.” Additionally, LLMs often struggled with more complex concepts such as misinformation or conspiracy theories. W14 experienced this limitation: “I wanted to remove this comment because it contains obvious disinformation about the city of Portland Oregon, but my filter did not understand my definition of obvious disinformation.”
§.§.§ When participants had well-defined preferences regarding specific topics or events, they found the Rule System to be easiest
While rules as a keyword-based approach may overlook the broader context of comments, they can effectively adhere to preferences that can be captured by a list of keywords. Although people could write prompts that resemble rules in the Prompt System, these prompts could still remove comments that included similar concepts. P8 described the difference between authoring rules and writing prompts along this dimension: “[By authoring rules], you might get to really specific thing, and lost the big picture. But then [by writing prompts], you have just these larger generalizations. You know you can't get down to the nitty-gritty and identify truly exactly what phrases you want to remove.” In addition, rules are particularly useful for hiding content about specific events. As P1 explained, “I would only use the rule system if there was a particular topic that was very distressing for me, like a bombing somewhere. If I were personally affected by that and didn't want to hear anything about that, I would just create a short list of relevant words. ”
§.§ What strategies were easiest for iterating on classifiers after initialization?
§.§.§ Transparency in the Rule System and the Prompt System enabled participants to pinpoint problems but not always fix them, whereas the Label System offered less transparency and few opportunities for targeted iteration.
Participants had a clearer understanding of why the Rule System made incorrect predictions than the other two systems. Such transparency helped participants improve their classifiers in some cases, like when they forgot to include spelling variants or similar phrases to those that they explicitly included in their rules. But in other cases, transparency did not lead to iterative improvements because rules could not accommodate contextual variances, as documented in prior research <cit.>. Our survey and interviews suggest that participants had a slightly better understanding of why their classifiers made mistakes for the Prompt System than for the Label System (Figure <ref>). Participants found that they could at least review and reflect on problematic prompts in the Prompt System, whereas they could not pinpoint specific labels that led to classification mistakes in the Label System. P2 noted this advantage: “The prompt system was more transparent in the sense that it highlights specific guidelines you have set up. You could then change them and see what they do.”
For the Label System, participants had only a vague understanding of why their classifiers made incorrect predictions, reflecting previous research on how end users develop folk theories to interpret content curation algorithms <cit.>. Moreover, it offered few opportunities for targeted iteration, which would have been particularly useful because participants found it easier to point out what the classifier should learn from each mistake. This highlights the need for more flexible approaches at different stages of classifier creation, as recent studies have advocated <cit.>.
In the absence of such features in the current system, participants could only label more examples similar to these mistakes in the hope of providing targeted feedback.
§.§.§ Users of the Prompt System struggled to refine their initial prompts due to human-LLM misalignment and LLMs' unpredictable behaviors.
While participants could quickly build classifiers with decent performance using the Prompt System, they found it challenging to incorporate more nuances when iterating on their prompts. Participants rated it as comparably difficult to correct with the Rule System (Figure <ref>).
Many participants described a disconnect in how humans and LLMs perceive the same prompts. They observed that “descriptions could be very subjective” (P4) and asked, “What has [the LLM] previously been trained on? For words they labeled as extreme, I might not think as extreme” (P13). Participants also suggested that LLMs might not fully understand participants' experiences and personalities from their prompts and might therefore fail to execute their preferences faithfully. Consequently, many noticed that LLMs seemed to interpret their preferences broadly, causing LLMs to have trouble distinguishing different degrees of a concept. For instance, P13 noted that “[LLMs] have the problem of differentiating between just minor attacks on someone's character [such as calling someone a fool] and ones that are strong and can be perceived as harm.” Similarly, P8 struggled to teach LLMs to “differentiate between threats and just using the word `shoot' or `kill.'” Articulating the fine-grained differences between concepts proved too difficult for many participants.
While they could easily teach LLMs what was clearly good or bad, aligning LLMs with their moderation decisions for borderline cases remained a challenge.
In addition, participants sometimes faced confusion in response to the peculiar behaviors of LLMs, leaving them unsure of how to further improve their prompts. For instance, P3 was perplexed by their prompt classifier's removal of the comment “Jessica is the best,” explaining, “I'm honestly not sure why the filter removed this content. I wrote something about removing comments with harmful descriptions, so maybe it thought that this description was harmful in some way.” Sometimes, participants were surprised that the Prompt System did not remove comments that they included as few-shot examples. P1 frustratedly said, “With ChatGPT, I would directly copy and paste a comment that I didn't want, and it still wouldn't remove it.”
Even though the performances of participants' prompt classifiers were comparable to the other two systems' performances, such incomprehensible behaviors undermined their confidence in using LLMs for personal moderation.
§.§.§ Even when using the Prompt System, participants iterated using Rule System- and Label System-like strategies
Surprisingly, to further refine their prompts to align with their nuanced preferences, participants often learned from strategies of authoring rules or labeling examples. Several participants opted to avoid using high-level descriptions in their prompts and instead wrote prompts that resembled rules, such as “Remove texts that refer to people as stupid, dumb, idiots [a list of words].” In this way, they expected that LLMs could still catch various spelling variants of these words but would not generalize more broadly beyond the word list.
Alternatively, when unsure how to describe nuances in prompts, many participants resorted to adding representative positive or negative examples directly into their prompts as few-shot examples. In this process, they were essentially labeling examples rather than writing prompts. Four participants even included more than five examples in their prompts.
W6 remarked: “I liked that I could copy and paste certain examples in ChatGPT. It made it easier to capture ideas that specific words do not allude to but are alluded to by unique comments.”
However, some participants also recognized the limitations of this approach. As P13 noted, “I think [adding examples to prompts] is good up to a certain extent. If you know there are comments you don't want, it's great and...also gives you more nuances. But sometimes when a lot of these comments are extremely similar, [the LLM] just gets confused.”
§.§ Which strategies did participants prefer overall?
§.§.§ There was no clear preference for a particular system across all participants; instead, different systems were favored based on varying use cases and individual needs.
We asked participants to rank the three systems regarding their preferences for using them in real-life content moderation and to explain their rationale for these rankings.
Figure <ref> illustrates the distribution of participants' rankings.
Surprisingly, despite growing enthusiasm for using LLMs to moderate content, a comparable number of participants still preferred labeling examples or authoring rules to create custom classifiers for content moderation. While the Prompt System was rated first most often, it was also rated third most often, suggesting that it was a polarizing choice.
The Wilcoxon Signed-Rank test showed no significant differences between any of the systems.
By examining data from our semi-structured interviews and open-ended responses from the final survey, we identified several dimensions that contribute to this variance in preferences.
They highlight the principal differences across the three systems, the diversity of individual preferences, and the variety of content curation scenarios for end users.
* Whether users want explicit articulation of their preferences. Individual preferences for content curation are nuanced and subject to change. During the experiment, many participants tended to modify their criteria after reviewing more examples.
As a result, if participants had explicitly stated their preferences in rules or prompts, they would have to continually reflect on their preferences and update their classifiers accordingly. In comparison, the Label System's algorithms had the potential to learn these subtle changes in preferences from a continuous flux of labels. As P12 explained, “Preferences change over time, and we may not explicitly realize that. So you may accidentally remove some things if your preferences change but you have already explicitly set them in stone. ” However, some participants appreciated the chance to actively reflect on and articulate their preferences through high-level rules or prompts, despite the additional effort required.
* Whether users can tolerate reviewing many toxic examples. In our study, participants reviewed fewer comments to develop an effective classifier using the Prompt System compared to the other two systems. On average, participants reviewed 133 and 127 examples for the Label System and the Rule System respectively, whereas only 53 examples were needed for the Prompt System. The need to review numerous toxic examples could be distressing, particularly for those creating classifiers to block such content. Whether people are willing to undertake such an emotional toll can depend on their sense of responsibility in specific moderation scenarios. As P8 stated, “If I were moderating for my own sake and not for a community, I would probably just focus more on the ease of creating a filter so that I don't have to subject myself to these potentially offensive comments.”
* Whether users prioritize precision over recall. Participants demonstrated varied approaches to balancing precision and recall. Some favored precision because they were more concerned about the risk of removing benign comments than approving unwanted content. P12 described their rationale as follows: “My preference is to limit the number [of comments] that accidentally gets removed. I don't want to fall into any echo chamber. Even if there are really crazy people saying really crazy things, they make me frustrated, but I still should know about it.” Other participants favored recall, showing a stronger inclination to remove toxic comments, even if it meant mistakenly removing benign content. P13 expressed this preference: “Personally, too specific is worse than the too broad. If the filter keeps certain content I really don't want to see, that's far worse than if I didn't get an extra post. Because of how much content there already is, I'll just get others on my feed anyway.” For these participants, the high recall of the Prompt System is even advantageous during the creation stage. Since LLMs could collect the majority of potentially unwanted examples given initial criteria, participants could focus on reviewing these examples to increase the precision of their prompts.
* To what extent users value transparency and controllability. An important distinction between the Rule System and the other two systems lies in transparency.
As P10 described, “I like rules more because I know what's going into them. But with the prompts, you don't know what's going on. While the prompting is better at catching things, I just like the transparency of rules more.”
The Rule System helps users not only understand why exactly a comment is removed; it also enables moderators and users to have clear expectations about which comments will be removed. For users who moderate for a community, this sense of controllability could enable them to be more accountable for their moderation decisions. For example, P9 shared, “I think there's like a fine line between filtering or censoring. I think it's important to know why certain comments or text was filtered.”
§ DISCUSSION
In this work, we compared three prominent strategies for creating custom classifiers, focusing on their support for rapid initialization and easy iteration. Our experiments revealed that writing prompts generally enabled participants to create custom classifiers with the highest performance most quickly. However, this approach also had shortcomings, such as challenges in further iteration and lack of transparency. In this section, we discuss how incorporating labeling examples and authoring rules could help mitigate these problems, and how the unique user needs of diverse content curation applications demand different kinds of hybrid systems.
§.§ Hybrid Approaches to Personalized Classifier Creation
Existing tools often fail to accommodate the diversity and fluidity of user needs in content curation. While writing prompts generally proves to be an effective way to convey user preferences to classifiers, our experiment indicates the potential to incorporate labeling examples and authoring rules into future content curation systems, thereby providing end users with more flexible strategies to create and iterate on their classifiers.
Labeling examples can facilitate easier prompt bootstrapping.
Our findings indicate that when users have ill-defined but intuitive preferences, they prefer labeling examples over writing prompts to convey those preferences.
However, with traditional ML-based algorithms, users often complain that they need to label numerous examples to express their preferences and that there is a lack of transparency regarding what the algorithm learned from their labels.
We envision that users can provide a few representative examples for LLMs, which help infer and suggest potential preferences.
Labeling examples could also help users easily iterate on their prompts. In our experiments, when users tried to refine their prompts, they often focused on a few misclassified examples and adjusted their prompts until LLMs made correct predictions.
Given the mental load of such interactive iteration for many participants, we propose developing an automated prompt chain where LLMs automatically refine prompts based on a few corner-case examples users curate and label.
Authoring rules offers users a stronger sense of transparency and controllability than writing prompts. Our experiments indicate that rules could easily capture preferences involving specific topics or events.
Therefore, instead of completely replacing rules with prompts, content curation tools should allow users to choose between rules and prompts based on their preferences.
Future research should also investigate ways to increase the transparency of LLM predictions.
In our implementation of the Prompt System, we asked LLMs to process each prompt independently rather than aggregating all prompts into a single query.
This approach is akin to connecting prompts in a rule structure, thus offering more transparency.
Similarly, researchers could introduce more transparency into LLM predictions by decomposing a complex prompt into a series of conditions and querying LLMs separately.
This method allows users to understand which specific conditions might be causing mistakes <cit.>.
§.§ Supporting Different Content Curation Applications
As discussed, content curation scenarios can range from curating personal feeds to managing community content. Individuals could also have intuitive preferences or gradually develop more well-defined preferences from initial intuitions. Additionally, individuals vary greatly in many dimensions, such as tolerance of undesirable content and the precision-recall trade-off. Future research should therefore only apply our findings after closely examining the unique needs of their contexts.
For example, individuals looking to remove undesirable content from their feeds may prefer prompt-based systems over existing rule filters due to their ease of use. In contrast, prompt-based systems might only be desirable for community moderators if they offer comparable transparency to existing rule systems, as moderators are more accountable for their moderation decisions <cit.>.
Additionally, despite their algorithmic similarities, curating desired content requires different systems than removing undesirable content, since users tend to prioritize recall over precision during active exploration and have more fluid and intuitive preferences in this case than in content moderation <cit.>. Future work should thus explore how to combine the high recall of writing prompts and the ease of labeling examples into one system for this use case.
In this work, we investigate supporting users to create personal classifiers from scratch but social media users might want to modify existing classifiers. Some platforms have already offered similar but simplified functionalities, allowing users to adjust the sensitivity for predefined concepts like racism and misogyny <cit.>. Compared to a platform-centric definition of these concepts, end users might feel more comfortable customizing classifiers created by friends or family members who share similar interests and moderation preferences <cit.>. Our findings also advocate for more flexible customization options beyond simply adjusting thresholds <cit.>. For example, we envision users being able to merge labeled data, fork and edit rules, or share prompts.
While we focus on content-based curation in this work, end users also frequently curate content based on metadata.
For instance, they might want to see all posts from certain accounts regardless of content or may prefer the latest content when curating time-sensitive information. Here, the challenge lies in helping users curate and summarize information for decision-making, such as evaluating whether a user is worth following or identifying when content becomes interesting. Future research should explore how to support end users in communicating preferences that involve conflicting non-textual and textual information (e.g., what if someone I follow posts unwanted content?).
Additionally, while our findings on binary classifiers could be generalized to a categorical classifier, they cannot be easily extended to regression algorithms.
Such algorithms are still valuable for users who want to sort content in their feeds, as in the context of content recommendation <cit.>.
Common content recommendation systems often learn from example-level feedback but suffer from similar issues identified in our experiments, such as few opportunities for explicit feedback and lack of transparency <cit.>. Future work should investigate the potential of using rules or prompts in this space.
Finally, while there is growing interest in more cost-effective LLMs <cit.>, the cost of deploying an LLM-based content curation system remains a concern given the vast amount of online content produced daily.
For instance, on a platform level, there are 500 million tweets sent every day <cit.>, and on a user level, a popular YouTube video can gather thousands of comments.
Although techniques like multi-step reasoning <cit.> or self-consistency <cit.> can enhance the performance of LLMs on complex user preferences, they often require more computational resources. Future research should investigate leveraging techniques such as LLM cascading <cit.> or model distillation <cit.> to maintain performance while reducing costs. For example, cheaper models could be used for less complex or less important user preferences, whereas LLMs could be distilled into lightweight classifiers for heavy users.
§ LIMITATIONS
While our experiment protocol was developed iteratively through pilot studies, it still has limitations. First, we determined our three experiment systems by mapping each strategy to their most common backend algorithms (supervised learning algorithms for labels, transparent algorithms for rules, and LLMs for prompts). As a result, we did not have the chance to test other less common combinations in our experiments, such as supervised learning algorithms for rules (i.e., a Snorkel-like system <cit.>).
Second, even though we explicitly encouraged participants to label the test dataset consistently, exposure to more examples from the training dataset during classifier creation might alter their criteria. To prevent the experiments from being excessively long, we limited participants to labeling only 100 comments that comprised our test dataset, which might affect our evaluation's accuracy.
Third, our participant pool of undergraduates did not represent the diverse population of internet users, and our study involved a relatively small number of participants. Future work should conduct larger-scale experiments to validate our findings.
Fourth, our study was conducted solely in English and focused on political content, limiting its applicability to other languages and types of content. Since LLMs might have different performances for other languages or content types <cit.>, future research should validate and extend our findings.
Finally, while personal content moderation as a case study illustrates the primary user needs for content curation tools, we acknowledge the diversity of content curation scenarios. Future work should examine whether there are distinct user needs in other scenarios before applying our findings broadly.
§ CONCLUSION
To communicate feedback to an algorithmic system, end users can primarily leverage three strategies: labeling examples, authoring rules, and writing prompts.
Which strategy is the most useful for end users to customize a classifier for content curation?
Using personal content moderation as a case study, we conducted a within-subjects experiment with 37 non-programmer participants to compare these three prominent strategies for creating custom classifiers.
We found that writing prompts generally allowed participants to create personal classifiers with higher performance more quickly, but not without shortcomings.
While prompts could effectively communicate users' well-defined and general preferences, participants preferred labeling examples to convey their ill-defined but intuitive preferences and authoring rules for their preferences about specific topics or events.
Moreover, participants found it challenging to refine their prompts to communicate their nuanced preferences iteratively. Consequently, they often directly added misclassified examples as few-shot examples or attempted to write rule-like prompts.
Building on top of our findings, we envision a hybrid approach to custom classifier creation: users could label examples to bootstrap and iterate on their prompt classifiers, while decomposing a complex preference into a rule structure could provide users with greater transparency.
§ PILOT STUDIES
We conducted pilot studies with our lab members and 5 non-programmers to iteratively implement our three systems. We included non-programmers in these studies to guarantee that our interfaces would be user-friendly and accessible to all potential users.
§.§ Label System
Since both the Rule System and the Prompt System allowed users to review the performance of their filters during the creating process, we tested real-time feedback for the Label System in our pilot studies. Specifically, when users were labeling examples from the new batch of active learning, they could review how the filter that was trained on previous labels would predict on each example. We experimented with displaying this information immediately after users labeled an example and after they labeled all examples in a batch. However, our pilot studies suggested that such features were more confusing than helpful for end-users, so we excluded them from the Label System for our experiments.
§.§ Rule System
We used our pilot studies to iteratively develop a more user-friendly Rule System than AutoMod that is tailored to the moderation needs of individual users. Specifically, we made the following changes to adapt the AutoMod system for personal moderation.
* AutoMod permits a variety of conditions in a rule such as “include,” “exclude,” “start with,” or even regular expressions. However, regular expressions might be excessively difficult for end-users, and certain conditions, like “start with,” may not be as beneficial for filtering unwanted content. To simplify the rule creation process, our system restricts end-users to using only “include” and “exclude” conditions. We call the exclude condition an “exception” because users might be confused by a complicated rule like “remove a unit that includes this word but excludes that word.”
* While AutoMod allows an unlimited number of conditions per rule, this complexity can be overwhelming and not particularly useful for end-users in expressing their preferences. Therefore, we limit users to having at most two “include” conditions and one “exclude” condition per rule to maintain simplicity and clarity.
* AutoMod offers a range of actions for each rule, including approving and removing caught texts. In contrast, our system is designed solely for creating rules that remove texts. This design choice is based on observations that navigating between “approve”/“remove” actions and “include”/“exclude” conditions can be confusing for end-users.
* Prior research has indicated that users often struggle to account for all potential spelling variants that could bypass word filters <cit.>. To reduce the mental load of brainstorming synonyms for phrases, we integrated a “similar phrases” feature that leverages the LLM to suggest phrases that are similar to existing ones provided by users. Additionally, our system offers an option to detect the following spelling variants. We implemented the detection of the first three types of variants by generating generalized regular expressions for each phrase. For nouns and verbs, we first identify the part-of-speech tag of the word <cit.>, then search for its plural form if it is a noun or its various tenses if it is a verb <cit.>. A sketch of the regex generalization is given after this list.
* phrases with repeated letters, such as “coooool” for “cool”
* phrases with mixed uppercase and lowercase letters, such as “Cool” for “cool”
* phrases where letters are replaced with visually similar characters, such as “co0l” for “cool”
* singular or plural forms of nouns, such as “apples” for “apple”
* different tenses of verbs, such as “found” for “find”
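The sketch below illustrates how such generalized regular expressions for the first three variant types can be built; it is a simplified illustration of the idea rather than the exact patterns used in the system, and the look-alike substitution map is only an example.

```python
import re

# Illustrative sketch: build a "generalized" regular expression for one keyword
# so that repeated letters ("coooool"), case variants ("Cool"), and look-alike
# character substitutions ("co0l") are all caught. The substitution map below
# is an example, not the exact table used in the system.
LOOKALIKES = {"o": "[o0]", "i": "[i1!]", "e": "[e3]", "a": "[a4@]", "s": "[s5$]"}

def generalize(word: str) -> re.Pattern:
    parts = []
    for ch in word.lower():
        char_class = LOOKALIKES.get(ch, re.escape(ch))
        parts.append(char_class + "+")      # "+" absorbs repeated letters
    return re.compile(r"\b" + "".join(parts) + r"\b", re.IGNORECASE)

pattern = generalize("cool")
for text in ["cool", "Coooool", "co0l", "school"]:
    print(text, bool(pattern.search(text)))   # True, True, True, False
```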
§.§ Prompt System
We evaluated two potential methods for the Prompt System. First, we put all user-generated prompts into a single system prompt template and only queried the LLM once for the overall prediction. This approach compromised the quality of LLM predictions since the LLM sometimes skipped a few examples or completely ignored certain prompts.
Second, we queried the LLM multiple times for each prompt and determined the final prediction based on the majority. This quality-minded approach often required users to wait more than 5 minutes for LLM predictions and thus discouraged users from interactively testing their filters during the creation process.
Some may advise prioritizing the quality of LLM predictions during the filter creation process in order to evaluate the full potential of LLMs in personal moderation. However, content moderation algorithms in deployment often need to analyze vast quantities of comments per day. Prioritizing quality without regard for efficiency can therefore lead to impractical and costly operations. To ensure that users accurately evaluate how well their filters would work once deployed, it is essential that, during the filter creation process, users engage with the algorithm under conditions that mirror actual deployment settings. Thus, we opted to prioritize a low response time while ensuring that prediction quality was still sufficient in order to mirror realistic deployment settings.
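As a rough sketch of this per-prompt querying strategy, the snippet below assumes a hypothetical query_llm(instruction, comment) helper that returns a boolean verdict; the any-prompt-matches aggregation rule and the stub model are illustrative, not the exact production logic.

```python
from typing import Callable, List

def classify(comment: str,
             prompts: List[str],
             query_llm: Callable[[str, str], bool]) -> bool:
    """Return True if the comment should be removed under any user prompt."""
    # Each user-written prompt is sent as its own query, keeping per-prompt
    # verdicts inspectable at the cost of one LLM call per prompt.
    return any(query_llm(p, comment) for p in prompts)

# Stub standing in for a real LLM call, just to make the sketch runnable:
def fake_llm(instruction: str, comment: str) -> bool:
    return "attack" in instruction and "idiot" in comment.lower()

print(classify("You are an idiot", ["Remove personal attacks"], fake_llm))  # True
```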
Ultimately, we used the following system prompt:
|
http://arxiv.org/abs/2409.02814v1 | 20240904153233 | Segregation in binary mixture with differential contraction among active rings | ["Emanuel F. Teixeira", "Carine P. Beatrici", "Heitor C. M. Fernandes", "Leonardo G. Brunnet"] | physics.bio-ph | ["physics.bio-ph", "nlin.AO", "physics.comp-ph"] |
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, CP 15051, CEP 91501-970 Porto Alegre - RS, Brazil
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, CP 15051, CEP 91501-970 Porto Alegre - RS, Brazil
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, CP 15051, CEP 91501-970 Porto Alegre - RS, Brazil
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, CP 15051, CEP 91501-970 Porto Alegre - RS, Brazil
§ ABSTRACT
Cell cortex contraction is essential for shaping cells, enabling movement, ensuring proper division, maintaining tissue integrity, guiding development, and responding to mechanical signals — all critical for the life and health of multicellular organisms. Differential contractions in cell membranes, particularly when cells of different types interact, play a crucial role in the emergence of segregation. In this study, we introduce a model where rings composed of active particles interact through differential membrane contraction within a specified cutoff distance. We demonstrate that segregation arises solely from differential contraction, with the activity of the rings functioning similarly to an effective temperature.
Additionally, we observed that segregation proceeds through a cluster fusion-diffusion process. However, the decay exponent of the segregation parameter we found is close to λ∼ -1/3, which differs from the λ∼ -1/4 predicted by previous theoretical approaches and simulations.
Segregation in binary mixture with differential contraction among active rings
Leonardo G. Brunnet
September 9, 2024
==============================================================================
In cellular systems made up of different species, spontaneous sorting is a common emergent behavior.
During embryonic development, cells undergo differentiation, which leads to the spontaneous segregation of cell types in tissue formation. Based on strong experimental evidence <cit.> and motivated by physical systems like binary mixtures, where spontaneous separation of two liquids is observed, Steinberg proposed <cit.> that the general mechanism for cell segregation lies in the difference in adhesion between cells of different types.
This proposal became known in the literature as the Differential Adhesion Hypothesis (DAH) or Steinberg Hypothesis.
Harris <cit.> questioned the foundation of the DAH, highlighting that mere maximization of intercellular adhesion does not necessarily lead to the observed effects of cell sorting. He suggested that within a heterogeneous cell aggregate, variation in surface contraction, driven by the active regulation of the acto-myosin cortex, could be the driving force behind tissue engulfment and cell sorting in vivo.
The surface contraction of a particular cell appears more pronounced when it comes into contact with a cell of a different histological type.
Moreover, cells from different tissues exert different surface contractions when in contact with the medium. This sorting mechanism was named the Differential Surface Contraction Hypothesis (DSCH).
In line with Harris's ideas, Brodland <cit.> successfully described segregation using finite element simulations that incorporated adhesion and surface contractions, both contributing to the total interfacial tension, a mechanism known as differential interfacial tension hypothesis (DITH).
In fact, these hypotheses have been tested using various other extended numerical models, including Cellular Potts <cit.> and Vertex models <cit.>.
All these models define an effective interfacial tension between different units but do not explicitly separate adjacent cell membranes or detail the forces involved, such as adhesion and surface contraction.
However, from an experimental perspective, the role of membrane fluctuations was clearly highlighted by Mombach <cit.>, and the distinction between adhesion and cortical tension was emphasized in the work of Krieg et al.<cit.> and Manning et al. <cit.>.
In this work, we present a model of active rings that interact through differential membrane contraction when within a specific cutoff distance. By utilizing two interacting membranes, this model provides a more detailed representation of biological processes, offering insights into how different layers interact and differentiating the roles of fluctuations, adhesion, and cortex contraction. To our knowledge, this is the first time Harris's mechanism has been simulated in isolation.
Boromand and collaborators <cit.> introduced a ring system composed of passive particles connected by springs.
Building on this model, we incorporated active properties into the particles forming the ring in previous articles <cit.>. We model a 2D system with N rings, each representing a cell formed by n active particles (see Fig. <ref>).
The system is set up as a binary mixture of active rings confined in a circular arena with repulsive walls and radius R_0.
The set of coupled overdamped equations governing the dynamics of each particle is ṙ_i,j = v_0 𝐧_j + μ 𝐅_i,j and ṅ_j = √(2 D_R) ξ_j×𝐧_j,
where 𝐫_i,j denotes the position of the i-th particle within the j-th ring at time t, μ represents its mobility, and v_0 is the magnitude of the active velocity, whose orientation is given by 𝐧_j.
The second term, 𝐅_i,j = -∇_i,j E, represents the total force acting on the i-th particle within the j-th ring.
The direction of the active force, described by the unit vector 𝐧_j, experiences angular Gaussian white noise ξ_j = ξ_jê_z with correlation ⟨ξ_j(t_1)ξ_k(t_2) ⟩ = δ_jkδ (t_1-t_2). The coefficient D_R is the rotational diffusion constant, which defines a characteristic timescale τ_R = 1/D_R.
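For concreteness, a minimal sketch of one explicit Euler-Maruyama integration step for these equations is given below; the array shapes, function name, and time step are illustrative choices, not the actual simulation code.

```python
import numpy as np

def step(r, theta, forces, v0, mu, D_R, dt, rng):
    """One Euler-Maruyama update.
    r: (N, n, 2) particle positions; theta: (N,) polarity angle of each ring;
    forces: (N, n, 2) total forces F = -grad(E) on each particle."""
    n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=-1)    # (N, 2) ring polarities
    r_new = r + dt * (v0 * n_hat[:, None, :] + mu * forces)      # same n_hat for all particles of a ring
    theta_new = theta + np.sqrt(2.0 * D_R * dt) * rng.standard_normal(theta.shape)
    return r_new, theta_new

rng = np.random.default_rng(0)
r = rng.random((2, 10, 2))                  # 2 rings of 10 particles, toy positions
theta = rng.uniform(0.0, 2.0 * np.pi, 2)
r, theta = step(r, theta, np.zeros_like(r), v0=0.1, mu=1.0, D_R=1.0, dt=1e-3, rng=rng)
```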
The energy function has contributions from (i) a perimeter energy (springs connecting neighboring particles of a ring), (ii) area conservation, (iii) a contact-dependent contraction term, (iv) core repulsion among non-neighboring particles from any ring, and (v) inter-cellular adhesion,
E = ∑_j=0^N{ϵ_P/2∑_i=0^n( |l⃗_i,j|/l_0 - 1 )^2 + ϵ_A/2( A_j/A_0 - 1 )^2 + ∑_i=0^nΛ_αβ |l⃗_i,j| } + ϵ_c/2∑_r_ik≤σ( r_ik/σ - 1 )^2 + ϵ_adh/2∑_σ < r_ik≤ l_adh( r_ik/σ - 1 )^2,
where l⃗_i,j = r⃗_i,j - r⃗_i-1,j is the vector connecting consecutive particles in ring j, ϵ_P is the elastic energy of the spring controlling perimeter fluctuations, and l_0 is the equilibrium distance in the ring.
The elastic energy related to area control is ϵ_A, A_j is the inner area of the j-th ring (not considering the area of particles), and A_0 is the equilibrium area. The last two terms in Eq. <ref> represent the core repulsion between non-neighboring particles and the adhesion between particles of different rings, respectively. Parameter ϵ_c denotes the characteristic energy of the core repulsion interaction, while r_ik is the distance between particles i and k. The equilibrium cut-off distance, σ, effectively defines the particle diameter. The adhesion energy and interaction distance are characterized by ϵ_adh and l_adh, respectively.
A key distinction between the ring model and other extended models (Finite Elements <cit.> and Vertex-Voronoi models <cit.>) lies in the rings' contact interface, which comprises two contact membranes.
This brings us to the third term in Eq. <ref>, which incorporates differential surface contraction. In this term, parameter Λ_11 represents the line tension between particles belonging to type 1 rings (red rings in Fig. <ref>), Λ_22 represents the line tension between type 2 rings, and Λ_12=Λ_21 represents the line tension between rings of different types. All these tensions act only when the particles are within a cutoff distance l_Λ.
A central parameter in this work is Λ = Λ_12/Λ̅, with Λ̅≡ (Λ_11+Λ_22)/2.
Fig. <ref> shows a schematic representation of the active ring system, illustrating the involved tensions.
To handle the interaction with the walls, which can be seen as the medium in our context, we specify that the particles of a ring increase their contraction tension once they reach a distance σ/2 from the wall.
This prevents the rings from preferentially accumulating at the wall.
The contraction between rings of type 1 (red) and the “medium” (wall) is defined as Λ_1M/Λ̅ = 12, while for rings of type 2 (green), we use Λ_2M/Λ̅ = 0.
This choice satisfies the criterion Λ_12< Λ_1M-Λ_2M (valid for all values of Λ_12 in this work) for the engulfment of one type (red) by the other (green) <cit.>.
We measure spatial coordinates and time in units of l_0 and τ_R, respectively.
As a result, the model equations may be written in terms of the Péclet number <cit.>, defined as Pe ≡v_0 τ_R/l_0 , which controls the level of activity. Details of the numerical integration method and the remaining parameter values are provided in the Supplemental Material.
We begin our simulations with an initial random distribution of rings (see Fig. <ref>a). For Λ > 1, we observe a spontaneous segregation process among the rings, with larger Λ values resulting in a smaller interface between ring types, indicating improved segregation. This is illustrated at the bottom of Fig. <ref>a, where Λ = 10. For Λ = 1.5 (middle of Fig. <ref>a), the system still segregates, but a larger asymptotic interface remains. In contrast, for Λ < 1 (top of Fig. <ref>a), an organized mixed state emerges, forming a checkerboard-like pattern where each ring tends to be in contact with a ring of a different type.
To quantify the level of segregation in the system, we use the parameter γ, as introduced by Belmonte and collaborators <cit.>. This parameter is defined as the mean fraction of neighboring rings of type 2 (green) surrounding rings of type 1 (red): γ = ⟨n_2/n_1+n_2⟩_1, where ⟨..⟩_1 denotes the average over all rings of type 1, with n_1 and n_2 representing the number of first neighbors of type 1 and type 2, respectively. In the literature, it is well established that the segregation parameter is directly correlated with the interface I between the types, expressed as γ∼ I <cit.>.
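A minimal sketch of how γ can be evaluated from a neighbor list is shown below; the neighbor construction (e.g. from membrane contacts) is assumed to be done elsewhere, and the toy data are purely illustrative.

```python
import numpy as np

# Sketch of the segregation parameter: for each type-1 ring, the fraction of
# its first neighbors that are type-2, averaged over all type-1 rings.
def segregation_parameter(types, neighbors):
    """types: array of 1/2 labels per ring; neighbors: dict ring index -> list of ring indices."""
    fractions = []
    for j, t in enumerate(types):
        nbrs = neighbors.get(j, [])
        if t != 1 or not nbrs:
            continue
        n2 = sum(1 for k in nbrs if types[k] == 2)
        fractions.append(n2 / len(nbrs))
    return float(np.mean(fractions))

types = np.array([1, 1, 2, 2, 2])
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
print(segregation_parameter(types, neighbors))  # 0.5
```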
We examined the evolution of the segregation parameter γ for various Λ values while keeping the Pe value fixed. As shown in Fig. <ref>b), the asymptotic value of γ reaches low levels (approximately γ∼ 0.1 with 10^3 rings) for the segregated state and increases to γ∼ 0.9 as Λ decreases. The line separating the segregated and checkerboard patterns occurs at Λ∼ 1, where the initial value of the mixed state (γ∼ 0.7) is maintained throughout the evolution.
Furthermore, we measured mean values of steady-state γ as a function of Λ for several values of Pe (see Fig. <ref>a).
Here we emphasize the role of parameter Pe, which broadens the transition as its value increases, and the importance of the initial configuration in avoiding trapping in a metastable state.
At low Pe values (Pe ≤ 0.2 in Fig. <ref>a) and Λ < 1, the system evolves to the checkerboard pattern, whereas for Λ > 1 it remains trapped close to the initial configuration; we therefore employed the segregated state as the initial condition to test the effect of Pe.
In brief, Pe functions like temperature, enabling the system to overcome potential barriers to approach the minimum energy state.
At the same time, the fluctuations it introduces drive the system away from the optimal value.
In Fig. <ref>b we present a (Λ× Pe) diagram, colors indicate the mean value for γ.
At low Pe values, the segregation region occurs just above Λ = 1, but at high activity we observe a mixed state well above this limit.
This indicates that the segregation criterion <cit.> Λ_12 > (Λ_11+Λ_22)/2 is only valid at low Pe.
Similar arguments apply to the checkerboard state in the region defined by Λ<1.
Additionally, we conducted an analysis similar to previous works <cit.> and observed how the steady state of the largest cluster of type 1 rings (red) changes with Pe and Λ.
We define the fraction of these rings that are inside the largest cluster as c_in = N_int/N_1, where N_1 is the total number of type 1 rings and N_int is the number of rings in the largest asymptotic cluster.
Using c_out = 1 - c_in, we can construct a binodal curve that delineates regions of segregation and mixing as function of Λ^-1 and Pe (see Figures <ref>c-d).
Below the intersection we have the segregated state (c_in∼ 1 and c_out∼ 0, one large cluster) and above, the mixed state (several clusters of similar size).
Fig. <ref>d shows the effect of the activity parameter Pe on the binodal curves.
Depending on the value of Λ, there is a value of Pe beyond which the system transitions from segregated to mixed state.
Our focus now shifts to understanding how the system evolves from a randomly mixed configuration to a segregated one.
This is particularly important because the long time behavior before saturation of the segregation parameter and the mean size growth of clusters can provide information about the underlying mechanisms occurring during tissue formation.
The mean cluster size of type 1 rings, M(t), is obtained using the cluster-counting algorithm of Beatrici et al. <cit.>.
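For reference, a generic union-find sketch for grouping contacting same-type rings into clusters is shown below; this is a standard approach rather than the exact algorithm of Beatrici et al., and the contact pairs are toy data. Both M(t) and c_in follow from the resulting list of cluster sizes.

```python
def cluster_sizes(types, contacts, target_type=1):
    """Group contacting rings of target_type into clusters via union-find."""
    parent = list(range(len(types)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i, j in contacts:                      # contacts: pairs of rings in contact
        if types[i] == target_type and types[j] == target_type:
            union(i, j)
    sizes = {}
    for i, t in enumerate(types):
        if t == target_type:
            root = find(i)
            sizes[root] = sizes.get(root, 0) + 1
    return list(sizes.values())

sizes = cluster_sizes([1, 1, 1, 2, 1], [(0, 1), (1, 2), (3, 4)])
print(sizes)                    # [3, 1]: one cluster of three rings plus an isolated ring
print(max(sizes) / sum(sizes))  # 0.75: fraction of type-1 rings in the largest cluster (c_in)
```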
Here, we analyze the case where contact between different ring types is unfavorable (Λ = 10).
Initially, the system is in a mixed configuration with a corresponding value of γ≡γ_0=0.7, a consequence of the (30:70) proportion of ring types.
In Fig. <ref>a, we show the evolution of the segregation parameter normalized by its initial value, γ̅(t) = γ(t)/γ_0.
The activity Pe determines the duration of the transient period, with higher activity implying shorter transients.
Subsequently, we observe an algebraic decay, γ∼ t^λ, with an exponent close to -1/3 over at least two decades, independent of the activity value Pe.
Additionally, in this regime, the mean domain size M grows with an exponent close to 2/3 (see Fig. <ref>c).
Thus, we observe an inverse relationship, γ∼ M^-2, consistent with previous findings <cit.>.
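The exponents quoted here follow from linear fits in log-log scale over the algebraic regime; a minimal sketch, with synthetic data standing in for the measured γ(t), is given below.

```python
import numpy as np

t = np.logspace(1, 3, 50)                     # synthetic times spanning two decades
gamma = 0.7 * (t / t[0]) ** (-1.0 / 3.0)      # synthetic decay with exponent -1/3
slope, intercept = np.polyfit(np.log(t), np.log(gamma), 1)
print(round(slope, 3))                        # approximately -0.333
```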
In Fig. <ref>b, we observe that significantly unequal proportions slow down the domain growth process, as it relies on the coalescence of minority clusters diffusing within the majority type. However, when the proportions are equal (50:50), the dynamics change — the system quickly percolates, and the process of interface reduction becomes dominated by rounding.
In this case, we expect an exponent of ∼ -1/2 <cit.>. For other proportions, we observe asymptotic time exponents close to -1/3 and 2/3 for γ and M, respectively, while maintaining the inverse relationship, γ∼ M^-2 (see Fig. <ref>b-d).
In conclusion, we present a model where a cell is represented by a ring of active particles. Each ring is explicitly equipped with its own membrane, allowing it to accommodate negative curvatures and replicate all cell sorting mechanisms, whether in isolation or combined. This represents a significant advancement in the class of extended cell models, offering a path to a more realistic depiction of cell behavior as observed in experiments.
By employing this comprehensive cellular model and incorporating a term for differential contraction among rings of different types, we conducted a numerical evaluation of Harris' proposed hypothesis on cellular differential surface contraction <cit.>.
We found that by keeping all cells with identical attraction forces and solely employing adequate differential interfacial contraction, the system segregates.
We observed a monotonic relationship between the differential contraction parameter Λ and the steady-state average value of the segregation parameter ⟨γ⟩_t, indicating that segregation occurs for Λ > 1 and a checkerboard pattern emerges for Λ < 1.
The rate at which the system transitions from an initially mixed state to its asymptotic state, as well as the ultimate value it reaches, also depends on the activity level Pe.
In the case of segregation (Λ>1), increased activity has a tendency to disrupt the configuration of minimum interfacial energy.
Binodal curves, showing a transition between the mixed and the segregated states, corroborate the inverse correlation between the contraction interaction Λ and the activity Pe.
These findings suggest that the activity Pe plays a role similar to temperature in thermodynamic equilibrium systems <cit.>.
Furthermore, we found that the segregation parameter γ(t) and the mean cluster size M(t) exhibit a power-law regime with an exponent close to -1/3 and 2/3, respectively.
The activity Pe changes the typical timescale at which the asymptotic regime begins but does not modify the growth exponent of the domains.
Similarly, the proportions do not alter the characteristic exponent, except in the (50:50) case where the system starts close to percolation and the evolution appears to be dominated by rounding. We observed an inverse relationship between the segregation parameter and the typical size, γ∼ M^-2.
Finally, the observed segregation exponent of λ=-1/3 is at odds with the literature. The cluster-cluster aggregation mechanism seen in our simulations would typically correspond to an exponent of λ=-1/4, as predicted by surface diffusion models <cit.> or mean cluster models <cit.>, particularly in the absence of alignment interactions. An exponent of λ=-1/3 would instead be expected from the Cahn-Hilliard equation if the mechanism were evaporation-condensation, which we do not observe. The reasons for these discrepancies will be explored in future studies.
We express our gratitude to the Brazilian agencies CAPES, CNPq, and FAPERGS for their financial support. H.C.M.F. and L.G.B. acknowledge the support from the National Council for Scientific and Technological Development – CNPq (procs. 402487/2023-0 and 443517/2023-1). E.F.T. acknowledges ICTP-SAIFR/IFT-UNESP. The simulations were conducted using the https://pnipe.mcti.gov.br/laboratory/19775VD Lab cluster infrastructure at IF-UFRGS.
§ SUPPLEMENTARY MATERIAL
§ A - BOUNDARY CONDITIONS - REPULSIVE CIRCULAR WALLS
We configure the system as a binary mixture of N active rings confined within a circular arena of radius R_0. The force exerted by the wall on a specific particle within a ring is given by
F⃗_w = -F_w (|r⃗_i,j-r⃗_cm|/R_0 - 1 ) ( r⃗_i,j-r⃗_cm )/|r⃗_i,j-r⃗_cm|
where ( r⃗i,j-r⃗cm )/|r⃗i,j-r⃗cm| is the unit vector connecting the center of particle i within ring j to the center of mass, and F_w is the characteristic force exerted by the wall on the particle.
§.§ B - Forces
We derive the analytical expressions for the forces acting on each particle i within a ring j. These particles are subjected to forces resulting from both shape and interaction energies, which are determined by the area, perimeter, differential line tension, particle-particle overlap, and adhesion terms as defined in the main text. The force on particle i in ring j is obtained by taking the vector derivative of the total energy E with respect to its coordinates.
r⃗_i,j = x_i,jê_x + y_i,jê_y,
F⃗_i,j = -∂ E/∂r⃗_i,j≡ -∂ E/∂ x_i,jê_x-∂ E/∂ y_i,jê_y.
§.§.§ Perimeter force
The perimeter energy, as defined in the main text for a configuration of N rings labeled by j = 1, …, N, with n particles labeled i = 1, …, n, is given by
E_P = ϵ_P/2∑_j=0^N∑_i=0^n( |l⃗_i,j|/l_0 - 1 )^2.
The force on particle i due to deviations in the segment length |l⃗_i,j| = |r⃗_i,j - r⃗_i-1,j| from its preferred value l_0 is given by
F⃗^i,j_P = -∂ E_P/∂r⃗_i,j = -∂ E_P/∂ x_i,jê_x - ∂ E_P/∂ y_i,jê_y,
F⃗^i,j_P = -ϵ_P/l_0 [ (|l⃗_i,j|/l_0-1 )∂ |l⃗_i,j|/∂r⃗_i,j + (|l⃗_i+1,j|/l_0-1 )∂ |l⃗_i+1,j|/∂r⃗_i,j ],
F⃗^i,j_P = ϵ_P/l_0 [ (|l⃗_i+1,j|/l_0-1 ) l⃗_i+1,j/|l⃗_i+1,j| - (|l⃗_i,j|/l_0-1 )l⃗_i,j/|l⃗_i,j| ],
where we use the following relations:
∂ |l⃗_i,j|/∂r⃗_i,j = ∂ |l⃗_i,j|/∂ x_i,jê_x+ ∂ |l⃗_i,j|/∂ y_i,jê_y = l⃗_i,j/|l⃗_i,j|,
∂ |l⃗_i+1,j|/∂r⃗_i,j = ∂ |l⃗_i+1,j|/∂ x_i,jê_x+ ∂ |l⃗_i+1,j|/∂ y_i,jê_y = -l⃗_i+1,j/|l⃗_i+1,j|.
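As a concrete check of the final expression, the perimeter force on all particles of a closed ring can be evaluated with periodic indexing over the bonds. This is a minimal sketch under our own conventions (NumPy arrays, bond l_i = r_i - r_{i-1}), not the simulation code of the paper.

```python
import numpy as np

def perimeter_force(r, eps_P, l0):
    """Perimeter force F_i = (eps_P/l0) [ (|l_{i+1}|/l0 - 1) lhat_{i+1} - (|l_i|/l0 - 1) lhat_i ]."""
    l = r - np.roll(r, 1, axis=0)                 # bond i: l_i = r_i - r_{i-1} (closed ring)
    ln = np.linalg.norm(l, axis=1, keepdims=True)
    lhat = l / ln
    c = ln / l0 - 1.0                             # stretch of bond i
    return (eps_P / l0) * (np.roll(c, -1, axis=0) * np.roll(lhat, -1, axis=0) - c * lhat)
```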
§.§.§ Area force
The force related to area energy on particle i in ring j is given by
F⃗^i,j_A= -∂ E_A/∂r⃗_i,j = -ϵ_A/A_0 ( A_j/A_0 -1 )∂ A_j/∂r⃗_i,j,
being E_A the area energy defined by
E_A = ϵ_A/2∑_j=1^N( A_j/A_0 - 1 )^2.
The ring area A_j is calculated using the vector product property, given by
A_j = 1/2∑_i=1^n |(r⃗_i,j-r⃗_cm)×l⃗_i,j|
= 1/2∑_i=1^n (x_i,j- x_cm)(y_i,j-y_i-1,j)
- (y_i,j-y_cm)(x_i,j-x_i-1,j),
where the factor 1/2 is included to avoid double-counting the area. Therefore, we obtain:
∂ A_j/∂ x_i,j = ( y_i+1,j -y_i-1,j )/2,
∂ A_j/∂ y_i,j = ( x_i-1,j -x_i+1,j )/2,
thus,
F⃗^i,j_A = -∂ E_A/∂r⃗_i,j = - ϵ_A/2A_0 ( A_j/A_0 -1 )
× [ ( y_i+1,j - y_i-1,j ) ê_x - ( x_i+1,j - x_i-1,j )ê_y ].
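The same pattern applies to the area force: the signed shoelace area and its gradient give the expression above directly. The sketch below is our own illustration (NumPy, particles assumed ordered along the ring so that the signed area is positive).

```python
import numpy as np

def area_force(r, eps_A, A0):
    """Area force -dE_A/dr on each particle of one closed ring."""
    x, y = r[:, 0], r[:, 1]
    xp, xm = np.roll(x, -1), np.roll(x, 1)        # x_{i+1}, x_{i-1}
    yp, ym = np.roll(y, -1), np.roll(y, 1)        # y_{i+1}, y_{i-1}
    A = 0.5 * np.sum(x * yp - xp * y)             # signed shoelace area of the ring
    dA_dx = 0.5 * (yp - ym)                       # dA/dx_i
    dA_dy = 0.5 * (xm - xp)                       # dA/dy_i
    pref = -(eps_A / A0) * (A / A0 - 1.0)
    return pref * np.stack([dA_dx, dA_dy], axis=1)
```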
§.§.§ Interaction forces
The core repulsion for non-neighboring particles within the same ring and the adhesion between particles of different rings are accounted for by the interaction energy E_int, which is described by a truncated harmonic potential,
E_int = {[ ϵ_c/2( r_ik/σ - 1 )^2, r_ik≤σ; ϵ_adh/2( r_ik/σ - 1 )^2, l_adh≥ r_ik > σ; 0, r_ik > l_adh. ].
Therefore, the corresponding force
F⃗^i,j_int = -∂ E_int/∂r⃗_i,j,
F⃗^i,j_int = r̂_ik{[ -ϵ_c/σ( r_ik/σ - 1 ), r_ik≤σ; -ϵ_adh/σ( r_ik/σ - 1 ), l_adh≥ r_ik > σ; 0, r_ik > l_adh ].
where r̂_ik = r⃗_ik/r_ik is a unit vector connecting particle i and k.
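For a single pair of particles, the piecewise force above translates directly into code; the following sketch (our own names, with r_ik taken as r_i - r_k) is illustrative only.

```python
import numpy as np

def pair_force(ri, rk, sigma, l_adh, eps_c, eps_adh):
    """Force on particle i due to particle k for the truncated harmonic potential."""
    d = ri - rk                                   # convention: r_ik = r_i - r_k
    r_ik = np.linalg.norm(d)
    rhat = d / r_ik
    if r_ik <= sigma:                             # core repulsion (points away from k)
        return -(eps_c / sigma) * (r_ik / sigma - 1.0) * rhat
    if r_ik <= l_adh:                             # adhesion (attractive, since r_ik > sigma)
        return -(eps_adh / sigma) * (r_ik / sigma - 1.0) * rhat
    return np.zeros_like(d)                       # beyond the cutoff l_adh
```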
§.§.§ Differential line tension (contraction)
The contraction energy represents a line tension that acts between two particles from different rings when they are within a distance cutoff l_Λ. This energy term is given by
E_Λ = ∑_j=1^N∑_i=1^nΛ_αβ |l⃗_i,j|.
The force acting on particle i inside ring j is
F⃗^i,j_Λ = -∂ E_Λ/∂r⃗_i,j,
F⃗^i,j_Λ = Λ_αβ [ l⃗_i+1,j/|l⃗_i+1,j| - l⃗_i,j/|l⃗_i,j| ].
Therefore, the force F⃗^i,j_Λ on particle i within ring j depends on the type α of its ring and the type β of the ring of the particle it interacts with. If the particle is within the cutoff distance and interacts with multiple particles from other rings, the tension value will be the mean of all the individual tensions. Specifically, this mean is given by
Λ_αβ = 1/n_c∑_k Λ_α k,
where the sum k runs over all contacts (both α-α and α-β interactions) and n_c represents the total number of contacts.
§.§ C - Control parameters
The value of the total equilibrium area A_r is the sum of the equilibrium area imposed by the area elastic energy term plus half the area of each particle composing the ring,
A_r = A_0 + nπσ^2/8.
We define a packing fraction ϕ for the system, relative to a circular region
of radius R_0, through the relation
ϕ = NA_r/π R_0^2.
We fix ϕ = 0.895 allowing for neighbor exchange and neighbor interaction simultaneously and we use a fixed dimensionless perimeter-area equilibrium ratio p_0 = P_0/√(A_0) = 4, where P_0 = nl_0 is the equilibrium perimeter.
The shape parameter p_0 defines the degree of stiffness of the ring.
The choice of this value is based on previous works with Vertex <cit.>, Voronoi <cit.>, and ring models <cit.>, where an excess of cell perimeter is found for p_0 > 3.81 as well as the emergence of a liquid-like behavior. To prevent overlap among rings, we reached a compromise by setting comparable values for ϵ_c, ϵ_P and ϵ_A/n, while assigning a much lower value to ϵ_adh.
So, we simulate the ring system keeping the following parameters fixed: ϵ_c/ϵ_P = 1, ϵ_A/ϵ_P = 35, (F_wσ)/ϵ_P = 1, ϵ_adh/ϵ_P = 5×10^-4, n = 10, l_0= 1, σ = l_0, l_Λ = l_adh = 1.5 l_0 and μ = 1.
This choice of parameters ensures that ϵ_P and ϵ_A are sufficiently large to maintain the bond length close to l_0 and the equilibrium area close to A_0. The adhesion forces between different rings are kept equal for all rings.
Furthermore, to emphasize the effects of differential contractions, we adopted an adhesion value that is considerably lower compared to the other energy terms involved. We integrate the equations of motion, using the Euler-Maruyama algorithm with a time step Δ t = 0.01.
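The integration step itself is straightforward. Since the equations of motion are given in the main text (not reproduced here), the sketch below only illustrates a generic overdamped Euler-Maruyama update with mobility μ = 1 and time step Δt = 0.01; the specific form of the drift (total force plus an active self-propulsion term) and the noise amplitudes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama_step(r, theta, F, v0, Dt, Dr, mu=1.0, dt=0.01):
    """One Euler-Maruyama step for overdamped active particles (illustrative form only):
       dr = (mu F + v0 e(theta)) dt + sqrt(2 Dt dt) N(0,1),  dtheta = sqrt(2 Dr dt) N(0,1)."""
    e = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # self-propulsion directions
    r_new = r + (mu * F + v0 * e) * dt + np.sqrt(2.0 * Dt * dt) * rng.standard_normal(r.shape)
    theta_new = theta + np.sqrt(2.0 * Dr * dt) * rng.standard_normal(theta.shape)
    return r_new, theta_new
```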
§.§ D - Neighbors criterion for measurements
In this work, we use the criterion for defining whether two rings are neighbors based on the distance between their centers of mass, d_jk = |R⃗_j - R⃗_k| ≤P_0/2 = 5σ. This relation ensures that even two completely flattened rings in contact will be considered neighbors.
|
http://arxiv.org/abs/2409.03013v1 | 20240904181038 | Angular Spread Statistics for 6.75 GHz FR1(C) and 16.95 GHz FR3 Mid-Band Frequencies in an Indoor Hotspot Environment | [
"Dipankar Shakya",
"Mingjun Ying",
"Theodore S. Rappaport"
] | eess.SP | [
"eess.SP",
"cs.SY",
"eess.SY"
] |
Angular Spread Statistics for 6.75 GHz FR1(C) and 16.95 GHz FR3 Mid-Band Frequencies in an Indoor Hotspot Environment
Dipankar Shakya, Mingjun Ying, and Theodore S. Rappaport
NYU WIRELESS, Tandon School of Engineering, New York University, Brooklyn, NY, 11201
{dshakya, yingmingjun, tsr}@nyu.edu
September 9, 2024
=========================================================================================================================================================================================
§ ABSTRACT
We present detailed multipath propagation spatial statistics for next-generation wireless systems operating at lower and upper mid-band frequencies spanning 6–24 GHz. The large-scale spatial characteristics of the wireless channel include Azimuth angular Spread of Departure (ASD) and Zenith angular Spread of Departure (ZSD) of multipath components (MPC) from a transmitter and the Azimuth angular Spread of Arrival (ASA) and Zenith angular Spread of Arrival (ZSA) at a receiver. The angular statistics calculated from measurements were compared with industry-standard 3GPP models, and ASD and ASA values were found to be in close agreement at both 6.75 GHz and 16.95 GHz. Measured LOS ASD was found larger than 3GPP ASD indicating more diverse MPC departure directions in the azimuth. ZSA and ZSD were observed smaller than the 3GPP modeling results as most multipath arrivals and departures during measurements were recorded at the boresight antenna elevation. The wide angular spreads indicate a multipath-rich spatial propagation at 6.75 GHz and 16.95 GHz, showing greater promise for the implementation of MIMO beamforming systems in the mid-band spectrum.
6G, angular spread, ASA, ASD, FR3, FR1(C), indoor, upper mid-band, ZSA, ZSD
§ INTRODUCTION
The progression towards 6G wireless communications has generated significant interest in the 6 to 24 GHz spectrum, as these frequency bands offer a promising balance between coverage and capacity. Regulatory bodies such as the ITU, NTIA, and FCC have emphasized the strategic importance of these bands, particularly the 7.125-8.4 GHz, 4.40-4.80 GHz, and 14.8-15.35 GHz segments, for future wireless networks <cit.>.
As the industry gears up for cellular deployments in these mid-band frequencies, understanding the propagation characteristics and angular spread statistics in indoor and outdoor environments is crucial for effective network planning and optimization.
Angular spread (AS) is a crucial parameter in wireless communication, significantly influencing key performance metrics such as beamforming effectiveness, channel hardening, spatial correlation, and channel estimation accuracy. In beamforming systems, narrower AS, typically observed at higher frequencies in the mmWave and sub-THz bands, enhance beamforming performance by focusing the signal within the narrow beams. Conversely, broader spreads at lower frequencies with omnidirectional (omni) antennas, such as 3.5 GHz, introduce challenges due to increased signal scattering, which can degrade the signal-to-interference-plus-noise ratio (SINR) <cit.>. Research has shown that AS has a non-monotonic effect on achievable rates in MIMO systems, particularly when transmit antenna array sizes are moderate between 20 and 50 antennas <cit.>. Pilot contamination is another aspect of MIMO communications where wider AS is known to degrade the channel estimation accuracy requiring longer pilot lengths <cit.>. In dense urban environments, AS plays a significant role in determining the power distribution and spectral efficiency for radio links <cit.>. Especially considering higher frequencies in upper mid-band and mmWave spectrum, wider half-power beamwidth (HPBW) is shown to be desirable for larger AS when accounting for beam misalignment in beamforming systems<cit.>.
Moreover, AS affects the spatial correlation in antenna arrays having closely spaced elements, with lower AS leading to higher spatial correlation and reduced scattering of MPC. This increased correlation due to lower AS can improve channel estimation accuracy, however, can negatively impact the performance of MIMO systems by reducing the effectiveness of spatial multiplexing and diversity techniques <cit.>. Additionally, the frequency-dependent nature of AS, where higher frequencies are associated with smaller AS due to increased directionality and reduced scattering, is a critical factor as wireless communications advance towards upper mid-band, mmWave, and THz frequencies. Understanding this behavior is essential for enhancing channel estimation accuracy and optimizing network performance in these higher frequency bands <cit.>.
AS behavior is of considerable importance for Air-to-Ground (A2G) links for 6G communications. Studies on A2G channels reveal that AS varies with UAV altitude, where azimuth spread of arrival (ASA) decreases and elevation/zenith spread of arrival (ZSA) increases. These findings highlight the unique characteristics of A2G channels, particularly the significant impact of ground reflections, which are crucial for designing reliable 6G networks involving UAV communications <cit.>.
Despite such broad implications, there is a notable scarcity of empirical data on the propagation characteristics within the frequency ranges of 6-24 GHz. Previous studies conducted in environments such as office corridors and university hallways have explored path loss exponents and RMS delay spreads at various frequencies, but comprehensive indoor measurements at FR1(C) and FR3 bands remain limited. This paper presents spatial statistics for indoor hotspot (InH) environments, evaluated from an extensive measurement campaign encompassing indoor, factory, and outdoor environments at 6.75 GHz (FR1(C)) and 16.95 GHz (FR3) using a 1 GHz bandwidth sliding correlation channel sounder. Conducted at the NYU WIRELESS Research Center, the indoor measurements provide detailed insights into the spatial propagation behavior and angular spread characteristics, encompassing both line-of-sight (LOS) and non-LOS (NLOS) scenarios. Over 30,000 power delay profiles (PDPs) were collected to analyze angular statistics, offering unique propagation insights at these frequencies and suggesting potential revisions to existing models.
The organization of this paper is as follows: Section <ref> describes the channel sounding system and the indoor hotspot scenario. Section <ref> details the angular statistical channel model at 6.75 and 16.95 GHz. The paper concludes with a summary of findings and implications for future wireless communication systems.
§ MEASUREMENT SYSTEM, ENVIRONMENT, AND PROCEDURES
§.§ Channel Sounding System
Measurements were conducted with a wideband sliding correlation channel sounder at 6.75 and 16.95 GHz. Key system features are described in <cit.> and include:
* 500 Mcps PN sequence sliding correlation baseband with 1 GHz RF bandwidth for high temporal resolution of MPC delays (1 ns)
* Dual-band co-located RF front-end modules for efficient frequency switching. One band is active at a time, while the other front-end module is powered off.
* 31 dBm EIRP for adequate coverage, while staying within FCC licensed limits.
* Directional horn antennas with 15 dBi (6.75 GHz) and 20 dBi (16.95 GHz) gain mounted on mechanically rotatable gimbals with a one-degree spatial resolution for directional measurements.
§.§ Indoor Hotspot Scenario
Measurements were performed in the open office environment of the NYU WIRELESS Research Center. A total of 20 TX-RX locations were measured, covering the entire office space and spanning distances between 11 and 97 m. Cubicles, offices, labs, and conference rooms in the research center were partitioned with drywall and glass panels with wooden or glass doors. A map of the environment is shown in Fig. <ref> (a) <cit.>.
§.§ Measurement Procedure
The channel sounder calibration process ensures accurate capture of the multipath power, delay, and direction, and is performed at the start and end of each day of propagation measurements. At each TX-RX location pair, the strongest azimuth and elevation pointing directions are determined by carefully observing changes in the recorded power level as the TX and RX antennas are moved in one-degree steps. The careful stepping ensures capturing the strongest MPC with maximum power. Keeping the same zenith of departure (ZOD) elevation angle, TX angles of departure (AOD) with significant RX received power are determined through rapid scans, as detailed in <cit.>.
For each TX AOD pointing direction, the RX is swept 360^∘ in the azimuth in antenna HPBW steps. Following the stepped sweep at boresight
zenith of arrival (ZOA) elevation, the RX is up and down tilted by the antenna HPBW and swept again across the azimuth. Next, the ZOD elevation is changed by down-tilting the TX by the antenna HPBW. RX azimuth sweeps in HPBW steps are performed for the down-tilted TX ZOD at RX boresight and HPBW down-tilted ZOAs.
§ ANGULAR STATISTICS MEASURED AT 6.75 AND 16.95 GHZ
The double directional channel impulse response for multipath propagation between the TX and RX can be defined with (<ref>) <cit.>.
h_omni(t,Θ,Φ)= ∑_n=1^N∑_m=1^M_na_m,ne^jφ_m,n·δ(t-τ_m,n)
·δ(Θ-Θ_m,n)·δ(Φ-Φ_m,n),
where t is the absolute propagation time, Θ=(ϕ_AOD,θ_ZOD) represents the 3D TX pointing direction vector, and Φ=(ϕ_AOA,θ_ZOA) is the RX pointing direction vector. N and M_n denote the number of time clusters (TCs) and the number of cluster subpaths (SPs), respectively, as defined in <cit.>; a_m,n is the magnitude of the m^th SP belonging to the n^th TC, while φ_m,n and τ_m,n represent the phase and propagation delay of the SP, respectively. Likewise, Θ_m,n and Φ_m,n are the vectors representing AOD/ZOD and azimuth of arrival (AOA)/ZOA for the SP, respectively. The terms MPC and SP are used interchangeably <cit.>.
The propagation behavior of the multipath is characterized through primary statistics including the number of time clusters and spatial lobes, cluster delay and multipath delay, received power of each SP in a cluster, and direction of arrival and departure of SPs. The secondary statistics for temporal and spatial propagation behavior including the RMS angular spread (AS) at both TX and RX help characterize the wireless channel at a large scale and are crucial parameters for cellular network and radio system designs. The RMS AS provides a measure of the spatial dispersion of angles at which MPCs arrive or depart from a receiver or transmitter. The azimuth angular spread of arrival (ASA) and the zenith angular spread of arrival (ZSA) characterize the spread of multipath arriving at the RX in horizontal and vertical planes, respectively. Similarly, at the TX side, the azimuth angular spread of departure (ASD) and the zenith angular spread of departure (ZSD) describe the spread of MPCs being transmitted over the wireless channel that are captured at the RX.
The NYU WIRELESS channel measurements include directional measurements, which are carefully combined to synthesize the omnidirectional antenna pattern, removing antenna gains <cit.>. The power angular spectrum (PAS) resulting from the synthesis of directional measurements is presented in Fig. <ref> (b) and (c) for the RX AOAs and TX AODs, respectively. The omni directional antenna patterns facilitate the implementation of arbitrary antenna patterns for simulations, with 3GPP providing omnidirectional models for the same reason <cit.>.
The blue dots on the PAS in Fig. <ref> represent the channel sounder RX AOA/ TX AOD pointing directions during the directional measurements at the T-R location. The AOA/AOD powers between the measured directions are obtained through linear interpolation of the measured powers, as the PAS has a spatial resolution corresponding to the antenna HPBW. Based on circular statistics, the omnidirectional RMS AS is evaluated in radians using (<ref>) for the circular standard deviation of the multipath departure/arrival directions<cit.>. Considering the PAS in Fig. <ref>, omni RMS AS considers all spatial directions with power above the threshold defined 10 dB below the peak power (“-10 dB Threshold" in Fig. <ref> (b) and (c)) in the PAS, which was proven sufficient for statistical analysis in <cit.>.
Using the same “-10 dB Threshold" on the PAS as the spatial lobe threshold (SLT), a spatial lobe (SL) is defined as contiguous spatial directions with powers above the SLT. The SLs in a PAS are illustrated in Fig. <ref> (b) and (c) as the orange-filled region constituting AOA/AOD directions with received powers above the SLT. Several SLs are observed in AOA and AOD PAS in both 6.75 GHz and 16.95 GHz measurement campaigns.
AS_omni = √(-2× ln|∑_l=1^L∑_m=1^M_le^(j(ϕ_l,m or θ_l,m)) a^2_l,m/∑_l=1^L∑_m=1^M_l a^2_l,m|),
ϕ_l,m in (<ref>) can represent the AOA or AOD and θ_l,m can represent ZOA or ZOD for the m^th MPC in the l^th SL. Evaluating the AS for each SL defined from the PAS, as shown in Fig. <ref> (b) and (c), results in the lobe AS for the SL. For each SL, the lobe AS is evaluated using (<ref>), as the circular standard deviation of the departure/arrival directions of MPCs within the SL<cit.>.
AS_lobe,l = √(-2× ln|∑_m=1^M_le^(j(ϕ_m or θ_m)) a^2_m/∑_m=1^M_l a^2_m|),
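Both definitions are direct applications of circular statistics and can be computed from a measured PAS in a few lines. The sketch below is our own illustration (NumPy; angles in radians, powers given as the linear squared amplitudes a^2), including a simplified version of the -10 dB spatial-lobe segmentation that ignores wrap-around at the ends of the azimuth axis.

```python
import numpy as np

def rms_angular_spread(angles, powers):
    """Circular RMS angular spread sqrt(-2 ln |sum a^2 e^{j phi} / sum a^2|), in radians."""
    R = np.abs(np.sum(powers * np.exp(1j * angles)) / np.sum(powers))
    return np.sqrt(-2.0 * np.log(R))

def lobe_rms_spreads(angles, powers, threshold_db=10.0):
    """RMS AS of each spatial lobe: contiguous directions above (peak power - 10 dB)."""
    thr = powers.max() * 10.0 ** (-threshold_db / 10.0)
    above = powers >= thr
    spreads, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                                # lobe begins
        if start is not None and (not flag or i == len(above) - 1):
            end = i + 1 if flag else i               # lobe ends
            spreads.append(rms_angular_spread(angles[start:end], powers[start:end]))
            start = None
    return spreads
```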
§ ANGULAR STATISTICS OF 3GPP TR38.901 MODELS
Table <ref> compares the RMS AS evaluated from the InH measurements, with the results from 3GPP TR 38.901 models for 0.5-100 GHz <cit.>.
§.§ 3GPP Omnidirectional RMS Angular Spread
3GPP calculates the AS using a formulation identical to (<ref>). Based on the evaluation of AS across various frequencies, 3GPP provides frequency-dependent models to characterize AS up to 100 GHz. These models are specifically detailed in <cit.> under “Table 7.5-6 Part-2" and “Table 7.5-10."
* Omnidirectional RMS ASA:
log_10 (μ_ASA, LOS [^∘] )= -0.19 log_10(1 + f_c) + 1.781,
log_10 (μ_ASA, NLOS [^∘] )= -0.11 log_10(1 + f_c) + 1.863,
log_10 (σ_ASA ) = 0.12 log_10(1 + f_c) + 0.119,
log_10 (σ_ASA, NLOS ) = 0.12 log_10(1 + f_c) + 0.059.
* Omnidirectional RMS ZSA:
log_10 (μ_ZSA, LOS [^∘] )= -0.26 log_10(1 + f_c) + 1.44,
log_10 (μ_ZSA, NLOS [^∘] )= -0.15 log_10(1 + f_c) + 1.387,
log_10 (σ_ZSA, LOS) = -0.04 log_10(1 + f_c) + 0.264,
log_10 (σ_ZSA, NLOS) = -0.09 log_10(1 + f_c) + 0.746.
* Omnidirectional RMS ASD:
μ_ASD, LOS [^∘] = 10^1.60 = 39.8,
μ_ASD, NLOS [^∘] = 10^1.62 = 41.7,
σ_ASD, LOS = 10^0.18 = 1.51,
σ_ASD, NLOS = 10^0.25 = 1.78.
* Omnidirectional RMS ZSD:
log_10 (μ_ZSD, LOS [^∘]) = -1.43 log_10(1+f_c) + 2.228,
log_10 (μ_ZSD, NLOS [^∘]) = 1.08,
log_10 (σ_ZSD, LOS) = 0.13 log_10(1+f_c) + 0.3,
log_10 (σ_ZSD, NLOS) = 0.36.
ZOD offset, LOS = ZOD offset, NLOS = 0.
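For reference, these closed-form expressions can be evaluated directly at the two measured carrier frequencies; the short script below (our own code, with f_c in GHz) simply computes the LOS and NLOS mean ASA and ZSA used in the comparison.

```python
import numpy as np

def mean_asa(fc, los=True):
    """3GPP TR 38.901 InH mean azimuth spread of arrival in degrees (fc in GHz)."""
    return 10 ** (-0.19 * np.log10(1 + fc) + 1.781) if los else 10 ** (-0.11 * np.log10(1 + fc) + 1.863)

def mean_zsa(fc, los=True):
    """3GPP TR 38.901 InH mean zenith spread of arrival in degrees (fc in GHz)."""
    return 10 ** (-0.26 * np.log10(1 + fc) + 1.44) if los else 10 ** (-0.15 * np.log10(1 + fc) + 1.387)

for fc in (6.75, 16.95):
    print(f"fc = {fc} GHz: ASA LOS/NLOS = {mean_asa(fc):.1f}/{mean_asa(fc, False):.1f} deg, "
          f"ZSA LOS/NLOS = {mean_zsa(fc):.1f}/{mean_zsa(fc, False):.1f} deg")
```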
§ DISCUSSIONS AND COMPARISON WITH 3GPP MODELS
The AS measured from the InH campaign showed wider spatial spreads at 6.75 GHz compared to 16.95 GHz in both azimuth and elevation planes. When compared to 3GPP models, NYU ASAs at the RX are similar, whereas NYU ASDs at the TX are wider. The 3GPP zenith spreads are wider than the NYU measured ZSA and ZSD.
The omni RMS ASA at 6.75 GHz was observed to be 40.9^∘ in LOS and 58.2^∘ in NLOS, which is nearly identical to the 3GPP values, with a negligible difference of 0.03^∘ in both cases, as shown in Table <ref>.
At 16.95 GHz, the NYU measured omni ASA was 34.2^∘ in LOS and 43.5^∘ in NLOS. The 3GPP omni ASA values were obtained as 34.89^∘ in LOS and 53.09^∘ in NLOS, showing a difference of 0.69^∘ in LOS and 9.59^∘ in NLOS, indicating slightly smaller ASA values in the NYU measurements at 16.95 GHz compared to the 3GPP models.
The mean lobe RMS ASA for SLs was measured at 9.4^∘ in LOS and 24.5^∘ in NLOS at 6.75 GHz, and 5.4^∘ in LOS and 14.3^∘ in NLOS at 16.95 GHz.
The wider SLs in NLOS suggest that MPCs arrive from multiple azimuth directions.
In terms of omni ASD, at 6.75 GHz, 67^∘ spread was recorded in LOS, while 47^∘ in NLOS. At 16.95 GHz, omni RMS ASD was obtained as 63^∘ in LOS and 44.2^∘ in NLOS. The mean of the LOS omni ASD is observed wider than the NLOS omni ASD for both 6.75 GHz and 16.95 GHz, which indicates the multipath-rich propagation over several different transmit angles across the azimuth plane in LOS in addition to the direct LOS path. The NYU measurements are consistently wider than the 3GPP values. At 6.75 GHz, the 3GPP LOS omni ASD of 39.8^∘ is 27.2^∘ narrower than the NYU value. In NLOS, the NYU omni ASD is 47^∘, which is 5.3^∘ wider than the 3GPP value of 41.7^∘. At 16.95 GHz, the NYU omni ASD is 63^∘ in LOS and 44.2^∘ in NLOS, while the 3GPP model results 39.8^∘ and 41.7^∘ in LOS and NLOS, respectively, showing a difference of 23.2^∘ in LOS and 2.5^∘ in NLOS. 3GPP models the omni ASD as a constant value across frequency, which may need revision. Further, the wider measured NYU ASDs, particularly in LOS, compared to 3GPP values at both 6.75 GHz and 16.95 GHz, indicate greater diversity in the TX AODs and spatial richness of MPCs.
The mean lobe ASD at 6.75 GHz was observed at 8.4^∘ in LOS and 13^∘ in NLOS. At 16.95 GHz, LOS lobe ASD was observed at 4.8^∘, and NLOS lobe ASD was 9.8^∘, which are on average 3.2^∘ narrower than the lobe ASDs at 6.75 GHz.
In the zenith plane, the lobe RMS ZSA at 6.75 GHz was observed to be 5.2^∘ in LOS and 8^∘ in NLOS, and at 16.95 GHz it was 3.6^∘ in LOS and 4.1^∘ in NLOS. Similarly, considering omni ZSA, at 6.75 GHz the zenith AS is 3.6^∘ in LOS and 5.1^∘ in NLOS, and at 16.95 GHz the spread is 3^∘ in LOS and 3.3^∘ in NLOS. At 6.75 GHz, the 3GPP models resulted in 17.3^∘ in LOS, and 17.9^∘ in NLOS. At 16.95 GHz, 3GPP RMS ZSA values were obtained as 13.93^∘ in LOS and 15.8^∘ in NLOS, respectively. On average, the NYU-measured ZSAs are observed smaller than the 3GPP values by 13.2^∘ at 6.75 GHz and 11.7^∘ at 16.95 GHz. The measured zenith spread of MPC departure angles was also observed to be small with lobe ZSD of 0.4^∘ in LOS and 1.8^∘ in NLOS at 6.75 GHz, and 0.5^∘ in LOS and 0.6^∘ in NLOS at 16.95 GHz. Considering all departing MPC angles and evaluating the omni ZSD, in LOS the spread was found to be 0.9^∘ and 3.7^∘ in NLOS at 6.75 GHz, while at 16.95 GHz the omni ZSD spread was evaluated to be 1.6^∘ and 3.6^∘ in LOS and NLOS, respectively. The 3GPP models yielded 9^∘ and 12^∘, respectively at 6.75 GHz, and 2.7^∘ and 12^∘, respectively at 16.95 GHz.
The NYU-measured zenith spreads are smaller than the 3GPP zenith spreads. The reasoning for this observation lies in the underlying measurement methodology behind the NYU measurements. As highlighted in Section II. C, the strongest pointing direction is determined at each TX-RX location by adjusting the TX and RX pointing angles in azimuth and elevation in 1-degree steps until the maximum power is captured at the RX. The RX performs azimuth sweeps at the boresight elevation angle and is up-tilted and down-tilted by the antenna HPBW. As a result, most of the MPC received power for a TX-RX location pair is captured in the azimuth measurement sweep at the boresight elevation. MPCs captured during the azimuth sweep at this elevation are assigned the boresight elevation of the antenna. The zenith spread is, therefore, smaller than that modeled in 3GPP. Individual MPC elevations can be further extracted from the captured PDPs through the use of post-processing methods such as ADME<cit.>, R-CLEAN<cit.>, or SAGE<cit.>.
§ CONCLUSION
This paper presented detailed large-scale spatial statistics of wireless channels at 6.75 GHz and 16.95 GHz for InH environments extracted from comprehensive measurements conducted at the NYU WIRELESS Research center. The AS statistics obtained were compared with the industry standard 3GPP models. The LOS ASA, NLOS ASA, and NLOS ASD were found to be in close agreement with 3GPP models. The measured LOS ASD was found to be larger than the 3GPP model result by 25.2^∘ on average, indicating a greater diversity in the TX AODs in the azimuth plane. Moreover, in the zenith plane, LOS and NLOS ZSAs and ZSDs were measured smaller than the 3GPP modeled values as most MPCs were captured in the azimuth sweep at boresight elevation. The wide azimuth spreads clearly indicate spatial richness of MPCs and potential for implementation of spatial multiplexing techniques with MIMO systems in the upper mid-band.
|
http://arxiv.org/abs/2409.03545v1 | 20240905140710 | The Power of Second Chance: Personalized Submodular Maximization with Two Candidates | [
"Jing Yuan",
"Shaojie Tang"
] | cs.LG | [
"cs.LG",
"cs.DS"
] |
Personalized Submodular Maximization with Two Candidates
Department of Computer Science and Engineering, University of North Texas; Department of Management Science and Systems, School of Management, University at Buffalo
The Power of Second Chance: Personalized Submodular Maximization with Two Candidates
Jing Yuan1 Shaojie Tang2
====================================================================================
§ ABSTRACT
Most existing studies on submodular maximization focus on selecting a subset of items that maximizes a single submodular function. However, in many real-world scenarios, we might have multiple user-specific functions, each of which models the utility of a particular type of user. In these settings, our goal would be to choose a set of items that performs well across all the user-specific functions. One way to tackle this problem is to select a single subset that maximizes the sum of all of the user-specific functions. Although this aggregate approach is efficient in the sense that it avoids computing sets for individual functions, it misses the power of personalization, as it does not allow choosing different sets for different functions. In this paper, we introduce the problem of personalized submodular maximization with two candidate solutions. For any two candidate solutions, the utility of each user-specific function is defined as the better of these two candidates. Our objective is, therefore, to select the best pair of candidates that maximizes the sum of utilities of all the user-specific functions. We have designed effective algorithms for this problem. We also discuss how our approach generalizes to multiple candidate solutions, increasing flexibility and personalization in our solution.
§ INTRODUCTION
A submodular function is defined by its intuitive diminishing returns property: adding an item to a smaller set will increase the return more in comparison with when this happens from a larger set. Such a function is extremely common in various combinatorial optimization problems naturally arising from machine learning, graph theory, economics, and game theory. Most of the work in submodular optimization focuses on selecting a subset of items from a ground set that maximizes a single submodular function. However, in many real-world scenarios, we are confronted with multiple user-specific functions denoted as f_1, ⋯, f_m : 2^Ω→ℝ_≥ 0. Each of these functions, such as f_i, captures the utility corresponding to some user type indexed by i. Our main goal will be to maximize the aggregate utility of all the m functions. One trivial way to achieve this would be to compute a solution individually for every single function f_i. Unfortunately, this would require to compute and store m solutions, which is infeasible or at least very inefficient if the number of user-specific functions is large.
Another way is to look for a single feasible solution, denoted as S⊆Ω, that maximizes the summation of these m functions, i.e., max_S⊆Ω∑_i∈[m] f_i(S). This problem, also known as the maximization of decomposable submodular functions <cit.>, has been well-studied in the literature and efficient algorithms have been designed for the same. Nevertheless, such an aggregate approach, despite being efficient, is unable to harness the power of personalization. Specifically, it does not provide the flexibility in offering a personalized set for each function.
In our research, we introduce the innovative concept of personalized submodular maximization. Consider a pair of sets {S_1, S_2}, for each user-specific function f_i, we determine its utility based on the better-performing solution among these two candidates, represented as max{f_i(S_1), f_i(S_2)}. Mathematically, our problem can be expressed as follows:
max_S_1, S_2 ⊆Ω∑_i∈[m]max{f_i(S_1), f_i(S_2)}
|S_1|≤ k, |S_2|≤ k,
where k is the size constraint of a feasible solution.
In essence, our primary objective is to maximize the combined utility of user-specific functions while maintaining a personalized approach to item selection. An important and practical application of our study is in the context of two-stage optimization. Here, assuming that f_1, ⋯, f_m represent training examples of functions drawn from an unknown distribution, we aim to choose a pair of candidate solutions based on these m functions, ensuring that one of the chosen candidates performs well when faced with a new function from the same distribution.
In this paper, we also discuss the possibility of expanding our approach to accommodate multiple (more than two) candidate solutions. This potential extension would further enhance the flexibility and personalization options within our solution.
§.§ Related Work
The problem of submodular maximization has received considerable attention in the literature <cit.>. For example, one of the most well-established results is that a simple greedy algorithm achieves a tight approximation ratio of (1-1/e) for maximizing a single monotone submodular function subject to cardinality constraints <cit.>. Since most datasets are so big nowadays, several works were devoted to reducing the running time to maximize a submodular function. Examples include the development of accelerated greedy algorithms <cit.> and streaming algorithms <cit.>. All of these works, however, focus on finding a single set that maximizes a submodular function. In contrast, our goal is to identify a pair of candidates that maximizes the sum of the better-performing solution between them. This presents a unique challenge, as the resulting objective function is no longer submodular. Consequently, existing results on submodular optimization cannot be directly applied to our study.
Our work is closely related to the field of two-stage submodular optimization <cit.>, in which the key objective is to find a smaller ground set from a large one. This reduction should be designed in such a way that choosing the items from the small set guarantees approximately the same performance as choosing items from the original large set for a variety of submodular functions. This aligns with our objective of seeking two initial solutions that cut down on computational effort in optimization with a new function. However, problem formulations between our studies are largely different despite sharing the same objective. Thereby, new methodologies should be developed to cope with the distinctive challenges presented in our research. Moreover, note that in the traditional framework of two-stage submodular optimization, once a reduced ground set is computed, further optimization based on this reduced set usually involves algorithms with possibly high time complexity, such as the greedy algorithm. In contrast, our personalized optimization model requires only a comparison between the performance of two candidate solutions, significantly reducing the computational burden in the second stage.
§ PROBLEM FORMULATION
Our problem involves an input set of n items denoted as Ω, and a collection of m submodular functions, namely, f_1, ⋯, f_m : 2^Ω→ℝ_≥ 0. To clarify, the notation Δ_i(x, A) denotes the marginal gain of adding item x to set A with respect to the function f_i. That is, Δ_i(x, A)=f_i({x}∪ A) - f_i(A). Specifically, a function f_i is considered submodular if and only if Δ_i(x, A) ≥Δ_i(x, A') holds for any two sets A and A' where A⊆ A'⊆Ω and for any item x∈Ω such that x∉ A'.
Our aim is to select a pair of candidate solutions, S_1 and S_2, and the utility of each user-specific function is determined by the superior solution among these two candidates. These subsets should provide good performance across all m functions when we are limited to choosing solutions from either S_1 or S_2. Formally,
P.0
max_S_1, S_2 ⊆Ω∑_i∈[m]max{f_i(S_1), f_i(S_2)}
|S_1|≤ k, |S_2|≤ k,
where k is the size constraint of a feasible solution.
A straightforward approach to solving P.0 is to transform it into a standard set selection problem. Specifically, we can introduce a ground set 𝒰 = {(i,j)| i∈Ω, j∈{1, 2}}. Here, selecting an element (i,j)∈𝒰 corresponds to placing item i in set S_j in our original problem. Let x_ij be a binary decision variable representing the selection of (i,j), such that x_ij=1 if and only if (i,j) is selected. Then P.0 is reduced to finding a set of elements from 𝒰 such that ∀ i∈Ω, x_i1+x_i2=1 and ∀ j∈{1, 2}, ∑_i∈Ωx_ij≤ k, which represents the intersection of two matroid constraints. Unfortunately, it is straightforward to verify that the utility function defined over 𝒰 is not necessarily submodular, even if each individual function f_i is submodular. Hence, existing solutions for submodular maximization subject to two matroid constraints are not directly applicable to our problem.
§ ALGORITHM DESIGN FOR CONSTANT M
We first study the case if the number of functions m is a constant. Before presenting our algorithm, we introduce a new optimization problem P.1. The objective of this problem is to partition the m functions into two groups such that the sum of the optimal solutions for these two groups is maximized. Formally,
P.1
max_A, B⊆ [m] (max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S))
subject to B = [m]∖ A.
We next show that the optimal solution of P.1 serves as an upper bound for our original problem.
Let OPT_1 (resp. OPT_0) denote the value of the optimal solution of P.1 (resp. our original problem P.0), we have
OPT_1≥ OPT_0.
Proof: Assume S_1^* and S_2^* are the optimal solution of P.0. We can partition the m functions into two groups A' and B' such that every function in A' favors S_1^* and every function in B' favors S_2^*. That is,
A'={i∈ [m]| f_i(S_1^*) ≥ f_i(S_2^*)}
and
B'={i∈ [m]| f_i(S_1^*) < f_i(S_2^*)}.
Hence,
OPT_0= ∑_i∈[m]max{f_i(S_1^*), f_i(S_2^*)}
= ∑_i∈ A'max{f_i(S_1^*), f_i(S_2^*)} + ∑_i∈ B'max{f_i(S_1^*), f_i(S_2^*)}
= ∑_i∈ A' f_i(S_1^*) + ∑_i∈ B' f_i(S_2^*)
where the first equality is by the definition of OPT_0, the second equality is by the observation that A' and B' form a partition of [m], and the third equality is by the definitions of A' and B'.
Moreover, it is easy to verify that
max_S⊆Ω: |S|≤ k∑_i∈ A' f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B' f_i(S)
≥∑_i∈ A' f_i(S_1^*) + ∑_i∈ B' f_i(S_2^*).
This is because |S_1^*|≤ k and |S_2^*|≤ k. It follows that
OPT_0= ∑_i∈ A' f_i(S_1^*) + ∑_i∈ B' f_i(S_2^*)
≤max_S⊆Ω: |S|≤ k∑_i∈ A' f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B' f_i(S).
Therefore,
OPT_1 =
max_A, B⊆ [m](max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S))
≥max_S⊆Ω: |S|≤ k∑_i∈ A' f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B' f_i(S)
≥∑_i∈ A' f_i(S_1^*) + ∑_i∈ B' f_i(S_2^*) = OPT_0.
This finishes the proof of this lemma.
Now, we present our algorithm, called the enumeration-based partition algorithm, which is listed in Algorithm <ref>. Our approach involves enumerating all possible partitions of [m]. For each partition, denoted as A and B, we utilize a state-of-the-art algorithm to solve two subproblems:
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
and
max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S).
This results in obtaining two sets, C_1 and C_2, respectively. Finally, we return the best pair of sets as the solution for our original problem P.0.
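A compact reference implementation of this enumeration strategy is sketched below. It is our own illustration rather than the authors' code: the greedy routine stands in for the α-approximation subroutine (α = 1 - 1/e for monotone submodular f_i), each f_i is assumed to be a Python callable mapping a set to a nonnegative value, and the function names are ours.

```python
from itertools import combinations

def greedy(funcs, ground_set, k):
    """Greedy maximization of S -> sum_i f_i(S) subject to |S| <= k."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in ground_set - S:
            gain = sum(f(S | {x}) - f(S) for f in funcs)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:          # no item gives positive marginal gain
            break
        S.add(best)
    return S

def enumeration_partition(funcs, ground_set, k):
    """Enumerate every partition (A, B) of the m functions; keep the best pair (C1, C2)."""
    m = len(funcs)
    best_val, best_pair = float("-inf"), (set(), set())
    for r in range(m + 1):
        for A in combinations(range(m), r):
            A = set(A)
            B = set(range(m)) - A
            C1 = greedy([funcs[i] for i in A], ground_set, k)
            C2 = greedy([funcs[i] for i in B], ground_set, k)
            val = sum(max(f(C1), f(C2)) for f in funcs)
            if val > best_val:
                best_val, best_pair = val, (C1, C2)
    return best_pair, best_val
```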
Since the number of functions m is a constant, the maximum number of possible partitions we must enumerate is at most O(2^m), which is also a constant. As long as max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) and max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S) can be solved in polynomial time, the enumeration-based partition algorithm is a polynomial time algorithm. Next, we provide an approximation ratio of Algorithm <ref>.
Assuming the existence of α-approximation algorithms for
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
for any A ⊆ [m], our enumeration-based partition algorithm (Algorithm <ref>) provides an α-approximation solution for P.0.
Proof: Assuming that A^* and B^* represent the optimal solution for P.1, let us consider the round of our algorithm where it enumerates the partition of A^* and B^*. In this round, we denote the solutions obtained as C_1 and C_2. Given that there exist α-approximation algorithms for max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) for any A ⊆ [m], by adopting this algorithm as a subroutine, we have
∑_i∈ A^* f_i(C_1) ≥αmax_S⊆Ω: |S|≤ k∑_i∈ A^* f_i(S)
and
∑_i∈ B^* f_i(C_2)≥αmax_S⊆Ω: |S|≤ k∑_i∈ B^* f_i(S).
Hence,
∑_i∈[m]max{f_i(C_1), f_i(C_2)}≥∑_i∈ A^* f_i(C_1) + ∑_i∈ B^* f_i(C_2)
≥α(max_S⊆Ω: |S|≤ k∑_i∈ A^* f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B^* f_i(S))
= α OPT_1
where the equality is by the assumption that A^* and B^* represent the optimal solution for P.1.
This, together with Lemma <ref>, implies that
∑_i∈[m]max{f_i(C_1), f_i(C_2)}≥α OPT_1≥α OPT_0.
This lemma is a consequence of the above inequality and the fact that the final solution obtained by our algorithm is at least as good as ∑_i∈[m]max{f_i(C_1), f_i(C_2)}.
Observe that if all f_i are monotone and submodular functions, then there exists (1-1/e)-approximation algorithms for max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) for any A ⊆ [m]. Therefore, by substituting α=1-1/e into Lemma <ref>, we obtain the following theorem.
Assume all f_i are monotone and submodular functions; then the enumeration-based partition algorithm (Algorithm <ref>) provides a (1-1/e)-approximation solution for P.0.
§ ALGORITHM DESIGN FOR LARGE M
When dealing with a large value of m, relying on an enumeration-based approach can become impractical. In this section, we introduce a sampling-based algorithm, outlined in Algorithm <ref>, that provides provable performance bounds. Instead of exhaustively enumerating all possible partitions of [m], we examine T random partitions. For each partition, we follow the same procedure as in Algorithm <ref> to compute two candidate solutions. Specifically, for each sampled partition, we employ a state-of-the-art α-approximation algorithm to solve two subproblems. Ultimately, we return the best pair of sets as the final solution.
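A minimal sketch of this sampling-based variant is given below (our own illustration; it reuses the hypothetical greedy subroutine from the previous sketch as the α-approximation oracle and samples each of the T partitions by assigning every function to A independently with probability 1/2).

```python
import random

def sampling_partition(funcs, ground_set, k, T, seed=0):
    """Sample T random partitions of [m] and return the best pair (C1, C2) found."""
    rng = random.Random(seed)
    m = len(funcs)
    best_val, best_pair = float("-inf"), (set(), set())
    for _ in range(T):
        A = {i for i in range(m) if rng.random() < 0.5}
        B = set(range(m)) - A
        C1 = greedy([funcs[i] for i in A], ground_set, k)   # greedy() as sketched earlier
        C2 = greedy([funcs[i] for i in B], ground_set, k)
        val = sum(max(f(C1), f(C2)) for f in funcs)
        if val > best_val:
            best_val, best_pair = val, (C1, C2)
    return best_pair, best_val
```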
In the following two lemmas, we provide two performance bounds for Algorithm <ref>. The first bound is independent of the number of samples T; thus, it holds even if T=1. The second bound depends on T, increasing as T increases.
Assuming the existence of α-approximation algorithms for
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
for any A ⊆ [m], our sampling-based algorithm (Algorithm <ref>) provides an α/2-approximation solution for P.0.
Proof: We first recall some notations from the proof of Lemma <ref>. Assume S_1^* and S_2^* are the optimal solution of P.0. We partition all m functions into two groups A' and B' such that every function in A' favors S_1^* and every function in B' favors S_2^*. That is,
A'={i∈ [m]| f_i(S_1^*) ≥ f_i(S_2^*)}
and
B'={i∈ [m]| f_i(S_1^*) < f_i(S_2^*)}.
Without loss of generality, we assume that ∑_i∈ A' f_i(S_1^*)≥∑_i∈ B' f_i(S_2^*), implying that ∑_i∈ A' f_i(S_1^*)≥ OPT_0/2. Now, consider an arbitrary partition sample, denoted as A and B, generated by our algorithm. We have
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S)
≥∑_i∈ A f_i(S_1^*) + ∑_i∈ B f_i(S_1^*) ≥∑_i∈ A' f_i(S_1^*) ≥ OPT_0/2
where the first inequality is by the observation that |S_1^*|≤ k, the second inequality is by the observation that A' ⊆ A∪ B and the third inequality is by the observation that ∑_i∈ A' f_i(S_1^*)≥ OPT_0/2. Because there exist α-approximation algorithms for max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) for any A ⊆ [m], by adopting this algorithm as a subroutine to compute C_1 and C_2, we have
∑_i∈ A f_i(C_1) ≥αmax_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
and
∑_i∈ B f_i(C_2)≥αmax_S⊆Ω: |S|≤ k∑_i∈ B f_i(S).
Hence,
∑_i∈[m]max{f_i(C_1), f_i(C_2)}≥∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2)
≥α(max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S))≥ (α/2) OPT_0
where the third inequality is by inequality (<ref>). This finishes the proof of this lemma.
Assuming the existence of α-approximation algorithms for
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
for any A ⊆ [m], our sampling-based algorithm (Algorithm <ref>), after T rounds, provides an αγ(T) (1/2 + ϵ/√(m))-approximation solution for P.0 in expectation, where γ(T)=1-(1/2+ϵe/π)^T.
Proof: Consider an arbitrary round of our algorithm, and let A and B denote the sampled partition, and let (C_1, C_2) denote the solution returned from this round. Observe that
∑_i∈[m]max{f_i(C_1), f_i(C_2)}≥∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2).
Hence, the expected value of ∑_i∈[m]max{f_i(C_1), f_i(C_2)}, where the expectation is taken over A, B, is at least 𝔼_A, B[∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2)]. Recall that our algorithm runs T rounds and returns the best (C_1,C_2) as the final solution, to prove this lemma, it suffices to show that the expected value of ∑_i∈[m]max{f_i(C_1), f_i(C_2)} is at least αγ(T) (1/2 + ϵ/√(m))OPT_0. To achieve this, we will focus on proving
𝔼_A, B[∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2)]≥αγ(T) (1/2 + ϵ/√(m))OPT_0. The rest of the proof is devoted to proving this inequality.
First,
𝔼_A, B[∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2)]
≥𝔼_A, B[αmax_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + αmax_S⊆Ω: |S|≤ k∑_i∈ B f_i(S)]
= α𝔼_A, B[max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) + max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S)]
=α𝔼_A[max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)] + α𝔼_B[max_S⊆Ω: |S|≤ k∑_i∈ B f_i(S)]
≥α𝔼_A[ ∑_i∈ A f_i(S_1^*)] + α𝔼_B[∑_i∈ B f_i(S_2^*)].
Next, we provide lower bounds for 𝔼_A[ ∑_i∈ A f_i(S_1^*)] and 𝔼_B[∑_i∈ B f_i(S_2^*)]. Recall that we defined A'={i∈ [m]| f_i(S_1^*) ≥ f_i(S_2^*)} and B'={i∈ [m]| f_i(S_1^*) < f_i(S_2^*)}. Now, for some β∈[0,1], let us denote the event as E, which occurs when the following condition holds for at least one partition (A, B) that is enumerated by our algorithm: |A∩ A'|/|A'|≥β. Because each item of A' is included in A independently with a probability of 1/2, for any β∈[0,1], we have the following:
𝔼_A[ ∑_i∈ A f_i(S_1^*)] ≥[1_E=1]·β∑_i∈ A' f_i(S_1^*).
Consider a random sample A from [m] and observe that each item of A' is included in A independently with a probability of 1/2, by an “anti-concentration” result on binomial distributions (Lemma 22.2 in <cit.>), we have
[|A∩ A'| ≥|A'|/2+ϵ√(|A'|)] ≥1/2 - ϵe/π.
This implies that
[|A∩ A'|/|A'|≥1/2 + ϵ/√(|A'|)] ≥1/2 - ϵe/π.
Given that |A'| ≤ m, we further have
[|A∩ A'|/|A'|≥1/2 + ϵ/√(m)] ≥1/2 - ϵe/π.
If we set β=1/2 + ϵ/√(m), then we can establish a lower bound on the probability of event E occurring after T rounds as follows:
[1_E=1] ≥ 1-(1-[|A∩ A'|/|A'|≥1/2 + ϵ/√(m)])^T
≥ 1-(1/2+ϵe/π)^T.
This, together with inequalities (<ref>), implies that
𝔼_A[ ∑_i∈ A f_i(S_1^*)] ≥[1_E=1]·β∑_i∈ A' f_i(S_1^*)
≥ (1-(1/2+ϵe/π)^T) · (1/2 + ϵ/√(m)) ∑_i∈ A' f_i(S_1^*).
Following the same argument, we can prove that
𝔼_B[ ∑_i∈ B f_i(S_2^*)]≥ (1-(1/2+ϵe/π)^T) · (1/2 + ϵ/√(m)) ∑_i∈ B' f_i(S_2^*).
Let γ(T)=1-(1/2+ϵe/π)^T. The above two inequalities, together with inequality (<ref>), imply that
𝔼_A, B[∑_i∈ A f_i(C_1) + ∑_i∈ B f_i(C_2)]≥α𝔼_A[ ∑_i∈ A f_i(S_1^*)] + α𝔼_B[∑_i∈ B f_i(S_2^*)]
≥αγ(T) (1/2 + ϵ/√(m)) ∑_i∈ A' f_i(S_1^*) + αγ(T) (1/2 + ϵ/√(m)) ∑_i∈ B' f_i(S_2^*)
= αγ(T) (1/2 + ϵ/√(m))( ∑_i∈ A' f_i(S_1^*)+∑_i∈ B' f_i(S_2^*))
= αγ(T) (1/2 + ϵ/√(m)) OPT_0.
This finishes the proof of this lemma.
By selecting a tighter bound derived from Lemma <ref> and Lemma <ref>, we can establish the following corollary.
Assuming the existence of α-approximation algorithms for
max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S)
for any A ⊆ [m], our Sampling-based algorithm (Algorithm <ref>), after T rounds, provides an max{1/2, γ(T) (1/2 + ϵ/√(m))}·α-approximation solution for P.0 in expectation where γ(T)=1-(1/2+ϵe/π)^T.
Observe that if all f_i are monotone and submodular functions, then there exists (1-1/e)-approximation algorithms for max_S⊆Ω: |S|≤ k∑_i∈ A f_i(S) for any A ⊆ [m]. Therefore, substituting α=1-1/e into Corollary <ref>, we derive the following theorem.
Assume all f_i are monotone and submodular functions, Sampling-based algorithm (Algorithm <ref>), after T rounds, provides an max{1/2, γ(T) (1/2 + ϵ/√(m))}·(1-1/e)-approximation solution for P.0 in expectation where γ(T)=1-(1/2+ϵe/π)^T.
Discussion on Scenarios with More than Two Candidates
We next discuss the case where we are allowed to keep l≥ 2 candidate solutions. In this extension, our aim is to select l candidate solutions, S_1, ⋯, S_l, and the utility of each user-specific function is determined by the superior solution among these candidates. Hence, our problem can be formulated as max_S_1, ⋯, S_l ⊆Ω∑_i∈[m]max{f_i(S_1), ⋯, f_i(S_l)} subject to |S_1|≤ k, ⋯, |S_l|≤ k
where k is the size constraint of a feasible solution. To tackle this challenge, we can utilize our enumeration-based partition algorithm (Algorithm <ref>) to find an approximate solution. The procedure involves enumerating all possible ways to partition the set [m] into l groups. For each partition, we employ a state-of-the-art (1-1/e)-approximation algorithm to solve the maximization problem within each group. This process generates l sets, and we then choose the best l sets among all partitions as the final solution. By following the same argument used to prove Theorem <ref>, we can show that this approach guarantees an (1-1/e)-approximation solution.
|
http://arxiv.org/abs/2409.03613v1 | 20240905151729 | Periodic Pitman transforms and jointly invariant measures | [
"Ivan Corwin",
"Yu Gu",
"Evan Sorensen"
] | math.PR | [
"math.PR"
] |
|
http://arxiv.org/abs/2409.02852v1 | 20240904162258 | Key Compression Limits for $k$-Minimum Value Sketches | [
"Charlie Dickens",
"Eric Bax",
"Alexander Saydakov"
] | cs.DS | [
"cs.DS",
"cs.IT",
"math.IT"
] |
|
http://arxiv.org/abs/2409.03294v1 | 20240905065956 | Federated Prototype-based Contrastive Learning for Privacy-Preserving Cross-domain Recommendation | [
"Li Wang",
"Quangui Zhang",
"Lei Sang",
"Qiang Wu",
"Min Xu"
] | cs.IR | [
"cs.IR"
] |
Federated Prototype-based Contrastive Learning for Privacy-Preserving Cross-domain Recommendation
Li Wang, Quangui Zhang, Lei Sang, Qiang Wu, Senior Member, IEEE, and Min Xu^*, IEEE, Member
Li Wang, Qiang Wu, and Min Xu are with the School of Electrical and Data Engineering, University of Technology Sydney, Sydney 2000, Australia. Shoujin Wang is with the institute of data science, University of Technology Sydney, Sydney 2000, Australia. Quangui Zhang is with the School of Artificial Intelligence, Chongqing University of Arts and Sciences, Chongqing 402160, China. *Corresponding author: Min Xu (e-mail: [email protected])
September 9, 2024
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Cross-domain recommendation (CDR) aims to improve recommendation accuracy in sparse domains by transferring knowledge from data-rich domains. However, existing CDR methods often assume the availability of user-item interaction data across domains, overlooking user privacy concerns. Furthermore, these methods suffer from performance degradation in scenarios with sparse overlapping users, as they typically depend on a large number of fully shared users for effective knowledge transfer. To address these challenges, we propose a Federated Prototype-based Contrastive Learning (CL) method for Privacy-Preserving CDR, named FedPCL-CDR. This approach utilizes non-overlapping user information and prototypes to improve multi-domain performance while protecting user privacy. FedPCL-CDR comprises two modules: local domain (client) learning and global server aggregation. In the local domain, FedPCL-CDR clusters all user data to learn representative prototypes, effectively utilizing non-overlapping user information and addressing the sparse overlapping user issue. It then facilitates knowledge transfer by employing both local and global prototypes returned from the server in a CL manner. Simultaneously, the global server aggregates representative prototypes from local domains to learn both local and global prototypes. The combination of prototypes and federated learning (FL) ensures that sensitive user data remains decentralized, with only prototypes being shared across domains, thereby protecting user privacy. Extensive experiments on four CDR tasks using two real-world datasets demonstrate that FedPCL-CDR outperforms the state-of-the-art baselines.
Contrastive Learning, Federated Learning, Prototype, and Cross-Domain Recommendation
§ INTRODUCTION
Cross-domain recommendation (CDR) has emerged as a critical strategy to address the challenge of data sparsity in recommendation systems by transferring knowledge, such as user-item interaction histories and review texts, across domains <cit.>.
Based on different recommendation scenarios, existing CDR can be divided into two categories: single-target CDR and multi-target CDR. The first genre <cit.> aims to improve recommendation performance in the target domain by utilizing rich information from the source domain. However, it cannot enhance model performance across multiple domains simultaneously. To address this issue, multi-target CDR <cit.> has emerged. Literature focusing on feature aggregation <cit.> and feature disentanglement <cit.> has been proposed to further improve recommendation performance. Feature aggregation methods typically learn representations in each domain separately and then design an aggregation function to combine these representations. On the other hand, feature disentanglement approaches concentrate on separating domain-common and domain-specific embeddings and transferring domain-common embeddings across domains. Although these works have achieved good performance, they still face two major challenges.
CH1. How to effectively protect user privacy when transferring knowledge across domains?
Most existing literature <cit.> cannot solve this problem well. These works usually assume that user-item ratings or representations are directly transferred across domains. However, in privacy-preserving settings, these methods are unsuitable <cit.>.
To resolve this problem, some privacy-preserving CDR methods <cit.> have gained lots of attention. For example, PriCDR <cit.> first introduces Differential Privacy (DP) technology to publish the source rating matrix and subsequently conduct CDR modeling. P2FCDR <cit.> independently learns embeddings in each domain using orthogonal functions and applies the local differential privacy (LDP) technique to protect these embeddings.
However, these approaches introduce a trade-off between privacy preservation and recommendation accuracy, as the added noise can distort the underlying data patterns.
CH2. How to improve recommendation performance with few overlapping users across domains? Many existing CDR methods depend on fully overlapping users as a bridge to transfer knowledge across domains <cit.>. For example, as shown in Figure <ref>, John is an overlapping user with interactions in both the Movie and Book domains. If knowledge transfer is solely based on fully overlapping users, John may be interested in the comedic book “Bossypants" because he watches movies in the same genre, like “Anchorman." However, in real-world datasets, there are very few overlapping users. For instance, the Amazon dataset has only a 5% overlapping user ratio <cit.>. As a result, the model performance will be degraded with such sparse overlapping users.
To address this problem, we can utilize non-overlapping user information to improve recommendation performance. For instance, Lily shares similar interests with John based on their common interaction with the movie “Anchorman". By effectively utilizing Lily's interests, we can infer that John may like the thriller book “Gone Girl" based on Lily's preference for the thriller movie “Get Out", which couldn't be realized by relying solely on fully overlapping users.
To address these challenges, we propose FedPCL-CDR, a federated prototype-based contrastive learning (CL) approach for privacy-preserving CDR (PPCDR). Our method consists of two key modules, i.e., local domain learning and global server aggregation, where user-item interaction histories and review texts are stored in local domains and knowledge is transferred via prototypes within the federated learning (FL) framework, ensuring user privacy. Specifically, in the local domain, we first learn comprehensive user and item embeddings from user-item interactions and review texts. Subsequently, we utilize k-means clustering <cit.> to generate prototypes (cluster centroids). On one hand, these prototypes convey overlapping and non-overlapping user preferences as they are derived from the interest alignments of all entities in the domain. On the other hand, these prototypes serve as generalized representations of group-level preferences, making it more difficult for attackers to infer sensitive information about individual users. We then select representative prototypes based on overlapping users and upload them to the global server. The global server, in turn, models both local and global prototypes by aggregating these representative prototypes and transmits them back to the respective local domains. To effectively transfer knowledge, the local domain refines user embeddings by using both local and global prototypes in a CL manner. This dual-prototype approach allows for transferring knowledge at varying granularities, enabling more nuanced learning of user embeddings from different perspectives.
In summary, our proposed model makes the following contributions:
* We introduce a novel federated prototype-based CL approach for PPCDR that aims to solve the sparse overlapping user and privacy protection concerns.
* Our method effectively leverages non-overlapping users for knowledge transfer by clustering all users, thereby enhancing model performance in scenarios with sparse overlapping users across domains.
* We utilize local and global prototypes to transfer knowledge within the FL framework, protecting user privacy.
* Extensive experiments on four CDR tasks from two large-scale real-world datasets, Amazon and Douban, demonstrate the effectiveness of FedPCL-CDR compared to state-of-the-art baselines.
§ RELATED WORK
§.§ Cross Domain Recommendation
Cross-domain recommendation (CDR) aims to address the challenge of data sparsity by transferring cross-domain knowledge <cit.>. The core step in CDR involves designing an effective transfer method to improve recommendation accuracy in sparse domains.
With the advancements in deep learning, various transfer methods have emerged in CDR, including
learning mapping functions across domains <cit.>, feature combination <cit.>, feature alignment <cit.>,
and transfer methods based on Graph Neural Networks (GNNs) <cit.>.
However, these approaches depend on fully overlapping users for knowledge transfer and overlook concerns about user privacy leakage. In this paper, we propose FedPCL-CDR to address these challenges.
§.§ Privacy-Preserving CDR
With the increasing attention on user privacy, many scholars have begun studying PPCDR methods <cit.>.
PriCDR <cit.> and PPGenCDR <cit.> both use DP to protect user-item ratings during knowledge transfer. P2FCDR <cit.> is a privacy-preserving federated framework that learns an orthogonal mapping matrix to transform embeddings across domains and applies LDP to the transformed embeddings for user privacy protection. Meanwhile, PPCDR <cit.> introduces a federated graph framework that utilizes FL and LDP technologies to protect user privacy. Despite their effectiveness, these PPCDR methods must strike a balance between utility and privacy. We propose a prototype-based FL framework to address these limitations.
§.§ Contrastive Learning
Contrastive Learning (CL) has been widely used in computer vision <cit.> and natural language processing <cit.>.
It is a self-supervised learning technique that aims to maximize the mutual information between two representations. To achieve this, InfoNCE <cit.> is proposed to
learn representations by contrasting positive pairs (similar samples) against negative pairs (dissimilar samples), which
discovers the semantic information shared by different views. Nowadays, CL has been applied to the CDR to improve representation learning <cit.>. For instance, DCCDR <cit.> leverages CL to learn domain-specific and domain-invariant representations. Meanwhile, CL-DTCDR <cit.> utilizes CL to learn more representative user and item embeddings with user-item interaction data and side information.
However, these methods directly utilize user-item ratings or representations to construct positive and negative pairs across domains, which is not feasible under privacy-preserving constraints.
In this work, we construct prototype-based CL tasks to transfer cross-domain user interests, thereby protecting user privacy.
§ METHODOLOGY
§.§ Definitions and Notations
We assume there are M domains (clients) denoted as {D^1, D^2,...,D^M} and a global server, where D^i denotes the i-th domain.
Within each domain, there exists a user set U^i and an item set V^i.
There are partially overlapping users and non-overlapping items across domains. The overlapping user set is denoted as U^o.
Let R^i∈{0,1}^|U^i|× |V^i| represent the binary user-item interaction matrix, which is private and cannot be shared across domains.
Figure <ref> depicts the overall framework of FedPCL-CDR. We illustrate the paradigm for domain D^i, and the corresponding paradigm for other domains can be easily inferred accordingly.
§.§ Local domain Learning Module
§.§.§ Graph Representation Learning
Inspired by the effectiveness of GNNs in capturing high-dimensional and complex relationships between users and items, we adopt LightGCN <cit.> to learn embeddings for user and item IDs as well as their review texts. We construct a graph G^i, where nodes represent users and items, and edges indicate interactions between them. By utilizing the graph convolution and propagation layers of LightGCN, we encode user and item embeddings based on G^i. Specifically, we denote ID embeddings and review text embeddings at the l-th layer as E_l^i(id) and E_l^i(rev), respectively. Initially, ID embeddings E_0^i(id) are randomly initialized, while the initial review text embeddings E_0^i(rev) are learned using the document embedding model Doc2Vec <cit.>.
Given the graph G^i, E_l^i(id) and E_l^i(rev) are calculated as follows:
E_l^i(id) = (D^-1/2AD^-1/2)E_l-1^i(id);
E_l^i(rev) = (D^-1/2AD^-1/2)E_l-1^i(rev),
where A is the adjacency matrix of G^i and D is the corresponding diagonal degree matrix.
After l times of propagation, we can generate the final user and item ID embedding matrices E_u^i(id) and E_v^i(id) by concatenating multiple embedding matrices from E_0^i(id) to E_l^i(id). Similarly, we obtain the user and item review text embedding matrix
E_u^i(rev) and E_v^i(rev). Finally, we concatenate the ID embeddings and review text embeddings to learn comprehensive user and item embeddings:
E_u^i = f(E_u^i(id);E_u^i(rev));
E_v^i = f(E_v^i(id);E_v^i(rev)),
where f denotes the aggregation function; here, we use element-wise sum aggregation.
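As a rough illustration of this step, the sketch below propagates ID and review-text embeddings through LightGCN-style layers and aggregates them by element-wise sum. The dense normalized adjacency, tensor names, and toy interaction graph are assumptions made for readability, not details of the paper's implementation (which could, e.g., concatenate the per-layer matrices before aggregation).

```python
import torch

def lightgcn_propagate(adj_norm, emb_0, num_layers):
    """Apply `num_layers` LightGCN propagation steps E_l = (D^-1/2 A D^-1/2) E_{l-1}
    (no feature transform, no nonlinearity) and sum the per-layer outputs."""
    embs = [emb_0]
    for _ in range(num_layers):
        embs.append(adj_norm @ embs[-1])
    return torch.stack(embs, dim=0).sum(dim=0)

# toy bipartite graph: 4 users (nodes 0-3) and 3 items (nodes 4-6)
num_nodes, dim, num_layers = 7, 8, 2
adj = torch.zeros(num_nodes, num_nodes)
for u, v in [(0, 4), (1, 4), (1, 5), (2, 5), (3, 6)]:   # user-item interactions
    adj[u, v] = adj[v, u] = 1.0
deg = adj.sum(dim=1).clamp(min=1.0)
adj_norm = adj / torch.sqrt(deg[:, None] * deg[None, :])  # symmetric normalization

id_emb_0 = torch.nn.Parameter(0.1 * torch.randn(num_nodes, dim))   # random ID init
rev_emb_0 = torch.randn(num_nodes, dim)                            # e.g. Doc2Vec vectors

id_emb = lightgcn_propagate(adj_norm, id_emb_0, num_layers)
rev_emb = lightgcn_propagate(adj_norm, rev_emb_0, num_layers)
final_emb = id_emb + rev_emb                 # element-wise sum aggregation f
user_emb, item_emb = final_emb[:4], final_emb[4:]
```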
§.§.§ Clustering and Prototype Selection
In this subsection, our objective is to generate representative prototypes by clustering the user embeddings E_u^i.
The clustering process incorporates data from all users within a domain, not limited to overlapping users.
This not only leverages the shared knowledge among overlapping users but also explores the effective
utilization of knowledge from non-overlapping users.
We begin by introducing the k-means algorithm <cit.>, which aims to cluster user embeddings into K groups.
We obtain the cluster centroid set with K clusters:
T^i = {t_j^i}_j=1^K= Kmeans(E_u^i).
Each cluster is defined by its centroid, acting as the central representation for that particular group.
These centroids are regarded as prototypes, providing a comprehensive integration of information from similar users.
As a result, we derive the prototype set T^i={t_1^i, t_2^i, ..., t_K^i}.
We then select representative prototypes by considering overlapping users across domains.
This involves choosing prototypes whose clusters include overlapping users.
The rationale behind this choice lies in the intention that overlapping users have similar interests in different domains.
The representative prototype set is calculated as follows:
C^i = {t_j^i}_j=1^K→{c_j^i}_j=1^K', K'≤ K.
In addition, we select overlapping users in each cluster to form the overlapping user set O^i:
O^i = {o_j^i}_j=1^K', o_j^i ⊂ U^o.
Finally, we upload each domain's representative prototype set C^i and overlapping user set O^i to the global server.
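A minimal sketch of this clustering-and-selection step, using scikit-learn's k-means, is shown below; the function and variable names and the toy data are illustrative assumptions rather than the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_representative_prototypes(user_emb, user_ids, overlap_ids, n_clusters=10, seed=0):
    """Cluster all users of a domain and keep only the centroids (prototypes)
    whose clusters contain at least one overlapping user."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(user_emb)
    prototypes, overlap_sets = [], []
    for k in range(n_clusters):
        members = {uid for uid, lab in zip(user_ids, km.labels_) if lab == k}
        shared = members & set(overlap_ids)
        if shared:                                 # representative prototype c_k^i
            prototypes.append(km.cluster_centers_[k])
            overlap_sets.append(shared)            # overlapping users o_k^i
    return np.stack(prototypes), overlap_sets      # uploaded to the global server

# toy usage
rng = np.random.default_rng(0)
C_i, O_i = build_representative_prototypes(
    user_emb=rng.normal(size=(50, 16)),
    user_ids=list(range(50)),
    overlap_ids=list(range(0, 50, 5)),             # every 5th user overlaps across domains
    n_clusters=6,
)
print(C_i.shape, [len(o) for o in O_i])
```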
§.§.§ Prototype-based Contrastive Learning
In the previous subsection, we uploaded representative prototypes to the global server.
The server then aggregates these prototypes to derive both local and global prototypes (as further discussed in the next section)
and subsequently returns them to local domains. Due to data privacy constraints, direct transfer of original data is not feasible. To address this problem, we employ prototypes and introduce prototype-based CL losses, which comprise a global part and a local part. These prototypes represent the collective preferences of multiple users rather than any single individual, making it more difficult
for an attacker to associate a prototype with a specific user.
To transfer knowledge from other domains to domain D^i, we enforce the alignment of user embeddings with corresponding global prototypes while distancing them from distinct global prototypes. The global prototype-based CL loss is defined as follows:
L_global^i = -logexp(f(e_u^i,g_k^i))/exp(f(e_u^i,g_k^i))+∑_g_j^i∈ A(g_k^i),j≠ kexp(f(e_u^i,g_j^i)),
where g_k^i denotes the global prototype corresponding to cluster k, to which the user embedding e_u^i belongs. We regard (e_u^i,g_k^i) that belongs to the same cluster as a positive pair.
Conversely, g_j^i denotes the global prototype corresponding to cluster j, to which user embedding e_u^i doesn't belong, and (e_u^i,g_j^i) forms a negative pair.
A(g_k^i) is the set of global prototypes excluding g_k^i.
f denotes a similarity function. We define it as:
f(e_u^i,g_k^i) = (e_u^i·g_k^i)/(||e_u^i||×||g_k^i||·τ),
where τ represents the temperature coefficient, which controls the concentration strength of representation <cit.>.
In addition to the global prototype-based CL loss, we introduce the local prototype-based CL loss to align e_u^i with local prototypes of each domain through domain-wise CL in the latent space, enhancing inter-domain knowledge sharing. The local prototype-based CL loss is defined as follows:
L_local^i=-1/M∑_m=1^Mlogexp(f(e_u^i,l_k^m))/exp(f(e_u^i,l_k^m))+∑_l_j^i∈ A(l_k^i),j≠kexp(f(e_u^i,l_j^i)),
where l_k^m denotes the local prototype of cluster k from domains that have overlapping users with the cluster to which the user embedding e_u^i belongs.
l_k^i indicates the local prototype of cluster k that includes e_u^i, and A(l_k^i) is the set of local prototypes excluding l_k^i.
The global and local prototypes capture cluster-relevant information at different granularity, guiding the transfer of user interests from various perspectives.
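Both losses are InfoNCE objectives over temperature-scaled cosine similarities between user embeddings and prototypes. A minimal sketch is shown below, assuming a batch of user embeddings together with the index of the cluster each user belongs to; shapes, names, and the toy tensors are illustrative.

```python
import torch
import torch.nn.functional as F

def proto_infonce(user_emb, prototypes, pos_idx, tau=0.2):
    """InfoNCE loss: for each user, the prototype of its own cluster (pos_idx)
    is the positive and all other prototypes are negatives."""
    sims = F.cosine_similarity(user_emb[:, None, :], prototypes[None, :, :], dim=-1) / tau
    return F.cross_entropy(sims, pos_idx)   # = -log softmax score of the positive prototype

# toy usage: batch of 32 users, 10 clusters, 3 domains, embedding size 64
B, K, M, d = 32, 10, 3, 64
user_emb = torch.randn(B, d, requires_grad=True)
global_protos = torch.randn(K, d)                         # G^i from the server
local_protos = [torch.randn(K, d) for _ in range(M)]      # one set per domain
pos_idx = torch.randint(0, K, (B,))                       # cluster of each user

loss_global = proto_infonce(user_emb, global_protos, pos_idx)
loss_local = torch.stack([proto_infonce(user_emb, P, pos_idx) for P in local_protos]).mean()
(loss_global + loss_local).backward()
```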
§.§.§ Local Training
After refining the user embedding e_u^i, we concatenate it with the item embedding e_v^i and feed them into MLP layers to predict the user's preferences.
We aim to minimize the following loss function:
L_prd^i = l(r̂^i,r^i),
where l denotes the cross-entropy loss function, r̂^i is the predictive label, and r^i is the ground-truth label.
The total loss function is defined as follows:
L^i = L_prd^i + α(L_global^i +L_local^i),
where α is the trade-off parameter that balances the prediction loss and prototype-based CL losses. The detailed training process is in Algorithm 1 of Appendix A.
§.§ Global Server Aggregation
After receiving representative prototype set C and overlapping user set O from all domains, the global server calculates global prototypes for each domain.
First, for overlapping user set o_k^i∈ O^i, we construct the prototype set Ĉ_k^i that includes all representative prototypes containing overlapping users with o_k^i across domains:
Ĉ_k^i = ⋃_i'∈ D,k'≤K'{c_k'^i'|o_k'^i'∩ o_k^i ≠∅}.
Then, we calculate the global prototype g_k^i for cluster k:
g_k^i= 1/K̂∑_j=1^K̂ĉ_j^i,
where K̂ = |Ĉ_k^i| is the number of prototypes in Ĉ_k^i and ĉ_j^i denotes its j-th element.
Finally, we form the global prototype set G^i={g_1^i, g_2^i,...,g_K'^i}.
The global prototype incorporates user preferences across domains from a high-level perspective.
Different from the global prototypes, which average related prototypes into a single vector, the local prototypes are formed by selecting, via similarity calculation, one representative prototype from each related domain.
Specifically, for each representative prototype c_k^i, we first calculate the cosine similarity between c_k^i and representative prototypes from other domains that include overlapping users with o_k^i. Then, we select the representative prototype with maximum similarity to c_k^i in each domain to form the local prototype set L_k^i for cluster k:
L_k^i = {ĉ_k^i}_k=1^K̂→{l_k^i}_k=1^M, M≤K̂.
The local prototype set can be represented as L^i={L_1^i,L_2^i,...,L_K'^i}.
After aggregation, the global server sends the global and local prototype sets G^i and L^i into the local domain D^i.
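A sketch of one possible server-side implementation is given below: for every uploaded prototype, the global prototype averages all prototypes (from any domain) whose clusters share an overlapping user with it, while the local prototypes keep, from each other such domain, the single most cosine-similar prototype. The data layout and names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def server_aggregate(reps, overlaps):
    """reps[d][k]: representative prototype k uploaded by domain d (1-D array).
    overlaps[d][k]: set of overlapping-user ids of that cluster.
    Returns global prototypes G[(d, k)] and local prototype lists L[(d, k)]."""
    G, L = {}, {}
    n_dom = len(reps)
    for d in range(n_dom):
        for k in range(len(reps[d])):
            related = [(d2, k2) for d2 in range(n_dom) for k2 in range(len(reps[d2]))
                       if overlaps[d2][k2] & overlaps[d][k]]
            G[(d, k)] = np.mean([reps[d2][k2] for d2, k2 in related], axis=0)
            L[(d, k)] = []
            for d2 in range(n_dom):
                cands = [reps[d2][k2] for dd, k2 in related if dd == d2]
                if d2 != d and cands:          # most similar prototype from domain d2
                    L[(d, k)].append(max(cands, key=lambda c: cos_sim(c, reps[d][k])))
    return G, L

# toy usage: two domains, two representative prototypes each
rng = np.random.default_rng(0)
reps = [rng.normal(size=(2, 4)), rng.normal(size=(2, 4))]
overlaps = [[{1, 2}, {3}], [{2}, {9}]]
G, L = server_aggregate(reps, overlaps)
```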
§.§ Privacy Preserving Analysis
The proposed FedPCL-CDR effectively ensures user privacy. Firstly, within the FL framework, the data in each domain remains private and localized, significantly reducing the risk of user privacy leakage <cit.>. Secondly, knowledge transfer across domains is facilitated through prototypes, which naturally protect data privacy <cit.>. These prototypes are 1-D vectors derived from averaging low-dimensional representations of samples within the same cluster, which is an irreversible process.
We provide a detailed example in Appendix B.
§ EXPERIMENTS
§.§ Experimental Settings
§.§.§ Datasets.
Motivated by CDR methods <cit.>, we conduct experiments on four real-world benchmark subsets from the Amazon dataset[https://cseweb.ucsd.edu/ jmcauley/datasets/amazon/links.html]: Phone, Electronics (Elec), Clothing, Shoes and Jewelry (Cloth), and Sport, and three subsets from the Douban dataset[https://www.dropbox.com/s/u2ejjezjk08lz1o/Douban.tar.gz?e=2&dl=0]: Book, Movie, and Music.
We combine them into four CDR tasks. Table 1 of Appendix C presents basic statistics for these datasets.
For each dataset, we transform the explicit ratings into implicit feedback.
To improve data quality, we employ filtering criteria, removing records with fewer than 10 interactions for all users and items, in accordance with existing research <cit.>.
§.§.§ Parameter Settings and Evaluation.
We train the model by optimizing the loss function (<ref>) using the Adam optimizer with a learning rate of 0.001.
The weight α for the prototype-based CL losses is set to 0.01. Additionally, the embedding size is fixed at 64, and the batch size is set to 128. The temperature coefficient in CL is established at 0.2, and the cluster number is set to 10. Furthermore, we apply batch normalization, dropout, and early stopping techniques to prevent overfitting. We use Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) as evaluation metrics, which are frequently used in CDR methods <cit.>.
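For reference, a small sketch of how HR@K and NDCG@K can be computed under a leave-one-out protocol (one held-out positive item ranked against candidate items per test user) is shown below; the protocol details are an assumption, since the text only names the metrics.

```python
import numpy as np

def hr_ndcg_at_k(rank_of_positive, k=10):
    """rank_of_positive: 1-based rank of each test user's held-out positive item
    within its candidate list."""
    ranks = np.asarray(rank_of_positive, dtype=float)
    hits = ranks <= k
    hr = hits.mean()                                   # fraction of users with a top-k hit
    ndcg = np.where(hits, 1.0 / np.log2(ranks + 1.0), 0.0).mean()
    return hr, ndcg

print(hr_ndcg_at_k([1, 3, 12, 7], k=10))               # e.g. HR@10 = 0.75
```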
§.§.§ Baseline Methods.
We compare the performance of FedPCL-CDR with SOTA baselines including NeuMF <cit.>, LightGCN <cit.>, FedNCF <cit.>, GA-DTCDR <cit.>, NMCDR <cit.>, CL-DTCDR <cit.>, GA-MTCDR-P <cit.>, PriCDR <cit.>, and P2FCDR <cit.>, which are widely used in CDR approaches <cit.>. In Appendix D, we provide detailed descriptions of these baselines.
§.§ Experimental Results and Analysis
§.§.§ Performance Evaluation.
We evaluate the performance of FedPCL-CDR and the baselines using commonly used evaluation metrics, HR@10 and NDCG@10. From the experimental results in Table <ref>, we can observe that:
* Our model, FedPCL-CDR, surpasses other baselines, showcasing its capability to achieve satisfactory performance while also safeguarding user privacy. Specifically, FedPCL-CDR outperforms the top-performing CDR baseline by an average of 7.38% in HR@10 and 4.86% in NDCG@10 across all tasks. This improvement can be attributed to the following reasons: (1)
FedPCL-CDR efficiently utilizes non-overlapping user data to transfer cross-domain knowledge, which is particularly beneficial in scenarios with sparse overlapping users, such as Phone&Sport. (2) By constructing prototype-based CL tasks, FedPCL-CDR achieves more effective knowledge transfer.
* FedPCL-CDR outperforms single-domain federated methods such as FedNCF. This demonstrates the significant role of cross-domain knowledge in enhancing recommendation performance within the FL framework. In addition, FedPCL-CDR performs better than PriCDR and P2FCDR, indicating that methods leveraging prototypes and FL for user privacy protection surpass those utilizing differential privacy technology. Moreover, FedPCL-CDR exceeds the performance of GA-DTCDR, which depends on fully overlapping users for knowledge transfer. This indicates that effectively utilizing non-overlapping user information can improve model performance. Finally, although our method and CL-DTCDR both use CL to transfer knowledge, FedPCL-CDR still performs better than CL-DTCDR, showing that our method not only protects user privacy but also improves model performance.
* GNN-based methods outperform non-graph methods, such as LightGCN vs NeuMF. This demonstrates that incorporating high-order neighbor information can improve model accuracy.
* CDR methods consistently outperform single-domain approaches, as evidenced by the comparison between GA-DTCDR and NeuMF. This shows that cross-domain knowledge can alleviate the data-sparsity issue.
§.§.§ Ablation Studies.
To validate the effectiveness of each component in FedPCL-CDR, we conducted ablation experiments.
We created two variants:
(1) w/o loc-proto: we eliminate the local prototype-based CL loss.
(2) w/o glob-proto: we remove the global prototype-based CL loss.
The results of the ablation studies are shown in Table <ref>. We can observe that:
(1) The inferior performance of models w/o loc-proto and w/o glob-proto demonstrates the significant contributions of local and global prototype-based CL to the outcome.
(2) In general, w/o loc-proto contributes more, which shows that local prototype-based CL plays an important role in improving model performance.
In conclusion, each component in FedPCL-CDR plays a crucial role, demonstrating the rationality and effectiveness of our design.
§.§.§ Performance for different proportions of overlapping users.
To assess FedPCL-CDR's capability in addressing sparse overlapping users within CDR, we manipulate the overlapping ratio specifically for Task 1 across different settings. These varying ratios signify different levels of commonality, where a higher ratio indicates a greater number of overlapping users across domains. For instance, in Task 1 with the “Phone-Sport" dataset and an overlapping ratio of 30%, the number of overlapping users is computed as 655 * 30% = 196. Due to space limitations, we report results about several representative CDR methods. The corresponding results with different overlapping ratios are shown in Table <ref>. As the overlapping ratio increases, the performance of all models demonstrates improvement. This is intuitively sensible, as a higher overlapping ratio implies a greater number of shared users, facilitating a more straightforward transfer of cross-domain knowledge. The performance of GA-DTCDR and PriCDR shows significant fluctuations, primarily because they heavily rely on fully overlapping users. In contrast, both NMCDR, CL-DTCDR and our method, FedPCL-CDR, demonstrate relatively minor changes, indicating that effectively transferring knowledge across non-overlapping users can enhance performance and ensure model stability.
§.§.§ Empirical Study of Privacy.
In this subsection, we demonstrate the privacy-preserving capabilities of our FedPCL-CDR model by simulating an attack in which an attacker attempts to reconstruct the original user embeddings from intercepted prototypes. We analyze the difficulty of this reconstruction and evaluate the effectiveness of our privacy-preserving mechanisms. We assume that an attacker intercepts the local prototypes during the client-server communication. The attacker employs a deep neural network model to infer the original user embeddings from these intercepted prototypes, training the model to minimize the difference between the reconstructed and actual embeddings.
We use Mean Squared Error (MSE) as an evaluation metric to measure the accuracy of the reconstructed embeddings compared to the original ones. We conducted experiments on Tasks 1 and 2 and reported the results in Table <ref>. These high MSE values indicate that the attacker faces significant difficulty in accurately inferring the original user embeddings from the intercepted prototypes, demonstrating the strong privacy-preserving capabilities of our model.
§.§.§ Impact of Hyper-parameters.
In this section, we assess the model's performance across different configurations of two pivotal parameters: the weight parameter α associated with prototype-based CL losses, and the cluster number c. Due to space limitations, we only report HR@10 and NDCG@10 on Tasks 1 (Phone&Sport) and 3 (Movie&Music).
* Impact of α.
We employ the parameter α to control the degree of knowledge transfer across domains. To evaluate its impact, we conduct experiments with different α values, namely [0.001, 0.01, 0.1, 0.2]. Figure <ref> illustrates the outcomes for HR@10 and NDCG@10.
We find that as the weight parameter α increases, the performance first rises and then decreases. FedPCL-CDR achieves optimal results when α is set to 0.01.
* Impact of cluster number c.
The number of clusters significantly influences the generalization of prototypes, thereby affecting the learning of user preferences. As shown in Figure <ref>, our model reaches its peak performance when the number of clusters is set to 10. With the increase in the number of clusters, the HR@10 and NDCG@10 metrics initially rise, reaching a maximum at c=10, and subsequently decline. This trend can be attributed to the fact that an excessive number of clusters results in overly specific prototypes, which lack generalization capabilities and lead to suboptimal knowledge transfer.
§ CONCLUSION AND FUTURE WORK
In this paper, we propose FedPCL-CDR, a federated prototype-based contrastive learning approach for PPCDR that addresses the data sparsity issue while protecting user privacy. Within FedPCL-CDR, the local domain learning module first learns comprehensive user and item embeddings from user-item interactions and review texts, clusters users into prototypes, and uploads only representative prototypes to the global server.
The global server aggregation module then derives local and global prototypes and transmits them back to the local domains, where they are used in prototype-based contrastive learning to refine user embeddings within the FL framework, so that cross-domain knowledge is transferred without exchanging raw user data. Extensive experimental results on four CDR tasks based on the real-world Amazon and Douban datasets demonstrate the effectiveness of our proposed FedPCL-CDR.
Our study assumes partially overlapping users and non-overlapping items across domains. Our method is not applicable to scenarios with no user overlap or with only partial item overlap. Future work includes exploring effective methods to address these challenges.
|
http://arxiv.org/abs/2409.03740v1 | 20240905175354 | Differentiable Discrete Event Simulation for Queuing Network Control | [
"Ethan Che",
"Jing Dong",
"Hongseok Namkoong"
] | cs.LG | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC"
] |
Differentiable Discrete Event Simulation
for Queuing Network Control
Ethan Che Jing Dong Hongseok Namkoong
Columbia Business School
§ ABSTRACT
Queuing network control is essential for managing congestion in job-processing systems
such as service systems, communication networks, and manufacturing processes. Despite growing interest in applying reinforcement learning (RL) techniques, queueing network control
poses distinct challenges, including high stochasticity, large state and action spaces, and lack of stability.
To tackle these challenges, we propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our main insight is that by implementing a well-designed smoothing technique for discrete event dynamics, we can compute policy gradients for large-scale queueing networks using auto-differentiation software (e.g., Tensorflow, PyTorch) and GPU parallelization. Through extensive empirical experiments, we observe that our policy gradient estimators are several orders of magnitude more accurate than typical REINFORCE-based estimators.
In addition, we propose a new policy architecture, which drastically improves stability while maintaining the flexibility of neural-network policies.
In a wide variety of scheduling and admission control tasks, we demonstrate that training control policies with pathwise gradients leads to a 50-1000x improvement in sample efficiency over state-of-the-art RL methods.
Unlike prior tailored approaches to queueing, our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments and those with non-exponential interarrival/service times.
§ INTRODUCTION
Queuing models are a powerful modeling tool to conduct performance analysis and optimize operational policies in diverse applications such as service systems (e.g., call centers <cit.>, healthcare delivery systems <cit.>, ride-sharing platforms <cit.>, etc), computer and communication systems <cit.>, manufacturing systems <cit.>, and financial systems (e.g., limit order books <cit.>). Standard tools for queuing control analysis involve establishing structural properties of the underlying Markov decision process (MDP) or leveraging analytically more tractable approximations such as fluid <cit.> or diffusion approximations <cit.>. These analytical results often give rise to simple control policies that are easy to implement and interpret. However, these policies only work under restrictive modeling assumptions and can be highly sub-optimal outside of these settings. Moreover, deriving a good policy for a given queuing network model requires substantial queuing expertise and can be theoretically challenging.
Recent advances in reinforcement learning (RL) have spurred growing interest in applying learning methodologies to solve queuing control problems, which benefit from increased data and computational resources <cit.>. These algorithms hold significant potential for generating effective controls for complex, industrial-scale networks encountered in real-world applications, which typically fall outside the scope of theoretical analysis.
However, standard model-free RL algorithms <cit.> often under-perform in queuing control, even when compared to simple queuing policies <cit.>, unless proper modifications are made. This under-performance is primarily due to the unique challenges posed by queuing networks, including (1) high stochasticity of the trajectories, (2) large state and action spaces, and (3) lack of stability guarantees under sub-optimal policies <cit.>.
For example, when applying policy gradient methods, typical policy gradient estimators based on coarse feedback from the environment (observed costs) suffer prohibitive error due to high variability (see, e.g., the estimators in Figure <ref>).
To tackle the challenges in applying off-the-shelf RL solutions for queuing control, we propose a new scalable framework for policy optimization that incorporates domain-specific queuing knowledge. Our main algorithmic insight is that queueing networks possess key structural properties that allow for several orders of magnitude more accurate gradient estimation.
By leveraging the fact that the dynamics of discrete event simulations of queuing networks are governed by observed exogenous randomness (interarrival and service times), we propose a differentiable discrete event simulation framework. This framework enables the computation of a gradient of a performance objective (e.g., cumulative holding cost) with respect to actions.
Our proposed gradient estimator, denoted as the pathwise estimator, can then be used to efficiently optimize the parameters of a control policy through stochastic gradient descent (SGD). By utilizing the known structure of queuing network dynamics, our approach provides finer-grained feedback on the sensitivity of the performance objective to any action taken along the sample path. This offers an infinitesimal counterfactual analysis: how the performance metric would change if the scheduling action were slightly perturbed.
Rather than relying on analytic prowess to compute these gradients, we utilize the rapid advancements in scalable auto-differentiation libraries such as PyTorch <cit.> to efficiently compute gradients over a single sample path or a batch of sample paths.
Our proposed approach supports very general control policies, including neural network policies, which have the potential to improve with more data and computational resources.
Notably, our method seamlessly handles large-scale queuing networks and large batches of data via GPU parallelization. Unlike off-the-shelf RL solutions whose performance is exceedingly sensitive to implementation details <cit.>, our method is easy to implement (see e.g., Figure <ref>) and requires minimal effort for parameter tuning.
Across a range of queueing networks, we empirically observe that our estimator substantially improves the sample efficiency and stability of learning algorithms for queuing network control while preserving the flexibility of learning approaches. In Figure <ref>, we preview our main empirical findings, which show that pathwise gradients lead to a 50-1000x improvement in sample efficiency over model-free policy gradient estimators (e.g., REINFORCE <cit.>).
Buoyed by the promising empirical results, we
provide several theoretical insights explaining the observed efficiency gains.
Our proposed approach draws inspiration from gradient estimation strategies developed in the stochastic modeling and simulation literature, particularly infinitesimal perturbation analysis (IPA) <cit.>. While IPA has been shown to provide efficient gradient estimators for specific small-scale queuing models (e.g., the G/G/1 queue), it is well-known that unbiased IPA estimates cannot be obtained for general multi-class queuing networks due to non-differentiability of the sample path <cit.>. Our framework overcomes this limitation by proposing a novel smoothing technique based on insights from fluid models/approximations for queues and tools from the machine learning (ML) literature. To the best of our knowledge, our method is the first to provide a gradient estimation framework capable of handling very general and large-scale queuing networks and various control policies. Our modeling approach is based on discrete-event simulation models, and as a result, it can accommodate non-stationary and non-Markovian inter-arrival and service times, requiring only samples instead of knowledge of the underlying distributions.
Our second contribution is a simple yet powerful modification to the control policy architecture. It has been widely observed that training a standard RL algorithm, such as proximal policy optimization <cit.> (PPO), may fail to converge due to instabilities arising from training with random initialization. To address this issue, researchers have proposed either switching to a stabilizing policy when instability occurs <cit.> or imitating (behavior cloning) a stabilizing policy at the beginning <cit.>. However, both methods limit policy flexibility and introduce additional complexity in the training process. We identify a key source of the problem: generic policy parameterizations (e.g., neural network policies) do not enforce work conservation, leading to scenarios where even optimized policies often assign servers to empty queues.
To address this, we propose a modification to standard policy parameterizations in deep reinforcement learning,
which we refer to as the `work-conserving softmax'. This modification is compatible with standard reinforcement learning algorithms and automatically guarantees work conservation. Although work conservation does not always guarantee stability, we empirically observe across many scenarios that it effectively eliminates instability in the training process, even when starting from a randomly initialized neural network policy.
This modification not only complements our gradient estimator but is also compatible with other model-free RL approaches. We find that while PPO without any modifications fails to stabilize large queuing networks and leads to runaway queue lengths, PPO with the work-conserving softmax remains stable from random initialization and can learn better scheduling policies than traditional queuing policies.
Since rigorous empirical validation forms the basis of algorithmic progress, we provide a thorough empirical validation of the effectiveness of the differentiable discrete event simulator for queuing network control. We construct a wide variety of benchmark control problems, ranging from learning the cμ-rule in a simple multi-class queue to scheduling and admission control in large-scale networks. Across the board, we find that our proposed gradient estimator achieves significant improvements in sample efficiency over model-free alternatives, which translate to downstream improvements in optimization performance.
* In a careful empirical study across 10,800 parameter settings, we find that for 94.5% of these settings our proposed pathwise gradient estimator computed along a single sample path achieves greater estimation quality than REINFORCE with 1000x more data (see section <ref>).
* In a scheduling task in multi-class queues, gradient descent with the pathwise gradient estimator better approximates the optimal policy (the cμ-rule) and achieves a smaller average cost than REINFORCE with a value function baseline and 1000x more data (see section <ref>).
* In an admission control task, optimizing the buffer sizes with the pathwise gradient estimator achieves smaller costs than randomized finite differences (SPSA <cit.>) with 1000x more data, particularly for higher-dimensional problem instances (see section <ref>).
* For large-scale scheduling problems, policy gradient with the pathwise gradient estimator and work-conserving softmax policy architecture achieves a smaller long-run average holding cost than traditional queuing policies and state-of-the-art RL methods such as PPO, which use 50x more data (see section <ref>). Performance gains are greater for larger networks with non-exponential noise.
These order-of-magnitude improvements in sample efficiency translate to improved computational efficiency when drawing trajectories from a simulator and improved data efficiency if samples of event times are collected from a real-world system.
Overall, these results indicate that one can achieve significant improvements in sample efficiency by incorporating the specific structure of queuing networks, which is under-utilized by model-free reinforcement learning methods. In section <ref>, we investigate the M/M/1 queue as a theoretical case study and show that even with an optimal baseline, REINFORCE has a sub-optimally large variance under heavy traffic compared to a pathwise policy gradient estimator. This analysis identifies some of the statistical limitations of REINFORCE, and illustrates that a better understanding of the transition dynamics, rather than narrowly estimating the value-function or Q-function, can deliver large improvements in statistical efficiency. Given the scarcity of theoretical results comparing the statistical efficiency of different policy gradient estimators, this result may be of broader interest.
Our broad aim with this work is to illustrate a new paradigm for combining the deep, structural knowledge of queuing networks developed in the stochastic modeling literature with learning and data-driven approaches.
Rather than either choosing traditional queuing policies, which can be effective for certain queueing control problems but do not improve with data, or choosing model-free reinforcement learning methods, which learn from data but do not leverage known structure, our framework offers a favorable midpoint: we leverage structural insights to extract much more informative feedback from the environment, which can nonetheless be used to optimize black-box policies and improve reliability. Beyond queuing networks, our algorithmic insight provides a general-purpose tool for computing gradients in general discrete-event dynamical systems. Considering the widespread use of discrete-event simulators with popular modeling tools such as AnyLogic <cit.> or Simio <cit.> and open-source alternatives such as SimPy <cit.>, the tools developed in this work can potentially be applied to policy optimization problems in broader industrial contexts.
The organization of this paper is as follows. In section <ref>, we discuss connections with related work. In section <ref>, we introduce the discrete-event dynamical system model for queuing networks. In section <ref>, we introduce our framework for gradient estimation. In section <ref>, we perform a careful empirical study of our proposed gradient estimator, across estimation and optimization tasks.
In section <ref>, we discuss the instability issue in queuing control problems and our proposed modification to the policy architecture to address this. In section <ref>, we empirically investigate the performance of our proposed pathwise gradient estimation and work-conserving policy architecture in optimizing scheduling policies for large-scale networks. In section <ref>, we discuss the M/M/1 queue as a theoretical case study concerning the statistical efficiency of compared to estimators. Finally, section <ref> concludes the paper and discusses extensions.
§ RELATED WORK
We discuss connections to related work in queuing theory, reinforcement learning, and gradient estimation in machine learning and operations research.
Scheduling in Queuing Networks
Scheduling is a long-studied control task in the queuing literature for managing queues with multiple classes of jobs <cit.>. Standard policies developed in the literature include static priority policies such as the cμ-rule <cit.>, threshold policies <cit.>, policies derived from fluid approximations <cit.>, including discrete review policies <cit.>, policies that have good stability properties such as MaxWeight <cit.> and MaxPressure <cit.>. Many of these policies satisfy desirable properties such as throughput optimality <cit.>, or cost minimization <cit.> for certain networks and/or in certain asymptotic regimes. In our work, we aim to leverage some of the theoretical insights developed in this literature to design reinforcement learning algorithms that can learn faster and with less data than model-free RL alternatives. We also use some of the standard policies as benchmark policies when validating the performance of our policy gradient algorithm.
Reinforcement Learning in Queueing Network Control
Our research connects with the literature on developing reinforcement learning algorithms for queuing network control problems <cit.>. These works apply standard model-free RL techniques (e.g. Q-learning, , value iteration, etc.) but introduce novel modifications to address the unique challenges in queuing network control problems. Our work differs in that we propose an entirely new methodology for learning from the environment based on differentiable discrete event simulation, which is distinct from all model-free RL methods. The works <cit.> observe that RL algorithms tend to be unstable and propose fixes to address this, such as introducing a Lyapunov function into the rewards, or behavior cloning of a stable policy for initialization. In our work, we propose a simple modification to the policy network architecture, denoted as the work-conserving softmax as it is designed to ensure work-conservation. We find empirically that work-conserving softmax ensures stability with even randomly initialized neural network policies. In our empirical experiments, we primarily compare our methodology with the algorithm developed in <cit.>. In particular, we construct a baseline with the same hyper-parameters, neural network architecture, and variance reduction techniques as in <cit.>, although with our policy architecture modification that improves stability.
Differentiable Simulation in RL and Operations Research
While differentiable simulation is a well-studied paradigm for control problems in physics and robotics <cit.>, it has only recently been explored for large-scale operations research problems. For instance, <cit.> study inventory control problems and train a neural network using direct back-propagation of the cost, as sample paths of the inventory levels are continuous and differentiable in the actions. In our work, we study control problems for queuing networks, which are discrete and non-differentiable, preventing the direct application of such methods. To address this, we develop a novel framework for computing pathwise derivatives for these non-differentiable systems, which proves highly effective for training control policies. Another line of work, including <cit.>, proposes differentiable agent-based simulators based on differentiable relaxations. While these relaxations have shown strong performance in optimization tasks, they also introduce unpredictable discrepancies with the original dynamics. We introduce tailored differentiable relaxations in the back-propagation process only, ensuring that the forward simulation remains true to the original dynamics.
Gradient Estimation in Machine Learning
Gradient estimation <cit.> is an important sub-field of the machine learning literature, with applications in probabilistic modeling <cit.> and reinforcement learning <cit.>. There are two standard strategies for computing stochastic gradients <cit.>. The first is the score-function estimator or <cit.>, which only requires the ability to compute the gradient of log-likelihood but can have high variance <cit.>. Another strategy is the reparameterization trick <cit.>, which involves decomposing the random variable into the stochasticity and the parameter of interest, and then taking a pathwise derivative under the realization of the stochasticity. Gradient estimators based on the reparameterization trick can have much smaller variance <cit.>, but can only be applied in special cases (e.g. Gaussian random variables) that enable this decomposition. Our methodology makes a novel observation that for queuing networks, the structure of discrete-event dynamical systems gives rise to the reparameterization trick. Nevertheless, the function of interest is non-differentiable, so standard methods cannot be applied. As a result, our framework also connects with the literature on gradient estimation for discrete random variables <cit.>. In particular, to properly smooth the non-differentiability of the event selection mechanism, we employ the straight-through trick <cit.>, which has been previously used in applications such as discrete representation learning <cit.>. Our work involves a novel application of this technique for discrete-event systems, and we find that this is crucial for reducing bias when smoothing over long time horizons.
Gradient Estimation in Operations Research
There is extensive literature on gradient estimation for stochastic systems <cit.>, some with direct application to queuing optimization <cit.>.
Infinitesimal Perturbation Analysis (IPA) <cit.> is a standard framework for constructing pathwise gradient estimators, which takes derivatives through stochastic recursions that represent the dynamics of the system. While IPA has been applied successfully to some specific queuing networks and discrete-event environments more broadly <cit.>, standard IPA techniques cannot be applied to general queuing networks control problems, as has been observed in <cit.>. There has been much research on outlining sufficient conditions under which IPA is valid, such as the commuting condition in <cit.> or the perturbation conditions in <cit.>, but these conditions do not hold in general. Several extensions to IPA have been proposed, but these alternatives require knowing the exact characteristics of the sampling distributions and bespoke analysis of event paths <cit.>. Generalized likelihood-ratio estimation <cit.> is another popular gradient estimation framework, which leverages an explicit Markovian formulation of state transitions to estimate parameter sensitivities. However, this requires knowledge of the distributions of stochastic inputs, and even with this knowledge, it may be difficult to characterize the exact Markov transition kernel of the system. Finally, finite differences <cit.> and finite perturbation analysis <cit.> are powerful methods, particularly when aided with common random numbers <cit.>, as it requires minimal knowledge about the system. However, it has been observed that performance can scale poorly with problem dimension <cit.>, and we also observe this in an admission control task (see Section <ref>).
Our contribution is proposing a novel, general-purpose framework for computing pathwise gradients through careful smoothing, which only requires samples of random input (e.g., interarrival times and service times) rather than knowledge of their distributions.
Given the negative results about the applicability of IPA for general queuing network control problems (e.g., general queuing network model and scheduling policies), we introduce bias through smoothing to achieve generality. It has been observed in <cit.> that biased IPA surrogates can be surprisingly effective in simulation optimization tasks such as ambulance base location selection. Our extensive empirical results confirm this observation and illustrate that while there is some bias, it is very small in practice, even over long time horizons (>10^5 steps).
§ DISCRETE-EVENT DYNAMICAL SYSTEM MODEL FOR QUEUING NETWORKS
We describe multi-class queuing networks as discrete-event dynamical systems. This is different from the standard Markov chain representation, which is only applicable when inter-arrival and service times are exponentially distributed.
To accommodate more general event-time distributions, the system description not only involves the queue lengths, but also auxiliary information such as residual inter-arrival times and workloads. Surprisingly, this more detailed system description leads to a novel gradient estimation strategy (discussed in Section <ref>) for policy optimization.
We first provide a brief overview of the basic scheduling problem. We then describe the discrete-event dynamics of multi-class queuing networks in detail and illustrate with a couple of well-known examples. While queuing networks have been treated as members of a more general class of Generalized Semi-Markov Processes (GSMPs) that reflect the discrete-event structure of these systems <cit.>, we introduce a new set of notations tailored for queuing networks to elaborate on some of their special structures.
In particular, we represent the discrete event dynamics
via matrix-vector notation that maps directly to its implementation in auto-differentiation frameworks, allowing for the differentiable simulation of large-scale queueing networks through GPU parallelization.
§.§ The Scheduling Problem
A multi-class queuing network consists of n queues and m servers. The core state variable is the queue lengths associated with each queue, denoted as x(t) ∈ℕ_+^n, which evolves over continuous time. As a discrete-event dynamical system, the state also includes auxiliary data denoted as ω(t)—consisting of residual inter-arrival times and workloads at time t—which determines state transitions but is typically not visible to the controller.
The goal of the controller is to route jobs to servers, represented by an assignment matrix u ∈{0, 1 }^m × n, to manage congestion. More concretely, the problem is to derive a policy π(x), which only depends on the observed queue lengths and selects scheduling actions, to minimize the integral of some instantaneous costs c(x,u). A typical instantaneous cost is a linear holding/waiting cost:
c(x,u)=h^⊤x
for some vector h∈ℝ_+^n. The objective is to find a
policy π that minimizes the cumulative cost over a time horizon:
min_π𝔼[∫_0^Tc(x(t),π(x(t)))dt].
Optimizing a continuous time objective can be difficult and may require an expensive discretization procedure. However, discrete-event dynamical systems are more structured in that x(t) is piecewise constant and is only updated when an event occurs. For the multi-class queuing networks, events are either arrivals to the network or job completions, i.e., a server finishes processing a job.
It is then sufficient to sample the system only when an event occurs, and we can approximate the continuous-time objective with a performance objective in the discrete-event system over N events,
min_π{J_N(π) := 𝔼[∑_k=0^N-1c(x_k,π(x_k))τ_k+1^*]
}
where x_k is the queue lengths after the kth event update. τ^*_k+1 is an inter-event time that measures the time between the kth and (k+1)th event, and N is chosen such that the time of the Nth event, a random variable denoted as t_N, is “close" to T.
The dynamics of queuing networks are highly stochastic, with large variations across trajectories. Randomness in the system is driven by the random arrival times of jobs and the random workloads (service requirements) of these jobs. We let ξ_1:N={ξ_i}_i=1^N denote a single realization, or `trace', of these random variables over the horizon of N events.
We can then view the expected cost (<ref>) more explicitly as a policy cost averaged over traces.
In addition, we focus on a parameterized family of policies {π_θ:θ∈Θ}, for some Θ⊆^d, in order to optimize (<ref>) efficiently. In this case, we utilize the following shorthand J_N(θ;ξ_1:N) for the
policy cost over a single trace and J_N(θ) for the average policy
cost under π_θ,
which leads to the parameterized control problem:
min_θ{J_N(θ) := 𝔼[J(θ;ξ_1:N)]
:=
𝔼[∑_k=0^N-1c(x_k,π_θ(x_k))τ^*_k+1]
}.
We now turn to describe the structure of the transition dynamics of multi-class queuing networks, to elaborate how scheduling actions affect the queue lengths.
§.§ System Description
Recall that the multi-class queuing network consists of n queues and m servers, where each queue is associated with a job class, and different servers can be of different compatibilities with various job classes. Recall that x(t)∈ℕ_+^n denotes the lengths of the queues at time t ∈ℝ_+.
The queue lengths x(t) are updated by one of two types of events: job arrivals or job completions.
Although the process evolves in continuous time, it is sufficient to track the system only when an event occurs. We let k∈ℕ_+ count the kth event in the system, and let t_k denote the time immediately after the kth event occurs. By doing so, we arrive at a discrete-time representation of the system. Given that we do not assume event times are exponential, the queue lengths x_k alone are not a Markovian descriptor of the system. Instead, we must consider an augmented state s_k = (x_k, ω_k), where x_k ∈ℕ^n_+ is the vector of queue lengths and ω_k = (τ_k^A, w_k)∈ℝ^2n_+ is an auxiliary state vector that includes residual inter-arrival times τ_k^A = {τ^A_k,j}_j = 1^n∈ℝ^n_+ and residual workloads w_k = {w_k,j}_j = 1^n∈ℝ^n_+ of the `top-of-queue' jobs in each queue. The auxiliary state variables determine the sequence of events.
More explicitly, for each queue j∈[n], the residual inter-arrival time τ^A_k,j keeps track of the time remaining until the next arrival to queue j occurs. Immediately after an arrival to queue j occurs, the next inter-arrival time is drawn from a probability distribution F_j^A. When a job arrives to queue j, it comes with a workload (service requirement) drawn from a distribution F_j^S. We allow the distributions F_j^A's and F_j^S's to vary with time, i.e., the interarrival times and service requirements can be time-varying. For notational simplicity, we will not explicitly denote the time dependence here. We refer to the residual workload at time t_k of the top-of-queue job in queue j as w_k,j, which specifies how much work must be done before the job completion. A job is only processed if it is routed to a server i∈[m], in which case the server processes the job at a constant service rate μ_ij∈ℝ_+. We refer to μ∈ℝ_+^m × n as the matrix of service rates.
Under this scheduling decision, the residual processing time, i.e., the amount of time required
to process the job, is τ^S_k,j= w_k,j/μ_ij.
The augmented state s_k is a valid Markovian descriptor of the system and we now describe the corresponding transition function f such that
s_k+1 = f(s_k, u_k, ξ_k+1),
where u_k is an action taken by the controller and ξ_k+1 contains external randomness arising from new inter-arrival times or workloads drawn from F^A_j's or F^S_j's depending on the event type.
The transition is based on the next event, which is the event with the minimum residual time. The controller influences the transitions through the processing times, by deciding which jobs get routed to which servers. We focus on scheduling problems where the space of controls 𝒰 are feasible assignments of servers to queues. Let 1_n∈ℝ^n denote an n-dimensional vector consisting of all ones. The action space is
𝒰:={ u∈{0,1}^m× n: u1_n = 1_m, 1_m^⊤ u=1_n, u≤ M},
where M∈{0,1}^m× n is the topology of the network, which indicates which job class can be served by which server. Following existing works on scheduling in queuing networks <cit.>, we consider networks for which each job class has exactly 1 compatible server.
For every queue j, there is 1 compatible server,
i.e., ∑_i=1^mM_ij=1.
Given an action u, the residual processing time is w_k,j/μ_ij when u_ij=1 and ∞
when u_ij=0. This can be written compactly as
τ^S_k,j≡ w_k,j/∑_i=1^mu_ijμ_ij = w_k,j/𝐝𝐢𝐚𝐠(u^⊤μ)_j,j,
where 𝐝𝐢𝐚𝐠(u^⊤μ) ∈ℝ^n × n extracts the diagonal entries of the matrix u^⊤μ∈ℝ^n × n.
As a result, at time t_k the residual event times τ_k∈ℝ_+^2n consist of the residual inter-arrival and processing times,
τ_k≡ (τ_k^A, τ_k^S) = (τ_k^A, 𝐝𝐢𝐚𝐠(u^⊤μ)^-1 w_k).
We emphasize that τ_k depends on the action u. The core operation in the transition dynamics is the event selection mechanism. The next event is the one with the minimum residual time in τ_k.
We define e_k+1∈{0,1}^2n to be a one-hot vector representing the argmin of τ_k – the position of the minimum in τ_k:
e_k+1(ω_k, u_k) ≡argmin(τ_k) ∈{0,1}^2n (Event Select)
e_k+1(ω_k, u_k) indicates the type of the (k+1)th event.
In particular, if the minimum residual event time is a residual inter-arrival time, then the next event is an arrival to the system. If it is a residual job processing time, then the next event is a job completion.
We denote τ^*_k+1 to be the inter-event time, which is equal to the minimum residual time:
τ^*_k+1(_k, u_k)
= min{τ_k}Event Time
τ^*_k+1(_k, u_k) is the time between the kth and (k+1)th event, i.e. t_k+1 - t_k.
After the job is processed by a server, it either leaves the system or proceeds
to another queue. Let R∈ℝ^n× n denote the routing matrix, where the jth column, R_j details the change in the queue lengths when a job in class j finishes service. For example, for a tandem queue with two queues, the routing
matrix is
R=[[ -1 0; 1 -1 ]]
indicating that when a job in the first queue completes service, it leaves its
own queue and joins the second queue. When a job in the second queue
completes service, it leaves the system.
We define the event matrix D as a block matrix of the form
D=[[ I_n R ]],
where I_n is the n× n identity matrix. The event matrix determines the update to the queue lengths, depending on which event took place. In particular, when the (k+1)th event occurs, the update to the queue lengths is
x_k+1 =x_k+D e_k+1(ω_k, u_k) (Queue Update)
Intuitively, the queue length of queue j increases by 1 when the next event is a class j job arrival; the queue lengths update according to R_j when the next event is a queue j job completion.
The update to the auxiliary state ω_k = (τ_k^A, w_k) ∈ℝ_+^2n is typically given by
[
[ τ_k+1^A; w_k+1 ]]
=
[
[ τ_k^A; w_k ]]
-
τ_k+1^*[
[ 1_n; 𝐝𝐢𝐚𝐠(u_k^⊤μ)1_n ]]
_reduce residual times +
[
[ T_k+1; W_k+1 ]] ⊙ e_k+1_draw new times / workloads (Aux Update)
where ⊙ is the element-wise product and
T_k+1 ={T_k+1,j}_j=1^n∈ℝ_+^n are new inter-arrival times T_k+1,j∼ F_j^A
and W_k+1 = {W_k+1,j}_j=1^n∈ℝ_+^n are new workloads W_k+1,j∼ F_j^S.
Intuitively, after an event occurs, we reduce the residual inter-arrival times by the inter-event time. We reduce workloads by the amount of work applied to the job, i.e., the inter-event time multiplied by the service rate of the allocated server. Finally, if an arrival occurred we draw a new inter-arrival time; if a job was completed, we draw a new workload for the top-of-queue job (if the queue is non-empty).
[Figure: M/M/1 queue.]
There are two boundary cases that make the update slightly different from (<ref>). First, if a new job arrives at an empty queue j (either an external arrival or a transition from a job completion), we also need to update w_k+1,j to W_k+1,j. Second, if a queue j job completion leaves an empty queue behind, we set w_k+1, j = ∞, indicating that no completions can occur for an empty queue.
Let ξ_k denote exogenous noise in the environment, which consists of the sampled inter-arrival times and workloads for resetting the time of a completed event,
ξ_k = (T_k, W_k) ∈ℝ_+^2n.
We finally arrive at the stated goal of describing the transition dynamics of s_k = (x_k, ω_k) in terms of a function f(s_k, u_k, ξ_k+1). Notably, all the stochasticity is captured by ξ_k's, which are independent of the states and actions.
It is worth mentioning a few features of this discrete-event representation.
* While auxiliary data ω_k = (τ^A_k, w_k) is necessary for s_k = (x_k, ω_k) to be a valid Markovian system descriptor, this information is typically not available to the controller. We assume the controller only observes the queue lengths, i.e., the control policy π only depends on x_k.
* The representation can flexibly accommodate non-stationary and non-exponential event-time distributions, i.e., F_j^A's and F_j^S's can be general and time-varying.
* This model enables purely data-driven simulation, as it only requires samples of the event times ξ_k. One does not need to know the event time distributions F_j^A's and F_j^S's to simulate the system if data of these event times are available.
* The matrix-vector representation enables GPU parallelism, which can greatly speed up the simulation of large-scale networks.
* As we will explain later, this representation enables new gradient estimation strategies.
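To make the recursion concrete, the sketch below implements one transition s_k+1 = f(s_k, u_k, ξ_k+1) in NumPy, following the event-selection, queue-update, and auxiliary-update equations above together with the two boundary cases. Names are illustrative; a batched, differentiable version would replace the Python branching with masked tensor operations in an auto-differentiation framework, as discussed in the next section.

```python
import numpy as np

def step(x, tauA, w, u, mu, R, T_new, W_new):
    """One discrete-event transition.
    x: queue lengths (n,); tauA: residual interarrival times (n,);
    w: residual workloads of top-of-queue jobs (n,), np.inf for empty queues;
    u: server-to-queue assignment (m, n); mu: service rates (m, n);
    R: routing matrix (n, n); T_new, W_new: fresh draws (n,), used as needed."""
    n = len(x)
    rates = (u * mu).sum(axis=0)                       # allocated rate per queue, diag(u^T mu)
    tauS = np.where(rates > 0, w / np.where(rates > 0, rates, 1.0), np.inf)
    tau = np.concatenate([tauA, tauS])                 # residual event times
    k_star = int(np.argmin(tau))                       # event selection (argmin)
    t_star = tau[k_star]                               # inter-event time

    D = np.concatenate([np.eye(n), R], axis=1)         # event matrix D = [I_n  R]
    x_new = x + D[:, k_star]
    tauA_new = tauA - t_star                           # reduce residual times
    w_new = np.where(rates > 0, w - t_star * rates, w)

    if k_star < n:                                     # external arrival to queue j
        j = k_star
        tauA_new[j] = T_new[j]                         # draw next interarrival time
        if x[j] == 0:                                  # arrival finds an empty queue
            w_new[j] = W_new[j]
    else:                                              # service completion at queue j
        j = k_star - n
        w_new[j] = W_new[j] if x_new[j] > 0 else np.inf
        for j2 in np.where(R[:, j] > 0)[0]:            # internal routing of the finished job
            if x[j2] == 0:                             # routed job finds an empty queue
                w_new[j2] = W_new[j2]
    return x_new, tauA_new, w_new, t_star
```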
Queuing Network Examples
As a concrete illustration, we show how a few well-known queuing networks are described as discrete-event dynamical systems.
The M/M/1 queue (see Figure <ref>) with arrival rate λ>0 and
service rate μ≥λ features a single queue n=1 and a single server m=1, and exponentially distributed inter-arrival times and workloads, i.e., T_k ∼𝖤𝗑𝗉(λ) and
W_k ∼𝖤𝗑𝗉(1) respectively. The network topology is M=[1], the service rate is μ, and the routing matrix is R =[-1], indicating that jobs leave the system after service completion.
The scheduling policy is work-conserving: the server always serves the queue when it is non-empty, i.e. u_k=1{x_k>0}. The state update is
x_k+1 =x_k+[[ 1 -1 ]]^⊤ e_k+1
e_k+1 =argmin{τ_k^A,τ_k^S} =argmin{τ_k^A, w_k/(μ· 1{x_k>0 })}∈{0,1}^2
τ^*_k+1 = min{τ_k^A,τ_k^S}
[
[ τ_k+1^A; w_k+1 ]]
=
[
[ τ_k^A; w_k ]]
- τ_k+1^*[
[ 1; μ 1{ x_k > 0} ]] +
[
[ T_k+1; W_k+1 ]] ⊙ e_k+1.
The multi-class singer-server queue features an n queues and a single server m=1 (see Figure <ref>). While the inter-arrival times and workloads, i.e., (T_k, W_k)'s are usually exponentially distributed, they can also follow other distributions. The network topology is M=[1,...,1] ∈^n, the service rates are μ = [μ_1,...,μ_n], and the routing matrix is R =[-1,...,-1]∈^n, indicating that jobs leave the system after service completion.
A well-known scheduling policy for this system is the cμ-rule, a static priority rule.
Let h = (h_1,...,h_n) ∈^n denote the holding costs. The cμ-rule sets
u_k = 𝖺𝗋𝗀𝗆𝖺𝗑_j∈ [n]{ h_jμ_j1{x_j>0}}∈{0,1}^n.
The state update is,
x_k+1 = x_k + [ I_n  -I_n ] e_k+1,
e_k+1 = 𝖺𝗋𝗀𝗆𝗂𝗇{ τ_k,1^A, …, τ_k,n^A, τ_k,1^S, …, τ_k,n^S } = 𝖺𝗋𝗀𝗆𝗂𝗇{ τ_k,1^A, …, τ_k,n^A, w_k,1/(μ_1 u_k,1), …, w_k,n/(μ_n u_k,n) },
τ^*_k+1 = min{ τ_k,1^A, …, τ_k,n^A, τ_k,1^S, …, τ_k,n^S },
[ τ_k+1^A; w_k+1 ] = [ τ_k^A; w_k ] - τ_k+1^* [ 1_n; μ⊙ u_k ] + [ T_k+1; W_k+1 ] ⊙ e_k+1.
The criss-cross network <cit.> features n=3 queues and m=2 servers (see Figure <ref>). External jobs arrive to queues 1 and 3. The first server can serve queues 1 and 3 with service rates μ_11 and μ_13 respectively, while the second server is dedicated to serving queue 2 with service rate μ_22. After jobs from queue 1 are processed, they are routed to queue 2; jobs from queues 2 and 3 exit the system after service completion. The inter-arrival times and workloads, i.e., (T_k, W_k)'s, can follow general distributions.
The network topology M, service rate matrix, and the routing matrix R are:
M = [ [ 1 0 1; 0 1 0; ]], μ = [ [ μ_11 0 μ_13; 0 μ_22 0; ]],
R = [ [ -1 0 0; 1 -1 0; 0 0 -1 ]]
<cit.> develop a work-conserving threshold policy for this system. For a threshold a ∈_+, server 1 prioritizes jobs in queue 1 if the number of jobs in queue 2 is below a. Otherwise, it prioritizes queue 3. This gives the scheduling action
u_k,11 = 1{x_k,2≤ a },
u_k,22 = 1{x_k,2 > 0 },
u_k,13 = (1 - u_k,11)1{x_k,3 > 0},
and the transition dynamics
x_k+1 = x_k + [ I_3  R ] e_k+1,
e_k+1 = 𝖺𝗋𝗀𝗆𝗂𝗇{ τ_k,1^A, ∞, τ_k,3^A, τ_k,1^S, τ_k,2^S, τ_k,3^S } = 𝖺𝗋𝗀𝗆𝗂𝗇{ τ_k,1^A, ∞, τ_k,3^A, w_k,1/(μ_11 u_k,11), w_k,2/(μ_22 u_k,22), w_k,3/(μ_13 u_k,13) },
τ^*_k+1 = min{ τ_k,1^A, ∞, τ_k,3^A, τ_k,1^S, τ_k,2^S, τ_k,3^S },
[ τ_k+1^A; w_k+1 ] = [ τ_k^A; w_k ] - τ_k+1^* [ 1_3; (μ_11 u_k,11, μ_22 u_k,22, μ_13 u_k,13)^⊤ ] + [ T_k+1; W_k+1 ] ⊙ e_k+1.
Here, τ_k,2^A = ∞ since queue 2 has no external arrivals.
§ GRADIENT ESTIMATION
In this section, we introduce our proposed approach for estimating the gradient of the objective (<ref>), ∇ J_N(θ). We start with a brief discussion of existing methods for gradient estimation, including their advantages and limitations. We then outline the main challenges for computing pathwise derivatives in multi-class queuing networks, and introduce our strategy for overcoming these challenges. Finally, we formally define our gradient estimation framework and discuss its computational and statistical properties. Later in section <ref>, we perform a comprehensive empirical study and find that our gradient estimation framework is able to overcome many of the limitations of existing methods in that (1) it is capable of estimating gradients for general queuing networks, (2) it provides stable gradient estimations over very long horizons (>10^5 steps), (3) it provides greater estimation accuracy than model-free policy gradient methods with 1000x less data,
and (4) when applying to policy optimization, it drastically improves the performance of the policy gradient algorithm for various scheduling and admission control tasks.
[Figure: Multi-class, single-server queue.]
Our goal is to optimize the parameterized control problem (<ref>). A standard optimization algorithm is (stochastic) gradient descent, which has been considered for policy optimization and reinforcement learning <cit.>. The core challenge for estimating policy gradient ∇ J_N(θ) = ∇[J(θ;ξ_1:N)] from sample paths of the queuing network is that the sample path cost J(θ, ξ_1:N) is in general not differentiable in θ. As a consequence, one cannot change the order of differentiation and expectation, i.e.,
∇ J_N(θ) = ∇𝔼[J_N(θ;ξ_1:N)] ≠𝔼[∇ J_N(θ;ξ_1:N)],
where ∇ J_N(θ;ξ_1:N) is not even well-defined.
The non-differentiability of these discrete-event dynamical systems emerges from two sources. First, actions u_1:N are discrete scheduling decisions, and small perturbations in the policy can result in large changes in the scheduling decisions produced by the policy. Second, the actions affect the dynamics through the event times. The ordering of events is based on the `𝖺𝗋𝗀𝗆𝗂𝗇' of the residual event times, which is not differentiable.
In the stochastic simulation literature, there are two popular methods for gradient estimation: infinitesimal perturbation analysis (IPA) and generalized likelihood ratio (LR) gradient estimation.
To illustrate, consider abstractly and with a little abuse of notation a system following the dynamics s_k+1 = f(s_k, θ, ξ_k+1), where s_k ∈ is the state, θ∈ is the parameter of interest, ξ_k is exogenous stochastic noise, and f is a differentiable function. Then, the IPA estimator computes a sample-path derivative estimator
by constructing a derivative process D_k=∂ s_k / ∂θ via the recursion:
D_k+1 = ∂/∂θ f(s_k, θ, ξ_k+1) + ∂/∂ s_k f(s_k, θ, ξ_k+1) · D_k.    (IPA)
Likelihood-ratio gradient estimation on the other hand uses knowledge of the distribution of ξ_k to form the gradient estimator. Suppose that s_k is a Markov chain for which the transition kernel is parameterized by θ, i.e., s_k+1∼ p_θ(· |s_k).
For a fixed θ_0, let
∂/∂θ𝔼_θ[s_k] = ∂/∂θ𝔼_θ_0[s_k L_k(θ)] = 𝔼_θ_0[ s_k ∂/∂θ L_k(θ) ],
L_k(θ) := ∏_j=1^k-1 p_θ(s_j+1|s_j)/∏_j=1^k-1 p_θ_0(s_j+1|s_j).
This allows one to obtain the following gradient estimator:
D_k = s_k∑_j=1^k-1( ∂/∂θ p_θ(s_j+1|s_j)/p_θ(s_j+1|s_j) ) L_k(θ),  s_j+1∼ p_θ_0(·|s_j), ∀ j ≤ k.    (LR)
Despite their popularity, there are limitations to applying these methods to general multi-class queuing networks. While IPA has been proven efficient for simple queuing models, such as the G/G/1 queue through the Lindley recursion, it is well-known that unbiased IPA estimates cannot be obtained for general queuing networks <cit.>. The implementation of LR gradient estimation hinges on precise knowledge of the system's Markovian transition kernel <cit.>. This requires knowledge of the inter-arrival time and workload distributions, and even with this knowledge, it is non-trivial to specify the transition kernel of the queue lengths and residual event times in generic systems. Modifications to IPA <cit.> also require precise knowledge of event time distributions and often involve analyzing specific ordering of events which must be done on a case-by-case basis. As a result, none of these methods can reliably provide gradient estimation for complex queuing networks under general scheduling policies and with possibly unknown inter-arrival and service time distributions. Yet, the ability to handle such instances is important to solve large-scale problems arising in many applications.
Due to the challenges discussed above, existing reinforcement learning (RL) approaches for queueing network control mainly rely on model-free gradient estimators, utilizing either the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator and/or Q-function estimation. As we will discuss shortly, these methods do not leverage the structural properties of queuing networks and may be highly sample-inefficient, e.g., requiring a prohibitively large sample for gradient estimation.
To address the challenges discussed above, we propose a novel gradient estimation framework that can handle general, large-scale multi-class queuing networks under any differentiable scheduling policy, requiring only samples of the event times rather than knowledge of their distributions. Most importantly, our approach streamlines the process of gradient estimation, leveraging auto-differentiation libraries such as PyTorch <cit.> or Jax <cit.> to automatically compute gradients, rather than constructing these gradients in a bespoke manner for each network as is required for IPA or LR. As shown in Figure <ref>, computing a gradient in our framework requires only a few lines of code. To the best of our knowledge, this is the first scalable alternative to model-free methods for gradient estimation in queuing networks.
§.§ The standard approach: the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator
Considering the lack of differentiability in most reinforcement learning environments, the standard approach for gradient estimation developed in model-free RL is the score-function or 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator <cit.>. This serves as the basis for modern policy gradient algorithms such as Trust-Region Policy Optimization (TRPO) <cit.> or Proximal Policy Optimization (PPO) <cit.>. As a result, it offers a useful and popular baseline to compare our proposed method with.
The core idea behind the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator is to introduce a randomized policy π_θ and differentiate through the action probabilities induced by the policy. Under mild regularity conditions on π_θ and c(x_k,u_k), the following expression holds for the policy gradient:
∇ J_N(θ) = 𝔼[∑_t=0^N-1(∑_k=t^N-1c(x_k,u_k)τ^*_k+1) ∇_θlogπ_θ(u_t|x_t)],
which leads to the following policy gradient estimator:
∇^𝖱 J_N(θ; ξ_1:N) = ∑_t=0^N-1(∑_k=t^N-1c(x_k,u_k)τ^*_k+1) ∇_θlogπ_θ(u_t|x_t).
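In practice this estimator is usually implemented as a surrogate loss whose backward pass produces the gradient above. The following is a minimal sketch (ours, assuming PyTorch; the tensor contents are illustrative, not the paper's code).

```python
import torch

def reinforce_surrogate(log_probs, stage_costs):
    # log_probs[t] = log pi_theta(u_t | x_t); stage_costs[k] = c(x_k, u_k) * tau*_{k+1}
    costs_to_go = torch.flip(torch.cumsum(torch.flip(stage_costs, [0]), 0), [0])
    return (costs_to_go.detach() * log_probs).sum()   # .backward() yields the estimator

# toy usage: two actions, three time steps
theta = torch.tensor([0.0, 0.0], requires_grad=True)
log_probs = torch.log_softmax(theta, 0)[torch.tensor([0, 1, 0])]
stage_costs = torch.tensor([3.0, 2.0, 1.0])
reinforce_surrogate(log_probs, stage_costs).backward()
print(theta.grad)
```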
While being unbiased, the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator is known to have a very high variance <cit.>. The variance arises from two sources. First, the cumulative cost ∑_k=t^N-1c(x_k,u_k)τ^*_k+1 can be very noisy, as has been observed for queuing networks <cit.>. Second,
as the policy converges to the optimal policy, the score function ∇_θlogπ_θ(u_t|x_t) can grow large, magnifying the variance in the cost term. Practical implementations involve many algorithmic add-ons to reduce variance, e.g., adding a `baseline' term <cit.> which is usually (an estimate of) the value function V_π_θ(x_k),
∇^𝖱𝖡 J_N(θ; ξ_1:N) = ∑_t=0^N-1( ∑_k=t^N-1 c(x_k,u_k)τ^*_k+1 - V_π_θ(x_t) ) ∇_θlogπ_θ(u_t|x_t).    (𝖡𝖠𝖲𝖤𝖫𝖨𝖭𝖤)
These algorithmic add-ons have led to the increased complexity of existing policy gradient implementations <cit.> and the outsized importance of various hyperparameters <cit.>. It has even been observed that seemingly small implementation “tricks" can have a large impact on performance, even more so than the choice of the algorithm itself <cit.>.
§.§ Our approach: Differentiable Discrete-Event Simulation
We can view the state trajectory as a repeated composition of the transition function s_k+1 = f(s_k,u_k, ξ_k+1), which is affected by exogenous noise ξ_1:N, i.e., stochastic inter-arrival and service times. If the transition function were differentiable with respect to the actions u_k, then under any fixed trace ξ_1:N, one could compute a sample-path derivative of the cost J(θ;ξ_1:N) using auto-differentiation frameworks such as PyTorch <cit.> or Jax <cit.>. Auto-differentiation software computes gradients efficiently using the chain rule. To illustrate, given a sample path of states, actions, and noise (s_k,u_k,ξ_k+1)_k=0^N-1, we can calculate the gradient of s_3 with respect to u_1 via
∂ s_3/∂ u_1 = ∂ s_3/∂ s_2∂ s_2/∂ u_1 =
∂ f(s_2,u_2, ξ_3)/∂ s_2∂ f(s_1,u_1, ξ_2)/∂ u_1.
This computation is streamlined through a technique known as backpropagation, or reverse-mode auto-differentiation. The algorithm involves two steps. The first step, known as the forward pass, evaluates the main function or performance metric (in the example, s_i's) and records the partial derivatives of all intermediate states relative to their inputs (e.g. ∂ s_2 / ∂ u_1).
This step constructs a computational graph, which outlines the dependencies among variables. The second step is a backward pass, which traverses the computational graph in reverse. It sequentially multiplies and accumulates partial derivatives using the chain rule, propagating these derivatives backward through the graph until the gradient concerning the initial input (in this example, u_1) is calculated. Due to this design, gradients of functions involving nested compositions can be computed in a time that is linear in the number of compositions. By systematically applying the chain rule in reverse, auto-differentiation avoids the redundancy and computational overhead typically associated with numeric differentiation methods.
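As a toy illustration of this forward/backward computation (our own example, assuming PyTorch and using an arbitrary differentiable stand-in for the transition function f):

```python
import torch

def f(s, u):                       # stand-in differentiable transition s_{k+1} = f(s_k, u_k)
    return s + 0.5 * u - 0.1 * s * u

u1 = torch.tensor(1.0, requires_grad=True)
u2 = torch.tensor(1.0, requires_grad=True)
s1 = torch.tensor(2.0)
s2 = f(s1, u1)                     # forward pass records partial derivatives
s3 = f(s2, u2)
s3.backward()                      # backward pass applies the chain rule
print(u1.grad, u2.grad)            # d s3/d u1 = (d s3/d s2)(d s2/d u1), d s3/d u2
```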
However, as mentioned before, the dynamics do not have a meaningful derivative due to the non-differentiability of actions and the 𝖺𝗋𝗀𝗆𝗂𝗇 operation which selects the next event based on the minimum residual event time. Yet if we can utilize suitably differentiable surrogates, it would be possible to compute meaningful approximate sample-path derivatives using auto-differentiation.
§.§.§ Capacity sharing relaxation
First, we address the non-differentiability of the action space. Recall that u_k∈{0,1}^m × n are scheduling decisions, which assign jobs to servers. Since u_k lies in a discrete space, a small change in the policy parameters can produce a jump in the actions. To alleviate this, we consider the transportation polytope as a continuous relaxation of the original action space (<ref>):
𝒰 := { u∈ [0,1]^m× n : u 1_n = 1_m, 1_m^⊤ u = 1_n^⊤, u≤ M}.
The set of extreme points of 𝒰 coincides with the original, integral action space 𝒰.
For a fractional action u_k∈𝒰,
we can interpret it as servers splitting their capacity among multiple job classes motivated by the fluid approximation of queues <cit.>. As a relaxation, it allows servers to serve multiple jobs simultaneously. The effective service rate for each job class is equal to the fraction of the capacity allocated to the job class multiplied by the corresponding service rate.
As a result, instead of considering stochastic policies over discrete actions, we approach this problem as a continuous control problem and consider
deterministic policies over continuous actions, i.e., the fractional scheduling decisions. Under this relaxation, the processing times are differentiable in the (fractional) scheduling decision. Finally, it is worth mentioning that we only use this relaxation when training policies. For policy evaluation, we enforce that actions are integral scheduling decisions in 𝒰. To do so, we treat the fractional action as a probability distribution and use it to sample a discrete action.
Under the capacity sharing relaxation, the service rate for queue j under the routing decision u∈𝒰 is μ_j^⊤ u_j≡∑_i=1^mμ_iju_ij.
Thus, given workload w_j, the processing time of the job will be
τ^S_j = w_j/∑_i=1^m u_ijμ_ij = w_j/(μ^⊤ u)_jj.
Note that this is identical to the original definition of the processing times in (<ref>). The only difference is that we now allow fractional routing actions, under which a server can serve multiple jobs at the same time.
For a concrete example, consider a single server i compatible with two job classes 1 and 2 with service rates μ_i1 = 9 and μ_i2 = 15 respectively. Suppose it splits its capacity between job classes 1 and 2 according to u_i1 = 1/3 and u_i2 = 2/3. Then for residual workloads w_1 and w_2, the corresponding processing times are τ^S_1=w_1/3 and τ^S_2=w_2/10. If u_i1=0 and u_i2=1 instead, then the corresponding processing times are τ^S_1=w_1/0 = ∞ and τ^S_2=w_2/15.
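A small numeric check of this example follows (our own sketch; the array shapes and helper name are assumptions, not part of the paper).

```python
import numpy as np

def processing_times(u, mu, w):
    # u, mu: [m, n] fractional allocation and service rates; w: [n] residual workloads
    rates = (u * mu).sum(axis=0)                      # effective service rate per queue
    return np.divide(w, rates, out=np.full_like(w, np.inf), where=rates > 0)

u = np.array([[1/3, 2/3]])
mu = np.array([[9.0, 15.0]])
w = np.array([6.0, 20.0])
print(processing_times(u, mu, w))                     # [2.0, 2.0], i.e., w1/3 and w2/10
```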
§.§.§ Differentiable event selection
To determine the next event type, the 𝖺𝗋𝗀𝗆𝗂𝗇 operation selects the next event based on the minimum residual event time. This operation does not give a meaningful gradient.
Pitfalls of `naive' smoothing
In order to compute gradients of the sample path, we need to smooth the 𝖺𝗋𝗀𝗆𝗂𝗇 operation. There are multiple ways to do this. A naive approach is to directly replace 𝖺𝗋𝗀𝗆𝗂𝗇 with a differentiable surrogate.
One such popular surrogate is 𝗌𝗈𝖿𝗍𝗆𝗂𝗇. With some inverse temperature β > 0, 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β applied to the vector of residual event times τ∈_+^2n returns a vector in _+^2n,
which we use to replace the event selection operation e_k+1:
e_k+1 = 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ_k),  𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ)_j = e^-βτ_j / ∑_l=1^2n e^-βτ_l.    (Direct Smoothing)
As β→∞, 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β converges to 𝖺𝗋𝗀𝗆𝗂𝗇. Thus, one may expect that for large β, 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β would give a reliable differentiable surrogate.
However, queuing networks involve a unique challenge for this approach: one typically considers very long trajectories when evaluating performance in queuing networks, as one is often interested in long-run average or steady-state behavior. Thus, even if one sets β to be very large to closely approximate , the smoothing nonetheless results in `unphysical', real-valued queue lengths instead of integral ones, and small discrepancies can accumulate over these long horizons and lead to entirely different sample paths.
This can be observed concretely in the left panel of Figure <ref>, which displays the sample paths of the total queueing length processes for a criss-cross queueing network (in Example <ref>) under the original dynamic and under direct smoothing, using the same inter-arrival and service times. We observe that when setting the inverse temperature β = 1, the sample path under direct smoothing is completely different from the original one, even though all of the stochastic inputs are the same.
Even when setting a very high inverse temperature, i.e., β = 1000, for which 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β is almost identical to 𝖺𝗋𝗀𝗆𝗂𝗇, the trajectory veers off after only a hundred steps.
This can greatly affect the quality of the gradient estimation.
We observe in the right panel of Figure <ref> that, across a range of inverse temperatures, the average cosine similarity between the surrogate gradient and the true gradient (defined in (<ref>)) is somewhat low.
In the same plot, we also show the average cosine similarity between our proposed gradient estimator, which we will discuss shortly, and the true gradient. Our proposed approach substantially improves the gradient estimation accuracy, i.e., the average cosine similarity is close to 1, and as we will show later, it does so across a wide range of inverse temperatures.
Our approach: `straight-through' estimation
The failure of the direct smoothing approach highlights the importance of preserving the original dynamics, as errors can quickly build up even if the differentiable surrogate is only slightly off. We propose a simple but crucial adjustment to the direct smoothing approach, which leads to huge improvements in the quality of gradient estimation.
Instead of replacing the 𝖺𝗋𝗀𝗆𝗂𝗇 operation with 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β when generating the sample path, we preserve the original dynamics as is, and only replace the Jacobian of 𝖺𝗋𝗀𝗆𝗂𝗇 with the Jacobian of 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β when we query gradients. In short, we introduce a gradient operator ∇ such that
e_k+1 = 𝖺𝗋𝗀𝗆𝗂𝗇(τ_k),  ∇ e_k+1 = ∇𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ_k),
where ∇ is taken with respect to the input τ_k.
This is known as the `straight-through' trick in the machine learning literature and is a standard approach for computing approximate gradients in discrete environments <cit.>. To the best of our knowledge, this is the first application of this gradient estimation strategy for discrete-event dynamical systems. Using this strategy, we can use the chain rule to compute gradients of performance metrics that depend on the event selection.
Consider any differentiable function g of e_k+1,
∇ g(e_k+1) = ∂ g(e_k+1)/∂ e_k+1·∇ e_k+1·∇τ_k = ∂ g(e_k+1)/∂ e_k+1·∇𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ_k) ·∇τ_k.
In contrast, direct smoothing involves the derivative ∂ g(ẽ_k+1)/∂ẽ_k+1 where ẽ_k+1 = 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ_k). Evaluating the gradient of g at ẽ_k+1 rather than at e_k+1 introduces additional bias.
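A minimal sketch of this straight-through construction, assuming PyTorch (the function name is ours, and the snippet omits the rest of the simulator):

```python
import torch
import torch.nn.functional as F

def straight_through_argmin(tau, beta=1.0):
    soft = F.softmax(-beta * tau, dim=-1)                          # softmin_beta(tau)
    hard = F.one_hot(torch.argmin(tau, dim=-1), tau.shape[-1]).to(tau.dtype)
    return hard + (soft - soft.detach())                           # value = hard, gradient = soft

tau = torch.tensor([0.7, 0.3, 1.2], requires_grad=True)
e = straight_through_argmin(tau, beta=1.0)
print(e)                                                           # one-hot selection of event 1
e[1].backward()                                                    # gradient of the selected entry
print(tau.grad)                                                    # equals d softmin_1 / d tau
```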
With these relaxations, the transition function of the system is differentiable. We can now compute a gradient of the sample path cost J_N(θ; ξ_1:N), using the chain rule on the transition functions. Given a sample path of states s_k = (x_k, _k), actions u_k = π_θ(x_k), and ξ_k = (T_k, W_k), the pathwise gradient of the sample path cost is J_N(θ; ξ_1:N) with respect to an action u_k is,
∇_u_k J_N(θ; ξ_1:N) = ∇_u_k c(x_k,u_k) [current cost] + ∑_t=k+1^N[ ∇_x c(x_t, u_t) + ∇_u c(x_t, u_t)∇_x_tπ_θ(x_t) ]∇_u_k x_t [future costs].
The gradient consists of the sensitivity of the current cost with respect to the action as well as the sensitivity of future costs via the current action's impact on future states. The policy gradient with respect to θ can then be computed as
∇_θ J_N(θ; ξ_1:N) = ∑_k=1^N. ∇_u_k J_N(θ; ξ_1:N) |_u_k = π_θ(x_k)∇_θπ_θ(x_k).
As a result of the straight-through trick, we do not alter the event selection operation _k's and thus the state trajectory {x_k}_k=1^N is unchanged.
We refer to the gradient estimator (<ref>) as the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 policy gradient estimator. Although this formula involves iterated products of several gradient expressions, these can be computed efficiently through reverse-mode auto-differentiation using libraries such as PyTorch <cit.> or Jax <cit.> with O(N) time complexity in the time horizon N. This time complexity is of the same order as the forward pass, i.e., generating the sample path itself, and is equivalent to the time complexity of 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤. The policy gradient algorithm with the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient is summarized in Algorithm <ref>. In Section <ref>, we perform a careful empirical comparison of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 and 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤, and find that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 can lead to orders of magnitude improvements in sample efficiency.
It is important to re-emphasize that when computing the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient, we evaluate the policy differently than we would for 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤. Instead of drawing a random discrete action u_k∼π_θ(x), we use the probabilities output by the policy directly as a fractional routing matrix in 𝒰,
𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤: u_k∼π_θ(x_k), u_k∈𝒰
𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤: u_k = π_θ(x_k), u_k∈𝒰.
However, this is only for gradient computation. When we evaluate the policy, we draw u_k∼π_θ(x_k).
Bias-variance trade-off for inverse temperature: One-step analysis
The inverse temperature β is a key hyperparameter that determines the fidelity of the _β approximation to . The choice of β poses a bias-variance trade-off, with a higher β leading to a smaller bias but a higher variance and a smaller β incurring a higher bias but a lower variance.
In general, it is difficult to assess the bias since we often do not know the true gradient. However, for some simple examples, we can evaluate the true gradient explicitly. We next analyze the gradient of the one-step transition of the M/M/1 queue with respect to the service rate μ,
∇_μ𝔼[x_k+1 - x_k] = ∇_μ𝔼[D e_k+1] = ∇_μ (λ - μ)/(λ + μ) = -2λ/(λ + μ)^2.
This permits an exact calculation of the mean and variance of our proposed pathwise gradient estimator. Although we can derive analytical expressions for these quantities, we present the leading order asymptotics as β→∞ for conciseness of presentation. While it is straightforward to see that almost surely 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ)→𝖺𝗋𝗀𝗆𝗂𝗇(τ) as β→∞, it is much less clear whether the gradient converges, i.e., whether 𝔼[∇𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β(τ)] →∇𝔼[𝖺𝗋𝗀𝗆𝗂𝗇(τ)]. Since 𝖺𝗋𝗀𝗆𝗂𝗇 has a gradient of zero almost everywhere, the expectation and gradient operators cannot be interchanged.
Instead, we analyze the expectations directly using properties of the exponential distribution.
Let ∇_μ (x_k+1 - x_k) = ∇_μ De_k+1 denote the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator of the one-step transition of the M/M/1 queue with respect to μ. For x_k≥ 1, as β→∞,
𝔼[∇_μ De_k+1] - ∇_μ𝔼[ De_k+1] = β^-2·π^2λ(μ^2 - λ^2 + 2μλ)/(6(λ + μ)^2) + o(β^-2),
𝖵𝖺𝗋(∇_μ De_k+1) = β·4λ/(μ(λ + μ)^2) + o(β).
See section <ref> for the proof.
As β→∞, the bias is O(1/β^2) while the variance is O(β). This means that one can significantly reduce the bias with only a moderate size of β. The left panel of Figure <ref> shows the bias-variance trade-off of ∇_μ (x_k+1 - x_k) for various inverse temperatures β. The blue line is based on the analytical expression for the bias-variance trade-off curve. We observe that with the inverse temperature β∈ [0.5, 2], both the bias and the variance are reasonably small.
Suppose we compute the sample average of B iid samples of ∇_μ De_k+1, which are denoted as ∇_μ De_k+1,i, i=1,…, B. In particular, the estimator takes the form 1/B∑_i=1^B∇_μ De_k+1,i. The choice of β that minimizes the mean-squared error (MSE) of the estimator
is β^* = O(B^1/5) and MSE(β^*)=O(B^-4/5).
The 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator provides a more statistically efficient trade-off than alternative estimators. As an example, a standard gradient estimator is the finite-difference estimator in which one evaluates the one-step transition at μ - h and μ + h for some small h ∈ (0,∞), and the estimator is constructed as
1/B∑_i=1^B ( De_k+1,i(μ + h) - De_k+1,i(μ - h) )/(2h),
where De_k+1,i(μ + h)'s are iid samples of De_k+1(μ + h).
If we set h = 1/β, it is well-known that the bias scales as O(1/β^2) while the variance scales as O(β^2). The choice of β that minimizes the MSE is β^* = O(B^1/6) and MSE(β^*)=O(B^-1/3).
While this analysis is restricted to the one-step transition of the M/M/1 queue, these insights hold for more general systems and control problems. The right panel of Figure <ref> displays the average cosine similarity (defined in (<ref>)) between the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator and the true gradient for a policy gradient task in a 6-class reentrant network across different congestion levels and for different inverse temperatures. We observe that for a wide range of inverse temperatures, β∈{0.2, 0.5, 1, 2}, the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator has near-perfect similarity with the true gradient, while a very large inverse temperature suffers due to high variance. This indicates that while there is a bias-variance trade-off, the performance of the gradient estimator is not sensitive to the choice of the inverse temperature within a reasonable range. In our numerical experiments, we
find that one can get good performance using the same inverse temperature across different settings without the need to tune it for each setting.
§ EMPIRICAL EVALUATION OF THE GRADIENTS
In the previous section, we introduced the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator for computing gradients of queuing performance metrics with respect to routing actions or routing policy parameters. In this section, we study the statistical properties of these gradient estimators and their efficacy in downstream policy optimization tasks. We use 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 as the baseline gradient estimator. First, in section <ref> we empirically study the estimation quality across a range of queuing networks, traffic intensities, and policies. After that, in section <ref>, we investigate their performance in a scheduling task: learning the cμ rule in a multi-class queuing network. Finally, we demonstrate the applicability of our framework beyond scheduling: we investigate the performance of the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator for admission control tasks in section <ref>.
§.§ Gradient Estimation Efficiency
In general, it is challenging to theoretically compare the statistical properties of different gradient estimators, and very few results exist for systems beyond the M/M/1 queue (see section <ref> for a theoretical comparison between 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 and 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 for the M/M/1 queue). For this reason, we focus on numerical experiments across a range of environments and queuing policies typically considered in the queuing literature. Specifically, we will be comparing the statistical properties of the
𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator with the baseline 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator. While 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 introduces bias into the estimation, we find in our experiments that this bias is small in practice and remains small even over long time horizons. At the same time, the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator delivers dramatic reductions in variance, achieving greater accuracy with a single trajectory than 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 with 10^3 trajectories.
First, recall that a policy π(x) maps queue-lengths x to an assignment between servers and queues, represented by an m × n matrix in 𝒰 (allowing for fractional routing matrices). We consider three classical queuing policies: priority policies <cit.>, MaxWeight <cit.>, and MaxPressure <cit.>. Each of these methods selects the routing that solves an optimization problem. This means that the routing generated by the policy is deterministic given the state and is not differentiable in the policy parameters. In order to apply either 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 or the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator to compute a policy gradient, we require differentiable surrogates of these policies. To this end, we define softened and parameterized variants of these policies, denoted as soft priority (𝗌𝖯𝖱), soft MaxWeight (𝗌𝖬𝖶), and soft MaxPressure (𝗌𝖬𝖯),
π^𝗌𝖯𝖱_θ(x)_i = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑({θ_jμ_ij}_j=1^n), π^𝗌𝖬𝖶_θ(x)_i = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑({θ_j x_jμ_ij}_j=1^n), π^𝗌𝖬𝖯_θ(x)_i = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑((μ⊙ R(θ⊙ x))_i),
where θ∈_+^n is a vector of costs/weights for each queue, and μ denotes the matrix of service rates with μ_i∈_+^n denoting the service rates associated with server i.
The operation ⊙ refers to element-wise multiplication and the 𝗌𝗈𝖿𝗍𝗆𝖺𝗑 operation maps a vector a ∈^n into a set of probabilities 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(a)_i = e^a_i / ∑_j=1^n e^a_j.
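As an illustration, here is a minimal sketch (ours, assuming PyTorch and a single-server example) of the soft MaxWeight parameterization above; the shapes and values are illustrative.

```python
import torch

def soft_maxweight(theta, x, mu):
    # scores_{ij} = theta_j * x_j * mu_{ij}; softmax over queues for each server row
    scores = theta * x * mu                  # broadcasting over the rows of mu
    return torch.softmax(scores, dim=-1)     # fractional routing matrix

theta = torch.ones(3, requires_grad=True)
x = torch.tensor([4.0, 0.0, 2.0])            # queue 2 is empty
mu = torch.tensor([[1.0, 2.0, 0.5]])         # one server, three queues
u = soft_maxweight(theta, x, mu)
u.sum().backward()                           # gradients flow into theta
print(u)
```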
We are interested in identifying the parameter θ that minimizes long-run average holding cost where c(x,u) = h^⊤ x.
We use the objective
J_N(θ) = 𝔼[∑_k=0^N-1 c(x_k,π_θ(x_k))τ^*_k+1]
where N is a large enough number to approximate the long-run performance, and the goal of the gradient estimation is to estimate ∇ J_N(θ).
We consider the following environments, which appear throughout our computational experiments and serve as standard benchmarks for control policies in multi-class queuing networks. We describe the network structure in detail in Figure <ref>.
* Criss-cross: The network introduced in Example <ref> (see Figure <ref> (c)).
* Re-entrant 1 (n classes): We consider a family of multi-class re-entrant networks with a varying number of classes, which was studied in <cit.>. The network is composed of several layers and each layer has 3 queues. Jobs processed in one layer are sent to the next layer. Arrivals to the system come to queues 1 and 3 in the first layer while queue 2 receives re-entered jobs from the last layer (see Figure <ref> (a) for a two-layer example).
* Re-entrant 2 (n classes): We consider another family of re-entrant network architecture that was studied in <cit.>. It also consists of multiple layers with 3 queues in each layer. It differs from the Re-entrant 1 environment in that only queue 1 receives external arrivals while queues 2 and 3 receive re-entered jobs from the last layer (see Figure <ref> (b) for a two-layer example).
For a gradient estimator ĝ, the main performance metric we evaluate is the expected cosine similarity with the ground-truth gradient,
𝔼[𝖼𝗈𝗌( ĝ, ∇ J_N(θ) )] = 𝔼[ ⟨ĝ, ∇ J_N(θ) ⟩/(‖ĝ‖·‖∇ J_N(θ)‖) ] ∈ [-1,1],
where the expectation is over randomness in ĝ. The higher the similarity is, the more aligned ĝ is to the direction of ∇ J_N(θ). This metric incorporates both bias and variance of the gradient estimator. If the gradient estimator is unbiased but has a high variance, then each individual realization of ĝ is likely to have low correlation with the true gradient, so the average cosine similarity will be small even if 𝔼[ĝ]=∇ J_N(θ). At the same time, if the gradient estimator has a low variance but a high bias, then the expected cosine similarity could still be small if 𝖼𝗈𝗌(𝔼[ĝ], ∇ J_N(θ)) is small. We focus on this metric, because it directly determines how informative the gradient estimates are when applying various gradient descent algorithms.
For our experiments, we evaluate (a close approximation of) the ground-truth gradient ∇ J_N(θ)
by using the unbiased 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 gradient estimator over exceedingly many trajectories (in our case, 10^6 trajectories).
We compare the similarity of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with that of 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤.
We denote B as the number of trajectories we use to calculate each 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 or 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 gradient estimator:
∇̂_θ J_N(θ; ξ^(1)_1:N)  (𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with B=1),   ∇̂^𝖱_θ J_N,B(θ; ξ_1:N) := 1/B∑_b = 1^B∇̂^𝖱_θ J_N(θ; ξ^(b)_1:N)  (𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 with B trajectories).
We compute the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient with only B = 1 trajectory, while the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 gradient is calculated using B = 10^3 trajectories.
For each policy and setting, we compute these gradients for 100 different randomly generated values of θ, which are drawn from a 𝖫𝗈𝗀𝗇𝗈𝗋𝗆𝖺𝗅(0,1) distribution (as the parameters must be positive in these policies). In total, we compare the gradients in 10,080 unique parameter settings, and each gradient estimator is computed 100 times to evaluate the average cosine similarity. When computing the policy gradient, we consider a time horizon of N = 10^3 steps.
Figure <ref> compares the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator with B = 1 trajectory with the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator averaged over B = 10^3 trajectories. For the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator, costs are computed with a discount factor γ = 0.999, as using a lower discount rate introduced significant bias in the estimation. For 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤, we use an inverse temperature β = 1 for the 𝗌𝗈𝖿𝗍𝗆𝗂𝗇 relaxation across all settings.
Each cell in Figure <ref> corresponds to a (policy, network, traffic-intensity) setting and the cell value is the expected cosine similarity of the estimator averaged across the 100 randomly drawn θ values. We observe that across these diverse settings, the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator consistently has a much higher average cosine similarity with the true gradient despite using only a single trajectory. In fact, for 94.5% of the 10,800 parameter settings, 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 has a higher average cosine similarity with 99% confidence than 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 with B = 1000 trajectories. In most cases, the cosine similarity of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 is close to 1, indicating almost perfect alignment with the true gradient even under high congestion. 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤, on the other hand, suffers greatly from high variance.
Overall, this demonstrates that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 is able to deliver greater estimation accuracy with orders of magnitude fewer samples.
§.§ Learning the cμ rule
Given the strong improvements in estimation efficiency, we turn to evaluate how these translate to a downstream optimization task.
In single-server multi-class queues, it is well-known that the cμ-rule minimizes the long-run average holding cost <cit.>. We assess whether gradient descent with the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 or 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 gradients is capable of converging to the cμ-rule, without knowing the holding costs h or μ and only using feedback from the environment. Despite its simplicity, it has been observed in prior work that this is a difficult learning task, particularly under heavy traffic <cit.>.
We revisit the soft priority policy mentioned before, but with only the parameters θ∈^n, i.e.,
π^𝗌𝖯𝖱_θ(x)_i = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(θ)_i.
We also modify the policy to ensure that it is work-conserving, i.e., not assigning the server to an empty queue (see section <ref> for further discussion).
We consider a family of multi-class single-server queues with n queues. Holding costs are identically h_j = 1. Inter-arrival and service times are exponentially distributed, the service rates are μ_1j = 1 + ϵ j for some ϵ>0, and the arrival rates are identical, λ_j = λ, with λ set such that the traffic intensity ∑_j=1^nλ/μ_1j = ρ
for some ρ∈ (0,1). Note that in this case, the cμ-rule prioritizes queues with higher indices j. We consider a grid of gap sizes ϵ∈{1.0, 0.5, 0.1, 0.05, 0.01 } to adjust the difficulty of the problem; the smaller ϵ is, the harder it is to learn.
We compare 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with B = 1 trajectory and 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 with B = 100 trajectories, using trajectories of N = 1000 steps. In order to isolate the effect of the gradient estimator from the optimization scheme, for both estimators we use an identical stochastic gradient descent scheme with normalized gradients (as these two estimators may differ by a scale factor). That is, for gradient estimator ĝ, the update under step-size α is
θ_t+1 = θ_t - α·ĝ/‖ĝ‖.
We run T gradient descent steps for each gradient estimator. To allow for the fact that different estimators may have different performances across different step sizes, we consider a grid of step sizes α∈{0.01, 0.1, 0.5, 1.0 }. Gradient normalization may prevent convergence, so we use the averaged iterate θ̅_T over the T iterations. We then evaluate the long-run average holding cost under a strict priority policy determined by θ̅_T, i.e., the server prioritizes non-empty queues in decreasing order of θ̅_T,j.
The left panel of Figure <ref> displays the values of θ̅_T after T=50 gradient iterates for 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 and 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 with n=5, ϵ = 0.1, and ρ = 0.99. We observe that while 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 sorts the queues in the correct order (it should be increasing with the queue index), 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 even with B = 100 trajectories fails to prioritize queues with a higher cμ index. Remarkably, we observe in the right panel of the same figure that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with just a single trajectory achieves a lower average holding cost than 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 uniformly across various step sizes and difficulty levels, whereas the performance of 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 varies greatly depending on the step size. This indicates that the improvements in gradient efficiency/accuracy of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 make it more robust to the step-size hyper-parameter.
It is also worth mentioning that when gap size ϵ becomes smaller, it is more difficult to learn. At the same time, since μ_1j's are more similar to each other, the cost difference between different priority rules also diminishes.
§.§ Admission Control
While we focus mainly on scheduling tasks in this work, our gradient estimation framework can also be applied to admission control, which is another fundamental queuing control task <cit.>. To manage congestion, the queuing network may reject new arrivals to the network if the queue lengths are above certain thresholds. The admission or buffer control problem is to select these thresholds to balance the trade-off between managing congestion and ensuring sufficient resource utilization.
Under fixed buffer sizes L = {L_j}_j=1^n, new arrivals to queue j are blocked if x_j = L_j. As a result, the state update is modified as follows,
x_k+1 = min{x_k + De_k+1, L}.
While a small L can greatly reduce congestion, it can impede the system throughput. To account for this, we introduce a cost for rejecting an arrival to the network. Let o_k∈{0,1}^n denote whether an arrival is overflowed, i.e., an arrival is blocked because the buffer is full,
o_k+1 = De_k+1· 1{ x_k + De_k+1 > L}.
Given a fixed routing policy, the control task is to choose the buffer sizes L to minimize the holding and overflow costs:
J_N(L;ξ_1:N) = ∑_k=0^N-1 (h^⊤x_k)τ_k+1^* + b^⊤o_k.
Similar to the routing control problem, despite the fact that overflow is discrete, our gradient estimation framework is capable of computing a gradient of the cost with respect to the buffer sizes, which we denote as ∇_L J_N(L;ξ_1:N), i.e., we can evaluate gradients at integral values of the buffer size and use this to perform updates. Since the buffer sizes must be integral,
we update the buffer sizes via sign gradient descent to preserve integrality:
L_t+1 = L_t - 𝗌𝗂𝗀𝗇( ∇_L J_N(L_t;ξ_1:N) ).
Learning for admission control has been studied in the queuing and simulation literature <cit.>. While exact gradient methods are possible in fluid models <cit.>, the standard approach for discrete queuing models is finite perturbation analysis <cit.>, given the discrete nature of the buffer sizes.
Randomized finite-differences, also known as Simultaneous Perturbation Stochastic Approximation (SPSA) <cit.>, is a popular optimization method for discrete search problems. This method forms a finite-differences gradient through a random perturbation. Let η∼𝖱𝖺𝖽𝖾𝗆𝖺𝖼𝗁𝖾𝗋(n, 1/2) ∈{-1,1 }^n be a random n-dimensional vector where each component is an independent 𝖱𝖺𝖽𝖾𝗆𝖺𝖼𝗁𝖾𝗋 random variable, taking values in {-1,1 } with equal probability. For each perturbation η, we evaluate the objective at L ±η, i.e., J_N(L + η; ξ_1:N) and J_N(L - η; ξ_1:N), using the same sample path for both evaluations to reduce variance. For improved performance, we average the gradient across a batch of B perturbations, i.e., η^(b) for b=1,...,B, drawing a new sample path ξ^(b)_1:N for each perturbation. The batch SPSA gradient is
∇_L^𝖲𝖯𝖲𝖠,B J_N(L) = 1/B∑_b=1^B 1/2( J_N(L + η^(b); ξ^(b)_1:N) - J_N(L - η^(b); ξ^(b)_1:N) ) η^(b).
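A compact sketch of this batched SPSA gradient follows (our own illustration; the black-box objective `toy_J` and its seed-based common-random-numbers interface are assumptions, not the paper's code).

```python
import numpy as np

def spsa_gradient(J, L, B=10, seed=0):
    rng = np.random.default_rng(seed)
    g = np.zeros(L.shape)
    for _ in range(B):
        eta = rng.choice([-1, 1], size=L.shape)      # Rademacher perturbation
        xi = int(rng.integers(1 << 30))              # common random numbers for the pair
        g += 0.5 * (J(L + eta, xi) - J(L - eta, xi)) * eta
    return g / B

# toy objective standing in for J_N(L; xi): noisy quadratic with minimizer L = (5, 5)
toy_J = lambda L, xi: float(((L - 5) ** 2).sum()) + np.random.default_rng(xi).normal()
print(spsa_gradient(toy_J, np.array([2.0, 9.0])))
```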
We update the buffer sizes according to the same sign gradient descent algorithm as in (<ref>).
In comparison with existing works in the queuing literature (e.g. <cit.>), which derive analytical results for simple single-class or multi-class queues, we consider admission control tasks for large, re-entrant networks with multiple job classes. Each job class has its own buffer, resulting in a high-dimensional optimization problem in large networks. Moreover, the buffer size for one job class affects downstream congestion due to the re-entrant nature of the networks. For our experiments, we fix the scheduling policy to be the soft priority policy π^𝗌𝖯𝖱_θ(x) in (<ref>) due to its simplicity and strong performance in our environments. We emphasize however that our framework can be applied to buffer control tasks under any differentiable routing policy, including neural network policies. For each gradient estimator, we perform T=100 iterations of sign gradient descent, and each gradient estimator is computed from trajectories of length N = 1000. For SPSA, we consider batch sizes of B ∈{10, 100, 1000}, whereas we compute 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with only B=1 trajectory. When evaluating the performance, we calculate the long-run average cost with the buffer size determined by the last iterate with a longer horizon N = 10^4 and over 100 trajectories. We also average the results across 50 runs of sign gradient descent.
The left panel of Figure <ref> displays iterates of the sign gradient descent algorithm with 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 for the M/M/1 queue with holding cost h = 1 and overflow cost b = 100. We observe that sign gradient descent with 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 (computed over a horizon of N=1000 steps) quickly reaches the optimal buffer size of L^* = 14 and remains there, oscillating between L = 14 and 15. The right panel shows the iterates for a simple 2-class queue with 1 server, h = 1, and b = 20 under a soft priority policy. We again observe that sign gradient descent with 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 quickly converges to a near-optimal set of buffer sizes.
To see how the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator performs in larger-scale problems, we consider the Re-entrant 1 and Re-entrant 2 networks introduced in Section <ref> with a varying number of job classes (i.e., a varying number of layers).
Figure <ref> compares the last iterate performance of SPSA and 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 for these two families of queuing networks with instances ranging from 6-classes to 21-classes. Holding costs are h = 1 and overflow costs are b = 1000 for all queues. We observe that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 with only a single trajectory is able to outperform SPSA with B = 1000 trajectories for larger networks. Sign gradient descent using SPSA with only B = 10 trajectories is much less stable, with several of the iterations reaching a sub-optimal set of buffer sizes that assign L_j = 0 to several queues. This illustrates the well-known fact that for high-dimensional control problems, zeroth-order methods like SPSA must sample many more trajectories to cover the policy space and their performance can scale sub-optimally in the dimension. Yet 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤, which is an approximate first-order gradient estimator, exhibits much better scalability with dimension and is able to optimize the buffer sizes with much less data.
§ POLICY PARAMETERIZATION
While our gradient estimation framework
offers a sample-efficient alternative for
learning from the environment, there is another practical
issue that degrades the performance of learning algorithms for
queuing network control: instability. Standard model-free RL algorithms are based on the `tabula rasa' principle, which aims to search over a general and unstructured policy class in order to find an optimal policy. However, it has been observed that this approach may be unsuitable for queuing network control. Due to the lack of structure, the policies visited by the algorithm often fail to stabilize the network,
which prevents the algorithm from learning and improving.
As a result, researchers have proposed structural modifications to ensure stability,
including behavior cloning of a stabilizing policy to find a good initialization <cit.>,
switching to a stabilizing policy
if the queue lengths exceed some finite thresholds <cit.>,
or modifying the costs to be stability-aware <cit.>.
We investigate the source of instability in various queuing scheduling problems and find a possible explanation.
We note that many policies obtained by model-free RL algorithms are not work-conserving and often allocate servers to empty queues. A scheduling policy is work-conserving if it always keeps the server(s) busy when there are compatible jobs waiting to be served.
Standard policies such as the cμ-rule, MaxPressure, and MaxWeight are all work-conserving, which partly explains their success in stabilizing complex networks. We treat work conservation as an `inductive bias'
and consider a simple modification to the policy architecture that guarantees this property without sacrificing the flexibility of the policy class.
The de-facto approach for parameterizing policies in deep reinforcement learning is to consider a function ν_θ(x),
which belongs to a general function family, such as neural networks, and outputs real-valued scores.
These scores are then fed into a 𝗌𝗈𝖿𝗍𝗆𝖺𝗑 layer, which converts the scores to probabilities over actions.
Naively, the number of possible routing actions can grow exponentially in the number of queues and servers.
Nonetheless, one can efficiently sample from the action space by having the output of
ν_θ(x) ∈^m× n be a matrix where row i, denoted as ν_θ(x)_i, contains the scores for matching server i to different queues. Then, by applying the 𝗌𝗈𝖿𝗍𝗆𝖺𝗑 to row i, i.e., 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(ν_θ(x)_i), we obtain the probability that server i is assigned to each queue. We then sample the assignment independently for each server to obtain an action in 𝒰.
For the purpose of computing the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator, 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(ν_θ(x)_i) also gives a valid fractional routing in 𝒰. We let 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(ν_θ(x))∈𝒰 denote the matrix formed by applying the softmax to each row in ν_θ(x).
Under this `vanilla' softmax policy, the probability π_θ(x)_ij that server i is routed to queue j
(or alternatively, the fractional capacity server i allocated to j) is given by
π_θ(x)_ij = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑(ν_θ(x))_ij = e^ν_θ(x)_ij/∑_l=1^n e^ν_θ(x)_il.    (Vanilla Softmax)
Many of the policies mentioned earlier can be defined in this way, such as the soft MaxWeight policy, ν_θ(x)_i = {θ_jx_jμ_ij}_j=1^n. This parameterization is highly flexible
and ν_θ(x)
can be the output of a neural network. However, for a general ν_θ(x),
there is no guarantee that π_θ(x)_ij = 0 if x_j = 0. This means that such policies
may waste service capacity by allocating capacity to empty queues even when there are non-empty queues that
server i could work on.
We propose a simple fix, which reshapes the actions produced by the policy.
We refer to this as the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑,
π_θ^𝖶𝖢(x)_ij = 𝗌𝗈𝖿𝗍𝗆𝖺𝗑𝖶𝖢(ν_θ(x))_ij≡e^ν_θ(x)_ij1{x_j > 0}∧ϵ/∑_l=1^n e^ν_θ(x)_il1{x_l > 0}∧ϵ,    (WC-Softmax)
where ∧ is the minimum and ϵ is a small number to prevent
division by zero when the queue lengths are all zero.
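A minimal sketch of this work-conserving softmax, assuming PyTorch, is given below; the exact handling of the ϵ-floor here (a tiny weight on empty queues, kept only so the normalization stays well-defined when every queue is empty) is our reading of the definition above.

```python
import torch

def wc_softmax(scores, x, eps=1e-8):
    # scores: [m, n] raw policy outputs nu_theta(x); x: [n] queue lengths
    mask = torch.clamp((x > 0).to(scores.dtype), min=eps)   # ~1 for non-empty, eps for empty
    weights = torch.exp(scores) * mask
    return weights / weights.sum(dim=-1, keepdim=True)

scores = torch.zeros(1, 3, requires_grad=True)              # one server, three queues
x = torch.tensor([2.0, 0.0, 1.0])
u = wc_softmax(scores, x)
print(u)                                                     # roughly [0.5, 0.0, 0.5]
u[0, 0].backward()                                           # still differentiable in the scores
```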
This parameterization is fully compatible with deep reinforcement learning approaches.
ν_θ(x) can be a neural network and critically, the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑
preserves the differentiability of π^𝖶𝖢_θ(x) with respect to θ.
As a result, the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 and 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimators can both be computed under this parameterization, since ∇_θlogπ_θ^𝖶𝖢(x) and ∇_θπ_θ^𝖶𝖢(x)
both exist.
This simple modification delivers substantial improvements in performance. Figure <ref> compares the average holding cost across policy iterations for PPO without any modifications, PPO initialized with a policy trained to imitate MaxWeight, and PPO with the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑. Despite its empirical success in many other reinforcement learning problems, PPO without any modifications fails to stabilize the network and incurs an exceedingly high cost. It performs much better under an initial behavioral cloning step, which achieves stability but still underperforms the cμ-rule. On the other hand, with the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑, even the randomly initialized policy stabilizes the network and outperforms the cμ-rule over the course of training. This illustrates that an appropriate choice of policy architecture, motivated by queuing theory, is decisive in enabling learning-based approaches to succeed. As a result, for all of the policy optimization experiments in sections <ref>, <ref>, and <ref>, we equip the policy parameterization with the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑.
§ SCHEDULING FOR MULTI-CLASS QUEUING NETWORKS: BENCHMARKS
We now benchmark the performance of the policies obtained by policy gradient (Algorithm <ref>) with standard queuing policies and policies obtained using state-of-the-art model-free reinforcement learning algorithms.
We consider networks displayed in Figure <ref>, which were briefly described in section <ref> and appeared in previous works <cit.>. <cit.> used Criss-cross and Re-entrant-1 networks to show that 𝖯𝖯𝖮 can outperform standard queuing policies. <cit.> consider the Re-entrant-2 network, but did not include any RL baselines. We consider networks with exponential inter-arrival times and workloads in order to compare with previous results. We also consider hyper-exponential distributions to model settings with higher coefficients of variation, as has been observed in real applications <cit.>. The hyper-exponential distribution X∼𝖧𝗒𝗉𝖾𝗋𝖤𝗑𝗉(λ_1,λ_2,p) is a mixture of exponential distributions:
X d= Y· E_1 + (1 - Y) · E_2,
for Y∼𝖡𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂(p), E_1∼𝖤𝗑𝗉(λ_1), E_2∼𝖤𝗑𝗉(λ_2), and all are drawn independently of each other. We calibrate the parameters of the hyper-exponential distribution to have the same mean as the corresponding exponential distribution, but with a 1.5x higher variance.
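For reference, a small sketch (ours) of sampling such hyper-exponential event times; the parameter values below are only illustrative, not the calibration used in the experiments.

```python
import numpy as np

def hyperexp(lam1, lam2, p, size, seed=0):
    rng = np.random.default_rng(seed)
    mix = rng.random(size) < p                       # choose the first branch with prob. p
    return np.where(mix, rng.exponential(1 / lam1, size), rng.exponential(1 / lam2, size))

x = hyperexp(2.0, 0.5, 0.4, 100_000)
print(x.mean())                                      # close to p/lam1 + (1-p)/lam2 = 1.4
```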
Our empirical validation goes beyond the typical settings studied in the reinforcement learning for queuing literature, and is enabled by our discrete-event simulation framework.
We now describe the standard queuing policies considered in this section, which can all be expressed in the form
π(x) = 𝖺𝗋𝗀𝗆𝖺𝗑_u∈𝒰∑_i∈[m],j∈[n]ρ_ij(x)u_ij
for some index ρ∈^m × n that differs per method.
* cμ-rule <cit.>: ρ_ij = h_jμ_ij1{x_j > 0 }. Servers prioritize queues with a higher holding cost and a larger service rate.
* MaxWeight <cit.>: ρ_ij(x) = h_jμ_ijx_j. Servers prioritize queues that are longer, with a higher holding cost, and a larger service rate.
* MaxPressure <cit.>: ρ_ij= ∑_ℓ=1^nμ_ijR_jℓh_ℓx_ℓ. MaxPressure is a modification of MaxWeight, in the sense that it takes workload externality within the network into account through the R_jl terms, e.g., processing a class j job may generate a new class j' job.
* Fluid <cit.>: The scheduling policy is based on the optimal sequence of actions in the fluid relaxation, which approximates the average evolution of stochastic queue lengths by deterministic ordinary differential equations. We aim to solve the continuous-time problem
min_u̅ ∫_0^T h^⊤ x(t)dt
s.t. ẋ(t) = λ - R(μ⊙u̅(t)), ∀ t∈ [0,T]
x(t) ≥ 0, ∀ t∈ [0,T]
u̅(t) ∈𝒰, ∀ t∈ [0,T].
For tractability, we discretize the problem with time increment Δ t > 0 and horizon H=T/Δ t, and solve as a linear program.
We then set u_k = u(t_k). The linear program is re-solved periodically to improve fidelity with the original stochastic dynamics.
We next describe the deep reinforcement learning methods considered in this section:
* 𝖯𝖯𝖮-𝖣𝖦 <cit.>: 𝖯𝖯𝖮 is a standard model-free policy gradient method <cit.>. <cit.> implement 𝖯𝖯𝖮 for multi-class queuing networks and show that the policies obtained outperform several standard queuing policies.
Their implementation includes an initial behavioral cloning step for stability and a carefully designed variance-reducing policy gradient estimation. We report the results from their paper, although in our experiments we include several problem instances not evaluated in their work.
* 𝖯𝖯𝖮-𝖶𝖢 (ours): Given the empirical success of 𝖯𝖯𝖮 with the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑, we use this algorithm as the main RL benchmark. For this policy, we use the same neural network architecture, hyper-parameters, and variance reduction methods as 𝖯𝖯𝖮-𝖣𝖦 <cit.>.
* 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 policy gradient (Algorithm <ref>): Trains a neural network policy with the work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑 using the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 policy gradient estimator. We use an inverse temperature of β = 10 for all experiments in this section.
While value-based methods have also been considered for queuing network control <cit.>, our focus in this work is on benchmarking policy gradient algorithms.
For the reinforcement learning policies, we train each method over 100 episodes, each consisting of N = 50,000 environment steps. Following <cit.>'s implementation, 𝖯𝖯𝖮-𝖶𝖢 was trained with B = 50 actors, while 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 was trained only with B = 1 actor, which means that 𝖯𝖯𝖮-based methods used 50x more trajectories than 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤. See Appendix <ref> for more details on the training process.
To evaluate each scheduling policy, we run 100 parallel episodes starting from empty queues x_0 = 0_n with a long horizon N to estimate the long-run average holding cost (typically N =200,000 steps). As in previous works (e.g. <cit.>), we consider holding costs h = 1_n, in which case the holding cost is equivalent to the total queue length. To reiterate, for each policy π we estimate the following quantity
J_N(π) = 𝔼[1/t_N∑_k=0^N-1 (1_n^⊤x_k)τ^*_k+1] = 𝔼[1/t_N∫^t_N_0∑_i=1^n x_i(t)dt ]
where t_N is the time of the Nth event. We measure the standard deviation across the 100 episodes to form 95% confidence intervals. For the reinforcement learning policies, we report the average holding cost for the best policy encountered during training.
Tables 1-5 display the results of our benchmarking across the problem instances discussed before. The column `% improve' records the relative reduction in holding cost achieved by 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 over the best standard queuing policy (either cμ, MaxWeight, MaxPressure, or Fluid). Our main observations on the relative performance of the standard policies and policies obtained from reinforcement learning methods are summarized as follows.
* 𝖯𝖯𝖮-𝖶𝖢 is a strong reinforcement-learning benchmark. 𝖯𝖯𝖮 with our proposed work-conserving 𝗌𝗈𝖿𝗍𝗆𝖺𝗑 is able to efficiently find policies that outperform standard policies across all problem instances, as well as the 𝖯𝖯𝖮-𝖣𝖦 policies featured in <cit.> (under the same policy network and hyper-parameters). This illustrates that simply ensuring work-conservation is a powerful inductive bias that delivers stability.
* 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 policy gradient outperforms 𝖯𝖯𝖮 in larger networks, using 50x less data. We observe that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 and 𝖯𝖯𝖮-𝖶𝖢 achieve similar performances when the number of classes is small. However, when the number of job classes gets larger, 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 consistently outperforms 𝖯𝖯𝖮-𝖶𝖢, as seen in Figure <ref>. This is likely due to the
sample efficiency gained by the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient, enabling the algorithm to find better policies with less data.
* 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 achieves large performance gains over 𝖯𝖯𝖮 for higher-variance problem instances, using 50x less data. We observe that for the Re-entrant-1 networks with hyper-exponential noise, the reduction in holding cost of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 relative to 𝖯𝖯𝖮-𝖶𝖢 is often equivalent to or even larger than the cost reduction of 𝖯𝖯𝖮-𝖶𝖢 relative to the cμ-rule. This illustrates that even among optimized RL policies, there can be significant performance differences for difficult problem instances, and the sample efficiency of 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 is particularly useful in noisier environments.
* While standard queuing methods work well, RL methods meaningfully improve performance in hard instances. We observe in the `% improve' column that 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 achieves a 3-20% improvement over the best standard queuing policy in each setting.
Altogether, these results illustrate that policy gradient with the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator and the work-conserving policy architecture can learn effective queuing network control policies with substantially less data than model-free policy gradient algorithms for large networks with high-variance event times, mirroring real-world systems.
In particular, the improved sample efficiency of the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradient estimator and the stability brought by the work-conserving policy architecture are the keys to enabling learning in large-scale systems with realistic data requirements.
§ WHY IS 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 SAMPLE-INEFFICIENT? A THEORETICAL CASE STUDY FOR THE M/M/1 QUEUE
In this section, we provide a theoretical case study explaining how
𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 gradients are able to learn more from a single observed trajectory compared to
𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤. We focus on the special case of the M/M/1 queue to explain how
𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 and its actor-critic variants utilizing baselines/advantages suffer from
sample inefficiency.
Although we are only able to analyze a substantially simpler setting than the control problems in general multi-class queueing networks we are interested in,
our theoretical results illustrate the essential statistical benefits of pathwise gradient estimators.
We highlight that while 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 applies to virtually any setting by relying on random exploration,
it fundamentally struggles to assign credit to actions, especially in noisy environments. 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤, on the other hand, is much better at assigning credit to actions. This allows us to crystallize why we see such a large improvement in sample efficiency in sections <ref> and <ref>. We also believe our result may be of broader interest in reinforcement learning, because it illustrates a practically relevant instance where 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤, even with an optimal baseline, is provably sub-optimal.
We consider the M/M/1 queue with a fixed arrival rate λ under service rate control u=μ > λ. This setting permits an analytic approach to showing that the 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator has a sub-optimally large variance, particularly for congested systems when ρ = λ/μ→ 1. On the other hand, the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 estimator achieves an order of magnitude improvement in estimation efficiency.
In the M/M/1 queue setting, the 𝖯𝖠𝖳𝖧𝖶𝖨𝖲𝖤 approach for general queuing networks reduces to IPA based on the Lindley recursion.
We consider a simple service-rate control problem where the cost is the steady-state average queue length in the M/M/1 queue
Q(μ) := 𝔼_∞[x(t)] = λ/(μ - λ) = ρ/(1-ρ).
Given that the service rate is continuous, it is natural to consider a policy that randomizes over [μ - h, μ]. One such option is a Beta-distributed policy π_θ: A = μ - hY, where Y ∼Beta(θ,1) and h>0. As θ→∞, the policy frequently sets service rates close to μ and as θ→ 0, it concentrates more probability mass on service rates close to μ - h. The task is to estimate the following policy gradient
∇_θ J(θ) = ∇_A∼π_θ[Q(A)].
The 𝖱𝖤𝖨𝖭𝖥𝖮𝖱𝖢𝖤 estimator of ∇_θJ(θ) involves sampling a random service rate from the policy π_θ, and then estimating the steady-state queue length from a trajectory under that service rate. For a trajectory with N steps, we denote the steady-state queue length estimator as Q_N(A). Then, the gradient estimator takes the form
∇^𝖱 J_N(θ; ξ_1:N)
= Q_N(μ - hY) ∇_θlogπ_θ(Y)
= Q_N(μ - hY) ( log Y + 1/θ).
The standard estimator for the steady-state queue length is simply the queue length averaged over a sample path:
Q_N(a) = 1/N∑_k=1^Nx_kτ_k+1^* when A=a. As long as μ - h > λ, it is known that as N→∞, Q_N(a) → Q(a), which implies that ∇^𝖱 J_N(θ; ξ_1:N) →∇ J(θ).
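To make this concrete, the short Python sketch below simulates an M/M/1 queue and forms the REINFORCE estimate just described. It is only an illustration: the parameter values are arbitrary, and the steady-state queue length is estimated by a simple time average, a minor variant of the estimator Q_N above.

import numpy as np

rng = np.random.default_rng(0)

def avg_queue_length(lam, mu, N):
    # Time-average queue length of an M/M/1 queue over N simulated events.
    x, total_time, area = 0, 0.0, 0.0
    for _ in range(N):
        rate = lam + (mu if x > 0 else 0.0)      # total event rate in state x
        dt = rng.exponential(1.0 / rate)
        area += x * dt
        total_time += dt
        if rng.random() < lam / rate:            # next event is an arrival
            x += 1
        else:                                    # otherwise a departure
            x -= 1
    return area / total_time

def reinforce_estimate(lam=0.8, mu=1.0, h=0.1, theta=2.0, N=100_000):
    # Score-function estimate of d/dtheta E[Q(mu - h*Y)], with Y ~ Beta(theta, 1).
    y = rng.beta(theta, 1.0)
    q_hat = avg_queue_length(lam, mu - h * y, N)
    score = np.log(y) + 1.0 / theta              # d/dtheta log pi_theta(y)
    return q_hat * score

print(np.mean([reinforce_estimate() for _ in range(10)]))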
On the other hand, the pathwise estimator utilizes the structure of the single-server queue. First, by inverse transform sampling, Y =_d F_θ^-1(ω), where ω∼Uniform(0,1) and F_θ^-1(ω) = ω^1/θ. Then, we can substitute A = μ - hω^1/θ. Since Q(μ) is differentiable and the derivative is integrable, we can change the order of differentiation and integration
∇_θ J(θ)
= ∇_θ_ω[Q(μ - hω^1/θ)]
= -_ω[∇ Q(μ - hω^1/θ) · h ∇_θω^1/θ].
The preceding display involves the gradient of the steady-state queue-length Q(μ) with respect to the service rate μ, i.e., ∇ Q(μ).
For the M/M/1 queue, there are consistent sample-path estimators of ∇ Q(μ). One such estimator uses the fact that by Little's law Q(μ) = _∞[x(t)] = λ_∞[w(t)] =: λ W(μ) where W(μ) is the steady-state waiting time. The waiting time process W_i, which denotes the waiting time of the ith job arriving to the system, has the following dynamics, known as the Lindley recursion:
W_i+1 = ( W_i - T_i+1 + S_i/μ)^+,
where T_i+1iid∼Exp(1) is the inter-arrival time of between the ith job and (i+1)th job, and S_iiid∼Exp(1) is the workload of the ith job. Crucially, this stochastic recursion specifies how the service rate affects the waiting time along the sample path, which enables one to derive a pathwise derivative via the recursion:
∇W_i+1 = ( -S_i/μ^2 + ∇ W_i ) 1{ W_i+1 > 0 },
where ∇ W_i together with W_i form a Markov chain following the above recursion. By averaging this gradient across jobs and using Little's law, we have the following gradient estimator for ∇ Q, which we denote as ∇ Q_N(μ):
∇ Q_N(μ) = λ1/L_N∑_i=1^L_N∇ W_i,
where L_N is the number of arrivals that occur during a sample path with N events.
Using this, the policy gradient estimator (a.k.a. IPA estimator) is
∇_θ J_N(θ; ξ_1:N)
= h ·∇ Q_N(μ - h Y) ( 1/θ Y log Y ).
As long as μ - h > λ, it has been established that ∇ Q_N(μ) is asymptotically unbiased, i.e., as N→∞, [∇ Q_N(μ)] →∇ Q(μ), which implies
[∇_θ J_N(θ; ξ_1:N)] →∇ J(θ).
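For comparison, here is a minimal sketch of the pathwise (IPA) estimator built from the Lindley recursion and Little's law, under the same illustrative parameter choices; it is not our production implementation.

import numpy as np

rng = np.random.default_rng(1)

def ipa_dQ_dmu(lam, mu, n_jobs):
    # Pathwise (IPA) estimate of dQ/dmu via the Lindley recursion on waiting times.
    W, dW, grad_sum = 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        S = rng.exponential(1.0)            # workload of the current job
        T = rng.exponential(1.0 / lam)      # inter-arrival time to the next job
        W_next = W + S / mu - T
        if W_next > 0.0:
            dW = dW - S / mu**2             # derivative recursion while the queue stays busy
            W = W_next
        else:
            dW, W = 0.0, 0.0                # the recursion resets at an empty queue
        grad_sum += dW
    return lam * grad_sum / n_jobs          # Little's law: Q(mu) = lam * E[W]

def pathwise_estimate(lam=0.8, mu=1.0, h=0.1, theta=2.0, n_jobs=100_000):
    # IPA estimate of d/dtheta E[Q(mu - h*Y)] with Y ~ Beta(theta, 1).
    y = rng.beta(theta, 1.0)
    dq = ipa_dQ_dmu(lam, mu - h * y, n_jobs)
    return h * dq * (y * np.log(y) / theta)

print(np.mean([pathwise_estimate() for _ in range(10)]))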
Since both the REINFORCE and pathwise gradient estimators give an asymptotically unbiased estimate of ∇ J(θ), we compare them based on their variances, which determine how many samples are needed to reliably estimate the gradient. Although the variances of the estimators are not precisely known for finite N, Q_N and ∇Q_N both satisfy the central limit theorem (CLT) with explicitly characterized asymptotic variances, which we denote as _∞(Q) and _∞(∇Q)
respectively. This implies that the variance of Q_N is approximately _∞(Q)/N. We define _∞(∇ J_N(θ; ξ_1:N)) to be the variance of the gradient estimator when we approximate the variance of Q_N and ∇Q_N using _∞(Q)/N and _∞(∇Q)/N respectively.
It is worth reiterating that the REINFORCE estimator only requires an estimate of the cost Q(μ), which does not require any domain knowledge, whereas the pathwise gradient requires an estimate of ∇ Q(μ), which requires a detailed understanding of how the service rate affects the sample path dynamics. Utilizing this structural information can greatly improve the efficiency of gradient estimation. Since ∇_θ J_N(θ; ξ_1:N) = O(h) almost surely,
we have (∇_θJ_N(θ; ξ_1:N)) = O(h^2). On the other hand, the variance of the REINFORCE estimator can be very large even if h is small. To highlight this, consider the extreme case where h = 0, for which the policy gradient ∇ J(θ) is obviously zero since the policy deterministically sets the service rate to μ regardless of θ. Strikingly, REINFORCE does not have zero variance in this case:
If h = 0, then the variances of the estimators are
_∞(∇ J_N(θ; ξ_1:N) ) = 0, _∞(∇^𝖱 J_N(θ; ξ_1:N) )
= Θ( N^-1 (1 - ρ)^-4)
Note that even when h=0, the variance of the REINFORCE estimator can be quite high if the queue is congested, i.e., ρ is close to 1, while the pathwise estimator gives the correct estimate of zero with zero variance.
For non-trivial values of h, we focus on the so-called `heavy-traffic' asymptotic regime with ρ=λ/μ→ 1, which is of major theoretical and practical interest in the study of queues. Estimating steady-state quantities becomes harder as the queue is more congested, so (1 - ρ)^-1 emerges as a key scaling term in the variance.
We set h such that h/(μ - λ) < c and h/(μ - λ) → c ∈ (0,1) as ρ→ 1. This resembles the square-root heavy-traffic regime for capacity planning, where the service rate is set to be λ + β√(λ) for some β > 0, and one considers the limit as λ→∞. In this case, if one were choosing a policy over the square-root capacity rules A∈[λ + a √(λ), λ + b √(λ)] for some b>a>0, this is equivalent to setting μ = λ + b√(λ) and h = (b - a) √(λ) = O(√(λ)). Note that if c = 0, the gradient is zero (identical to Observation <ref>), and if c ≥ 1, the queue with service rate μ-h is unstable.
Within this regime, we have the following comparison between the gradient estimators, which utilizes recent results concerning the asymptotic variance of ∇Q_N(μ) <cit.>.
Suppose h = c(μ - λ) for c∈(0,1) as ρ→ 1. Under this scaling
∇ J(θ) ∼ (1-ρ)^-1,
and
_∞(∇ J_N(θ; ξ_1:N) )
= O ( N^-1(1 - ρ)^-3_estimation noise + (1-ρ)^-2_policy randomization)
_∞(∇^𝖱 J_N(θ; ξ_1:N) ) = Θ( N^-1(1 -ρ)^-4
+ (1-ρ)^-2)
See Appendix <ref> for the proof.
Overall, the pathwise estimator is much more sample efficient than the REINFORCE estimator as ρ→ 1, with the estimation-noise term of the variance scaling as (1-ρ)^-3 compared to (1-ρ)^-4.
The first terms in (<ref>) and (<ref>) represent the variance occurring from the Monte Carlo estimation and becomes smaller if one generates a longer sample path (larger N), and it scales as N^-1. The second terms are the variance resulting from randomness in the service rate induced by the policy.
Theorem <ref> illustrates that large improvements in statistical efficiency can be achieved by leveraging the structure of the system dynamics. An existing strategy for incorporating domain knowledge into REINFORCE is to subtract a baseline b from the cost, which preserves unbiasedness:
∇^𝖱𝖡 J_N
(θ, ξ_1:N)
= (Q_N(μ - hY) - b) ( log Y + 1/θ).
In this case, one can characterize the optimal variance-reducing baseline in closed form if one has knowledge of the true cost Q(μ). Under h = c(μ - λ),
b^* = 𝔼[Q(μ-hY) ∇_θlogπ_θ(Y)^2]/𝔼[∇_θlogπ_θ(Y)^2]
= λ/μ - λ[F^2_1(1,θ,1+θ,c) -2θ^2Φ(c,2,θ) + 2cθ^3Φ(c,3,1+θ)]
= O((1-ρ)^-1)
where F^2_1 is the hypergeometric 2F1 function and Φ is the Lerch Φ transcendental. The optimal baseline b^* is of the same order as Q(μ) as ρ→ 1.
Consider the REINFORCE estimator with the optimal baseline b^*. As ρ→ 1, the variance of the estimator scales as
_∞(∇^𝖱𝖡 J_N(θ; ξ_1:N) ) = Θ( N^-1(1 -ρ)^-4
+(1-ρ)^-2)
The proof of Corollary <ref> is provided in Appendix <ref>.
Put simply, since the optimal baseline is a deterministic input, it is unable to improve upon the (1-ρ)^-4 dependence on ρ, which is driven by the statistical properties of Q_N.
This illustrates that the pathwise gradient estimator can offer an order of magnitude improvement in sample efficiency over the REINFORCE estimator, even with an optimized baseline that requires knowledge of the true cost function (and thus precludes the need to estimate the cost in the first place).
Intuitively, the REINFORCE estimator is inefficient because it is unable to leverage the fact that
Q(μ) ≈ Q(μ + ϵ) when ϵ is small. After all, generic MDPs do not have such a structure; a slight change in the action can result in vastly different outcomes. The REINFORCE estimator cannot use the estimate of Q_N(μ) to say anything about Q(μ + ϵ), and must draw a new sample path to estimate Q(μ + ϵ). Meanwhile, using a single sample path, the pathwise estimator can obtain an estimate for Q(μ + ϵ) when ϵ is small enough via Q_N(μ + ϵ)≈Q_N(μ) + ϵ∇Q_N(μ). In this sense, the pathwise estimator can be seen as computing an infinitesimal counterfactual of the outcome under alternative, but similar, actions.
Even though we only study a single server queue here,
we believe the key observations may apply more broadly.
* Higher congestion (ρ→ 1) makes it more challenging to estimate the performance of queueing networks based on the sample path. This applies to both gradient estimators and baselines that could be used to reduce variance.
* It is important to reliably estimate the effects of small changes in the policy, as large changes can potentially cause instability. Pathwise gradient estimators provide a promising way to achieve this. For general networks, even when the dynamics are known, they are often not differentiable, which is what motivates the smoothed gradient estimator developed in this work.
§ CONCLUSION
In this work, we introduce a new framework for policy optimization in queuing network control. This framework uses a novel approach for gradient estimation in discrete-event dynamical systems. Our proposed policy gradient estimator is observed to be orders of magnitude more efficient than model-free RL alternatives such as PPO across an array of carefully designed empirical experiments. In addition, we introduce a new policy architecture, which drastically improves stability while maintaining the flexibility of neural network policies. Altogether, these results illustrate how structural knowledge of queuing networks can be leveraged to accelerate reinforcement learning for queuing control problems.
We next discuss some potential extensions of our approach:
* We consider policies with preemption. Our proposed method can also handle non-preemptive policies by keeping track of the occupied servers as part of the state.
* We focus on scheduling and admission control problems in queuing networks satisfying Assumptions <ref>, but the algorithmic ideas can be extended to more general queuing networks by utilizing a larger state space that contains the residual workloads of all jobs in the network, rather than only the top-of-queue jobs as is done in this work. A higher-dimensional state descriptor is required for more general networks, as multiple jobs in the same queue can be served simultaneously.
* Beyond queuing network control, our methodology can be extended to control problems in other discrete-event dynamical systems. More explicitly, our methodology can handle systems that involve a state update of the form x_k+1 = g(x_k, e_k+1) where g is a differentiable function and e_k+1 is the selected event. Recall that in this work, the state update is linear in x_k and e_k+1: x_k+1 = x_k + De_k+1. We also require that e_k+1 is differentiable almost surely in the action u_k.
§ TRAINING DETAILS
The work-conserving PPO benchmark (WC) was trained over 100 episodes, each consisting of 50,000 environment steps parallelized over 50 actors. We closely follow the hyper-parameters and training setup in <cit.>. We used a discount factor of 0.998, a GAE <cit.> parameter of 0.99, and set the Kullback–Leibler divergence penalty to 0.03. For the value network, we used a batch size of 2,500, while for the policy network, we used the entire rollout buffer (batch size of 50,000) to take one gradient step. We performed 3 PPO gradient updates on the same rollout data. For all the experiments, we used the Adam optimizer with a cosine-decaying warm-up learning rate scheduler. The learning rates were set to 3 × 10^-4 for the value network and 9 × 10^-4 for the policy network. We used 3% of the training horizon to warm up to the maximum learning rate and then cosine-decayed to 1 × 10^-5 for both networks. We used the same neural network architecture as in <cit.>; see Appendix E of <cit.> for more details.
For the pathwise policy gradient, we used the same hyperparameters across all experiments. We use an inverse temperature of β = 10 for the relaxation of the event selection. We update the policy after every episode with the Adam optimizer, using a constant step-size of 5× 10^-4, momentum parameters (0.8, 0.9), and gradient clipping of 1. For the policy neural network, we used a multilayer perceptron with 3 hidden layers, each consisting of 128 hidden units. We use the work-conserving architecture for the final output.
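For concreteness, the softmin-style relaxation of event selection referenced above can be sketched as follows; the helper below is a generic softmin with inverse temperature β and is meant only to illustrate the idea, with the function name being our own.

import numpy as np

def softmin_weights(residual_times, beta=10.0):
    # Differentiable surrogate for selecting the event with the smallest
    # residual time; as beta grows, the weights approach a hard argmin.
    z = np.exp(-beta * (residual_times - residual_times.min()))  # stabilize the exponentials
    return z / z.sum()

# Example: residual inter-arrival time 0.3 vs. residual service time 0.5.
print(softmin_weights(np.array([0.3, 0.5]), beta=10.0))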
§ PROOFS
§.§ Proof of Theorem <ref>
We focus on the case where x_k≥1. Then,
x_k+1 =x_k+1{τ_k^A<w_k/μ}-1{τ_k^A>w_k/μ}
=x_k+e^-βτ_k^A/e^-βτ_k^A+e^-β w_k/μ-e^-β w_k/μ/e^-βτ_k^A+e^-β w_k/μ
=x_k+e^-βτ_k^A-e^-β w_k/μ/e^-βτ_k^A+e^-β w_k/μ.
Since the inter-arrival times and workloads are exponentially distributed, by the memoryless property, we have τ_k^A∼𝖤𝗑𝗉(λ) and
w_k∼𝖤𝗑𝗉(1).
The true gradient is
d/dμ𝔼[x_k+1-x_k]=d/dμλ-μ/λ+μ=-2λ/(λ+μ)^2.
Under our 𝗌𝗈𝖿𝗍𝗆𝗂𝗇_β approximation for the event-selection,
we have
𝔼[d/dμe^-βτ_k^A-e^-β w_k/μ/e^-βτ_k^A+e^-β w_k/μ]
=𝔼[-2βe^-β(τ_k^A+w_k/μ)/(e^-βτ_k^A+e^-β w_k/μ)^2w_k/μ^2]
=-2β/μ𝔼[τ_k^S(e^β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^21{τ_k^A<τ_k^S}+e^β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^21{τ_k^A>τ_k^S})],
for τ_k^S=w_k/μ.
Next, note that
𝔼[τ_k^Se^β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^21{τ_k^A<τ_k^S}]
=𝔼[.𝔼[τ_k^Se^β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^21{τ_k^A<τ_k^S}] |τ_k^A=t]
=𝔼[.𝔼[τ_k^Se^β(t-τ_k^S)/(e^β(t-τ_k^S)+1)^2|τ_k^A=t,τ_k^S>t]ℙ(τ_k^S>t|τ_k^A=t)]
=𝔼[𝔼[(t+S')e^-β S'/(e^-β S'+1)^2]ℙ(τ_k^S>t|τ_k^A=t)]
=𝔼[(tA(β,μ)+B(β,μ))ℙ(τ_k^S>t|τ_k^A=t)]
=𝔼[(τ_k^A A(β,μ)+B(β,μ))e^-μτ_k^A]
=λ/(λ+μ)^2A(β,μ)+λ/(λ+μ)B(β,μ)
where
A(β,μ) =μ(β-μ H(μ/2β)+μ H(μ/2β-1/2))/2β^2
=μ/2β+μ^2/2β^2(H(μ/2β-1/2)-H(μ/2β)_H̃(β,μ))
B(β,μ) =μ/4β^3(2β H(μ/2β)-2β H(μ/2β-1/2)-μψ^(1)(β+μ/2β)+μψ^(1)(2β+μ/2β))
=-μ/2β^2H̃(β,μ)+μ^2/4β^3(ψ^(1)(2β+μ/2β)-ψ^(1)(β+μ/2β)_ψ̃^(1)(β,μ))
Moreover, note that
H(μ/2β) =ψ^(0)(μ/2β+1)+γ
H(μ/2β-1/2) =ψ^(0)(μ/2β+1/2)+γ
Note that
H(μ/2β)-H(μ/2β-1/2) =log(μ/2β+1)+1/μ/2β+1
-log(μ/2β+1/2)+1/μ/2β+1.
Similarly,
𝔼[τ_k^Se^β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^21{τ_k^A>τ_k^S}]
=𝔼[𝔼[.se^-β(τ_k^A-s)/(e^-β(τ_k^A-s)+1)^21{τ_k^A>s}|τ_k^S=s]]
=𝔼[𝔼[.se^-β(τ_k^A-s)/(e^-β(τ_k^A-s)+1)^2|τ_k^A>s,τ_k^S=s]ℙ(τ_k^A>s|τ_k^S=s)]
=𝔼[𝔼[se^-β T'/(e^-β T'+1)^2]ℙ(τ_k^A>s|τ_k^S=s)]
=𝔼[τ_k^S A(β,λ)e^-λτ_k^S]
=μ/(λ+μ)^2 A(β,λ).
Then,
-2β/μ𝔼[τ_k^S(e^β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^21{τ_k^A<τ_k^S}+e^β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^21{τ_k^A>τ_k^S})]
=-2β/μ(λ/(λ+μ)^2A(β,μ)+λ/(λ+μ)B(β,μ)+μ/(λ+μ)^2A(β,λ))
=-2β/μ(λ A(β,μ)+μ A(β,λ)/(λ+μ)^2+λ/(λ+μ)B(β,μ))
=-2λ/(λ+μ)^2-2β/μ(λμ^2H̃(β,μ)+μλ^2H̃(β,λ)/2β^2(λ+μ)^2-λμ/2β^2(λ+μ)H̃(β,μ)+λ/(λ+μ)μ^2/4β^3ψ̃^(1)(β,μ)).
Note that as β→∞,
lim_β→∞H̃(β,μ) =γ+ψ^(0)(1/2),
lim_β→∞ψ̃^(1)(β,μ) =-π^2/3.
This means that the leading order term is O(1/β^2). In particular,
-2β/μ(λμ^2H̃(β,μ)+μλ^2H̃(β,λ)/2β^2(λ+μ)^2-λμH̃(β,μ)/2β^2(λ+μ)) ∼π^2λ^2(μ-λ)/6β^2(λ+μ)^2
Finally, we have the second-order term
-2β/μλ/(λ+μ)μ^2/4β^3ψ̃^(1)(β,μ)∼π^2λμ/6β^2(λ+μ).
Thus, we have the following characterization of the bias:
𝔼[d/dμe^-βτ_k^A-e^-β w_k/μ/e^-βτ_k^A+e^-β w_k/μ]-(-2λ/(λ+μ)^2)∼1/β^2π^2λ(μ^2-λ^2+2μλ)/6(λ+μ)^2+o(1/β^2)
For variance, we have
𝔼[(-2βe^β(τ_k^A+w_k/μ)/(e^βτ_k^A+e^β w_k/μ)^2w_k/μ^2)^2]
=𝔼[4β^2/μ^2e^2β(τ_k^A+τ_k^S)/(e^βτ_k^A+e^βτ_k^S)^4τ_k^S,2]
=4β^2/μ^2𝔼[τ_k^S,2(e^2β(τ_k^A+τ_k^S)/(e^βτ_k^A+e^βτ_k^S)^41{τ_k^A<τ_k^S}+e^2β(τ_k^A+τ_k^S)/(e^βτ_k^A+e^βτ_k^S)^41{τ_k^A>τ_k^S})]
=4β^2/μ^2𝔼[τ_k^S,2(e^2β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^41{τ_k^A<τ_k^S}+e^2β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^41{τ_k^A>τ_k^S})].
Note that
𝔼[τ_k^S,2e^2β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^41{τ_k^A<τ_k^S}]
=𝔼[𝔼[.τ_k^S,2e^2β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^41{t<τ_k^S}]|τ_k^A=t]
=𝔼[𝔼[(t+S')^2e^-2β S'/(e^-β S'+1)^4]ℙ(τ_k^S>t|τ_k^A=t)]
=𝔼[𝔼[(t^2+2tS'+S'^2)e^-2β S'/(e^-β S'+1)^4]ℙ(S_i>t|T_i=t)]
=𝔼[(t^2Ã(β,μ)+tB̃(β,μ)+C̃(β,μ))ℙ(S_i>t|T_i=t)]
=𝔼[(T_i^2Ã(β,μ)+T_iB̃(β,μ)+C̃(β,μ))e^-μ T_i]
=2λ/(λ+μ)^3Ã(β,μ)+λ/(λ+μ)^2B̃(β,μ)+λ/(λ+μ)C̃(β,μ).
Similarly,
𝔼[τ_k^S,2e^2β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^41{τ_k^A>τ_k^S}]
=𝔼[𝔼[.s^2e^2β(s-τ_k^A)/(e^β(s-τ_k^A)+1)^41{τ_k^A>s}|τ_k^S=s]]
=𝔼[𝔼[s^2e^-2β(τ_k^A-s)/(e^-β(τ_k^A-s)+1)^4|τ_k^A>s,τ_k^S=s]ℙ(τ_k^A>s|τ_k^S=s)]
=𝔼[𝔼[s^2e^-2β T'/(e^-β T'+1)^4]ℙ(τ_k^A>s|τ_k^S=s)]
=𝔼[Ã(β,λ)τ_k^S,2e^-λτ_k^S]
=2μ/(λ+μ)^2Ã(β,λ).
Putting the above two parts together, we have
4β^2/μ^2𝔼[τ_k^S,2(e^2β(τ_k^A-τ_k^S)/(e^β(τ_k^A-τ_k^S)+1)^41{τ_k^A<τ_k^S}+e^2β(τ_k^S-τ_k^A)/(e^β(τ_k^S-τ_k^A)+1)^41{τ_k^A>τ_k^S})]
=4β^2/μ^2(2λ/(λ+μ)^3Ã(β,μ)+λ/(λ+μ)^2B̃(β,μ)+λ/(λ+μ)C̃(β,μ)+2μ/(λ+μ)^2Ã(β,λ))
∼4βλ/3μ(λ+μ)^2
§.§ Proof of Theorem <ref>
By assumption, h ≤ c(μ - λ) for some c < 1. First, we develop a bound for _∞ (∇̂^𝖱 J_N(θ; ξ_1:N)).
We can compute the variance by conditioning on the value of Y:
_∞(Q̂_N(μ-hY)·(log Y+1/θ)) =𝔼[_∞(Q̂_N(μ-hy)(log y+1/θ)|Y=y)]
+(𝔼[Q̂_N(μ-hy)(log y+1/θ)|Y=y]).
For the first term, note that the asymptotic variance in the CLT for the ergodic estimator Q̂_N(μ) is _∞(Q̂(μ))=2ρ(1+ρ)/(1-ρ)^4, so that _∞(Q̂_N(μ)) = 2ρ(1+ρ)/N(1-ρ)^4. Since the queue with service rate μ - hy is at least as congested as the queue with rate μ, we have _∞(Q̂_N(μ-hy)) ≥ 2ρ(1+ρ)/N(1-ρ)^4.
Then,
[ _∞(Q̂_N(μ-hy)·(log y+1/θ)|Y = y) ]
=[ (log y+1/θ)^2_∞(Q̂_N(μ-hy)|Y = y) ]
≥[ (log Y+1/θ)^22ρ(1+ρ)/N(1-ρ)^4]
= 1/θ^22ρ(1+ρ)/N(1-ρ)^4=Θ( (1-ρ)^-4),
where the last equality uses the fact that for Y∼Beta(θ,1),
[log Y]=ψ(θ) - ψ(θ + 1)=-1/θ
and
[ (log Y+1/θ)^2]=(log Y) = ψ_1(θ) - ψ_1(θ + 1) = 1/θ^2.
For the second term, we plug in the true estimand as the expectation of Q_N(μ), i.e. Q(μ) = λ/μ - λ. Then,
(𝔼[Q̂_N(μ-hy)(log y+1/θ)|Y=y)])
= (λ/μ - hY - λ( log Y + 1/θ))
= [ (λ/μ - hY - λ)^2( log Y + 1/θ)^2]
- [ (λ/μ - hY - λ)
( log Y + 1/θ) ]^2.
We proceed to evaluate these expectations analytically.
[ (λ/μ - hY - λ)
( log Y + 1/θ) ]
= ρ/1-ρ(
Γ(θ)/Γ(1+θ)
F^2_1(1,θ,1+θ,h/μ-λ)
-
θΦ(h/μ - λ,2,θ)
)
=O((1-ρ)^-1),
where F^2_1 is the hypergeometric 2F1 function and Φ is the Lerch transcendental function.
We also have
[ (λ/μ - hY - λ)^2( log Y + 1/θ)^2]
= λ^2/θ^2(μ - λ)^3(
2(μ - λ)
+ hθ^3Φ(h/μ - λ,2,θ+1 )
+ hθ^3(θ - 1)Φ(h/μ - λ,3,θ+1 ) .
+ (μ - λ)
( θμ - λ/μ - h - λ - (θ - 1) F_1^2(1,θ,1+θ,h/μ-λ)
)
. +2(μ -λ) F_2^3((2,θ,θ),(1+θ,1+θ),h/μ-λ)
)
= O((1-ρ)^-2).
Taking the difference between the above two parts under the limit as (1-ρ)→ 0, we have
(λ/μ - hY - λ( log Y + 1/θ)) =O((1-ρ)^-2).
Next, we develop a bound for _∞ (∇̂ J_N(θ; ξ_1:N)).
_∞(
h ·∇̂ Q_N(μ - h Y) ( 1/θ Y log Y )
) =
[_∞(h ·∇̂ Q_N(μ - h Y) ( 1/θ Y log Y )|Y=y)]
+(𝔼[h ·∇̂ Q_N(μ - h Y) ( 1/θ Y log Y )|Y = y])
For the first term, we can use the fact that |Ylog Y| ≤ 1/e almost surely since Y∈[0,1]. We also use recent results in <cit.>, which compute the asymptotic variance of the IPA estimator:
_∞(∇̂ Q_N(μ)) = 1 + 16ρ + 27 ρ^2 + 2 ρ^3 + 6ρ^4/μ^2N(1 + ρ)(1-ρ)^5≤ 52μ^-2N^-1(1-ρ)^-5
Under service rate μ - hy, the congestion factor satisfies 1 - λ/(μ - hy) = (μ - hy - λ)/(μ - hy) ≥ (μ - cy(μ - λ) - λ)/μ = (1-cy)(1-ρ), and μ-hy≥μ-h≥ (1-c)μ.
So we have the bound,
_∞(h∇̂ Q_N(μ - h Y) ( 1/θ y log y)|Y=y)
≤ h^2θ^-2(y log y)^2_∞(∇̂ Q_N(μ - h y))
≤ h^2μ^-2θ^-2e^-2 52N^-1(1-c)^-7(1-ρ)^-5
= O(N^-1 h^2μ^-2 (1-ρ)^-5)
= O(N^-1 (1-ρ)^-3)
since h = O(1-ρ).
For the second term, we plug in the true estimand as the mean of ∇̂ Q_N(μ), i.e., ∇ Q(μ) = -ρ/μ(1-ρ)^2,
[h ·∇̂ Q_N(μ - h Y) ( 1/θ Y log Y )|Y = y]
= - h1/θ (y log y) λ/(μ - hy - λ)^2.
We next evaluate the variance analytically,
(
- h1/θ (Y log Y) λ/(μ - hY - λ)^2)
=
[
(- h1/θ (Y log Y) λ/(μ - hY - λ)^2)^2]
- [
- h1/θ (Y log Y) λ/(μ - hY - λ)^2]^2
Since
[
- h1/θ (Y log Y) λ/(μ - hY - λ)^2]
= h/(1+θ)^2λ/(μ - λ)^2
F^3_2((2,1+θ,1+θ),
(2+θ, 2+ θ), h/μ -λ)
= O((1-ρ)^-1)
and
[
(- h1/θ (Y log Y) λ/(μ - hY - λ)^2)^2]
=
2hθ^2Γ(θ)^3Γ(2+θ)^-3λ^2/(μ - λ)^2×( F^4_3((3,1+θ,1+θ,1+θ),
(2+θ, 2+ θ,2+ θ), h/μ -λ) .
. -F^4_3((4,1+θ,1+θ,1+θ),
(2+θ, 2+ θ,2+ θ), h/μ -λ) )
= O((1-ρ)^-2),
we have
(
- h1/θ (Y log Y) λ/(μ - hY - λ)^2) = O( (1-ρ)^-2).
§.§ Proof of Corollary <ref>
First, we can explicitly characterize the optimal baseline:
b^* = [Q(μ -hY) ∇_θlogπ_θ(Y)^2]/[∇_θlogπ_θ(Y)^2]
= [Q(μ -hY) ( log Y + 1/θ)^2]/[( log Y + 1/θ)^2]
= λ/μ - λ[F^2_1(1,θ,1+θ,h/μ-λ) -2θ^2Φ(h/μ-λ,2,θ) + 2h/μ-λθ^3Φ(h/μ-λ,3,1+θ)]_b(θ)
= O((1-ρ)^-1).
Next, we plug this into the estimator.
_∞((Q̂_N(μ-hY)-b^*)·(log Y+1/θ)) =𝔼[_∞((Q̂_N(μ-hy)-b^*)(log y+1/θ)|Y=y)]
+(𝔼[(Q̂_N(μ-hy) - b^*)(log y+1/θ)|Y=y]).
Note that since b^* is a constant, the first term, i.e., the mean of the conditional variance given Y, has the same value as in Theorem <ref>. For the second term, note that since h = c(μ - λ),
[ (λ/μ - hY - λ - b^*)
( log Y + 1/θ) ]
= (λ/μ - λ)^2[ (1/1- cY - b(θ))
( log Y + 1/θ) ].
Since [ (1/1- cY - b(θ))
( log Y + 1/θ) ] > 0 and doesn't depend on μ or λ, this confirms that the second term is Θ((1-ρ)^-2)
|
http://arxiv.org/abs/2409.02901v1 | 20240904174452 | Topological Methods in Machine Learning: A Tutorial for Practitioners | [
"Baris Coskunuzer",
"Cüneyt Gürcan Akçora"
] | cs.LG | [
"cs.LG",
"cs.CG",
"math.AT"
] |
[email protected]
0000-0001-7462-8819
University of Texas at Dallas
Richardson
TX
USA
[email protected]
0000-0002-2882-6950
University of Central Florida
Orlando
FL
USA
§ ABSTRACT
Topological Machine Learning (TML) is an emerging field that leverages techniques from algebraic topology to analyze complex data structures in ways that traditional machine learning methods may not capture. This tutorial provides a comprehensive introduction to two key TML techniques, persistent homology and the Mapper algorithm, with an emphasis on practical applications. Persistent homology captures multi-scale topological features such as clusters, loops, and voids, while the Mapper algorithm creates an interpretable graph summarizing high-dimensional data. To enhance accessibility, we adopt a data-centric approach, enabling readers to gain hands-on experience applying these techniques to relevant tasks. We provide step-by-step explanations, implementations, hands-on examples, and case studies to demonstrate how these tools can be applied to real-world problems. The goal is to equip researchers and practitioners with the knowledge and resources to incorporate TML into their work, revealing insights often hidden from conventional machine learning methods. The tutorial code is available at <https://github.com/cakcora/TopologyForML>.
Topological Methods in Machine Learning: A Tutorial for Practitioners
Cüneyt Gürcan Akçora
=====================================================================
§ INTRODUCTION
As the complexity of datasets has grown in recent years, topological methods have emerged as powerful complements to state-of-the-art ML methods. The advent of ML has revolutionized the way we analyze and interpret complex data, yet there remain challenges in capturing the intrinsic topological structures inherent in such data. Traditional ML techniques, while powerful, often fall short in identifying and leveraging these structures, leading to the potential loss of valuable insights. Topological Machine Learning (TML) bridges this gap by integrating concepts from algebraic topology into ML workflows, enabling the discovery of patterns and features that are otherwise elusive. Despite this utility, much of the existing literature on topological methods in ML is highly technical, making it challenging for newcomers to grasp the direct connections to practical applications.
In this tutorial, we introduce the fundamental concepts of topological methods to the machine learning (ML) community and a broader audience interested in integrating these novel approaches into their research. No prior knowledge of topology or ML is required. Our primary aim is to address this pressing need by providing a practical guide for non-experts looking to employ topological techniques in various ML contexts. To maintain accessibility, we will simplify the exposition and offer references to more detailed technical resources for those interested in further exploration.
In this paper, we teach two cornerstone techniques of TML, persistent homology (PH) and the Mapper algorithm, and their effective utilization in ML. Persistent homology offers a robust, multi-scale analysis of topological features, allowing researchers to detect and quantify structures such as clusters, loops, and voids across different scales within the data. This capability is particularly beneficial for understanding the intricate relationships and hierarchies that may exist in complex datasets. From an ML perspective, PH offers a powerful feature extraction method for complex datasets, capturing information that is difficult to obtain with other methods. In this part, we will focus on this aspect of PH, giving hands-on instructions with illustrations on deriving effective topological vectors from complex data. On the other hand, the Mapper algorithm complements this by providing a visual and interpretable summary of high-dimensional data. By constructing a summary graph that mirrors the underlying topology of data, the Mapper algorithm facilitates the exploration and interpretation of data in an intuitive and informative way. This technique is instrumental in uncovering data's geometric and topological essence, making it accessible for practical analysis.
Throughout the paper, we provide comprehensive explanations and step-by-step implementations of these techniques, supported by case studies spanning diverse applications such as cancer diagnosis, shape recognition, genotyping, and drug discovery. We aim to equip researchers and practitioners with the necessary knowledge and tools to integrate TML techniques into their studies, thereby unlocking new avenues for discovery and innovation. By demonstrating the practical utility of persistent homology and the Mapper algorithm, we highlight their potential to reveal insights that traditional methods may overlook, ultimately advancing the field of Machine Learning.
We note that this paper is not intended to be a survey of recent advances in topological data analysis but rather a tutorial aimed at introducing the fundamentals of the topic to ML practitioners. For comprehensive surveys in topological data analysis, refer to <cit.>. For in-depth discussions of these topics, see the excellent textbooks on TDA and computational topology <cit.>.
§.§ Roadmap for the Tutorial
We recommend reading the entire tutorial for a complete understanding, but readers may skip sections not pertinent to their needs. Here, we give a quick overview of the paper's structure.
In <Ref>, we provide the essential topological background needed to follow the concepts discussed throughout the rest of the paper. We aim to introduce these topological concepts, help build an intuition for TDA approaches, and demonstrate how they can be adapted and applied to various needs. For those unfamiliar with topology, we strongly encourage reading our introductory crash course in <Ref>. When discussing homology, we start with a non-technical, brief overview (<Ref>), which should suffice for following the rest of the paper. For readers interested in more technical details of homology computation, we offer a more in-depth explanation in <Ref>.
In <Ref>, we introduce Persistent Homology (PH) in three key sections: constructing filtrations (<Ref>), deriving persistence diagrams (PD) (<Ref>), and applying PDs to ML tasks, including vectorization (<Ref>) and neural networks (<Ref>). The filtration process is tailored to different data formats, as methods differ significantly based on the type of data—whether it’s point clouds (<Ref>), images (<Ref>), or networks (<Ref>). If you focus on a specific data type, you can skip sections on other formats. In each subsection, we also discuss the hyperparameter selection process. Lastly, <Ref> covers available software for PH.
In <Ref>, we give a brief introduction to a niche subfield, Multiparameter Persistence (MP), an effective extension of PH. Again, we outline how to tailor the method to specific data formats for multifiltrations, i.e., point clouds (<Ref>), images (<Ref>), and networks (<Ref>). Next, we outline the state-of-the-art methods on how to integrate MP information in ML pipelines (<Ref>).
In <Ref>, we begin with a friendly introduction to the Mapper method (<Ref>), an effective TML tool for unsupervised learning. We then cover hyperparameter selection for Mapper in <Ref>, which is key for applications. Although the original Mapper algorithm is intended for point clouds, <Ref> explores recent advancements that extend its application to images and networks. Lastly, we provide an overview of available software libraries for Mapper in <Ref>.
In <Ref>, we summarize five real-life applications of these methods from published works, namely shape recognition for point clouds (<Ref>), anomaly forecasting for transaction networks (<Ref>), cancer detection from histopathological images (<Ref>), computer-aided drug discovery (<Ref>), and cancer genotyping from RNA sequencing (<Ref>).
Finally, in <Ref>, we outline potential future directions to advance TML methods, aiming to improve their practical use in ML and discuss strategies for broadening the application of topological methods to new and emerging fields.
§ BACKGROUND
In this section, we provide some background that will be used later. We first introduce several mathematical concepts that will be used in the second part, where we describe homology, which is essential for introducing the methods in subsequent sections.
§.§ A Crash Course on Topology
Topology, a core discipline in mathematics, studies the properties of shapes and spaces that remain unchanged under continuous transformations such as stretching, crumpling, and bending, but not tearing or gluing. In machine learning, topology provides a powerful framework for examining complex data structures flexibly and intuitively. To provide clarity, we begin by defining several key terms essential for following the paper.
§.§.§ Topological Space
From a mathematical perspective, topology refers to the structure of a set. For instance, consider the set of real numbers between 0 and 1, i.e., 𝒳 = {x∈ℝ | 0 ≤ x ≤ 1}. Many readers might immediately consider the closed interval [0,1], whose shape resembles a stick. However, 𝒳 is merely a set with no defined structure yet. This means we do not know which points in 𝒳 are close to or far from others, as there is no concept of neighborhoods.
For example, if we define a distance (metric) on 𝒳 such as d_1(x, y) = 1 for x ≠ y and d_1(x, y) = 0 for x = y, the "shape" of the set would be entirely different, resembling an infinite number of points dispersed in a very high-dimensional space. Conversely, if we use a metric like d_2(x, y) = |x - y| on 𝒳, we retrieve our familiar closed interval [0,1], a stick of length 1. Notice that due to the different metrics used, the neighborhood structures and shapes of (𝒳, d_1) and (𝒳, d_2) are completely different. This brings us to the first principle in Topology: the topology of a set is defined as the complete neighborhood information on the set.
The principle is not without exceptions: a significant research area called point-set topology studies topological spaces that do not necessarily have a metric <cit.>. However, these topologies are beyond our scope. Thus, throughout the paper, a topological space refers to a set (or dataset) equipped with a distance d(·,·), forming what is known as a metric space (𝒳, d). In other words, a set 𝒳 qualifies as a topological space if we can unambiguously identify the neighbors of its points.
Although we might not explicitly reference distance, most datasets inherently possess some form of a metric for detailed analysis. For instance, point clouds use the metric of the space they are embedded in, providing neighboring information. In graphs, adjacent nodes are naturally considered neighbors. Similarly, in images, neighboring pixels are deemed neighbors. This paper treats these examples as topological spaces with their natural topologies. However, specific datasets, such as RNA-sequencing data, lack an inherent metric, requiring users to define a metric to determine which data points are considered close and distant, depending on the context.
*Simplicial Complexes. One special family of topological spaces used throughout our paper will be simplicial complexes. A simplicial complex is a mathematical structure used in topology and combinatorics to study the properties of shapes and spaces in a discrete manner. It consists of vertices, edges, and higher-dimensional simplices such as triangles, tetrahedra, and their higher-dimensional counterparts, which are glued together in a specific way to form a coherent whole.
Each simplex is a generalization of the concept of a triangle to higher dimensions. By this, we mean that the properties and structure of a triangle (such as having vertices, edges, and faces) are extended into higher dimensions, even though the shapes look different as we go up in dimensions. In particular, a k-simplex is defined as the convex hull of k+1 affinely independent points in ℝ^k. For example, a 2-simplex is a triangle (with its inside filled), e.g., the convex hull of (the smallest convex set containing) the three points {(0,0), (1,0),(0,1)} in ℝ^2. Similarly, a 3-simplex is a tetrahedron (with inside filled), e.g., the convex hull of {(0,0,0), (1,0,0),(0,1,0),(0,0,1)} in ℝ^3. A union of simplices is called a simplicial complex if any two simplices in the complex either do not intersect or intersect in a complete subsimplex (See <Ref>).
[Figure: Simplicial Complexes. Among the complexes, only b and f fail to be simplicial complexes, as their simplices do not intersect at complete subsimplices. All others are valid simplicial complexes.]
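In the combinatorial (abstract) setting, the defining requirement is that the collection of simplices is closed under taking faces; the small Python sketch below checks this condition (the geometric condition on intersections illustrated in the figure is not captured by this abstract test). The helper name is ours.

from itertools import combinations

def is_abstract_simplicial_complex(simplices):
    # Each simplex is a set of vertices; the family must contain
    # every nonempty proper face of every simplex it contains.
    family = {frozenset(s) for s in simplices}
    for s in family:
        for r in range(1, len(s)):
            for face in combinations(s, r):
                if frozenset(face) not in family:
                    return False
    return True

# A triangle together with all of its edges and vertices is a simplicial complex.
triangle = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(is_abstract_simplicial_complex(triangle))     # -> True
print(is_abstract_simplicial_complex([(0, 1, 2)]))  # -> False (faces are missing)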
*Dimension in Topology One of the most confusing concepts for non-experts in topology is the notion of dimension. To study topological spaces more effectively, we focus on a specific family of spaces that meet certain regularity conditions, known as manifolds. A k-dimensional manifold is a topological space that locally resembles a ball in ℝ^k, i.e., a small neighborhood of any point looks like (is homeomorphic to) a ball in ℝ^k.
Importantly, we do not concern ourselves with the ambient (i.e., surrounding) space in which our manifold resides; we focus solely on its intrinsic properties, e.g., the material the manifold is made of. For example, although a circle is often visualized in ℝ^2 as a two-dimensional object, from a topological perspective, it is actually one-dimensional. This is because, at any point on the circle, its local neighborhood resembles a line segment, i.e., it is made of 1-dimensional material. Consider an ant walking on a circle; it has two options: moving forward or backward. While we might need two coordinates to describe the ant's position on the circle mathematically, from the ant's perspective, it only experiences a continuous path where it can move in either direction. To the ant, the circle feels like an endless line. Similarly, a sphere is a 2-dimensional manifold (surface) even though it is often visualized in ℝ^3. This is because a small neighborhood of any point on the sphere resembles a piece of the plane ℝ^2. Notice that with this definition, the dimension of the ambient space becomes irrelevant; only the topological properties define the dimension of a manifold, e.g., a loop (circle) in ℝ^2 or ℝ^3 is called 1-dimensional. Similarly, a torus (a hollow donut) and a genus-2 surface (a surface with two holes) are examples of 2-dimensional manifolds. If this discussion leads you to wonder about manifolds with more than two dimensions, we have some discouraging news: they do exist, but visualizing them is not intuitive.
*Boundary. In topology, understanding the boundary of a manifold is crucial to grasping its structure. A k-dimensional manifold M is a space where each point has a neighborhood that resembles an open subset of ℝ^k. The boundary ∂ M of M comprises points where these neighborhoods resemble an open subset of the half-space ℝ^k_+ = { x ∈ℝ^k | x_k ≥ 0 }. This implies that the local structure is similar to part of ℝ^k_+ near each boundary point.
To illustrate, consider a sphere, a 2-dimensional surface without any boundaries. An ant walking on the sphere can indefinitely move in any direction without encountering an edge. In contrast, the unit disk D^2 in ℝ^2 is a 2-dimensional surface with a boundary, specifically the unit circle S^1. An ant residing in D^2 could not continue its journey once it reaches the boundary S^1, or the "border" of D^2. This is denoted as ∂ D^2 = S^1. Similarly, the unit ball B^3 in ℝ^3 (the solid ball) is a 3-dimensional manifold with a boundary, and its boundary is the unit sphere S^2, denoted by ∂ B^3 = S^2. In this context, we say that the sphere S^2 bounds the ball B^3.
If you're puzzled by the statement that a sphere has no boundary, whereas the solid ball does, consider it from the perspective of "its inhabitants." An ant living on the sphere's surface can move freely in any direction without ever encountering a limit (no border), which is why we say the sphere has no boundary. However, a mouse living inside the ball will eventually hit the boundary (border), namely the sphere's surface, beyond which it cannot go. That limiting surface is what we call the solid ball's boundary.
A key principle in topology is that a boundary's boundary is always empty. Mathematically, this is expressed as ∂ (∂ M) = ∅. Furthermore, the boundary of a k-dimensional manifold is generally a (k-1)-dimensional manifold with no boundary. For example, the boundary of a disk (a 2-dimensional manifold with a boundary) is a circle (a 1-dimensional manifold without a boundary). This observation will soon be important when discussing homology.
§.§.§ Topological Equivalence
Topology primarily focuses on the global properties of shapes rather than their local characteristics. For example, topology is concerned with whether a space is connected or has holes without regard to the size of the object or the holes.
Topological equivalence can be intuitively understood as follows: two shapes, and are topologically equivalent if one can be continuously deformed into the other. Imagine and are made of Play-Doh. If you can reshape into without tearing, gluing, or collapsing any part of the shape, then they are considered to be continuously deformable into each other. In this context, "collapsing" refers to reducing a part of the shape to a lower dimension, such as squishing a surface or line down to a point or flattening a 3-dimensional object into a 2-dimensional plane.
In mathematical terms, this concept corresponds to a homeomorphism (the prefix "homeo-" comes from the Greek word "homoios," which means "similar" or "like"). In particular, such a deformation from to represents a bijective map, which means it creates a one-to-one correspondence between elements of and , ensuring that each element in is paired with a unique element in and vice versa. The map φ:→ keeps track of how each point x ∈ becomes a point y ∈ after the deformation, i.e., φ() = y. The condition of no tearing ensures that this map is continuous as nearby points in must map to nearby points in . Gluing is the opposite of tearing, so we also require that the inverse map φ^-1 is continuous. In summary, φ:→ is called a homeomorphism if φ is a continuous bijection with a continuous inverse.
We previously noted that size does not affect the topological structure. For example, let =, the set of all real numbers, and = (-1, 1), the open interval containing all real numbers between -1 and 1, excluding the endpoints. These two spaces are topologically equivalent via the homomorphism φ:→(-1,1) with φ(x) = x/1 + |x|. However, has an infinite diameter, while (-1,1) has a diameter of only 2. Hence, this is a good example of how the space size does not matter in topology. On the other hand, consider = (0,1) ∪ (1,2) and = (0,2). While both spaces are similar as sets, they are not topologically equivalent because consists of two separate connected components, whereas is a single connected piece. A continuous deformation (homeomorphism) must preserve the number of components.
*Homotopy. Another important concept in topology is homotopy, which can be seen as a flexible deformation of one space into another. Unlike homeomorphism, homotopy allows for collapsing but not tearing or gluing. We call two spaces homotopic to each other if such a deformation exists from one to another. For instance, a disk and a point are homotopic because the entire disk can be collapsed into a point by pushing inwards from all directions towards the center. Similarly, a punctured disk (𝐃^2-{(0,0)}) and a circle are homotopic, as a punctured disk can be continuously deformed towards its boundary, starting from the puncture point (0,0). Homotopy provides a powerful tool for classifying topological spaces and understanding their fundamental properties, as most topological invariants are homotopy invariants, meaning they yield the same output for two homotopic spaces. For example, the connectivity and the count of holes or cavities do not change under homotopy.
§.§.§ Topological Invariant
To show that two spaces 𝒳 and 𝒴 are topologically equivalent, it is sufficient to exhibit a homeomorphism φ: 𝒳→𝒴. However, if they are topologically different, one must demonstrate that no such map can exist, which is highly challenging. Mathematicians use invariants to show such inequivalence. An invariant refers to a property or feature of an object that remains unchanged under certain transformations. A topological invariant is an invariant that is preserved under homeomorphism. A topological invariant can be a number (number of components, Euler characteristic, Betti numbers), a mathematical object (homology groups, fundamental group, cohomology ring), or a mathematical property (compactness, connectedness).
[Figure: Sphere, Cube, Torus. The sphere and the cube are topologically equivalent, whereas the torus is different from both.]
In the example above, 𝒳 = (0,1) ∪ (1,2) versus 𝒴 = (0,2), we used the number of components as a topological invariant to show they are not equivalent. However, more subtle examples require more sophisticated invariants. For instance, a cube and a sphere are topologically equivalent because one can deform a cube into a sphere without tearing or gluing. In contrast, comparing a sphere and a torus (the surface of a donut as shown in <ref>) is more complex. They are not topologically equivalent because the torus has "holes." By holes, we mean the loops on the surface, which cannot be shrunk down to a point without leaving the surface. From this perspective, a sphere has no hole; however, on a torus, some loops (meridian and longitude) cannot be shrunk down to a point in the torus, representing the "holes" in the torus (see <Ref>). Despite this difference, proving that a sphere and a torus are not homeomorphic is challenging. Therefore, subtle topological invariants that detect these holes are crucial for distinguishing such spaces.
*Euler Characteristics One important and well-known example of such topological invariants is the Euler characteristic. If 𝒦 is a simplicial complex, we define the Euler characteristic as the alternating sum of the counts of k-simplices in 𝒦, i.e., χ(𝒦) = ∑_k (-1)^k n_k, where n_k is the count of k-simplices in 𝒦. It turns out the Euler characteristic is a topological invariant, i.e., if 𝒳 and 𝒴 are homeomorphic, then χ(𝒳) = χ(𝒴). The Euler characteristic is, in fact, a homotopy invariant as well (see <cit.>). For example, a sphere is topologically equivalent to the surface of a hollow tetrahedron, which has 4 vertices (n_0), 6 edges (n_1), and 4 triangles (n_2). Therefore, the Euler characteristic of the sphere is χ = n_0 - n_1 + n_2 = 4 - 6 + 4 = 2. Similarly, if one computes the Euler characteristic of a torus through a simplicial complex, we find that χ = 0. Since these values differ, it follows that a sphere is not homeomorphic to a torus.
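Since χ only requires counting simplices, it is easy to compute in a few lines of Python; the sketch below builds the boundary of a tetrahedron (a triangulated sphere) from its four maximal triangles and evaluates χ = ∑_k (-1)^k n_k. The helper names are ours.

from itertools import combinations

def face_closure(maximal_simplices):
    # All nonempty faces of the given maximal simplices.
    faces = set()
    for s in maximal_simplices:
        for r in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), r))
    return faces

def euler_characteristic(simplices):
    # chi = sum_k (-1)^k n_k, where n_k is the number of k-simplices.
    counts = {}
    for s in simplices:
        k = len(s) - 1                       # a simplex with k+1 vertices has dimension k
        counts[k] = counts.get(k, 0) + 1
    return sum((-1) ** k * n for k, n in counts.items())

# The hollow tetrahedron: four triangles glued along their edges.
sphere_like = face_closure([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
print(euler_characteristic(sphere_like))     # -> 2, the Euler characteristic of a sphere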
§.§.§ Geometry vs. Topology
Before proceeding, let's clarify a common confusion between the terms topology and geometry. Both are branches of mathematics, but they focus on different aspects of metric spaces.
Geometry focuses on the local study of shapes, sizes, and properties of space, as well as how these spaces embed (fit into) higher-dimensional spaces. For example, a 2-dimensional sphere can be embedded into ^3 (or ^10) in various shapes and sizes.
Geometry focuses on precise measurements (e.g., angles, distances, curvature) and the relationships between points, lines, surfaces, and solids. It explores how shapes bend and twist, providing tools to understand complex surfaces and spaces through concepts such as curvature and geodesics (i.e., shortest paths).
In contrast, topology examines the global properties of space that remain unchanged under continuous deformations. It focuses on qualitative aspects like connectedness and continuity rather than precise measurements. For instance, topology considers a cube and a sphere as topologically equivalent because these shapes can be transformed into one another through continuous deformation despite their geometric differences—one being flat and the other curved. For instance, no matter how we embed a 2-sphere into ℝ^10, its topological properties remain unchanged. However, the geometry varies significantly: a round sphere with a radius of 1 differs greatly from one with a radius of 100. Moreover, an irregular sphere would be geometrically distinct from both.
In data science, geometry usually refers to the local shape characteristics of a dataset, such as distances, curvature, and angles, whereas topology pertains to global characteristics, such as connectedness and the number of holes/cavities. For example, dimension reduction methods like PCA <cit.> and UMAP <cit.> are considered geometric methods as they heavily depend on the distances and how the dataset sits in the high dimensional space. In contrast, methods that count the number of components or holes in the space are called topological methods.
§.§ Homology
To distinguish topological spaces, the most common method is to use topological invariants such as Euler characteristics (<Ref>), the fundamental group <cit.>, or homology. Among these, homology is the most versatile and robust invariant that applies to a wide range of spaces such as surfaces (e.g., spheres, tori), simplicial complexes (e.g., triangulated shapes), and manifolds (e.g., higher-dimensional analogs of curves and surfaces).
There are various ways to compute homology (cellular <cit.>, simplicial, Morse <cit.>), where the outputs are the same, but the computation methods applied are different. To utilize computational tools more effectively, it's more efficient to use discrete representations of topological spaces, like simplicial complexes. Simplicial homology is particularly suited for TDA because it deals with simplicial complexes, which are sets of vertices, edges, triangles, and their higher-dimensional counterparts. Considering these aspects, TDA mostly employs simplicial homology to capture the topological patterns in data,
although Morse or cellular homology are used in specific applications within TDA, such as cellular in image classification <cit.>.
Here, we provide a brief overview (TLDR) of homology, followed by a formal yet accessible introduction. For a detailed, friendly introduction to simplicial homology, refer to <cit.>, or for an in-depth study, see <cit.>.
§.§.§ TLDR
Homology is a fundamental invariant in topology that captures information about a space's structure by examining its holes/cavities of various dimensions. The focus on holes/cavities may surprise the reader, but holes are preferred because they are fundamental features that significantly influence the structure and properties of a space, as we outline below. To simplify the exposition, we use the concept of a k-hole in a topological space 𝒳. Although this term slightly abuses notation, it refers to a k-dimensional submanifold in 𝒳 that cannot be continuously deformed into a point within the space. In reality, this k-hole corresponds to a (k+1)-dimensional "cavity" Ω in 𝒳. Since this cavity represents a "missing" region within the space, we describe it by using its boundary S = ∂Ω, which is non-contractible in 𝒳. The k^th homology group H_k(𝒳) captures these k-holes, or k-dimensional manifolds in 𝒳, that do not bound any (k+1)-dimensional region in the space.
* k-holes are topological invariants, meaning they remain unchanged under continuous deformations, such as stretching or bending, that do not involve tearing or gluing. And homology can detect them.
* 0-holes represent the connected components of a space. The dimension (or rank) of H_0(𝒳) corresponds to the number of these components in 𝒳.
* 1-holes correspond to loops in the space that cannot be contracted to a point without leaving the space. The dimension of H_1(𝒳) represents the number of such loops (1-holes) in the space.
* 2-holes correspond to cavities within the space, which can be considered hollow regions enclosed by surfaces (e.g., the interior of a sphere or torus). The dimension of H_2(𝒳) represents the number of such cavities (2-holes) in the space.
[Figure: Toy examples for homology. We present the ranks of the homology groups of various topological spaces; when H_i(𝒳)=ℤ^k, we write simply H_i=k, the count of the i-dimensional holes in 𝒳.]
To make the concept of homology groups more accessible, we will offer a simplified and visual explanation.
We denote homology groups by H_k(𝒳), where 𝒳 represents the space under consideration, and k indicates the dimension being analyzed. Independent k-holes are generators for the homology group H_k(𝒳). Therefore, the rank of H_k(𝒳) indicates the number of k-dimensional holes present in the space 𝒳. These numbers are also called Betti numbers {β_k(𝒳)}, e.g., if H_2(𝒳)=ℤ^3, then we say rank(H_2(𝒳))=3, or β_2(𝒳)=3, meaning 𝒳 has three 2-holes. Below, we will give some examples for dimensions 0, 1, and 2 (See <Ref>).
∙ H_0(𝒳): 0-dimensional homology represents the connected components in 𝒳. If 𝒳 has three components, then we say H_0(𝒳)=ℤ^3 (or ℤ_2^3 if ℤ_2-coefficients are used), and rank(H_0(𝒳))=β_0(𝒳)=3.
∙ H_1(𝒳): 1-dimensional homology computes the non-contractible loops (holes) in 𝒳.
For example, a sphere has no nontrivial loops, as any loop in the sphere bounds a disk in the sphere, i.e., its β_1=0.
A torus (hollow donut), on the other hand, has meridian and longitude circles (see Figure <ref>), both being non-contractible loops, making the total β_1=2. On the other hand, a solid donut would have only the longitude circle as noncontractible, so its β_1=1.
∙ H_2(𝒳): the 2-dimensional homology group captures two-dimensional cavities (voids) in the space 𝒳. To count these cavities, we utilize surfaces in the space that do not bound a 3-dimensional domain in the space. For example, in the unit ball B^3 in ℝ^3 (the solid ball), any closed surface bounds a 3-dimensional domain within B^3, and there are no cavities within it. Hence, we say rank(H_2(B^3))=β_2(B^3)=0. Similarly, if one removes two disjoint smaller balls from B^3, obtaining a new space B', we would have two different cavities, which can be represented by two different spheres in B' enclosing these balls. Then, we say β_2(B')=2. In <Ref>, the sphere, the torus, and the genus-2 surface each have one 2-dimensional cavity, represented by themselves.
We conclude this section with an interesting fact. While we defined the Euler characteristic for simplicial complexes as the alternating sum of the number of k-simplices, there is an alternative formula using Betti numbers. In particular, if 𝒳 is a topological space, the Euler characteristic is given by χ(𝒳)=∑_k(-1)^kβ_k(𝒳).
§.§.§ Computation of Homology
We now turn to a formal introduction. Homology is a mathematical operation whose inputs are topological spaces and whose outputs are groups. In particular, for a given space 𝒳, H_k(𝒳) represents the k^th homology group of 𝒳, summarizing the k-dimensional non-collapsible submanifolds in 𝒳, each representing a different (k+1)-dimensional "cavity" of 𝒳. We will call these "k-holes" by abusing notation.
The concept of homology groups stems from the idea that a (k+1)-dimensional hole or cavity in 𝒳 is detected by the presence of its k-dimensional boundary, which cannot be continuously contracted within 𝒳. While this method might seem indirect (using boundaries to infer the existence of cavities), the difficulty arises from the need to identify what is missing in 𝒳. Here, a fundamental principle of topology comes into play: the boundary of a boundary is always empty. If Ω is a (k+1)-dimensional domain in 𝒳 with boundary S = ∂Ω, then the boundary of S is empty, i.e., ∂(∂Ω) = ∂ S = ∅. Hence, to identify cavities, we first find all k-dimensional submanifolds with no boundary in 𝒳 (k-cycles). Then, by eliminating those that bound a domain in 𝒳 (k-boundaries), we are left with the true cavities. This process forms the core idea behind homology computation.
To keep the exposition focused, we will describe only simplicial homology with ℤ_2 coefficients, the most common version used in TDA. For other homology settings, refer to <cit.>. In particular, simplicial homology involves representing a given space 𝒳 as a simplicial complex (a collection of simplices) and performing computations on these simplices. This approach allows us to discretize the problem by focusing on the k-dimensional "building blocks" of the topological space, such as vertices for 0-dimension and edges for 1-dimension. By considering k-submanifolds (or k-subcomplexes) as unions of k-simplices, we can identify the special ones that correspond to true cavities in 𝒳 by using computational tools. To formalize this concept, we now introduce the relevant mathematical notions.
*i. Representation of k-simplices In a simplicial complex 𝒦, we describe k-simplices by listing their vertices. For example, a 1-simplex (an edge) e with endpoints v_2 and v_4 is denoted as e = [v_2, v_4]. Similarly, a 2-simplex (a triangle) τ with vertices v_1, v_5, and v_7 is written as τ = [v_1, v_5, v_7]. Since we are using ℤ_2 coefficients, the order of the vertices does not matter. However, in other versions, such as with ℤ-coefficients, the order of the vertices would be significant. In general, a k-simplex Δ is represented by its k+1 vertices, i.e., Δ = [v_i_0, v_i_1, …, v_i_k]. We will call a union of k-simplices a k-subcomplex of 𝒦.
*ii. k-chains 𝒞_k(𝒦) To describe all k-dimensional subcomplexes (which correspond to all k-submanifolds, with or without boundary) within a simplicial complex 𝒦, we define a group 𝒞_k(𝒦), known as the group of k-chains. Recall that any union of k-simplices forms a k-subcomplex. For instance, if the simplicial complex 𝒦 consists of three edges {e_1, e_2, e_3}, then all possible 1-subcomplexes are {e_1, e_2, e_3, e_1 ∪ e_2, e_1 ∪ e_3, e_2 ∪ e_3, e_1 ∪ e_2 ∪ e_3}. We represent this collection using group elements, where each 1-simplex acts as a generator. In particular, the union e_1 ∪ e_3 is represented by σ_1 = (1,0,1), while e_1 ∪ e_2 is represented by σ_2 = (1,1,0). Hence, we obtain the group 𝒞_1(𝒦) = ℤ_2^3, where the 0 element corresponds to the empty set. The group operation is addition, and since 1+1=0 in ℤ_2, we have σ_1 + σ_2 = (1,0,1) + (1,1,0) = (0,1,1), corresponding to the union e_2 ∪ e_3. Similarly, if 𝒦 contains m k-simplices {Δ_1, Δ_2, …, Δ_m}, then 𝒞_k(𝒦) = ℤ_2^m, where a group element like (1,0,1,…,1) represents the k-subcomplex Δ_1 ∪Δ_3 ∪Δ_m within 𝒦.
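With ℤ_2 coefficients, chains are simply binary vectors, and the group operation is coordinate-wise addition mod 2 (equivalently, XOR). A short sketch of the example above:

import numpy as np

# 1-chains of the three-edge complex {e_1, e_2, e_3}, encoded as Z_2 vectors.
sigma1 = np.array([1, 0, 1], dtype=np.uint8)   # e_1 + e_3
sigma2 = np.array([1, 1, 0], dtype=np.uint8)   # e_1 + e_2
print((sigma1 + sigma2) % 2)                   # -> [0 1 1], i.e., e_2 + e_3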
*iii. Boundary operator ∂_k To identify k-holes in a simplicial complex, we must first identify k-subcomplexes that have no boundary. This is accomplished by defining a boundary operator ∂_k: 𝒞_k() →𝒞_k-1(), which maps k-chains to their (k-1)-dimensional boundaries. In particular, ∂_k maps each k-simplex to its boundary. For example, if we have a 1-simplex (edge) e = [v_0, v_1], then ∂_1[e] = v_0 + v_1 ∈_0(), representing the boundary of the edge as the union of its end vertices {v_0}∪{v_1}. Similarly, for a 2-simplex (triangle) τ = [v_0, v_1, v_2], the boundary is given by ∂_2τ = [v_0, v_1] + [v_1, v_2] + [v_2, v_0] ∈_1(), representing the boundary of the triangle as the union [v_0, v_1] ∪ [v_1, v_2] ∪ [v_2, v_0].
To define ∂_k for general k-chains, we sum the boundaries of each k-simplex within the chain. For instance, if σ = Δ_1 + Δ_3 + Δ_7 is a k-chain, then ∂_kσ = ∂_kΔ_1 + ∂_kΔ_3 + ∂_kΔ_7.
A k-chain σ is said to have no boundary if ∂_kσ = 0. In other words, a k-subcomplex with no boundary in must map to 0 under the boundary operator.
The boundary operator ∂_k: 𝒞_k(𝒦) →𝒞_k-1(𝒦) is a linear operator and can be represented as a matrix. For example, if 𝒞_k(𝒦) = ℤ_2^n and 𝒞_k-1(𝒦) = ℤ_2^m, then ∂_k can be written as an m × n matrix 𝐀, with each column 𝐀_i corresponding to ∂_k Δ_i, where Δ_i is a k-simplex in 𝒦. In <Ref>, we give an explicit example of a matrix representation of a boundary operator for the simplicial complex in <Ref>.
[Figure: Toy example for homology.]
*iv. k-cycles 𝒵_k(𝒦) We define a special subgroup 𝒵_k(𝒦), named k-cycles, within 𝒞_k(𝒦) for k-subcomplexes that have no boundary. As previously mentioned, a k-subcomplex with no boundary in 𝒦 must map to zero under the boundary operator. Hence, we define the subgroup 𝒵_k(𝒦) = ker ∂_k, which includes all k-chains that map to zero. Recall that 1-chains with no boundary correspond to loops, while 2-chains with no boundary correspond to closed surfaces, like a sphere or torus.
For example, consider a square-shaped simplicial complex (<Ref>) with four vertices {v_0,v_1,v_2,v_3} and four edges {e_1,e_2,e_3,e_4} where e_1=[v_0,v_1],e_2=[v_1,v_2],e_3=[v_2,v_3] and e_4=[v_3,v_0]. In this case, _1()=_2^4 and _0()=_2^4. Suppose σ_1=e_1+e_2; then ∂_1σ_1= (v_0+v_1)+(v_1+v_2)=v_0+v_2, meaning that σ_1 represents a 1-subcomplex with a boundary. However, if σ_2=e_1+e_2+e_3+e_4, then ∂_1σ_2= (v_0+v_1)+(v_1+v_2)+(v_2+v_3)+(v_3+v_0)=2(v_0+v_1+v_2+v_3)=0, indicating that σ_2 represents a 1-subcomplex with no boundary. Notice that σ_2 corresponds to a complete loop in . See <Ref> for more details.
∂_1 =
        ∂e_1  ∂e_2  ∂e_3  ∂e_4
  v_0 [   1     0     0     1  ]
  v_1 [   1     1     0     0  ]
  v_2 [   0     1     1     0  ]
  v_3 [   0     0     1     1  ]

∂_1: 𝒞_1(𝒦) → 𝒞_0(𝒦) is represented as a 4 × 4 binary matrix. Columns represent the edges in 𝒞_1(𝒦), and rows correspond to the vertices in 𝒞_0(𝒦). For example, ∂e_3 = v_2 + v_3 can be read from the third column.
We can also describe the boundary map ∂_1: 𝒞_1(𝒦) →𝒞_0(𝒦) as a 4 × 4 matrix, as shown in <Ref>. It is clear that the only nonzero element in 𝒞_1(𝒦) that maps to (0,0,0,0) ∈𝒞_0(𝒦) is (1,1,1,1), which corresponds to the loop σ_2. This means 𝒦 has only one 1-cycle, which is σ_2. Then, 𝒵_1(𝒦) is the subgroup of 𝒞_1(𝒦) generated by the single element (1,1,1,1). If you are unfamiliar with group theory, you can think of 𝒞_1(𝒦) as a four-dimensional vector space, where 𝒵_1(𝒦) is a one-dimensional subspace generated by the vector (1,1,1,1).
*v. k-boundaries ℬ_k(𝒦) While we have identified all k-subcomplexes with no boundary, not all of them correspond to true cavities. We must eliminate the ones that bound a domain in 𝒦. To find them, we again use the boundary operator. Since 𝒞_k+1(𝒦) represents all (k+1)-subcomplexes in 𝒦, the image of ∂_k+1, i.e., ℬ_k(𝒦) = ∂_k+1𝒞_k+1(𝒦) ⊂𝒞_k(𝒦), represents the k-subcomplexes which bound a (k+1)-domain in 𝒦. We call an element in ℬ_k(𝒦) a k-boundary. Recall that ∂_k(∂_k+1σ)=0 from the earlier discussion. This means that for any k-boundary φ=∂_k+1σ in ℬ_k(𝒦), we have ∂_kφ=0. Therefore, ℬ_k(𝒦)⊂𝒵_k(𝒦). For example, if 𝒦 is a simplicial complex formed by only one 2-simplex τ with vertices {v_0,v_1,v_2}, then ℬ_1(𝒦) would be the subgroup of 𝒞_1(𝒦) generated by ∂_2 τ=σ=(1,1,1), while 𝒵_1(𝒦) would be the same group, i.e., ℬ_1(𝒦)=𝒵_1(𝒦). This means there is only one loop σ in 𝒦, but it does not represent a hole in 𝒦, as it is filled by τ, i.e., σ=∂τ.
* vi. Homology group ℋ_k(𝒦) Now we are ready to define homology. Notice that with the boundary operator, we obtained the following sequence of groups and maps, which we can consider as vector spaces and linear maps. For each k, the k-chains 𝒞_k(𝒦) represent all k-subcomplexes in 𝒦, the k-cycles 𝒵_k(𝒦)⊂𝒞_k(𝒦) represent k-subcomplexes with no boundary, and finally, the k-boundaries ℬ_k(𝒦)⊂𝒵_k(𝒦) represent the k-subcomplexes which bound a (k+1)-domain in 𝒦.
⋯ ⟶^∂_k+2 𝒞_k+1(𝒦) ⟶^∂_k+1 𝒞_k(𝒦) ⟶^∂_k 𝒞_k-1(𝒦) ⟶^∂_k-1 ⋯ ⟶^∂_1 𝒞_0(𝒦) ⟶ 0
Now, to identify k-holes (true cavities), we consider all k-cycles in (_k()), which potentially represent a cavity of , and, among them, eliminate all k-boundaries in (_k()), as they represent the "fake" cavities. Hence, we define k^th homology group as the quotient group
ℋ_k(𝒦) = 𝒵_k(𝒦)/ℬ_k(𝒦) = ker(∂_k)/image(∂_k+1)
In terms of the sequence above, this quotient _k() = _k()/_k() effectively counts the k-dimensional cycles that correspond to actual holes or cavities in , not those that are merely boundaries of higher-dimensional regions. From a computational perspective, with this formulation, we only need to compute the kernels and images of a sequence of linear maps {∂_k} (binary matrices) to compute homology.
[Figure: Toy example <ref> for homology.]
To clarify these notions, we give two toy examples for explicit computation of homology.
Consider the example in <Ref> where is a square-shaped simplicial complex with four vertices {v_0, v_1, v_2, v_3} and four edges. For k ≥ 2, _k() = {0} since there are no k-simplices in . Both _1() and _0() are isomorphic to _2^4, as discussed earlier (k-cycles above). The group _1() has only one generator, the sum of all edges. Since _2() = {0}, we have _1() = {0} because _1() = ∂_2 _2(). Therefore, _1() = _1()/_1() = _2/{0} = _2, meaning that _1() has rank 1, corresponding to the entire square loop. The boundary map ∂_1 is given in <Ref>.
For _0(), we need to determine _0() and _0(). Since ∂_0 sends everything to 0, _0() = _0() = _2^4. The group _0() = ∂_1 _1() is generated by the boundaries of the edges: (v_0 + v_1), (v_1 + v_2), (v_2 + v_3), (v_3 + v_0). However, these boundaries are not linearly independent because v_3 + v_0 is the sum of the other three boundaries. Thus, _0() ≃_2^3. Consequently, _0() = _0()/_0() = _2^4/_2^3 = _2. Recall that the rank of _0() represents the number of connected components and the fact that rank(_0()) = 1 confirms that there is only one connected component.
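Since all the groups involved are ℤ_2-vector spaces, the Betti numbers of this example can be checked mechanically from the ranks of the boundary matrices, using β_k = dim ker(∂_k) - rank(∂_k+1). The sketch below is an illustrative computation (not tied to any particular TDA library) that verifies β_0 = β_1 = 1 for the square complex via Gaussian elimination over ℤ_2.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a binary matrix over Z_2, via Gaussian elimination mod 2."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # move the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2        # clear the rest of the column
        rank += 1
    return rank

# Boundary matrix of the square complex: columns d(e_1)..d(e_4), rows v_0..v_3.
d1 = np.array([[1, 0, 0, 1],
               [1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])
r1 = rank_gf2(d1)                 # rank of image(d_1) = 3
beta0 = 4 - r1                    # dim C_0 - rank(d_1) = 1 connected component
beta1 = (4 - r1) - 0              # dim ker(d_1) - rank(d_2); there are no 2-simplices, so d_2 = 0
print(beta0, beta1)               # -> 1 1
```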
∂_2 =
        ∂τ_1  ∂τ_2  ∂τ_3  ∂τ_4
  e_1 [   1     0     1     0  ]
  e_2 [   1     1     0     0  ]
  e_3 [   0     1     0     1  ]
  e_4 [   0     0     1     1  ]
  e_5 [   1     0     0     1  ]
  e_6 [   0     1     1     0  ]

∂_1 =
        ∂e_1  ∂e_2  ∂e_3  ∂e_4  ∂e_5  ∂e_6
  v_0 [   1     0     0     1     1     0  ]
  v_1 [   1     1     0     0     0     1  ]
  v_2 [   0     1     1     0     1     0  ]
  v_3 [   0     0     1     1     0     1  ]

Boundary maps for Ex. <ref>. The boundary map ∂_2: 𝒞_2(𝒦) → 𝒞_1(𝒦) is represented as a 6 × 4 binary matrix (top). The top of each column corresponds to the 2-simplex in 𝒞_2(𝒦) whose image is represented by that column, while each row's corresponding edge is given next to the row. For example, the boundary of the 2-simplex τ_2 is ∂τ_2 = e_2 + e_3 + e_6, read from the second column of the ∂_2 matrix. The boundary map ∂_1: 𝒞_1(𝒦) → 𝒞_0(𝒦) is similar (bottom).
Second example is in one dimension higher.
Consider the hollow tetrahedron of <Ref>, composed of four triangular faces: τ_1 = [v_0, v_1, v_2], τ_2 = [v_1, v_2, v_3], τ_3 = [v_0, v_1, v_3], and τ_4 = [v_0, v_2, v_3]. The complex contains four 2-simplices (triangles), six 1-simplices (edges), and four 0-simplices (vertices). Let e_1=[v_0,v_1],e_2=[v_1,v_2], e_3=[v_2,v_3],e_4=[v_3,v_0], e_5=[v_0,v_2] and e_6=[v_1,v_3]. Therefore, we have _2() = _2^4, _1() = _2^6, and _0() = _2^4.
A straightforward computation shows that the kernel of ∂_2, denoted 𝒵_2(𝒦), has a single generator: the sum of all the triangles, τ_1 + τ_2 + τ_3 + τ_4. Since there is no 3-simplex in 𝒦, we find that ℬ_2(𝒦) = {0}, leading to ℋ_2(𝒦) = ℤ_2. This result aligns with the fact that rank(ℋ_2(𝒦)) = 1, indicating the presence of one cavity in 𝒦, consistent with 𝒦 being a hollow tetrahedron.
Furthermore, calculating _1() = _2^3 and _1() = _2 shows that these cancel out, yielding _1() = {0}. In other words, any loop in is filled, so there is no 1-hole in .
Similarly, following the same reasoning as in the example above, we obtain _0() = _2 as expected.
§ PERSISTENT HOMOLOGY
In this section, we introduce Persistent Homology (PH), a foundational technique that played a pivotal role in the emergence of TDA, as developed by Carlsson, Edelsbrunner, Zomorodian, and others in the early 2000s <cit.>. PH captures the underlying shape patterns within complex data sets by studying the evolution of topological features across multiple scales.
[Figure: PH Pipeline. From data acquisition and filtration complex construction to generating persistence diagrams using software libraries. The final step highlights methods for integrating persistence diagrams into downstream ML tasks.]
PH first constructs a nested sequence of simplicial complexes, known as filtration, and tracks the birth and death of features, such as connected components, loops, and voids, in this sequence. The resulting multi-scale representation highlights significant features while filtering out noise, making PH a valuable tool in various fields, including medical imaging <cit.>, biomedicine <cit.>, time series analysis <cit.>, material science <cit.>, geography <cit.>, shape analysis <cit.> and finance <cit.>.
In the following, we explain PH in an accessible way, focusing on its key aspects relevant to ML applications. The main idea behind PH is to capture the hidden shape patterns in the data by using algebraic topology tools. PH achieves this by keeping track of the evolution of the topological features (k-holes, components, loops, and cavities) created in the data while looking at it in different resolutions.
In simple terms, PH can be summarized as a three-step procedure as follows.
* Filtration: Generate a nested sequence of simplicial complexes derived from the data.
* Persistence Diagrams: Record the evolution of topological features across this sequence.
* ML Integration: Transform the persistence diagrams into vectors for efficient use in ML models.
We provide details of these steps in the following sections. Although the second and third steps are conceptually similar across most settings, the first step, constructing filtrations, varies significantly depending on the data type, i.e., point clouds, images, and networks. See <Ref> for a visual summary of PH pipeline.
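For readers who prefer to see the pipeline in code before the detailed exposition, the following sketch runs all three steps on a synthetic point cloud using the GUDHI library. The point cloud, the maximal scale, and the other parameter choices are illustrative; any point cloud and TDA library exposing Rips filtrations would do.

```python
import numpy as np
import gudhi

# Step 0: a synthetic 8-shaped point cloud (two circles touching at the origin).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.vstack([np.c_[np.cos(t), np.sin(t) + 1.0],
               np.c_[np.cos(t), -np.sin(t) - 1.0]])

# Step 1 (filtration): nested Rips complexes up to a maximal scale.
rips = gudhi.RipsComplex(points=X, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)     # simplices up to triangles

# Step 2 (persistence diagrams): birth/death of components and loops.
diag = st.persistence()
pd0 = st.persistence_intervals_in_dimension(0)     # components
pd1 = st.persistence_intervals_in_dimension(1)     # loops: expect two long bars

# Step 3 (ML integration): pd0 and pd1 are vectorized (see the vectorization
# section below) and fed to a downstream model.
print(pd1)
```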
§.§ Constructing Filtrations
In <Ref>, we introduced simplicial complexes which enable us to study the topological spaces via computational tools. To study the given data in different resolutions, PH generates a nested sequence of simplicial complexes _1⊂_2⊂…⊂_n induced from the data. Such a sequence is called a filtration. This step can be considered the most crucial for the effectiveness of PH in ML applications. The primary reason is that the filtration process involves examining the data at different resolutions by adjusting a "scale parameter". The choice of this "scale parameter" can greatly influence the performance of the method.
For each data type, there are well-established methods to construct filtrations that have proven highly effective in their respective contexts. These methods vary significantly depending on the data type. While point cloud <cit.> and image <cit.> settings use relatively common approaches in their settings, the construction of filtrations in graph settings is notably more versatile <cit.>.
[Figure: For the 8-shaped point cloud 𝒳, the neighborhood filtration steps 𝒩_ϵ(𝒳) for thresholds ϵ = 0.4, ϵ = 0.7, and ϵ = 2.]
§.§.§ Filtrations for Point Clouds
As the process is generally similar for various metric spaces, for simplicity, we will describe it for a point cloud in ℝ^N using the Euclidean metric. Let = {x_1, x_2, …, x_m} be a point cloud in ℝ^N. We will define a nested sequence of simplicial complexes _1 ⊂_2 ⊂…⊂_n induced by . The central idea here is to build a series of simplicial complexes that progressively capture the topological picture of the point cloud in different resolutions as we move from _1 to _n. The "nested" part signifies that each simplicial complex is contained within the next one, like a series of Russian Matryoshka dolls.
Before moving on to simplicial complexes, we will define a simpler nested sequence for . Let _r(x)={y∈^N| d(x,y)≤ r} be the closed r-ball around x. Then, let r-neighborhood of , _r() = ⋃_i=1^m _r(x_i) be the union of r-balls around the points in . By declaring r as our scale parameter, we first fix a monotone sequence of threshold values 0 = r_1 < r_2 < … < r_n, where r_n = max_i,j{d(x_i, x_j)}, the diameter of . These values intuitively represent the resolution at which we observe the point cloud . In particular, a smaller value of r indicates a closer, more detailed examination of , while a larger value of r means observing the point cloud with a broader view from a greater distance, making it difficult to distinguish between points that are close together in . This naturally gives a nested sequence of topological spaces _r_1() ⊂_r_2() ⊂…⊂_r_n() (See <Ref>). While these neighborhoods {_r()} give a natural sequence for the data, to effectively leverage computational tools, we need to induce a sequence by simplicial complexes. There are two common ways to achieve this while preserving the underlying topological information.
[Figure: Comparison of Rips and Čech Complexes (panels: Čech complex, Rips complex, Čech complex). In panels <ref> and <ref>, the Čech and Rips complexes differ: the Čech complex does not form a 2-simplex among v_1, v_2, and v_3 due to insufficient ball overlap, while the Rips complex does. A larger radius, as in panel <ref>, is needed for the Čech complex to include this simplex.]
*Method 1 - Rips complexes: For ⊂^N, and for r>0, the Rips complex (aka Vietoris-Rips complex) is the abstract simplicial complex _r() where a k-simplex σ=[x_i_0,x_i_1,…,x_i_k]∈_r()
if and only if d(x_i_m,x_i_n)< r for any 0≤ m,n≤ k. In other words, for r>0, if k+1 points are pairwise r-close to each other, they form a k-simplex in _r().
*Method 2 - Čech complexes: Similarly, for ⊂ℝ^N, and for r>0, the Čech complex is the abstract simplicial complex 𝒞̌_r() where a k-simplex σ=[x_i_0,x_i_1,…,x_i_k]∈𝒞̌_r() if and only if ⋂_j=0^k B_r(x_i_j)≠∅. Here, the condition to build a k-simplex is different, as we ask for a nontrivial intersection of the r-balls of the k+1 points.
The main relationship between our original filtration {_r_i()} and the simplicial complex filtrations arises from the Nerve Lemma. This lemma states that the Čech complex 𝒞̌_r() is homotopy equivalent to _r() for any r ≥ 0 <cit.>. Since homotopy equivalent spaces have the same homology, we have _k(𝒞̌_r()) ≃_k(_r()) for any k ≥ 0. Therefore, from a PH perspective, the filtrations {_r_i()}_1^n and {𝒞̌_r_i()}_1^n are essentially the same. Furthermore, for ⊂^N with the Euclidean metric, Rips and Čech complexes are closely related and produce similar topological information as 𝒞̌_r() ⊂_2r() ⊂𝒞̌_√(2)r() <cit.>.
While both complexes provide similar filtrations, Rips complexes are more commonly used in practice. This preference is due to the fact that Rips complexes only require the pairwise distances {d(x_i, x_j)} among points, which can be easily obtained at the beginning of the process. Hence, constructing Rips complexes is straightforward once these distances are known. In contrast, constructing Čech complexes requires checking whether collections of r-balls have nontrivial intersections. <Ref> depicts their differences in a toy scenario.
In particular, Rips complexes are the most common filtration type used for point clouds because of their computational practicality, and, thanks to the Nerve Lemma, they capture information very similar to that produced by the simple neighborhood filtration {N_r()} described earlier.
Although we discuss the construction for point clouds in ^N, the same process can be applied to any metric space (any space with a distance function). Furthermore, in some cases, the point cloud is given in an abstract setting, where only pairwise distances {d(x_i, x_j)} among the points are provided. The Rips complex filtration can be effectively applied to such point clouds. In <Ref>, we detail the real-life application of this approach for shape recognition.
*Other complex types. One of the primary challenges in applying PH to point clouds is the computational cost. To mitigate this, witness complexes offer a valuable alternative for efficiently analyzing the topological features of point clouds, especially in high-dimensional spaces. Unlike the Rips complexes, which can be computationally expensive, witness complexes reduce complexity by utilizing a representative subset of the data points, known as witnesses, to construct the simplicial complex <cit.>. This method strikes a balance between computational efficiency and topological accuracy, making it particularly well-suited for large datasets where constructing full complexes would be impractical. By concentrating on representative data points (witnesses), witness complexes facilitate a more manageable and scalable computation of persistent homology, enabling the detection of the underlying shape and features of the point cloud across various scales. In addition to Rips and Čech complexes, other types of complexes, such as alpha complexes and Delaunay complexes, are particularly useful in lower-dimensional spaces. However, to maintain the focus of this paper, we refer the reader to <cit.> for an in-depth discussion of these complexes.
§.§.§ Filtrations for Images
For images, constructing filtrations differs significantly due to the unique structure of image data. To keep the explanation simple, we will focus on 2D images, though the concepts can be extended to 3D and other types of images. Filtration for images typically involves nested sequences of binary images, known as cubical complexes <cit.>. Starting with a given color (or grayscale) image of dimensions r × s, we first select a specific color channel (e.g., red, blue, green, or grayscale). The color values γ_ij∈ [0, 255] of individual pixels Δ_ij⊂ are used, where Δ_ij represents a closed box (square including its boundary) in the i^th row and j^th column of the image .
Next, we choose the number of thresholds "n" to span the color interval [0, 255], i.e., 0 = t_1 < t_2 < … < t_n = 255. This determines the length of our filtration sequence, with a typical range being between 50 and 100. Using these thresholds, we define a nested sequence of binary images (cubical complexes) _1 ⊂_2 ⊂…⊂_n such that _m = {Δ_ij⊂|γ_ij≤ t_m} (see <Ref>).
In particular, this involves starting with a blank r × s image and progressively activating (coloring black) pixels as their grayscale values reach each specified threshold t_m. This process is known as sublevel filtration, applied to relative to the chosen color channel (in this case, grayscale). Alternatively, pixels can be activated in descending order, referred to as superlevel filtration. In this context, let _m = {Δ_ij⊂|γ_ij≥ s_m}, where 255 = s_1 > s_2 > … > s_n = 0, and _1 ⊂_2 ⊂…⊂_n is called superlevel filtration. The persistence homology process involving such cubical complexes has a special name, called cubical persistence. In <Ref>, we detail a successful application of this method in cancer detection from histopathological images.
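As a concrete illustration, the following sketch computes cubical persistence for a synthetic grayscale image with GUDHI; the image is made up for illustration, and GUDHI reads the filtration values directly from the pixel array, so no explicit threshold list is needed.

```python
import numpy as np
import gudhi

# Synthetic 64x64 grayscale image: a dark disk (value 40) on a bright background (200).
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2, 40.0, 200.0)

# Sublevel filtration: a pixel enters the cubical complex once the threshold
# reaches its grayscale value.
cc = gudhi.CubicalComplex(top_dimensional_cells=img)
diag = cc.persistence()
pd0 = cc.persistence_intervals_in_dimension(0)     # dark connected regions
pd1 = cc.persistence_intervals_in_dimension(1)     # holes enclosed by dark regions

# A superlevel filtration is obtained by running the same code on 255 - img.
```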
While these sublevel and superlevel filtrations are common choices for color or grayscale images, there are other filtration types used for binary images, e.g., erosion, dilation, and signed distance <cit.>. These filtrations are specific to binary images and effective in capturing the size and other properties of topological features. In <Ref>, we give an example of erosion filtration.
§.§.§ Filtrations for Graphs
Graphs are a widely used application domain for PH because they generate highly effective features that are not easily obtained through other methods. While filtration methods for point clouds and images are relatively standard, graph filtrations offer a variety of choices. The way you construct these filtrations can significantly impact the performance of ML models. Unlike other data formats, there are various methods to construct filtrations from graphs, where the details can be found in <cit.>. We categorize these methods into two groups:
*i. Filtrations through node/edge functions. This type of filtration is computationally efficient in most cases and is commonly used in applications. The main concept involves defining a filtration function on nodes or edges and using the order dictated by these functions to create a sequence of subgraphs and corresponding simplicial complexes.
[Figure: Graph Filtration. For 𝒢=𝒢_3 in both examples, the top figure illustrates a superlevel filtration using the node degree function with thresholds 3>2>1, where nodes of degree 3 are activated first, followed by those of lower degrees. Similarly, the bottom figure illustrates a sublevel filtration based on edge weights with thresholds 1.5< 1.8< 2.1.]
Given a graph 𝒢=(𝒱,ℰ), let f:𝒱→ℝ be a node filtration function. Common examples of such functions include degree, centrality, betweenness, closeness, and heat kernel signatures (HKS). Additionally, functions may be derived from the domain of the graph, such as atomic number for molecular graphs or account balance for transaction networks. These functions establish a hierarchy among the nodes, ordering them from less important to more important within the context defined by the filtration function.
To define the resolution of our filtration, we choose a set of thresholds that cover the range of f, denoted as ℐ={ϵ_i}_1^n, where ϵ_1=min_v ∈ f(v) < ϵ_2 < … < ϵ_n = max_v ∈ f(v). For each threshold ϵ_i ∈ℐ, let _i = {v ∈| f(v) ≤ϵ_i}. Define _i as the subgraph of induced by _i, i.e., _i = (_i, _i) where _i = {e_ij∈| v_i, v_j ∈_i}. In other words, at each threshold, we activate the nodes whose values reach that threshold, along with the edges in the graph between these activated nodes.
This process results in a nested sequence of subgraphs _1 ⊂_2 ⊂…⊂_n = (See <Ref>). For each _i, we define an abstract simplicial complex _i for 1 ≤ i ≤ n, creating a filtration of simplicial complexes _1 ⊆_2 ⊆…⊆_n. Typically, clique complexes are used, where the complex is obtained by assigning a k-simplex to each (k+1)-complete subgraph ((k+1)-clique) in <cit.>. The term clique refers to a group (clique) of k+1 vertices forming a k-simplex. This is called sublevel filtration for the filtration function f. Similarly, one can reverse this process (using a decreasing order for activation of the nodes) to obtain a superlevel filtration, where _i = {v ∈| f(v) ≥ϵ_i'} where {_i'} is a monotone decreasing sequence.
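A minimal sketch of a sublevel filtration by node degree is given below, using NetworkX for the graph and GUDHI's SimplexTree for the filtration; the graph and the choice of degree as the filtration function are illustrative. Each vertex enters at its function value and each edge at the maximum of its endpoints' values, which reproduces the construction described above.

```python
import networkx as nx
import gudhi

G = nx.karate_club_graph()                 # stand-in for a real graph dataset
f = dict(G.degree())                       # node filtration function: degree

st = gudhi.SimplexTree()
for v in G.nodes():
    st.insert([v], filtration=float(f[v]))                 # vertex enters at f(v)
for u, v in G.edges():
    st.insert([u, v], filtration=float(max(f[u], f[v])))   # edge enters once both ends are active
st.expansion(2)                            # clique complex up to 2-simplices (triangles)

diag = st.persistence()                    # sublevel persistence of the degree filtration
pd0 = st.persistence_intervals_in_dimension(0)
pd1 = st.persistence_intervals_in_dimension(1)
```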
While we have explained the process for node filtration functions, edge filtration functions are also common. This approach is frequently employed in weighted graphs or graphs with key edge functions for downstream tasks. For a graph , let g: → be an edge filtration function. In the case of weighted graphs, the weight function w: → with w(e_ij) = ω_ij serves as an example of such a function. Other common examples include Forman Ricci and Ollivier Ricci curvatures, which assign the (weighted or unweighted) edge a curvature value by interpreting the geometry of the edge's neighborhood <cit.>. Furthermore, edge filtration functions can be derived from the graph's domain, such as transaction amounts in blockchain networks <cit.> or density in traffic networks <cit.>. These functions establish a hierarchy among the edges, similar to node filtration functions.
Again, by choosing the threshold set ℐ={_i}_i=1^n with _1=min_e_ij∈g(e_ij)<_2<…<_n=max_e_jk∈g(e_jk), we define the filtration as follows. For _i∈ℐ, let ℰ_i={e_jk∈ℰ| g(e_jk)≤_i}.
Let 𝒢_i be a subgraph of 𝒢 induced by ℰ_i, i.e., 𝒢_i is the smallest subgraph of 𝒢 containing the edge set ℰ_i (Fig. <ref>). In other words, for each threshold, we activate the edges whose value reaches that threshold, along with the nodes attached to them. Again, this induces a nested sequence of subgraphs 𝒢_1⊂…⊂𝒢_n=𝒢. Then, one can follow the same steps by using clique complexes as before.
In <Ref>, we give a toy example for superlevel (node) and sublevel (weight) filtrations. Here, we list the most common node and edge filtration functions used in applications:
* Node Filtration Functions: Degree, betweenness, centrality, eccentricity, heat kernel signature (HKS), Fiedler values (spectral), node functions from the domain of the data.
* Edge Filtration Functions: edge weights (weighted graphs), Ollivier Ricci, Forman Ricci curvature,
*ii. Graph distance-based filtrations. Next to the sublevel filtrations above, a completely distinct filtration method for graphs uses the distances between nodes. Essentially, it treats the graph as a point cloud, where pairwise distances between nodes are defined by graph distance, and applies Rips filtration (<Ref>) to this point cloud. While this method is more computationally intensive than filtrations via node or edge functions, it can be highly effective for small graphs as it captures distance information and finer topological details.
To apply this method, we need to define graph distances between nodes. In an unweighted graph = (, ), a common approach is to set the distance between adjacent nodes to 1 (e_ij = 1) and define the distance between v_i and v_k as the length of the shortest path τ_ik in with endpoints v_i and v_k, i.e. d(v_i, v_k) = minτ_ik. Thus, each pairwise distance is an integer representing the number of hops needed to travel from v_i to v_j in . The largest such distance is the diameter of , denoted as n.
After obtaining the pairwise distances {d(v_i, v_j)} between the nodes, we treat the graph as a point cloud = {v_1, v_2, …, v_m} with these distances. Since all distances are integers, we use integer thresholds {r_i = i} for 0 ≤ i ≤ n = diam(). Now, we are ready to define our filtration. For the filtration index, we use superscripts instead of subscripts, where the reason will become clear later.
To make the exposition simpler, let ^k be the Rips complex corresponding to r=k, and let ^k be the graph corresponding to the 1-skeleton of ^k. Here 1-skeleton means the graph itself with its nodes and edges, without considering any higher-dimensional elements like faces or solids that might be part of a more complex simplicial structure. For the distance parameter r = 0, we have ^0 = since there are no edges in the Rips complex. It is easy to see that at the distance threshold of 1, 𝒢^1 = 𝒢 because all edges are automatically included in the complex when r = 1. Recall that ^1 is a Rips complex, hence any complete j-subgraph in ^1 generates a (j-1)-simplex (j-clique) in the Rips complex ^1. In particular, ^k is nothing but the clique (or flag) complex of ^k. Furthermore, ^k is the graph with node set , and edge set ^k = {e_ij| d(v_i, v_j) ≤ k}, i.e., ^k=(,^k). Therefore, in the filtration ^0 ⊂^1 ⊂…⊂^n, all simplicial complexes have the same node set, but at each step, we add new edges and cliques. Finally, for ^n, the 1-skeleton is a complete graph with m nodes, and hence the corresponding simplicial complex ^n would be an (m-1)-simplex.
[Figure: Graph Powers. A graph G=G^1 and its graph powers. Red edges are added in G^2, and green ones are added in G^3. Note that 𝒢^3 is the complete graph on 7 vertices since D=diam(G)=3. Hence, 𝒢^3 is the complete graph 𝒦_7, and all higher powers are the same, i.e., 𝒢^n=𝒢^3 for n≥ 3.]
The graphs {𝒢^k} are called graph powers (see <Ref>), and this filtration is called the power filtration (or Rips filtration), hence the superscripts. Observe that the power filtration calculates the distances between every pair of vertices in the graph. Therefore, even vertices that are not direct neighbors or appear distant in the graph can still form a simplex in the later stages of the filtration. Note that you don't need to complete the entire filtration. In most cases, the critical information lies in the first few steps (e.g., up to r=5 or 10) of the power filtration. Given the high computational cost, it's both practical and reasonable to stop early.
For a weighted graph = (, , ), edge lengths can be defined using weights {ω_ij}, where e_ij = f(ω_ij) instead of the uniform e_ij = 1 used in unweighted graphs. The function f: →^+ assigns distances based on weights, where a smaller f(ω_ij) indicates that nodes v_i and v_j are closely related, and a larger distance indicates they are less related. This approach is particularly useful in financial networks where large edge weights (i.e., amounts) indicate stronger financial connections or dependencies. In <Ref>, we elaborate on this method in the context of a real-world application: Crypto-token anomaly forecasting.
Once edge lengths are defined, pairwise distances between any node pair can be calculated as the length of the shortest path as before, i.e., d(v_i, v_j) = minϵ_ij, where ϵ_ij is any such path in the graph with endpoints v_i and v_j. After obtaining pairwise distances, the process is the same as for unweighted graphs.
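A sketch of the power (Rips) filtration for graphs is given below; the random graph is a stand-in for real (possibly weighted) data. Pairwise graph distances are computed with NetworkX and fed to GUDHI's Rips construction through its distance-matrix interface.

```python
import networkx as nx
import numpy as np
import gudhi

G = nx.erdos_renyi_graph(30, 0.15, seed=0)        # placeholder for a real graph

# Pairwise graph distances (hop counts); pass weight="weight" for weighted graphs.
D = np.asarray(nx.floyd_warshall_numpy(G), dtype=float)
finite = np.isfinite(D)
D[~finite] = D[finite].max() + 1                  # place disconnected pairs beyond the diameter

# Power (Rips) filtration built directly from the distance matrix; stopping at
# max_edge_length = 5 keeps only the first few filtration steps.
rips = gudhi.RipsComplex(distance_matrix=D, max_edge_length=5.0)
st = rips.create_simplex_tree(max_dimension=2)
diag = st.persistence()
```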
[Figure: Coral Reduction. With CoralTDA, the 2-core of 𝒢 has the same persistence diagrams PD_k(𝒢) for k≥ 1.]
*Graph Reduction and PH Although PH generates highly effective topological embeddings for graph representation learning, scalability remains a challenge. To address this issue, Akcora et al. <cit.> introduced two key methods to significantly reduce the computational costs of the PH process. The first method, CoralTDA, leverages the observation that nodes with low graph-core do not contribute to persistence diagrams in higher dimensions. Specifically, they demonstrate that the (k+1)-core of a graph, which is a subgraph where each vertex has at least k neighbors, is sufficient to compute the PD_k(𝒢) of the original graph. This improvement can be implemented in as few as three lines of Python code (see the repository). The second algorithm, PrunIT, introduces an efficient method to reduce the size of graphs without altering their persistence diagrams. In this approach, a vertex u is said to dominate another vertex v if the 1-neighborhood of u contains the 1-neighborhood of v, i.e., 𝒩(u)⊃𝒩(v). The authors show that removing (pruning) a dominated vertex from the graph does not affect the persistence diagrams at any level as long as the dominated vertex enters the filtration after the dominating vertex.
*Which filtration to use for graphs? We first note that filtrations using node/edge functions and filtrations based on graph distances are entirely different methods, producing distinct outputs. Distance-based filtrations are computationally demanding, but for datasets with small graphs (such as molecular graph datasets), power filtrations can be highly effective. Conversely, for larger graphs, the computational costs necessitate the use of filtration functions. In these cases, the choice of filtration function could be critical for performance <cit.>. Typically, relevant node or edge functions specific to the domain of the data are the best choices. If such functions are unavailable, heat kernel signatures (HKS) for node functions and Ollivier Ricci for edge functions are excellent alternatives. Choosing a filtration function can be seen as an outdated method, as letting ML algorithms select or construct the best filtration function for optimal performance is more reasonable. However, in many settings, model interpretability or greater control over the process is needed. In these instances, selecting a relevant filtration function is more suitable for the model.
If the goal is to achieve optimal performance with PH machinery, learnable filtration functions are a promising alternative. There are significant works, along with available code, that can be adapted for various tasks in graph representation learning <cit.>.
§.§.§ Choosing Thresholds
One of the key steps in effectively applying PH to ML problems is selecting the thresholds {_i}_i=1^m for constructing the filtration. The first decision involves determining the number of thresholds, i.e., m. This number can be viewed as the resolution of the filtration: a larger m implies higher resolution, while a smaller m indicates lower resolution. Then, a natural question arises: is there any disadvantage to choosing a large m? The answer is "Yes". Firstly, the computational cost increases with m. Secondly, the key information captured in the data may become diluted in higher dimensions, depending on the vectorization step.
For graph filtrations, depending on the filtration function, selecting m between 10 and 20 generally yields good results. Once the number of thresholds is set, they can be chosen equally spaced if the filtration function f: → (or g: →) is appropriate. Another effective method is to examine the distribution of the value set {f(v_i)} for all vertices in the graphs across the dataset (or a random subsample), and then select the thresholds as the corresponding quantiles in this distribution. Similarly, for distance-based filtrations, analyzing the distribution of pairwise distances between vertices in a random subsample of the dataset can provide valuable insights for selecting appropriate thresholds.
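In code, quantile-based thresholds can be obtained in one line; the values array below is a placeholder for the collected filtration-function values (e.g., all node degrees or all pairwise distances in a subsample).

```python
import numpy as np

values = np.random.lognormal(size=10_000)            # placeholder for collected values
m = 15                                               # number of thresholds (resolution)
thresholds = np.quantile(values, np.linspace(0.0, 1.0, m))
```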
For sublevel filtrations in the image setting, it’s important to choose thresholds that cover the color range [0,255]. Typically, using 50-100 thresholds yields good results. However, if the task only concerns a specific color interval [a,b], it’s advisable to concentrate most or all thresholds within that range. These thresholds don't need to be evenly spaced. In the case of binary image filtrations (e.g., erosion, height, signed distance), thresholds are usually evenly distributed up to the maximum value.
[Figure: Evolution of Persistence Barcode (PB). Panels show 𝒩_ϵ(𝒳) and the barcode up to ϵ for ϵ = 0.4, 0.7, and 2. In the PBs shown on the right, red bars represent PB_0(), and blue bars represent PB_1(). The 8-shaped point cloud with 35 points initially has 35 components, corresponding to 35 red bars. As ϵ increases, these components merge. At ϵ = 0.4, only the top 11 red bars remain (b), indicating the number of components in 𝒩_0.4() (a). By ϵ = 0.7, the space becomes fully connected (c), and all red bars in PB_0() terminate, except for the top ∞-bar (d). Regarding loops (1-holes), a small loop appears around ϵ = 0.3 (a-b) and persists in 𝒩_0.7() (c), while a larger loop emerges around ϵ = 0.6 (d). In PB_1() (f), two blue bars, [0.3, 0.9) and [0.6, 1.7), correspond to the small and large loops in the 8-shape.]
For point clouds, the number of thresholds directly determines the filtration's resolution. While thresholds are generally evenly spaced, in the presence of outliers, they can be more sparsely distributed after a certain value. Additionally, since computational cost is often a concern with point clouds, it's important to find a balance between the number of thresholds and the associated computational expense.
§.§ Persistence Diagrams
After constructing the filtration _1 ⊂_2 ⊂…⊂_n for a data type , PH systematically tracks the evolution of topological features (k-holes) in the filtration sequence and records this information in a persistence diagram, which we define next. The nontrivial elements in the homology groups _k(_i) for 1≤ i≤ n represent the k-dimensional topological features (or k-holes) appearing in the filtration sequence. Furthermore, the inclusion map ι: _i ↪_i+1 allows us to determine if a k-hole σ in _i persists in _i+1 through the induced map ι_*: _k(_i) →_k(_i+1).
For each k-hole σ, PH records its first appearance in the filtration sequence, denoted _i_0, and its first disappearance in a later complex, _j_0. We define b_σ = ϵ_i_0 as the birth time of σ and d_σ = ϵ_j_0 as the death time of σ, where {ϵ_i}_1^n is the threshold set used for the filtration. The difference d_σ - b_σ is called the lifespan of σ. For example, if a k-hole τ first appears in _3 and disappears in _7, we mark the birth time as b_τ = ϵ_3 and the death time as d_τ = ϵ_7. The lifespan of τ is then ϵ_7 - ϵ_3.
For each nontrivial σ∈_k(_i) for 1 ≤ i ≤ n, we represent σ with a 2-tuple (b_σ, d_σ) to denote its birth and death times in the filtration. The collection of all such 2-tuples is called the persistence diagram (PD) as depicted in Figure <ref>. Note that a topological feature with the same birth and death time (0 lifespan) is considered a trivial topological feature, and they are represented with diagonal elements in the persistence diagrams. Therefore, the diagonal Δ={x=y}⊂^2 is always included in any persistence diagram.
Then, k^th persistence diagram is defined as
PD_k(𝒳)={(b_σ, d_σ) | σ∈ℋ_k(𝒦_i) for some ϵ_i with b_σ≤ϵ_i < d_σ}∪Δ
Since trivial elements (t,t) in PDs can appear multiple times, we take the diagonal Δ with infinite multiplicity. Essentially, the persistence diagram is a subset of ^2 (_k() ⊂^2), where each point in _k() is either a pair of threshold values (ϵ_i, ϵ_j) for some i < j, or belongs to the diagonal (trivial element). Here, infinite multiplicity is a technical assumption, which will be important when discussing Wasserstein distance between PDs (<Ref>).
[Figure: Persistence Barcode (left) and Persistence Diagram (right) for the 8-shaped point cloud (<Ref>). Both representations convey the same information, while the PD plots each bar's birth time as its x-coordinate and death time as its y-coordinate. For example, the 35 red bars in PB_0() that begin at 0 are represented as 35 red points along the x=0 line in _0(), where the y-coordinates correspond to the bars' death times. Similarly, the blue points at (0.3, 0.9) and (0.6, 1.7) represent the blue bars [0.3, 0.9) and [0.6, 1.7), respectively.]
There is an equivalent concept called the persistence barcode (see Figure <ref>), which uses bars (half-open intervals) {[b_σ, d_σ)} instead of 2-tuples {(b_σ, d_σ)}. The bar notation [b_σ, d_σ) represents the entire lifespan of a topological feature σ as such an interval. In practice, persistence diagrams are more commonly used due to their practicality in applications.
Note that the choice between sublevel and superlevel filtration is irrelevant for the point cloud setting. In the image setting, this choice is not essential, as they essentially convey the same information due to Alexander duality <cit.>; when analyzing the topological features of images, the choice of focusing on either the binary image or its complement is not crucial because the topological information of one is directly related to the topological information of the other through this duality. The Alexander duality holds only when the ambient space is quite simple, such as when the space has a structure like an m × n rectangle, like images. However, sublevel and superlevel filtrations can yield significantly different results in the graph setting. Carriere et al. <cit.> introduced extended persistence diagrams to address this issue.
This approach enables the birth and death pairs of the persistence diagram to be positioned below the diagonal (b > d). By doing so, it captures topological features identified through both superlevel and sublevel filtrations.
§.§.§ Interpretation of Persistence Diagrams
A persistence diagram _k(𝒳) records the k-dimensional topological features of the data 𝒳 as a collection of points in ℝ^2. For example, _0(𝒳) records 0-holes (components), and _1(𝒳) records 1-holes (loops) appearing in the filtration sequence {𝒦_i}. Each point q_j = (x_j, y_j) represents a k-dimensional hole σ_j. For instance, consider a threshold set ℐ = {ϵ_i = i/10}_i=0^100 with 100 thresholds spanning the interval [0, 10]. Suppose we have two points, q_1 = (0.3, 9.7) and q_2 = (4.2, 4.6), in _2(𝒳). These points represent 2-holes (cavities) σ_1 and σ_2 that appear in the filtration 𝒦_0 ⊂𝒦_1 ⊂…⊂𝒦_100.
For q_1 = (0.3, 9.7), the cavity σ_1 first appears in 𝒦_3 and persists until 𝒦_97, indicating a long lifespan. Conversely, for q_2 = (4.2, 4.6), the cavity σ_2 first appears in 𝒦_42 and persists only until 𝒦_46, indicating a short lifespan. This suggests that σ_1 represents an important topological feature of the data 𝒳, while σ_2 is likely just topological noise.
In general, features with long lifespans (where y - x is large) are located far from the diagonal and are considered significant (or "big") features. Features with short lifespans (where y - x is small) are close to the diagonal and are considered insignificant (or "small") features. This is why many vectorization techniques try to involve the lifespan information in their computation to give different emphasis to small and big features in the output.
§.§.§ Wasserstein Distance
After obtaining the persistence diagrams (PDs) for two datasets, 𝐗^+ and 𝐗^-, we can assess their topological similarities using these diagrams. One effective approach is to employ a metric in the space of PDs. If the distance between the two persistence diagrams is small, we can conclude that the two datasets exhibit similar topological characteristics; otherwise, they differ.
The most commonly used metric for this purpose is the Wasserstein distance (also known as the matching or earth mover distance) of the Optimal Transport Theory (see an ICML tutorial in <cit.>), which is defined as follows:
Let (^+) and (^-) be the persistence diagrams for the datasets ^+ and ^-, respectively (we omit the dimensions in PDs for simplicity). Denote (^+) = {q_j^+}∪Δ and (^-) = {q_l^-}∪Δ, where Δ represents the diagonal (indicating trivial cycles) with infinite multiplicity, and q_j^± = (b^±_j, d_j^±) ∈(^±) represents the birth and death times of a topological feature σ_j in ^±. Let ϕ: (^+) →(^-) represent a bijection (matching). The presence of the diagonal Δ on both sides ensures the existence of these bijections even if the cardinalities |{q_j^+}| and |{q_l^-}| differ. This means that for optimal matching, some points in {q_j^+} and {q_l^-} are matched to diagonal elements whenever necessary. Then, the p^th Wasserstein distance 𝒲_p is defined as
𝒲_p((^+), (^-)) = min_ϕ( ∑_j ‖ q_j^+ - ϕ(q_j^+) ‖_∞^p )^1/p, p ∈ℤ^+.
Here, ‖(x_1,y_1)-(x_2,y_2)‖_∞=max{|x_1-x_2|,|x_2-y_2|·0+|y_1-y_2|} is the supremum norm (or l^∞-norm), i.e., the larger of |x_1-x_2| and |y_1-y_2|. When p=∞, 𝒲_∞ is called the bottleneck distance, i.e.,
𝒲_∞((^+), (^-)) = min_ϕ max_j {‖ q_j^+ - ϕ(q_j^+) ‖_∞} (See <Ref>).
[Figure: Wasserstein distances 𝒲_1 and 𝒲_∞ (bottleneck) between PD_1(8) and PD_1(9) for the point clouds of "8" and "9".]
In applications, the most common choices for the Wasserstein distance are p=1, 2, and ∞. The bottleneck distance, unlike the p-Wasserstein distance, is insensitive to the number of points in _k(^±) and instead focuses on the distance between the farthest points in the optimal matching. For instance, ignoring the diagonal, if _k(^+) has 100 points and _k(^-) has only three points, the bottleneck distance is primarily determined by finding the three closest points in _k(^+) to those in _k(^-) and taking the maximum of these distances. Conversely, when using p=1 or p=2, the quantity of points in _k(^±) becomes significant as all points contribute to the distances, even if only slightly. Therefore, if there are only a few points in {_k(_j)} and the focus is on the distances of the most critical features, the bottleneck distance is preferable. However, if there are many points in {_k(_j)} and the primary topological patterns arise from the quantity and location of smaller features, then choosing p=1 (or p=2) for the Wasserstein distance would be more suitable.
There are several ways to utilize the Wasserstein distance in applications. One common approach is to bypass the vectorization step, which requires some choices, and directly compare persistence diagrams. For instance, to determine if two datasets have similar shapes or structures, one can compute the persistence diagrams for both datasets and then calculate the Wasserstein distance between these diagrams. A smaller distance indicates more similar shapes. Another application lies in unsupervised learning, where datasets are clustered based on their topological similarity using persistence diagrams, utilizing the Wasserstein distance to measure distances between them.
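Both distances are available in standard TDA libraries. The sketch below uses GUDHI (its Wasserstein module relies on the POT optimal-transport package); the two toy diagrams are illustrative.

```python
import numpy as np
import gudhi
from gudhi.wasserstein import wasserstein_distance   # requires the POT package

pd_plus = np.array([[0.3, 9.7], [4.2, 4.6]])          # toy diagrams: rows are (birth, death)
pd_minus = np.array([[0.4, 9.1]])

w1 = wasserstein_distance(pd_plus, pd_minus, order=1, internal_p=np.inf)
w_inf = gudhi.bottleneck_distance(pd_plus, pd_minus)  # bottleneck (p = infinity) distance
print(w1, w_inf)
```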
§.§ Integrating PDs to ML tasks 1: Vectorizations
To effectively use persistence diagrams (PDs) in ML applications, it is crucial to convert them into numerical or vector formats, allowing for the seamless integration of TDA outputs into standard ML workflows. Although PDs capture the birth and death of topological features within data, their variable size and structure make them challenging to handle directly. For example, in a binary graph classification task with 1000 graphs (400 in Class A and 600 in Class B), PDs might visually highlight differences between the classes. However, applying traditional statistical tools to PDs, which are subsets of ^2, is problematic; for instance, it is not possible to compute averages or confidence intervals for each class. Moreover, directly inputting PDs into ML models is impractical because their variable point count conflicts with the fixed-size input required by most ML algorithms.
Vectorization addresses this by making PDs compatible with traditional ML and statistical techniques. Below, we outline the most common vectorization methods used in practice. For a comprehensive review, refer to <cit.>.
In the following, for a fixed dataset 𝒳, a threshold set {ϵ_i}_i=1^n, and an induced filtration {𝒦_i}_i=1^n, we will introduce several vectorization methods. It is important to note that all these vectorization methods are applicable to any type of data since they simply transform a given PD into vectors. However, the effectiveness and common usage of certain vectorization methods can vary depending on the data type and the density or sparsity of the PDs. Although we present various vectorization methods and their configurations here, those who prefer to avoid the intricacies of vectorization selection and hyperparameter tuning can opt for automated approaches, as detailed in <Ref>.
*Function vs. Vector Format. In the following, we introduce several vectorization methods. For each method, we will describe how to convert a given PD into a vector or a function depending on the setting. Depending on the context, one of these formats might be preferable (visualization, ML input, etc.); however, both formats can easily be converted to each other. For example, if we have a function f:[ϵ_1,ϵ_n]→ℝ defined over an interval, we can convert f into an N-dimensional vector by sampling its values at the points t_j = ϵ_1 + (j/N)(ϵ_n - ϵ_1) for j = 1, …, N. The vector 𝐯_f is then 𝐯_f=[f(t_1), f(t_2), …, f(t_N)]. Conversely, for a given N-dimensional vector 𝐯=[v_1 v_2 … v_N], we can convert it to a function via a step function or linear spline interpolation, e.g., g_𝐯:[ϵ_1,ϵ_n]→ℝ such that g_𝐯(t_j)=v_j for t_j=ϵ_1+(j/N)(ϵ_n-ϵ_1). Then, for any point s∈[t_j,t_j+1], we define the linear extension g(s)=g(t_j)+ ((g(t_j+1)-g(t_j))/(t_j+1-t_j))·(s-t_j). Hence, any of the following vectorizations can be taken as a function or vector depending on the need.
*Topological Dimensions and Final Topological Vector. In each vectorization method, we convert a persistence diagram _k() into a vector (or array) 𝐯_k where k represents a topological dimension of k-holes. In particular, each persistence diagram produces a different vector. The user needs to decide which topological dimensions to be used in the method. After getting an N-dimensional vector for each dimension, a common method is to concatenate them to obtain the final topological vector. Hence, if we use m different topological dimensions, we have (m· N)-dimensional final vector 𝐯()=𝐯_0‖𝐯_1‖…‖𝐯_m-1. In most applications, the common dimensions used are k=0 and k=1, and hence the final topological vector is 2N-dimensional, i.e., 𝐯()=𝐯_0‖𝐯_1.
*Betti vectors One of the most straightforward and interpretable vectorization methods in TDA is the Betti function. The k^th Betti number of a topological space essentially counts the total number of k-dimensional holes in . More formally, it is defined as β_k() = rank(H_k()), which is the rank of the k^th homology group of . For example, β_0() represents the number of connected components in , while β_1() indicates the number of 1-dimensional holes.
[Figure: Betti Function. The step function represents the first Betti number (β_1) over the figure-eight-shaped point cloud of Figure <ref> as a function of the threshold ϵ. The function shows the number of 1-dimensional holes (loops) in the point cloud. Initially, there are no loops (β_1 = 0), then a single loop appears at ϵ = 0.35 and persists until ϵ = 0.94, during which another loop appears at ϵ = 0.55 and vanishes at ϵ = 0.94. The final loop disappears at ϵ = 1.70, returning β_1 to 0.]
For given filtration {_i}_1^n, we define k^th Betti vector β⃗_k()= [ β_k(_1) β_k(_2) …β_k(_n)]. Therefore, for each topological dimension k, we obtain a Betti vector of the size of a number of thresholds. For example, k=0,1 with n=50 results in 50-dimensional vector for each dimension. In ML applications, typically, these vectors are concatenated to create a 100-dimensional topological vector as output.
Consider the point cloud in <Ref>. We define a filtration using five threshold values ϵ = [0, 0.25, 0.75, 1.5, 1.75], where _i = __i(). The Betti-0 vector β⃗_0() = [35, 20, 1, 1, 1] is obtained, with β_0() = 35, β_0(_0.25()) = 20, and β_0(_ϵ()) = 1 for ϵ≥ 0.7. This indicates that _0.25() consists of 20 connected components. The Betti-0 vector can be easily determined from the persistence barcode PB_0() (<Ref>) by counting the number of red bars intersected by a vertical line x=_i, representing the number of persistent topological features at the threshold _i. Thus, with 5 thresholds, we obtain a 5-dimensional vector, β⃗_0().
Similarly, we can determine the corresponding Betti-1 vector by examining the blue bars in the persistence barcode (<Ref>). We observe no blue bar at x = 0, 0.25, and 1.75. Furthermore, there are two blue bars at x = 0.75 and one blue bar at x = 1.5. Therefore, the Betti-1 vector is β⃗_1() = [0, 0, 2, 1, 0]. This indicates that there are no 1-dimensional holes (1-holes) in = _0(), _0.25(), or _1.75(), while _0.75() contains two 1-holes and _1.5() contains one. It is also common to use Betti vectors as Betti functions via step functions (See <Ref>).
Similarly, using 21 equally spaced thresholds = {0, 0.1, 0.2, …, 2} would yield 21-dimensional vectors β⃗_⃗0⃗ and β⃗_⃗1⃗.
We note that Betti vectors do not require the computation of persistence diagrams. Therefore, there are computationally more effective ways to produce Betti vectors <cit.>. Another favorable aspect of Betti vectors is their ease of interpretation. Simply put, β_k(ϵ_i) is equal to the number of k-dimensional topological features in 𝒦_i.
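As an illustration, the Betti vector can also be read off from the persistence intervals by counting, at each threshold, the bars that are alive; the snippet below reproduces the β⃗_1 vector [0, 0, 2, 1, 0] from the 8-shaped example above. The function and diagram are illustrative, not tied to a specific library.

```python
import numpy as np

def betti_vector(intervals, thresholds):
    """Count the bars [b, d) that are alive at each threshold value."""
    return np.array([int(np.sum((intervals[:, 0] <= t) & (t < intervals[:, 1])))
                     for t in thresholds])

pd1 = np.array([[0.3, 0.9], [0.6, 1.7]])       # the two loops of the 8-shaped point cloud
eps = [0.0, 0.25, 0.75, 1.5, 1.75]
print(betti_vector(pd1, eps))                  # -> [0 0 2 1 0]
```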
[Figure: Persistence Landscape (PL). The plot shows the PL for _1() for the 8-shaped point cloud filtration in <Ref>. The blue function corresponds to λ^1, and the orange function to λ^2.]
*Persistence Landscapes Persistence Landscapes are one of the first vectorization methods in TDA, introduced by P. Bubenik, directly utilizing the lifespan information <cit.>. In particular, in this vectorization, the points away from the diagonal (large features) are easily distinguished and promoted. For a given persistence diagram _k()={(b_i,d_i)}, we first define generating functions Λ_i for each (b_i,d_i)∈_k(), i.e., Λ_i:[b_i,d_i]→ℝ is a piecewise linear function obtained by two line segments starting from (b_i,0) and (d_i,0) and meeting at the point ((b_i+d_i)/2,(d_i-b_i)/2). Then, we define several piecewise-linear functions {λ^m} in the following way. For each t∈[ϵ_1,ϵ_n], we check all generating functions {Λ_i(t)}, and we mark the m^th largest value. In particular, the m^th Persistence Landscape function λ^m():[ϵ_1,ϵ_n]→ℝ is defined as λ^m()(t)=m^th max_i{Λ_i(t)} for t∈ [ϵ_1,ϵ_n] (See <Ref>). Note that the first and second persistence landscapes are the most commonly used vectors, and they are used with concatenation in the applications.
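In practice, persistence landscapes rarely need to be implemented by hand; for example, GUDHI's representations module provides them directly. The sketch below (toy diagram, illustrative parameters) produces the first two landscape functions sampled on a grid.

```python
import numpy as np
from gudhi.representations import Landscape

pd1 = np.array([[0.3, 0.9], [0.6, 1.7]])               # toy persistence diagram
land = Landscape(num_landscapes=2, resolution=100)     # first two landscape functions
vec = land.fit_transform([pd1])                        # shape (1, 2 * 100)
```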
[Figure: Silhouette Functions. The plot shows the silhouette functions with tuning parameters p=0.5, 1, 2 for _1() of the 8-shaped point cloud filtration given in <Ref>.]
*Silhouettes While persistence landscapes are among the first vectorizations to effectively utilize lifespan information, the need to consider m different maxima makes it difficult to use in ML applications.
In <cit.>, Chazal et al. proposed a practical modification called the Silhouette for persistence landscapes. This modification introduces a tuning parameter p to better utilize the lifespans of topological features. For a persistence diagram _k()={(b_i,d_i)}_i=1^N, let Λ_i be the generating function for (b_i,d_i) as defined in the persistence landscapes. The Silhouette function Ψ is defined as: Ψ_k()(t)= (∑_i=1^N w_iΛ_i(t)) / (∑_i=1^N w_i), t∈[ϵ_1,ϵ_n],
where the weight w_i is usually chosen as the lifespan (d_i-b_i)^p. Thus, the p-silhouette function ψ^p_k():[ϵ_1,ϵ_n]→ is defined as:
Ψ^p_k()(t)= (∑_i=1^N (d_i-b_i)^p Λ_i(t)) / (∑_i=1^N (d_i-b_i)^p)
The tuning parameter p is crucial as it adjusts the silhouette function's emphasis on topological features with varying lifespans. When p<1, shorter lifespans (d_i-b_i) are given more weight, emphasizing smaller features. Conversely, shorter lifespans are down-weighted when p>1, highlighting larger features. Common choices for p are 1/2, 1, and 2 (See <Ref>). If the persistence diagram has a few points and the goal is to emphasize significant features, p=2 would be a good choice. If there are many points and the key information comes from smaller features, p=1/2 is considered more suitable. Similarly, Silhouette functions ψ^p_k() can be converted to vectors for each dimension k, and concatenation of these vectors can be used in the application as the topological vector of the dataset.
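Silhouettes are likewise available as ready-made vectorizers; the sketch below uses GUDHI's Silhouette with a lifespan weight (d-b)^p for p=2 on a toy diagram (all values are illustrative).

```python
import numpy as np
from gudhi.representations import Silhouette

pd1 = np.array([[0.3, 0.9], [0.6, 1.7]])
# The weight (d - b)^p with p = 2 emphasizes long-lived features.
sil = Silhouette(resolution=100, weight=lambda pt: (pt[1] - pt[0]) ** 2)
vec = sil.fit_transform([pd1])                         # shape (1, 100)
```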
[Figure: Persistence Curves. The plot shows the persistence curves for _1() of the 8-shaped point cloud filtration in <Ref>, using two different summary statistics: sum and mean. The dashed line represents the persistence curve using the sum of lifespans, which emphasizes the cumulative contribution of all loops. The solid line represents the persistence curve using the mean of lifespans, which highlights the average significance of the loops. This visualization illustrates how different summary statistics influence the interpretation of topological features in the data.]
*Persistence Curves The Persistence Curve (PC) framework, introduced by Chung et al. <cit.>, offers a structured approach to the vectorization process. The core idea revolves around utilizing a generating function ψ, a summary statistic T, and a filtration parameter ϵ. From these elements, a PC is defined as:
PC(, ψ, T)(ϵ) = T([ψ(; b, d, ϵ) | (b, d) ∈_ϵ]), ϵ∈ℐ.
Here, _ϵ represents the points within the persistence diagram that fall inside a region Δ_ϵ varying with the filtration value ϵ.
The function ψ takes as input a point from the persistence diagram and the filtration parameter ϵ, and outputs a real number. This function can be chosen to prioritize certain points in the PD or to emphasize specific features. Finally, the statistic T acts upon a multiset of values, aggregating them into a single real number. Common examples include sum, mean, or max.
PCs provide a general and unifying framework for vectorization methods.
By selecting different combinations of ψ and T, one can generate various functional summaries of the PD, each potentially highlighting different aspects of the data. Many established vectorizations of PDs can be represented within the PC framework, e.g., Betti curves, persistence landscapes, silhouettes, and entropy curves. See <cit.> for more details.
[Figure: Persistence Images (PI). PIs are vectorizations with 2D (matrix) output. Here, we give examples of PIs for _1() (8-shaped point cloud in <Ref>) with different spread values. The spread parameter σ controls the standard deviation of the Gaussian functions used to smooth the persistence points in the persistence diagram. As σ increases, the generating Gaussian functions are more spread out, producing a flatter persistence image.]
*Persistence Images Our next vectorization is Persistence Images, introduced by Adams et al. <cit.>. Unlike most vectorizations, Persistence Images, as the name suggests, produce 2D-arrays (tensors). The idea is to capture the location of the points in the PDs with a multivariable function by using the 2D Gaussian functions centered at these points. For PD()={(b_i,d_i)}, let ϕ_i represent a 2D-Gaussian centered at the point (b_i,d_i)∈^2. Then, one defines a multivariable function, Persistence Surface, μ=∑_iw_iϕ_i where w_i is the weight, mostly a function of the life span d_i-b_i. To represent this multivariable function as a 2D-vector, one defines a k× l grid (resolution size) on the domain of μ, i.e., threshold domain of PD(). Then, one obtains the Persistence Image, a 2D-vector (matrix) μ⃗=[μ_rs] of size k× l such that
μ_rs=∫_Δ_rsμ(x,y) dxdy where Δ_rs= pixel with index rs in the k× l grid.
Note that the resolution size k× l is independent of the number of thresholds used in the filtering; the choice of k and l is completely up to the user. There are two other important tuning parameters for persistence images, namely the weight w_i and the variance σ (the width of the Gaussian functions). Like Silhouettes, one can choose w_i=(d_i-b_i)^p to emphasize large or small features in the PD. Similarly, the width parameter σ determines the sharpness of the Gaussians, where a smaller σ makes the Gaussian functions more like Dirac δ-functions, and a larger σ makes the Gaussians flatter. Depending on the context, σ can be chosen as a constant (e.g., σ=0.1) or as a function of the point (b_i,d_i), e.g., σ_i=k(d_i-b_i) for some constant k>0.
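The following NumPy sketch builds a small persistence image in birth-lifespan coordinates. For simplicity, it evaluates the weighted persistence surface at pixel centers instead of integrating over each pixel, which is a common approximation; the resolution, σ, and weight exponent p are the tuning parameters discussed above, and the values chosen here are illustrative.

```python
import numpy as np

def persistence_image(diagram, resolution=(20, 20), sigma=0.05, p=1.0):
    """Minimal persistence image in (birth, lifespan) coordinates.

    diagram:    (n, 2) array of (birth, death) pairs.
    resolution: (k, l) grid size of the output image.
    sigma:      width of the Gaussian placed on each diagram point.
    p:          exponent of the lifespan weight w_i = (d_i - b_i)^p.
    """
    births = diagram[:, 0]
    lifespans = diagram[:, 1] - diagram[:, 0]
    weights = lifespans ** p
    # Grid over the (birth, lifespan) plane covering all diagram points
    xs = np.linspace(births.min(), births.max(), resolution[0])
    ys = np.linspace(0.0, lifespans.max(), resolution[1])
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    img = np.zeros(resolution)
    for b, l, w in zip(births, lifespans, weights):
        img += w * np.exp(-((X - b) ** 2 + (Y - l) ** 2) / (2 * sigma ** 2))
    return img

pd1 = np.array([[0.1, 0.9], [0.2, 0.3], [0.4, 0.45]])
pi_sharp = persistence_image(pd1, sigma=0.02)   # localized features
pi_flat  = persistence_image(pd1, sigma=0.2)    # flatter image
```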
*Kernel Methods Kernel methods provide an alternative to traditional direct vectorization techniques used to transform PDs into formats suitable for ML algorithms. In ML, these methods utilize a mathematical function known as a kernel to analyze data and discern patterns. Unlike earlier approaches that directly represent PDs using vectors or functions, kernel methods compute a similarity score as an inner product between pairs of PDs in a high-dimensional space without explicitly mapping the data. This approach is advantageous because it adapts well to ML techniques like support vector machines (SVMs) and kernel principal component analysis (KPCA). Therefore, kernel methods are well-suited for tasks such as classification, regression, and principal component analysis.
One common method is the persistence weighted Gaussian kernel (PWGK) <cit.>. It enhances the Gaussian kernel by incorporating weights based on the significance of topological features. This approach amplifies the influence of important features while reducing the impact of noise (i.e., short-lived holes). For instance, it assigns weights to points p=(b,d)∈() proportional to their lifespans, w(p)=(d-b). In particular, PWGK is defined as
𝐊((^+), (^-)) = ∑_p ∈(^+)∑_q ∈(^-) w(p) w(q) k(p, q)
where k(p, q) = exp(-p - q^2/2σ^2) denotes the Gaussian kernel function.
Another significant approach is the sliced Wasserstein kernel (SWK) <cit.>, which computes the Wasserstein distance between PDs and integrates it into a kernel framework. This method employs Optimal Transport Theory to establish a meaningful metric for comparing PDs. Although kernel methods can yield better results in some settings, they can be computationally intensive and impractical for large datasets due to the high computational costs associated with computing the kernel matrix. In particular, computing kernels takes quadratic time in the number of diagrams, while vectorizing PDs takes only linear time.
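A direct implementation of the PWGK formula above is straightforward; the sketch below uses lifespan weights w(p)=d-b and a Gaussian kernel of width σ, with the value of σ chosen only for illustration. The resulting Gram matrix of kernel values can then be passed to an SVM with a precomputed kernel (e.g., sklearn.svm.SVC(kernel="precomputed")).

```python
import numpy as np

def pwgk(diag1, diag2, sigma=0.1):
    """Persistence weighted Gaussian kernel between two persistence diagrams.

    Each diagram is an (n, 2) array of (birth, death) pairs; points are
    weighted by their lifespans w(p) = d - b and compared with a Gaussian
    kernel of width sigma.
    """
    w1 = diag1[:, 1] - diag1[:, 0]
    w2 = diag2[:, 1] - diag2[:, 0]
    # Pairwise squared distances between points of the two diagrams
    diff = diag1[:, None, :] - diag2[None, :, :]
    sqdist = (diff ** 2).sum(axis=-1)
    gauss = np.exp(-sqdist / (2 * sigma ** 2))
    return float(w1 @ gauss @ w2)

pd_plus = np.array([[0.1, 0.8], [0.3, 0.4]])
pd_minus = np.array([[0.15, 0.75], [0.5, 0.55]])
score = pwgk(pd_plus, pd_minus, sigma=0.1)
```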
§.§.§ Stability
In most applications, the stability of vectorization is vital for statistical and inferential tasks. Essentially, stability means that a small change in the persistence diagram (PD) should not lead to a significant change in its vectorization. In particular, if two PDs, _k(^+) and _k(^-), are close, their corresponding vectorizations, β⃗_k(^+) and β⃗_k(^-), should also be close. This ensures that the vectorization process preserves the structural properties of the data. Therefore, when two persistence diagrams _k(^±) are similar, it implies that the datasets ^+ and ^- share similar shape characteristics. If these datasets are intuitively expected to belong to the same class, their vectorizations β⃗_k(^±) should likewise remain close.
To formalize this concept, we need to define what constitutes a "small/big change" or what it means for PDs to be close. This requires a distance (metric) in the space of PDs, with the most common being the Wasserstein distance (or matching distance) as defined in <Ref>. Similarly, we need a metric for the space of vectorizations, such as the Euclidean distance β⃗(^+) -β⃗(^-) for β⃗∈^N. Thus, for a given vectorization β⃗, we call it stable (with respect to the Wasserstein-p metric) if it satisfies the stability equation
β⃗_k(^+) -β⃗_k(^-)≤ C ·_p(_k(^+), _k(^-))
This ensures that if _p(_k(^+), _k(^-)) (the distance between PDs) is small, then the distance β⃗_k(^+) -β⃗_k(^-) between the corresponding vectorizations will also be small.
Among the methods described earlier, persistence landscapes, silhouettes, persistence images, and most kernel methods are stable vectorizations, while Betti functions are generally unstable.
§.§.§ Choice of Vectorization and Hyperparameters
The choice of vectorization method should align with the characteristics of your data and the problem at hand. If your data contains a few prominent topological features that are crucial to the task, Silhouettes with p ≥ 2 or Persistence Images may be the most suitable options. These methods are also effective when dealing with noisy data, allowing you to filter out less significant features. Conversely, if your data generates a high number of small features, where the task hinges on their location and density—in other words, when the noise itself carries important information—Betti curves, Persistence curves, Silhouettes with p≤ 0.5 and kernel methods are likely to yield strong performance. On the other hand, for point cloud data, Turkes et al. <cit.> provide valuable insights on how to effectively utilize PH for shape recognition. Lastly, if interpretability is a priority, Betti Curves stands out as the most interpretable vectorization method. Note that most vectorizations are computationally efficient and require minimal time compared to the computation of PDs.
*Hyperparameters: Each vectorization method comes with its own set of hyperparameters that need to be carefully tuned to maximize performance. In all of them, the choice of thresholds is one of the key hyperparameters we discussed in <Ref>.
In addition to thresholds, many vectorization methods have tuning parameters that are used to adjust sensitivity to topological noise. Typically, topological features with short lifespans (i.e., those close to the diagonal in PDs) are regarded as noise. These features, represented as pairs {(b,d)} with short lifespans (d-b), are generally easy to detect, leading to many tuning parameters being tied to the lifespan.
For example, in persistence landscapes, the number of landscapes (m) specifies how many maxima are included in the representation. A higher number of landscapes provides a richer depiction of topological features but increases computational complexity. The first few landscapes (e.g., 1^st and 2^nd landscapes) emphasize features with the largest lifespans.
In silhouettes, the tuning parameter p in the weight function w_i = (d_i - b_i)^p determines the emphasis of the vectorization. A higher p (e.g., p ≥ 2) de-emphasizes features with short lifespans, while a smaller p (e.g., p ≤ 0.5) increases the importance of topological noise.
For persistence images, the spread (σ) controls the width of the Gaussian kernels applied to each persistence point. A smaller spread results in sharper, more localized features, whereas a larger spread yields smoother images. The resolution parameter sets the number of pixels in the persistence image, thus determining the output dimension of the vectorization. Higher resolution captures finer detail but at a higher computational cost. Similar to silhouettes, a weight function (e.g., linear) can be applied to weigh the contribution of each persistence point based on its significance, such as its lifespan.
§.§ Integrating PDs to ML tasks 2: Using Neural Networks
While vectorization choices provide greater control over the model and data analysis, there are automated methods to avoid dealing with vectorization choices or hyperparameter tuning. These methods optimize the selection for downstream tasks by either finding the best vectorization within a large vectorization space or working directly with persistence diagrams as point clouds, allowing neural networks to handle the data.
§.§.§ Vectorization with NN: Perslay
PersLay, introduced by Carrière et al. <cit.>, is an automated vectorization framework for persistence diagrams, which decides the best vectorization method for the downstream task. The method leverages a neural network architecture that processes PDs via a specialized layer called the PersLay layer. This layer incorporates a variety of vectorization strategies, providing a unified and flexible approach to extracting topological features.
The PersLay layer is composed of several key components:
* Weighting Functions: These assign importance to points in the persistence diagram. Common choices include exponential weighting and Gaussian weighting, which can be learned during training.
* Transformation Functions: These functions, such as linear transformations or learned neural networks, apply to the coordinates of the persistence points to encode meaningful geometric and topological information.
* Symmetric Functions: After transformation, symmetric functions like sum, mean, or max aggregate the transformed points into a fixed-size vector, ensuring permutation invariance.
With this architecture, PersLay covers a wide range of known vectorization methods within its framework, i.e., Persistence Landscapes, Persistence Images, Silhouettes, Betti Curves, Entropy Functions and other Persistence Curves <cit.>. By incorporating various vectorization techniques, PersLay can adapt to diverse data types and topological features, making it a robust and versatile tool for integrating PH into ML workflows. For an alternative method involving slightly different generating functions for learnable vectorizations, see <cit.> by Hofer et al.
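The sketch below is a minimal PyTorch layer in the spirit of PersLay rather than the reference implementation: it combines a learned weighting function, a learned point transformation, and a permutation-invariant sum, mirroring the three components listed above. The hidden size, sigmoid weighting, and sum aggregation are illustrative choices.

```python
import torch
import torch.nn as nn

class PersLayLike(nn.Module):
    """Minimal PersLay-style layer: weight each diagram point, transform it
    with a small learned map, then aggregate with a permutation-invariant sum."""

    def __init__(self, out_dim=16):
        super().__init__()
        self.point_transform = nn.Sequential(
            nn.Linear(2, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))
        self.point_weight = nn.Linear(2, 1)   # learned weighting of diagram points

    def forward(self, diagram):                             # diagram: (n, 2) tensor
        feats = self.point_transform(diagram)               # (n, out_dim)
        weights = torch.sigmoid(self.point_weight(diagram)) # (n, 1)
        return (weights * feats).sum(dim=0)                 # permutation-invariant vector

# Usage: embed a diagram, then feed the vector to any downstream classifier.
layer = PersLayLike()
vec = layer(torch.tensor([[0.1, 0.9], [0.2, 0.3]], dtype=torch.float32))
```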
§.§.§ PD Learning with NN: PointNet
An alternative automated method to leverage PDs in ML tasks without hyperparameter tuning is to directly utilize PointNet <cit.> or similar neural network architectures to process and analyze point cloud data, e.g., DeepSets <cit.>, PointMLP <cit.>. Unlike conventional techniques that transform point clouds into regular grids or structured formats, these networks treat PDs as sets of points in ℝ^2. Perslay and PointNet offer fundamentally different strategies for incorporating PDs into ML models without requiring vectorization or hyperparameter optimization. The choice between these methods depends on the specific task, as there is no universally superior approach.
§.§ Computational Complexity for Persistent Homology
The computational complexity of persistent homology (PH) is strongly influenced by the choice of filtration complex, as it is directly related to the size, |𝒦|, of the simplicial complex 𝒦, i.e., the number of simplices it contains. Early PH algorithms had a cubic complexity with respect to the number of simplices, i.e., 𝒪(|𝒦|^3) <cit.>. Subsequent advancements have reduced this exponent to w = 2.376, i.e., 𝒪(|𝒦|^w) <cit.>.
Thus, the computational challenge boils down to the number of simplices in a given simplicial complex 𝒦. For a point cloud with N points, the worst-case scenario for the Rips or Čech complex involves up to 2^N simplices across all dimensions. However, in PH, the computation of d-dimensional topological features requires only simplices up to dimension d+1. Higher-dimensional simplices do not contribute to the calculation of H_d(𝒦). For instance, when focusing on 0- and 1-dimensional features, only simplices up to dimension 2 are needed.
In a point cloud of size N, for a Rips complex, the number of 0-simplices (vertices) is N, the number of possible 1-simplices (edges) is N(N-1)/2, and the number of possible 2-simplices (triangles) is N(N-1)(N-2)/6. Recall that if we include all dimensions, the possible number of simplices would be 2^N. Hence, increasing the dimension d of the topological features significantly raises computational costs, which is why most ML applications limit the maximum dimension of simplices in the filtration (d+1) to 2 or 3, which is enough to calculate up to 1- or 2-holes, respectively.
The discussion above primarily applies to Rips and Čech complexes, but PH computations can be significantly more efficient when using cubical complexes, which are commonly employed in image data. For 2D images, the time complexity of PH is approximately 𝒪(|𝒫|^r), where r ≈ 2.37 and |𝒫| represents the total number of pixels <cit.>. Practically, this implies that the computational cost of PH scales almost quadratically with the image size. For higher-dimensional images, alternative methods for efficient computation exist <cit.>.
In recent years, several works have been published to improve the scalability and computational efficiency of PH. One approach focuses on developing alternative methods for more efficient computation of persistence diagrams <cit.>, while another aims to sparsify datasets while preserving topological information <cit.>.
§.§ Software Libraries for Persistent Homology
Several software libraries provide tools for computing persistent diagrams and vectorizations across different types of data structures, particularly point clouds and graphs (see <Ref>). Notable among these are GUDHI, DIONYSUS, and RIPSER, which have been compared in a benchmarking study <cit.>.
Most TDA libraries require point cloud data as input, rather than image or network (graph) data. GUDHI can process images, but only after they are converted into a suitable format, such as a point cloud or a cubical complex. Giotto-TDA extends the capability to graphs through its VietorisRipsPersistence and SparseRipsPersistence for undirected graphs and FlagserPersistence for directed ones. However, a common challenge in graph-based TDA is the reliance on shortest path (geodesic) distances, which may not always capture the most relevant topological features. Giotto-TDA also supports the input of images and time series data, enabling the computation of various vectorizations. Beyond PH computations, the Persim library from Scikit-TDA offers a suite of tools for further analysis of persistence diagrams, including metrics like the Bottleneck distance and visualizations like Persistence Landscapes and Persistence Images, enhancing the interpretability of TDA results.
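For point cloud inputs, a typical workflow with these libraries looks as follows; this is a minimal sketch assuming the ripser and gudhi Python packages, with the point cloud and threshold values chosen only for illustration.

```python
import numpy as np
from ripser import ripser   # pip install ripser
import gudhi                # pip install gudhi

X = np.random.rand(200, 3)  # toy point cloud in R^3

# Ripser: persistence diagrams for dimensions 0 and 1
dgms = ripser(X, maxdim=1)["dgms"]   # dgms[0] = H0 pairs, dgms[1] = H1 pairs

# GUDHI: the same computation via an explicit Rips simplex tree
rips = gudhi.RipsComplex(points=X, max_edge_length=1.0)
st = rips.create_simplex_tree(max_dimension=2)  # simplices up to dim 2 suffice for H1
diag = st.persistence()                         # list of (dim, (birth, death)) pairs
```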
In terms of usage, most libraries (e.g., TDA in R) were language-specific bindings of the underlying GUDHI, Dionysus, and PHAT libraries, which were originally implemented in Matlab or C++ for efficiency and acted as the earliest and most established workhorses for TDA. In recent years, Python implementations like Giotto-TDA have become increasingly popular, and the widely used ML library Scikit-learn has contributed to this trend by creating a new set of tools called Scikit-TDA for TDA integration (<https://github.com/scikit-tda>).
It is worth mentioning that since TDA has been developed and extended primarily by mathematicians, the popularity of R within this community has made the TDA R library particularly important. Mathematicians often release advanced models and methods that can be directly connected to TDA research in R first, leading to a delay in their implementation in other programming languages.
§ MULTIPARAMETER PERSISTENCE
Multiparameter Persistence (often referred to as Multipersistence) introduces a novel concept with the potential to significantly enhance the performance of single-parameter persistence. The concept, introduced in the late 2000s by Carlsson, Zomorodian, and Singh <cit.>, has since been actively investigated for real-world applications <cit.>.
Table: Generic Bifiltration
_m1 ⊂ _m2 ⊂ … ⊂ _mn
∪ ∪ ∪ ∪
… ⊂ … ⊂ … ⊂ …
∪ ∪ ∪ ∪
_21 ⊂ _22 ⊂ … ⊂ _2n
∪ ∪ ∪ ∪
_11 ⊂ _12 ⊂ … ⊂ _1n
For original PH, the term single persistence is applied because we filter the data in just one direction, _1⊂_2⊂…⊂_n. The filtration's construction is pivotal in achieving a detailed analysis of the data to uncover hidden shape patterns. For example, in the graph setting, a single function f:→ containing crucial domain information (e.g., value for blockchain networks, atomic number for protein networks) induces a single-parameter filtration as described earlier. However, numerous datasets offer multiple highly relevant domain functions for data analysis. Similarly, in other data types, there are several ways to expand the single filtration into a multifiltration to obtain finer information on the topological patterns hidden in the data. The idea of using multiple filtration functions simultaneously to obtain a finer decomposition of the dataset led to the development of MP theory as a natural extension of single persistence (SP).
§.§ Multifiltrations
Multifiltration refers to filtrations constructed using multiple scale parameters. A typical example of a bifiltration, denoted as {_ij}, is illustrated in Table <ref>, where each row and column corresponds to a distinct filtration. This bifiltration is generated by simultaneously varying one scale parameter along the horizontal axis and another along the vertical axis. The specific construction of these filtrations depends on the data type, with additional parameters introduced based on the nature of the data. Below, we summarize common methods for different data types. While the explanation focuses on two parameters for simplicity, this approach generalizes to any number of parameters.
§.§.§ Multifiltrations for Graphs.
In graph settings, a single filtration function results in a single-parameter filtration _1 ⊂…⊂_N =. However, employing multiple filtration functions enables a more detailed analysis of the data. For instance, two node functions f: → and g: →, which provide complementary information about the network, can be combined using Multiparameter Persistence to generate a unique topological fingerprint. These functions induce a multivariate filtration function F: →^2 defined by F(v) = (f(v), g(v)).
Next, we define sets of increasing thresholds {α_i}_i=1^m for f and {β_j}_j=1^n for g. Then, we have _ij = {v_r ∈| f(v_r) ≤α_i, g(v_r) ≤β_j}, which can be written as _ij = F^-1((-∞, α_i] × (-∞, β_j]). Let _ij be the subgraph of induced by _ij, meaning the smallest subgraph of generated by _ij. This setup induces a bifiltration of complexes {_ij| 1 ≤ i ≤ m, 1 ≤ j ≤ n}, which can be visualized as a rectangular grid of size m × n (see <ref>). For more details on applying multipersistence in graph settings, refer to <cit.>. Additionally, one can combine power filtration with filtration by functions by applying power filtration to each subgraph in the sequence induced by sublevel filtration via functions <cit.>. In <Ref>, we give details of the utilization of MP method on graph setting for computer-aided drug discovery.
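A sublevel bifiltration induced by two node functions can be assembled directly from the definition of _ij; the sketch below uses NetworkX, with node degree and closeness centrality standing in for the two domain functions f and g, and the threshold grids chosen only for illustration.

```python
import networkx as nx

def graph_bifiltration(G, f, g, alphas, betas):
    """Sublevel bifiltration of a graph induced by two node functions.

    G:             a networkx graph
    f, g:          dicts mapping node -> real value (the two filtration functions)
    alphas, betas: increasing threshold lists for f and g
    Returns a dict {(i, j): induced subgraph G_ij}.
    """
    bifiltration = {}
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            nodes = [v for v in G.nodes if f[v] <= a and g[v] <= b]
            bifiltration[(i, j)] = G.subgraph(nodes)   # smallest induced subgraph
    return bifiltration

# Example with degree and closeness centrality as the two lenses
G = nx.karate_club_graph()
deg = dict(G.degree())
cc = nx.closeness_centrality(G)
grid = graph_bifiltration(G, deg, cc, alphas=[2, 5, 10, 20], betas=[0.3, 0.45, 0.6])
```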
§.§.§ Multifiltrations for Point Clouds.
In the context of point clouds, a natural parameter to consider is the radius r, as discussed in <Ref>. However, a point cloud often has regions of varying density, with some areas being very dense and others quite sparse. Additionally, a few outliers can significantly distort the topological signature due to the way it is constructed. To address this issue, it is common to use a density parameter in conjunction with the radius parameter when constructing the filtration. This parameter reflects the local density of points in the point cloud and helps identify regions of different densities, distinguishing features significant in dense regions from those in sparse ones.
To achieve this, we first need to compute local densities. For each point p ∈, we compute a density measure δ(p), which could be based on the number of points within a fixed radius r_d or kernel density estimation <cit.>.
Next, we define our threshold set {(d_i, r_j)}, where d_1 > d_2 > … > d_m corresponds to thresholds for density, and 0 = r_1 < r_2 < … < r_n = diam() are radius thresholds. We then define the multifiltration as follows. Using δ(p), we define a nested sequence of subsets _1 ⊂_2 ⊂…⊂_m =, where _i = {p ∈|δ(p) ≥ d_i}. For each i_0, we treat _i_0 as a separate point cloud and apply the Rips filtration process to it. Specifically, for each i_0, we construct the Rips filtration _i_01⊂_i_02⊂…⊂_i_0n. This gives a bifiltration {_ij} as in <Ref>, with _ij = _ij representing the Rips filtration for point cloud _i with radius parameter r_j. e.g., the first column _i1 = _i corresponds to radius r_1 = 0.
This multifiltration approach provides more detailed information by capturing topological patterns in both dense and sparse regions, offering a richer understanding of the data's underlying shape, especially when varying densities and scales are crucial for the analysis. For further details, see <cit.>.
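In practice, the density-radius bifiltration can be sliced row by row: for each density threshold one restricts the point cloud and runs an ordinary Rips filtration on the restricted set. The sketch below follows this recipe with a kernel density estimate from scikit-learn and ripser for the Rips persistence; the bandwidth and density quantiles are illustrative.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from ripser import ripser

X = np.random.rand(300, 2)                      # toy point cloud

# Local density via kernel density estimation
kde = KernelDensity(bandwidth=0.1).fit(X)
density = np.exp(kde.score_samples(X))

# Density thresholds d_1 > d_2 > ... give nested subsets X_1 ⊂ X_2 ⊂ ...
density_thresholds = np.quantile(density, [0.75, 0.5, 0.25, 0.0])
diagrams = []
for d in density_thresholds:
    subset = X[density >= d]
    # Each row of the bifiltration is an ordinary Rips filtration on the subset;
    # ripser scans all radius thresholds internally and returns its diagrams.
    diagrams.append(ripser(subset, maxdim=1)["dgms"])
```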
§.§.§ Multifiltrations for Images.
Similarly, in the image setting, if the image is a color image, one can easily employ all color channels at the same time, similar to the graph setting. In particular, a color image has three color functions, denoted as R (red), G (green), and B (blue). Thus, for every pixel Δ_ij, there exist corresponding color values: R_ij,G_ij,B_ij∈ [0,255]. To proceed, we establish a three-parameter multifiltration with parameters {α_m}_1^N_R, {β_n}_1^N_G, {γ_r}_1^N_B, where α_m,β_n,γ_r∈ [0,255] are threshold values for color channels R,G, and B respectively. By simply defining binary images _m,n,r={Δ_ij⊂| R_ij≤α_m, G_ij≤β_n, B_ij≤γ_r}, we obtain a three parameter multifiltration of size N_R × N_G × N_B.
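Constructing the binary images _m,n,r of this three-parameter filtration amounts to thresholding the three channels jointly, as in the sketch below; the threshold grids are illustrative.

```python
import numpy as np

def rgb_multifiltration(image, alphas, betas, gammas):
    """Three-parameter sublevel filtration of a color image.

    image: (r, s, 3) uint8 array with R, G, B channels.
    alphas, betas, gammas: threshold lists in [0, 255] for R, G, B.
    Returns a dict {(m, n, r): binary image} of size N_R x N_G x N_B.
    """
    R, G, B = image[..., 0], image[..., 1], image[..., 2]
    binary_images = {}
    for m, a in enumerate(alphas):
        for n, b in enumerate(betas):
            for r, c in enumerate(gammas):
                binary_images[(m, n, r)] = (R <= a) & (G <= b) & (B <= c)
    return binary_images

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
cube = rgb_multifiltration(img, alphas=[64, 128, 192, 255],
                           betas=[64, 128, 192, 255], gammas=[64, 128, 192, 255])
```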
Similarly, for grayscale images, one can utilize height, radial, erosion, signed distance, or similar filtration methods <cit.> along with the color filtration to obtain a multifiltration for color images.
§.§ MP Vectorization Methods
By computing the homology groups of the complexes in these multifiltrations, {_k(_ij)}, along with the induced inclusion maps, we obtain the corresponding multipersistence module, which can be visualized as a rectangular grid of size m× n. The goal here is to track k-dimensional topological features through the homology groups {_k(_ij)} within this grid. However, as explained in <cit.>, technical challenges rooted in commutative algebra prevent us from transforming the multipersistence module into a mathematical structure like a "Multipersistence Diagram." The key issue is that for any k-dimensional hole in the multifiltration, we cannot directly assign a birth and death time due to the partial ordering within the module. For example, if the same k-hole σ appears at _1,4 and _2,3, neither (1,4) ≼ (2,3) nor (2,3) ≼ (1,4) holds, making it difficult to define a birth or death time for σ unless it exists in a perfectly rectangular region. This issue does not arise in single persistence, where any two indices are always comparable, i.e., i≤ j or j≤ i.
As a result, we do not have an effective representation of the MP module <cit.>. While these technical obstacles prevent this promising approach from reaching its full potential in real-life applications, in the past years, several approaches have been proposed to extract key insights from MP modules by skipping MP representations and directly going to the vectorization from multifiltrations, which we detail below.
The simplest method to derive a vector (or tensor) from a multifiltration involves using the Hilbert function of the MP module, which is a generalization of Betti functions. Let {_ij} denote an m × n multifiltration associated with a given dataset . First, we construct the multipersistence module _k(_ij). Next, by computing the Betti numbers of each simplicial complex in the multifiltration, we obtain an m × n matrix (or tensor) β_k() = [β_k(_ij)], i.e., β_k(_ij) = rank(_k(_ij)). These are also called multipersistence (or bigraded) Betti numbers. In graph representation learning, this simple vectorization of the MP module has demonstrated remarkable performance <cit.>.
While bigraded Betti numbers offer a straightforward vectorization method for multiparameter (MP) modules, they fail to capture crucial topological information, especially regarding the significance of dominant features or the presence of a few important topological features. More sophisticated techniques are necessary for effective MP vectorization. One widely used strategy involves the "slicing technique," which focuses on studying one-dimensional fibers within the multiparameter domain. A clearer understanding can be obtained by restricting the multidimensional persistence module to these single directions (slices) and applying single persistence analysis.
In their work <cit.>, Carriere et al. explored this approach by considering multiple such slices, often referred to as vineyards, and summarizing the resulting persistence diagrams. Another significant advancement is found in multipersistence landscapes <cit.> by Vipond, which extends the concept of persistence landscapes <cit.> from single to multiparameter persistence.
The vectorization of MP modules is a rapidly evolving area of research. Various techniques have recently been proposed in practical applications, e.g., Hilbert decomposition signed measure (MP-HSM-C) <cit.>, Effective MultiPersistence (EMP) <cit.>, Generalized Rank Invariant Landscape (GRIL) <cit.>, and Stable Candidate Decomposition Representations (S-CDR) <cit.>. Although many recent methods are burdened by high computational costs, the MP-HSM-C method introduced by Loiseaux et al. <cit.> stands out as the most efficient in terms of speed and performance among current MP vectorization techniques.
§ MAPPER
Next to Persistent Homology, another powerful method in TDA is the Mapper, which Singh, Memoli, and Carlsson introduced in the late 2000s <cit.>. Mapper stands out as an effective and versatile tool for extracting insights from high-dimensional datasets: it creates a simplified graph representation of the data by combining ideas from algebraic topology and data visualization. While PH is mostly used in supervised learning, Mapper is commonly utilized in unsupervised settings. In the past decade, it has made critical contributions to various fields involving point clouds in high dimensional spaces, e.g., biomedicine <cit.>. This method works by projecting data onto a lower-dimensional space, clustering the points within overlapping intervals, and constructing a topological network that captures the essential structure of the dataset. Through this approach, Mapper can reveal hidden patterns, relationships, and the underlying shape of data, making it particularly valuable for unsupervised learning in fields such as genomics, neuroscience, and social network analysis. Therefore, it can also be viewed as a smart dimensionality reduction technique (<Ref>).
§.§ Mapper for Point Clouds
For a point cloud in ^N and a real-valued function f: →, the Mapper algorithm provides a summary of the data by scanning the clusters in in the direction dictated by the function f, which is commonly referred to as a filter function or lens <cit.>. The output of the Mapper algorithm is the Mapper graph, which is considered a meaningful summary of the data, representing clusters and relations between the clusters in the data. It has been applied in several contexts like diabetes subtype identification <cit.>, ransomware payment detection on blockchains <cit.>, and cancer genotyping from single-cell RNA sequence data (<Ref>).
The Mapper algorithm generates a graph-based summary of a high-dimensional point cloud. In this new Mapper graph, nodes represent clusters within the original point cloud, and edges indicate the interaction between these clusters. In other words, Mapper is a soft clustering approach where a data point may appear in multiple clusters; these clusters are then represented as nodes in the new Mapper graph.
Given a point cloud in ^N, we first define a lens f:→. In general, the lens function can be a natural function derived from the data domain, or it may be computed using a dimensionality reduction technique. The proper selection of hyperparameters for Mapper is crucial, and we will shortly discuss these in <Ref>.
The lens function then decomposes the data into subregions through a cover of the image f()⊂, allowing for the identification of clusters within each subregion.
Formally, we cover the image f()⊂ with a collection of open intervals, i.e. = {I_k}_k=1^n where ⋃_k I_k⊃ f(). The intervals I_k=(a_k,b_k) are indexed such that a_k < a_k+1 and typically b_k ≥ a_k+1, allowing for overlap between consecutive intervals. Figure <ref>b shows 6 overlapping intervals ={I_1,…, I_6} for a toy example.
Next, we consider the preimage of each interval, U_k=f^-1(I_k)⊂. In particular, the preimage of each interval refers to the set of points in the original high-dimensional space, such as the data points, mapped into a given interval by the lens function. For example, in Figure <ref>, the union of all points from U_2 and U_3 and lower-part points from U_1 are the preimage of the interval I_2. We apply a clustering algorithm (e.g., k-means, DBSCAN) to each preimage U_k_0, dividing it into m_k_0 clusters {C_k_0l}_l=1^m_k_0.
For each cluster C_kl, we create a node v_kl in the Mapper graph and let _k={v_k1, v_k2, …, v_km_k} represent the clusters in U_k⊂. Thus, =⋃_k _k forms the node set for the Mapper graph . In <Ref>, there are 10 nodes in the graph for the 6 preimages: I_2, I_3, I_4, and I_6, each resulting in two clusters. Note that the number of clusters within a preimage can vary. For example, representing U_1 with a single cluster might result from using a density-based clustering algorithm (e.g., DBSCAN), which identifies dense regions as clusters while treating sparse regions as noise, thereby avoiding the creation of clusters in those less populated areas.
Figure: Toy Example of Mapper. For a point cloud (a), we define a lens function f:→ (b), and the induced covering defines a Mapper graph where nodes represent clusters and edges represent related clusters (c).
After obtaining the nodes, we define the edges based on the intersections of the clusters. Note that as I_k∩ I_k+1≠∅, clusters in subsequent levels might have nontrivial intersections. Then, if the clusters C_i∩ C_j≠∅, we add an edge between the corresponding nodes v_i and v_j. i.e., ={e_ij| C_i∩ C_j≠∅} where e_ij represents the edge between the nodes v_i and v_j. It is also possible to define a weighted graph with weights ω_ij=#(C_i∩ C_j), the count of the points in the intersection. In this case, the weights reflect the strength of the interaction between the corresponding clusters, indicating how many data points are included in the overlapped area. The final Mapper graph gives a rough summary/sketch of the whole point cloud.
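The construction above can be written down in a few lines; the sketch below implements a one-dimensional Mapper from scratch with overlapping intervals and DBSCAN clustering from scikit-learn. The lens (a single coordinate), the number of intervals, the overlap, and the DBSCAN parameters are illustrative choices rather than recommendations.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN

def mapper_graph(X, lens, n_intervals=6, overlap=0.2, eps=0.3, min_samples=5):
    """Minimal 1-D Mapper: cover the lens image with overlapping intervals,
    cluster each preimage, and connect clusters that share data points."""
    lo, hi = lens.min(), lens.max()
    length = (hi - lo) / n_intervals
    G, clusters = nx.Graph(), {}
    node_id = 0
    for k in range(n_intervals):
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        idx = np.where((lens >= a) & (lens <= b))[0]        # preimage U_k
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for lbl in set(labels) - {-1}:                      # -1 = DBSCAN noise
            members = set(idx[labels == lbl])
            G.add_node(node_id, size=len(members))
            clusters[node_id] = members
            node_id += 1
    # Edge whenever two clusters share at least one data point
    for i in clusters:
        for j in clusters:
            if i < j and clusters[i] & clusters[j]:
                G.add_edge(i, j, weight=len(clusters[i] & clusters[j]))
    return G

X = np.random.rand(500, 10)
lens = X[:, 0]                      # a simple coordinate lens for illustration
M = mapper_graph(X, lens)
```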
The Mapper graph not only helps in identifying data clusters but also enables the selection of data points that are similar to a given subset by leveraging its structure <cit.>. In this context, nodes that are closer together on the Mapper graph tend to contain more similar data points. This aspect of cluster similarity based on their proximity in the Mapper graph is an area that remains largely understudied and offers significant potential for further exploration. The Mapper graph computations have been detailed in <Ref>. In <Ref>, we outline the utilization of Mapper in a real-life application, i.e., cancer genotyping from RNA sequencing.
*Common Filter Functions for Mapper. While we explain the Mapper algorithm for single-valued filter functions to keep the exposition simpler, in practice it is very common to use multivariate filter functions F:→^2. One of the most common filter functions used in applications is the Stochastic Neighborhood Embedding (t-SNE) <cit.>, and a variant of this, called Neighborhood Lens <cit.>.
Similarly, it is common to use other dimension reduction techniques as filter map F:→^2, e.g., Isomap, PCA, or UMAP <cit.>. Then, for an open covering ={I_ij} of F() in ^2, one can repeat the process of U_ij=F^-1(I_ij), and assign a node w for each cluster in U_ij. Again, assign an edge between the nodes when the corresponding clusters have nontrivial intersections. Note that if the domain of the data offers good filter functions f,g:→, one can also utilize multivariate filter functions F(x)=(f(x),g(x))∈^2 by combining both functions in the process.
§.§.§ Hyperparameters for Mapper
For a point cloud , four main parameter choices must be made to obtain its Mapper graph.
* The lens function. If available, the lens function can be derived directly from the data domain, which in turn greatly enhances the interpretability of the model. In practice, when appropriate lens functions are not readily available from the data domain, dimensionality reduction techniques like t-SNE, UMAP, or PCA are commonly used to create lenses f: 𝒳→ℝ^2.
* Clustering method. For each subset f^-1(I_k), the clustering method determines the nodes of the Mapper graph. The most common methods employed are DBSCAN and k-means, with their hyperparameters controlling the granularity of the resulting clusters.
For k-means, it is common to use fewer than 10 clusters (e.g., see <cit.>).
* Resolution and Overlap. The resolution and overlap are the parameters for the covering {I_k}_1^n of f()⊂, i.e., ⋃_k=1^n I_k⊃ f().
The resolution, n, is the number of bins (intervals) {I_k} to cover f(). The overlap is the percentage of overlaps of these intervals, i.e. |I_k∩ I_k+1|/|I_k|. Note that increasing the resolution gives a finer summary by increasing the number of nodes in the Mapper network and making the clusters smaller (See <Ref>). On the other hand, increasing overlap adds more relation (edges) between the nodes (clusters). In <Ref>, the resolution is 6, and the overlap is the fixed intersection amount (e.g., 20%) between the intervals I_k and I_k+1. Typically, it is common to use 10%-30% overlap and 10-20 intervals (e.g., Figure 10 of GraphPulse <cit.>).
§.§ Mapper for Other Data Formats
While the Mapper approach is most commonly applied to point clouds, it can be adapted for other data formats, such as graphs and images. For graphs, this adaptation can be seen as a form of graph coarsening/skeletonization, where the goal is to summarize clusters of nodes within the graph.
In <cit.>, Bodnar et al. present a natural extension of the Mapper algorithm to the graph setting. Given a graph =(,), we start with a filter function f:→ and define a cover = {I_k}_k=1^n, where ⋃_k I_k ⊃ f(). The set _k = f^-1(I_k) represents a subset of nodes, and _k is the induced subgraph, i.e., _k = (_k,_k) where _k ⊂ are the edges connecting pairs of vertices in _k. Instead of clustering, we directly use the components (connected subgraphs) in _k as the nodes of the Mapper graph.
Specifically, if _k_0 = ⋃_l=1^m_k_0_k_0l has m_k_0 connected subgraphs, we define the node set _k_0 = {c_k_01, …, c_k_0m_k_0}. For example, if _k_0 has five connected subgraphs, we define five nodes in the Mapper graph, each representing one connected subgraph in _k_0. Let = {w_1, w_2, …, w_M} be the collection of all such nodes where M = ∑_k=1^n m_k. This defines the node set of the Mapper graph of .
The edge set is defined similarly: If the subgraphs _kl and _(k+1) l' share a common node, then an edge is added between the corresponding nodes in . In summary, in the original construction, we replace the point cloud with the node set of , and the clustering algorithm is directly derived from the graph structure. For this approach, one can utilize the common filtering functions f:→ from persistent homology (e.g., degree, closeness, betweenness, eccentricity) or can use other popular functions from graph representation learning, e.g., eigenfunctions of graph Laplacian, pagerank <cit.>.
Alternatively, if the graph = (, , ) includes node attributes, a Mapper graph can be directly derived from the set of node attributes . In particular, we treat the node attribute vectors = {_i}_1^n with _i ∈^N as a point cloud ⊂^N and apply the Mapper algorithm as before. This Mapper graph provides a visual summary of the node attribute space, independent of the graph's structure. In other words, the resulting Mapper graphs summarize the graph based on node attributes alone, without incorporating information about node neighborhoods in the original graph. An effective utilization of this approach can be found in <cit.>.
The Mapper algorithm on images is analogous to its application in graphs, where clustering is derived from the original image <cit.>. For a given r × s image , let f: → represent the color values of the pixels. We define a cover = {I_k}_k=1^n such that ⋃_k I_k ⊃ f(). A node is defined for each connected component in f^-1(I_k), and an edge is created between nodes if the corresponding components have a nontrivial intersection. This Mapper graph summarizes the interaction between different color regions in as a graph.
§.§ Computational Complexity of Mapper
The computational complexity of TDA Mapper varies depending on the specific choices of filter functions, clustering algorithms, and dimensionality reduction techniques.
The first step in TDA Mapper involves applying a filter function. If t-SNE is used as the filter, the computational complexity of the Barnes-Hut t-SNE variant is 𝒪(N log N · D) <cit.>, where N is the number of data points (e.g., for graphs, the number of nodes |𝒱|) and D is the dimensionality of the data (e.g., for graphs, the number of node attributes). The complexity of t-SNE is largely driven by the computation of pairwise distances in a D-dimensional space, followed by optimizing the low-dimensional embedding. While dimensionality D influences the time required for these computations, the overall complexity is typically dominated by the number of data points N.
After t-SNE, the data is covered by overlapping intervals, and clustering is performed within each interval. The complexity of this step depends on the clustering algorithm used. For example, if a clustering algorithm with complexity 𝒪(N^2) is used, this step may add significant computational cost. Finally, the Mapper complex is constructed by connecting clusters, which typically has a lower complexity, often 𝒪(N) to 𝒪(N log N), depending on the number of clusters and the method used to connect them.
The overall computational complexity of TDA Mapper when using t-SNE is dominated by the complexity of t-SNE, which is 𝒪(N log N · D). The subsequent steps in the Mapper pipeline add to this complexity, particularly the clustering step. When using k-means clustering, the complexity is typically 𝒪(k · N · D · T), where k is the number of clusters, N is the number of data points, D is the dimensionality of the data, and T is the number of iterations. Therefore, the overall complexity can be expressed as 𝒪(N log N · D) + 𝒪(k · N · D · T), which is dominated by the 𝒪(N log N · D) term of t-SNE when k, T, and D are treated as constants.
§.§ Software Libraries for Mapper
Several software libraries provide tools for constructing and analyzing Mapper (see <Ref>). Among these, Kepler-Mapper is a popular choice for Python users, particularly within the Scikit-TDA ecosystem, offering HTML outputs that are ideal for creating shareable, interactive visualizations. This makes it a practical tool for users who need to present their findings to non-technical users.
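A typical Kepler-Mapper session follows the lens-cover-cluster pattern described above; the sketch below assumes the kmapper package and a recent version of its Cover API, and the projection, resolution, overlap, and clustering parameters are illustrative.

```python
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

X = np.random.rand(1000, 20)                     # toy high-dimensional point cloud

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))    # filter function
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=10, perc_overlap=0.2),  # resolution / overlap
                   clusterer=DBSCAN(eps=0.5, min_samples=5))
mapper.visualize(graph, path_html="mapper_output.html",
                 title="Toy Mapper graph")       # shareable interactive HTML
```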
Giotto-TDA integrates well with the Python environment and offers parameter visualization capabilities, making it an excellent choice for those needing to iteratively refine their Mapper construction. Its interactive features are especially helpful for real-time exploration of the impact of different filter functions and covering parameters.
For users prioritizing performance over interactivity, tda-mapper-python offers a faster computation engine, making it well-suited for large datasets or scenarios where rapid iteration is necessary. However, the lack of interactive features means that users will need to rely on external tools for visual exploration.
TDAmapper in R is an excellent choice for users already embedded in the R ecosystem. It’s a robust standalone solution that is particularly useful for those who prefer the extensive statistical and data manipulation capabilities available in R. However, the lack of interactivity might require additional effort for visualization and exploration.
Finally, Mapper Interactive is designed for those who value hands-on engagement with the data. It offers a highly interactive Python-based environment, making it ideal for exploratory data analysis where immediate feedback and manipulation of the Mapper graph are crucial.
In practice, the choice of library typically depends on the specific requirements of your project, such as whether you prioritize interactivity, computational speed, or seamless integration with other tools in your data analysis workflow. For example, Kepler-Mapper and Giotto-TDA are ideal for exploratory analysis and visualization, while tda-mapper-python and TDAmapper are more suitable for handling large-scale computations and integrating with ecosystems like Python and R, respectively.
§ APPLICATIONS
In this section, we will provide examples of the applications of TDA methods in machine learning. We provide five examples from published papers and outline their models and how TDA was applied in each project.
§.§ PH for Point Clouds: Shape Recognition
Figure: Shape Recognition via PH. In <cit.>, the authors obtained 100 point clouds for each animal by subsampling the surfaces above and analyzed their persistence landscapes. On the right, they present the 95% confidence bands for the persistence landscapes of these 400 point clouds, demonstrating that each animal's persistence landscape exhibits significantly distinct characteristics. The figure is adapted from <cit.>.
In this section, we present an illustrative example of the utilization of PH in shape recognition, based on the work <cit.>, where Chazal et al. demonstrated the effectiveness of PH in shape recognition. One specific example involved four different animals: an elephant, flamingo, lion, and camel, each representing a unique shape in ^3, where each shape is normalized to have a diameter of one. They began by selecting 500 random points from the surface of each animal, as shown in <Ref>, resulting in the point clouds _E, _L, _F, and _C for the elephant, lion, flamingo, and camel, respectively. Then, they created 100 subsamples of 300 points each from these point clouds, resulting in 400 different point clouds: E_i ⊂𝒳_E, L_i ⊂𝒳_L, F_i ⊂𝒳_F, and C_i ⊂𝒳_C for 1 ≤ i ≤ 100.
Next, they performed Rips filtration on each point cloud {E_i,L_i,F_i,C_i}_i=1^100 (see <Ref>) and computed the corresponding persistence diagrams for dimension one. By vectorizing these persistence diagrams, they obtained the persistence landscapes for each point cloud, resulting in 400 persistence landscape functions:
{λ(E_i), λ(L_i), λ(F_i), λ(C_i)}_i=1^100.
In <Ref>, they present the 95% confidence bands for the true average landscapes (bold curves) of each class; e.g., the blue confidence band corresponds to the 100 persistence landscapes coming from the lion point clouds {λ(L_1), …, λ(L_100)}. The narrowness of these confidence bands indicates that PH techniques provide robust shape-embedding methods that are minimally affected by noise. Moreover, the distinct confidence bands for each class underscore the effectiveness of PH in shape recognition. We also note that recent work by Türkeş et al. <cit.> studied the effectiveness of PH for shape recognition in different settings.
§.§ PH for Graphs: Crypto-Token Anomaly Forecasting
Three primary types of problems dominate graph representation learning: graph classification, node classification, and link prediction, as outlined by Hamilton et al. <cit.>. These tasks encompass a broad range of real-life applications, including brain connectivity networks, molecular property prediction, recommender systems, fraud detection and transaction networks. In a related study, Li et al. <cit.> employ persistent homology to extract effective feature vectors from weighted and directed Ethereum crypto-token networks, modeled as temporal graphs. The approach is tailored for graph anomaly prediction, a specialized form of graph classification.
The research problem is set as a prediction task where the authors aim to predict whether the absolute price return of an Ethereum token will change significantly beyond a predefined threshold |δ| > 0 within the next h days. This involves analyzing the token's transaction network and its price fluctuations over multiple discrete time graph snapshots. The price information is sourced from external ground truth data, as token prices are determined by trading on blockchain exchanges.
For each snapshot, the authors use transferred token amounts on edges to define distances between adjacent vertices. These functions help establish the similarity between nodes:
ω_uv = [1 + α·(A_uv - A_min)/(A_max - A_min)]^-1
where A_uv represents the amount of tokens transferred between nodes u and v, and A_min and A_max are the minimum and maximum transferred amounts, respectively. The authors set the parameter α = 9 to map these weights to the interval [0.1, 1], thus standardizing the weight values.
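The weight map is a one-line computation; the sketch below reproduces it with α=9 on a toy list of transfer amounts, mapping the largest transfer to 0.1 and the smallest to 1.

```python
import numpy as np

def edge_weights(amounts, alpha=9.0):
    """Map transferred token amounts to edge weights in [0.1, 1],
    following the formula above (alpha = 9 as in the paper)."""
    a_min, a_max = amounts.min(), amounts.max()
    return 1.0 / (1.0 + alpha * (amounts - a_min) / (a_max - a_min))

amounts = np.array([10.0, 2500.0, 90000.0])   # toy transfer amounts
print(edge_weights(amounts))                  # smallest -> 1.0, largest -> 0.1
```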
Figure: Betti pivots (left panel: Tronix; right panel: Power Ledger). Comparative functional summaries of the Tronix and Power Ledger token networks over seven days, displaying daily Betti-1 values across various scaling parameters. Each plot captures the evolution of topological features, with the central red lines indicating the Betti pivots. These pivots represent the most stable or 'normal' network structures, providing insights into network behavior and potential anomalies driven by underlying transaction activities. The figure is adapted from <cit.>.
By treating the edge weights (similarity measures) as distances between nodes, i.e., d(u,v) = ω_uv, the authors construct a filtration of Rips complexes (<Ref>-ii) from the snapshot graphs. For non-adjacent nodes, the distance is defined by the shortest path in the graph. The main insight here is to extract topological patterns developed in the graphs at multiple scales. This process involves forming a simplicial complex where nodes are connected if their distance is within a threshold ϵ, reflecting their similarity based on the edge weights. Essentially, nodes with high similarity (those with large transaction amounts between them) are connected early in the filtration, while more distant nodes appear later.
The authors then compute persistence diagrams from these filtrations and convert them into Betti functions. Additionally, they introduce new functional summaries of topological descriptors, namely Betti limits and Betti pivots, which track the evolution of topological features as the scale parameter ϵ changes over the snapshots. <Ref> illustrates two token networks and their corresponding functional summaries.
To identify which transaction networks indicate anomalous patterns, the authors use Modified Band Depth to assess how central or peripheral each network's topological descriptors are within the observed data set. Figure <ref> shows two token networks and their snapshot graphs as represented by functions within these figures. Snapshot graphs with deeper Betti limits are considered more typical or central.
The authors integrate these novel topological features with conventional network summaries to predict price anomalies. Daily labels are assigned as anomalous based on significant price changes anticipated in the near future (e.g., in one or two days). To this end, the authors construct a predictive model by utilizing topological vectors they produce for token networks. <Ref> displays the accuracy metrics for their model, specifically for a prediction horizon of h=2 (two days). Achieving an average accuracy of 96% across ten token networks, they demonstrate the effectiveness of topological features in the anomaly forecasting task.
§.§ PH for Images: Cancer Diagnosis from Histopathological Images
Our example will directly apply persistent homology methods to histopathological images. As noted in <Ref>, cubical persistence is the primary method for applying persistent homology in an image context. In <cit.>, Yadav et al. successfully applied cubical persistence to analyze histopathological images. Specifically, for each image , the authors generated a topological feature vector β() and utilized standard ML methods on these vectors (image embeddings) for tumor classification.
Figure: Eight Color Channels. Different color channels are used to create sublevel filtrations. The figure is adapted from <cit.>.
Recall that for a given image X with r × s resolution, the first step is to create a filtration, which is a nested sequence of binary images X_n. A common method to create such a sequence is to use the color values γ_ij of each pixel Δ_ij⊂ X. Specifically, for a sequence of grayscale values (t_1 < t_2 < … < t_N), one obtains a nested sequence of binary images X_1 ⊂ X_2 ⊂…⊂ X_N such that X_n = {Δ_ij⊂ X | γ_ij≤ t_n}.
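For dimension 0, the sublevel Betti curve of such a filtration can be computed by thresholding and counting connected components, as in the sketch below (using scipy.ndimage); computing β_1 as well requires a genuine cubical persistence computation, e.g., with GUDHI's cubical complexes. The toy image and number of thresholds here are illustrative.

```python
import numpy as np
from scipy import ndimage

def betti0_curve(gray_image, thresholds):
    """Betti-0 curve of a grayscale image under sublevel filtration:
    for each threshold t, count the connected components of the binary
    image {pixels with value <= t} (scipy's default 4-connectivity)."""
    curve = []
    for t in thresholds:
        binary = gray_image <= t
        _, num_components = ndimage.label(binary)
        curve.append(num_components)
    return np.array(curve)

img = np.random.randint(0, 256, size=(128, 128))
thresholds = np.linspace(0, 255, 100)            # N = 100 thresholds, as in the paper
beta0 = betti0_curve(img, thresholds)
```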
In <cit.>, the authors construct filtrations for cubical persistence by first extracting eight color channels γ^k_ij from histopathological images with 1 ≤ k ≤ 8, where each superscript k corresponds to one color channel. The first four channels come from the RGB color space: red, green, blue, and grayscale (the average of R, G, and B). Additionally, they utilize the HSV color space, which includes hue, saturation, value, and their average (see <Ref>). Each color channel defines a different filtration {_n^k}_n=1^N. In the paper, they set N=100.
Next, by using these filtrations, they obtain the corresponding persistence diagrams for dimensions 0 and 1. Then, by applying Betti vectorization to these persistence diagrams, they obtain 100-dimensional Betti vectors β⃗^k_m=[β^k_m(t_1) …β^k_m(t_100)] where k represents the color, and m=0,1 represents the dimension. Hence, β^k_0(t_n) is the number of components (Betti-0 number) in the binary image ^k_n, and β^k_1(t_n) is the number of holes (Betti-1 number) in the binary image ^k_n. Considering that there are eight color channels and two dimensions, one obtains a 1600-dimensional vector β⃗() by concatenating these 16 vectors {β⃗^k_m}. By extracting extra features using local binary patterns (LBP) and Gabor filters, they produced additional 800- and 400-dimensional feature vectors for each image, respectively. Then, they obtain a 2800-dimensional final vector β(). In particular, one can consider this as an image embedding method, and each image is realized as a point in ^2800. Next, to improve the performance by removing correlated features, they use a feature selection method for the downstream task and reduce the vector size to 500 dimensions.
They studied five cancer types, namely bone, breast, cervical, prostate, and colon cancer. In <Ref>, they provide the median curves and confidence bands for each class for three cancer types. As discussed earlier, while Betti vectors are considered as a weak vectorization method in general, they can be highly effective when the signature comes from the quantity and distribution of small features like these histopathological images. They utilized benchmark datasets for each cancer type consisting of 20K to 60K histopathological images. In <Ref>, we give their results for various sets of feature vectors. Their results indicate that utilizing filtrations with multiple color channels can significantly improve the classification results in some cancer types, e.g., bone, breast, and colon cancers.
§.§ Multiparameter Persistence: Computer Aided Drug Discovery
In this part, we outline an effective application of multiparameter persistence in computer-aided drug discovery (CADD), showcasing its use within the graph setting. For an application in the context of point clouds, see <cit.>.
Virtual screening (VS), a key technique in CADD, is used to identify potential drug candidates from a vast library of compounds that are most likely to bind to a specific molecular target. Demir et al. <cit.> employed a multiparameter persistence approach for virtual screening by framing it as a graph classification task. In this method, compounds are represented as graphs, with atoms as nodes and bonds as edges. They adopted a ligand-based approach, wherein a few positive samples are provided for a given protein target, and the goal is to screen the compound library to find compounds that are most similar to these positive samples. Essentially, this approach can be viewed as a topology-based graph ranking problem. While their approach is more technical, here we outline the key concepts of their method.
Figure: Cytosine. Atom types are coded by their color: White=Hydrogen, Gray=Carbon, Blue=Nitrogen, and Red=Oxygen. The decimal numbers next to atoms represent their partial charges. The figure is adapted from <cit.>.
Their framework, TODD, generates fingerprints of compounds as multidimensional vectors (tensors), represented as a 2D or 3D array for each compound. The core idea is to simultaneously employ 2 or 3 highly relevant functions or weights (e.g., atomic mass, partial charge, bond type, electron affinity, distance) to obtain a multifiltration, which decomposes the original compound into substructures using these relevant chemical functions. As detailed in <Ref>, for a given compound =(,), they used atomic number A and partial charge P as node functions A:→ and P:→, as well as bond strength B as an edge function B:→, to define multifiltrations. An illustration of such a multifiltration for the compound cytosine <Ref> is provided in <Ref>. In this example, the functions are partial charge and atomic number.
After constructing multifiltrations, they obtained compound fingerprints using a slicing technique. In particular, within each multifiltration {_ij}, they took horizontal slices by fixing a row i_0 to get a single persistence filtration {_i_0j}. For each horizontal slice i_0, they then generated a persistence diagram _k(_i_0j). These persistence diagrams were vectorized using Betti and Silhouette vectorizations to produce 2D arrays, such as 𝐌β=[β(_ij)] (Betti numbers of the clique complex _ij).
Next, using these 2D arrays, they applied two ML classifiers: Random Forest and ConvNeXt Vision Transformers. Both classifiers performed exceptionally well, surpassing all state-of-the-art models on benchmark datasets. In <Ref>, their results are shown for the DUD-E Diverse dataset, which comprises 116K compounds targeting eight proteins. The common metric in virtual screening is enrichment factor, which compares the proportion of active compounds found in the top-ranked subset of a model output to the proportion of active compounds in the entire dataset. EF_1% represents the enrichment factor for the top 1% of the dataset.
§.§ Mapper: Cancer Genotyping from RNA Sequencing
In this section, we will explore a significant application domain for the Mapper algorithm: the analysis of single-cell RNA sequencing data. Recall that Mapper is particularly effective for analyzing high-dimensional point clouds by generating a low-dimensional graph summary that preserves local relationships (<Ref>). In the Mapper summary graph, the nodes represent clusters in the point cloud, and the edges between the nodes indicate that the corresponding clusters are nearby (interacting) in the high-dimensional space.
RNA sequencing data is crucial for cancer genotype analysis, though it presents significant challenges. The expression profile of a cell can be mathematically represented as a point in a high-dimensional expression space, where each dimension corresponds to the RNA level of a gene, and the dimension of the space is the number of expressed genes. Points that are close to each other in this space correspond to cells with similar expression profiles. The set of all possible tumors of a cancer type spans a subspace of the expression space. From an ML perspective, this is a highly sparse point cloud in a high-dimensional space. In general, cells are assigned to some cancer subtype (e.g., malignant, benign, etc.), and the aim is to understand the relation of these types in this high-dimensional space.
Figure: Genetic Alterations. Topological representation of the expression space for low-grade gliomas using Mapper. The structure reveals three main clusters corresponding to distinct glioma subtypes. One cluster, labeled IDH2mut, highlights the significance of the IDH2 gene in differentiating this subtype. This visualization emphasizes the distinct expression profiles that separate these glioma subtypes. This figure is adapted from <cit.>.
There are several significant works on cancer genotyping through the use of Mapper techniques <cit.>. In this paper, we will detail the methods in two works. In the first one <cit.>, Wang et al. conducted a comparative analysis of Mapper visualization techniques on RNA-sequencing data. They used the melanoma tumor cells dataset GSE72056, which contains 4,645 cells. Of these, 1,257 are malignant, and 3,256 are non-malignant, with some minority classes included. The dataset includes 23,686 expressed genes. From an ML perspective, this data can be represented as a point cloud = {x^i}_i=1^4645 in very high dimensional space ^23,686, where each dimension corresponds to a gene {γ_j}_j=1^23,686. In particular, for a cell x^i ∈ with x^i = (x^i_1, x^i_2, …, x^i_23,686), x^i_j represents the RNA level of gene γ_j in cell x^i. This is computed as x^i_j = log_2(1 + TPM_ij/10), where TPM_ij is the transcript-per-million (TPM) for gene γ_j in cell x^i. Notice that the number of dimensions far exceeds the number of points. From an ML perspective, this presents a highly challenging setup. In <Ref>, the authors illustrate different dimension reduction techniques on this dataset. In <Ref>, they show how the graph changes when the number of bins (resolution) is varied.
In the second work <cit.>, Rabadan et al. applied the Mapper algorithm to identify somatic mutations relevant to tumor progression. They analyzed mutation and RNA expression data for 4500 genes across 4476 patients from 12 tumor types, including low-grade glioma (LGG), lung adenocarcinoma, breast invasive carcinoma, and colorectal adenocarcinoma. This study led to the identification of 95 mutated cancer genes, 38 of which were previously unreported. They identified three large expression groups: oligodendrogliomas (enriched for CIC and IDH2 mutations), IDH1-mutant astrocytomas (enriched for TP53 mutations), and IDH1-wild-type astrocytomas (enriched for EGFR mutations) (<Ref>). In their analysis, they used the neighborhood lens (<Ref>) as the filter function and scanned the resolution and gain parameter space to determine the optimal parameters (<Ref>).
§ FUTURE DIRECTIONS
In the preceding sections, we covered the primary topological methods employed in machine learning. In this section, we will explore future directions for advancing these methods, aiming to enhance the practicality of TDA in ML. We will also discuss strategies for expanding the application of topological methods into new and emerging areas.
§.§ Topological Deep Learning
While TDA is already an effective tool for feature extraction from complex data, recent research underscores its potential to augment deep learning models by providing complementary insights. Deep learning algorithms typically emphasize local data relationships, whereas TDA offers a global perspective, delivering supplementary information.
Furthermore, TDA methods have also been integrated directly into deep learning pipelines, for example by regularizing loss functions or by designing topology-aware layers <cit.>.
In recent years, there has been significant progress in integrating TDA with deep learning in domains such as image analysis <cit.>, biomedicine <cit.>, graph representation learning <cit.>, genomics <cit.>, cybersecurity <cit.> and time-series forecasting <cit.>. This is a rapidly emerging field, with several recent papers surveying the current state-of-the-art <cit.>.
§.§ TDA and Interpretability
TDA, though often used as a feature extraction technique in machine learning, provides a powerful framework for enhancing model interpretability by capturing the inherent geometric and topological structures of data. Tools like PH and Mapper excel at identifying and quantifying features such as clusters, loops, and voids across multiple scales, unveiling intricate patterns that might escape traditional analytic methods <cit.>. By utilizing these topological insights into ML pipelines, TDA offers a unique perspective on decision boundaries, feature interactions, and model behavior. For instance, Mapper facilitates the visualization and interpretation of complex, high-dimensional datasets by projecting them into lower-dimensional topological spaces<cit.>. This transformation uncovers relationships within the data that are otherwise difficult to grasp, making it easier to interpret the reasoning behind certain predictions or classifications. Ultimately, by incorporating TDA, researchers and practitioners can clarify the decision-making processes of machine learning models, promoting greater transparency and trustworthiness in AI systems.
§.§ Fully Automated TDA Models
While PH and Mapper have demonstrated effectiveness across several machine learning domains, their successful application still requires considerable expertise, such as hyperparameter tuning and selecting appropriate filtration functions. To make PH and Mapper more accessible to the broader ML community, there is a pressing need for end-to-end algorithms that automate these processes. For PH, this automation can be tailored to specific data formats, such as point clouds, images, or graphs. In each case, the fully automated algorithm would handle the selection of filtration functions, thresholds, vectorization methods, and other hyperparameters for optimal performance in downstream tasks. For instance, an automated PH model designed for graph data could automatically choose the best filtration function (learnable), appropriate threshold values, and vectorization techniques to achieve optimal results in a node classification task. Similarly, a fully automated Mapper algorithm that selects filtration functions, resolution, gain, and other hyperparameters would significantly enhance its usability and accessibility to the ML community. For PH, there is existing work on learnable filtration functions, which find the best filtration function in the graph setting <cit.>. For Mapper, there is promising progress in this direction with Interactive Mapper <cit.>. The broader adoption of TDA hinges on developing practical software libraries that streamline its use for various applications.
§.§ Scalability of TDA methods
Another key barrier to the widespread adoption of TDA methods in various application areas is their high computational cost. Although TDA performs well on small to medium datasets, scaling these methods to larger datasets poses significant challenges due to computational demands. Several strategies have been introduced to alleviate these costs, varying by data type. For graph datasets, recent research <cit.> has proposed practical techniques that substantially lower the computational burden of PH. Similarly, for point clouds, recent works <cit.> present algorithms that improve the efficiency of PH computations. While these advances are promising, there remains a critical need to enhance the scalability of PH for large graph and point cloud datasets. In contrast, cubical persistence is already computationally efficient for image datasets and can be applied to large datasets without significant cost concerns. Even in the case of 3D image datasets <cit.>, PH provides a cost-effective alternative to deep learning approaches. Similarly, scalability issues are minimal for Mapper when hyperparameters are appropriately tuned.
§ CONCLUSION
In this tutorial, we have introduced the key concepts of topological methods, particularly focusing on persistent homology and the mapper algorithm, and demonstrated their practical application in machine learning tasks. By providing a clear and accessible roadmap, we aimed to equip readers with the tools and understanding necessary to integrate TDA techniques into their research workflows. The strength of these methods lies in their ability to capture and quantify intricate, multi-scale topological features that are often missed by traditional ML approaches.
As demonstrated through various case studies, including cancer diagnosis, shape recognition, and drug discovery, TDA offers unique insights and interpretability, which are increasingly valuable in today's era of complex data. The integration of persistent homology for feature extraction and mapper for intuitive data visualization opens new pathways for researchers to explore the underlying structures of their data.
Looking forward, the continued development of software libraries, scalable algorithms, and more interpretable models will play a crucial role in making topological machine learning even more accessible to a broader audience. The future directions we highlighted, such as topological deep learning and automated TDA models, provide exciting opportunities for further advancements in the field.
We hope this tutorial serves as a foundational resource for those new to TDA and inspires further exploration of topological methods in machine learning and beyond. By embracing these techniques, researchers can unlock novel insights and push the boundaries of what is possible in data analysis.
This work was partially supported by the National Science Foundation under grants DMS-2202584, 2229417, and DMS-2220613 and by the Simons Foundation under grant # 579977.
The authors acknowledge the http://www.tacc.utexas.eduTexas Advanced Computing Center (TACC) at The University of Texas at Austin for providing computational resources that have contributed to the research results reported within this paper.
§ NOTATION TABLE
§ DATASET RESOURCES
As we develop this tutorial to introduce topological methods to the ML community, we also seek to highlight their real-world applications to the mathematics community. For those eager to gain hands-on experience with topological techniques using real-world datasets, Table <ref> lists the commonly utilized datasets. For task-specific datasets, you can explore the collections at <https://paperswithcode.com/datasets> and <https://www.kaggle.com>.
|
http://arxiv.org/abs/2409.02672v1 | 20240904130059 | Independence Constrained Disentangled Representation Learning from Epistemological Perspective | [
"Ruoyu Wang",
"Lina Yao"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Independence Constrained Disentangled Representation Learning
Wang et al.
University of New South Wales; Commonwealth Scientific and Industrial Research Organisation, Australia
Independence Constrained Disentangled Representation Learning from Epistemological Perspective
Ruoyu Wang 1 Lina Yao 1,2
September 9, 2024
==============================================================================================
§ ABSTRACT
Disentangled Representation Learning aims to improve the explainability of deep learning methods by training a data encoder that identifies semantically meaningful latent variables in the data generation process. Nevertheless, there is no consensus regarding a universally accepted definition for the objective of disentangled representation learning. In particular, there is a considerable amount of discourse regarding whether the latent variables should be mutually independent or not. In this paper, we first investigate these arguments on the interrelationships between latent variables by establishing a conceptual bridge between Epistemology and Disentangled Representation Learning. Then, inspired by these interdisciplinary concepts, we introduce a two-level latent space framework to provide a general solution to the prior arguments on this issue. Finally, we propose a novel method for disentangled representation learning by employing an integration of mutual information constraint and independence constraint within the Generative Adversarial Network (GAN) framework. Experimental results demonstrate that our proposed method consistently outperforms baseline approaches in both quantitative and qualitative evaluations. The method exhibits strong performance across multiple commonly used metrics and demonstrates a great capability in disentangling various semantic factors, leading to an improved quality of controllable generation, which consequently benefits the explainability of the algorithm.
§ INTRODUCTION
Representation learning is widely recognized as a fundamental task in the field of machine learning, as the efficacy of machine learning methods heavily relies on the quality of data representation. It is suggested that an ideal representation should be disentangled <cit.>, which means it can identify the genuine generative factors hidden in the observed data, and the latent variables should be semantically meaningful and correspond to the ground truth generative factors.
However, there is no general agreement on a formal definition of disentangled representation <cit.> <cit.> <cit.>. Despite the lack of agreement on the formal definition of disentangled representation learning, existing methods suggested that two quantities are significant in disentangled representation learning: 1) Mutual Information between the latent variables and the data <cit.>; and 2) Independence between the latent variables <cit.><cit.>.
Nevertheless, regarding the second quantity Independence between the latent variables, it is worth noting that there is a lack of consensus about whether latent variables should be mutually independent in disentangled representation learning. While some <cit.> suggest that hidden factors should be strictly independent, some other works <cit.> argued that causal relationships exist between generative factors, thus they are not necessarily independent. Therefore, these arguments lead us to ask:
What should be considered as generative factors? What should be independent in latent space? And what should be causally connected?
To answer these questions, it is crucial to understand how humans perceive and comprehend these factors and their relationships, because the fundamental objective of disentangled representation learning is to extract factors that are interpretable to us human. Therefore, in this paper, we first answer the above questions by borrowing the concepts from epistemology. Then, based on these interdisciplinary theories, we introduce a unified framework to consolidate prior arguments regarding the relationships between latent variables. Finally, after clarifying these questions, we propose a novel method for disentangled representation learning that jointly optimizes the two objectives mentioned earlier: 1) the mutual information objective and 2) the independent objective. The contribution of this paper is threefold:
* We establish a conceptual bridge between epistemology and disentangled representation learning to facilitate the understanding of the data generation process and disentangled representation learning.
* We introduce a two-level latent space framework to unify the prior arguments regarding the relationships between generative factors and latent variables in disentangled representation learning.
* We propose a novel method for disentangled representation learning to jointly optimize the mutual information and the independent objectives, which outperforms the baseline methods consistently on multiple evaluation metrics.
§ OUR METHOD
§.§ A Perspective of Epistemology
As discussed in Section <ref>, comprehending the relationships between latent variables in disentangled representation learning necessitates an understanding of how humans perceive these factors: the concept of interpretability inherently implies interpretability to humans, and therefore interdisciplinary insights from epistemology are indispensable in this context.
In epistemology, mental representations of perceptions are called ideas, which can be grouped into two categories, simple ideas and complex ideas <cit.>. Simple ideas are basic, indivisible concepts that form the foundation of our knowledge, and complex ideas are more advanced concepts that are built upon multiple simple ideas. For instance, when we imagine an apple (Figure <ref>), the sensory perceptions of an apple such as its colour, shape and taste are irreducible, and thus are regarded as simple ideas. These simple ideas can then be combined to form the complex idea of an apple as a whole.
Taking these theories as a reference, we argue that the disagreement regarding whether latent variables should be independent arises from the lack of understanding of this hierarchical structure of human concepts. Existing works instinctively consider ALL latent variables in a single latent space. However, building upon these interdisciplinary theories, we contend that interpretable latent variables in disentangled representation learning should also follow a hierarchical structure in a similar way as demonstrated in Figure <ref>.
§.§ Two-level Latent Space Framework
Inspired by the interdisciplinary theories introduced in Section <ref>, we propose a two-level latent space framework to consolidate prior arguments on the interrelationships between the latent variables, as illustrated in Figure <ref>.
The framework groups all latent variables into two levels: 1) Atomic Level: which corresponds to the simple idea in epistemology, and comprises the factors that can be directly perceived from the observed data, and latent variables in this level should be mutually independent. 2) Complex Level: which comprises concepts derived from the atomic level variables. The two levels are connected by causal relationships and can be modelled by the Structural Causal Model (SCM). Within this framework, atomic-level latent variables should be mutually independent, and complex-level variables are not necessarily independent because the atomic-level variables may work as confounders due to the causal relationships between the two levels.
We argue that prior arguments regarding the relationships between latent variables are primarily due to their focus on a set of selected factors in certain datasets, thus having different views on this issue. In contrast, our framework, supported by the theories in epistemology<cit.> and also cognitive science <cit.>, offers a general explanation that is adaptable to various scenarios, and consolidates prior arguments regarding the interrelationships between latent variables. In this paper, we concentrate on the datasets that consist only of atomic-level latent variables, thus all latent variables are independent. Therefore, we leverage the constraints of independent and mutual information to augment the efficacy of our proposed method for disentangled representation learning.
§.§ Method Formulation
After clarifying the legitimacy of applying independence constraints in this problem in Section <ref>, we introduce our proposed method in this section. We build our method in the paradigm of Generative Adversarial Networks (GAN). Formally, GAN solves the minimax optimization problem (Equation <ref>) by utilizing a generator G and a discriminator D, and the generator G could learn the real data distribution P_real(x) when this framework converges.
min_Gmax_Dℒ_GAN(D,G) = 𝔼_x ∼ P_real[ log D(x) ] + 𝔼_z ∼ noise[ log (1-D(G(z))) ]
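For concreteness, a minimal PyTorch-style sketch of this objective (using the common non-saturating generator loss rather than the literal minimax form; all names are illustrative) could look as follows:

```python
import torch
import torch.nn.functional as F

def gan_losses(d_real_logits, d_fake_logits):
    """Discriminator and (non-saturating) generator losses for the minimax
    objective above; both inputs are raw discriminator outputs (logits)."""
    ones = torch.ones_like(d_real_logits)
    zeros = torch.zeros_like(d_fake_logits)
    d_loss = F.binary_cross_entropy_with_logits(d_real_logits, ones) \
           + F.binary_cross_entropy_with_logits(d_fake_logits, zeros)
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, ones)
    return d_loss, g_loss
```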
However, this framework imposes no restrictions on the semantic meaning of the latent variable z_i. To encourage the latent variables to possess semantic meanings, as introduced in Section <ref>, we apply the two constraints in this framework: 1) Mutual information between the latent variables and the data <cit.>; and 2) Independence between the latent variables <cit.>.
For the constraint on mutual information, we adapt the implementation of InfoGAN <cit.>. First, the latent variables are decomposed into latent code z which controls the semantics in the image, and noise ϵ which is considered incompressible. Then, an auxiliary network Q is introduced into the framework to maximize the lower bound of mutual information between the latent code z and the image (Equation <ref>). In practice, a regularization term ℒ_I(G,Q) (Equation <ref>) is introduced into the framework to maximize the Mutual Information between latent variable z and the generated data.
I(z,G(z,ϵ)) ≥𝔼_x ∼ G(z,ϵ)[ 𝔼_z' ∼ P(z|x)log Q(z'|x) ] + H(z)
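Up to the constant entropy term H(z), maximizing this lower bound amounts to minimizing the negative log-likelihood of the sampled codes under the distribution predicted by Q. A minimal PyTorch-style sketch for continuous codes with a factorized Gaussian Q (illustrative names, constant terms dropped; not necessarily the exact implementation of the cited works) is:

```python
import torch

def info_loss(q_mu, q_logvar, z):
    """Negative log-likelihood -E[log Q(z|x)] of the sampled codes z under the
    factorized Gaussian predicted by Q on the generated images; adding this to
    the generator loss maximizes the mutual information lower bound."""
    nll = 0.5 * (q_logvar + (z - q_mu).pow(2) / q_logvar.exp())
    return nll.sum(dim=1).mean()

# Illustrative use in the generator update:
#   x_fake = G(torch.cat([z, eps], dim=1))
#   q_mu, q_logvar = Q(x_fake)
#   loss = gan_loss + lam * info_loss(q_mu, q_logvar, z) + beta * tc_loss
```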
On the other hand, for the independence constraint, we aim to minimize the Total Correlation <cit.> between latent variables, and apply this constraint to the learned latent code ẑ such that:
TC(ẑ) = KL(q(ẑ)|| ∏_iq(ẑ_i))
where ẑ denotes the samples drawn from the learned posterior distribution obtained from Q(G(z)). Therefore, by integrating these two constraints, the overall objective of our method becomes:
min_G,Qmax_Dℒ_TCGAN = ℒ_GAN(D,G) - λℒ_I(G,Q) + βℒ_TC(G,Q)
where
ℒ_I(G,Q) = 𝔼_z ∼ P_(z), x ∼ G_(z,ϵ)[ log Q(z|x) ]
ℒ_TC(G,Q) = KL(q(ẑ)|| ∏_iq(ẑ_i))
Thus far, we have formulated our method by integrating the two constraints in the paradigm of Generative Adversarial Network. However, the total correlation term (Equation <ref>) is intractable. In Section <ref>, we introduce the method to estimate this term and how this estimation process is integrated into our framework, then give an end-to-end illustration of our method.
§.§ Total Correlation Estimation
To estimate the Total Correlation term (Equation <ref>), we adapt the method from FactorVAE <cit.>. Specifically, we utilize the Density-Ratio trick, which trains a Total Correlation Discriminator TCD to predict the probability that the input vector is from q(ẑ) rather than ∏_iq(ẑ_i), and then the Total Correlation term can be estimated by Equation <ref>.
TC(ẑ) = 𝔼_q(ẑ)[ logq(ẑ)/∏_iq(ẑ_i)] ≈𝔼_q(z)[ logTCD(ẑ)/1-TCD(ẑ)]
The end-to-end framework of our method is illustrated in Figure <ref>. On top of the vanilla GAN framework which comprises a generator G, a data encoder E and a Discriminator head D, we first utilize an auxiliary network Q to predict the mean and variance of the latent variables of the generated data, as implemented in InfoGAN <cit.>. Then, we sample ẑ by utilizing a reparametrization trick to ensure the differentiability of the framework. By employing this approach, we acquired samples from the distribution q(ẑ). On the other hand, samples from ∏_iq(ẑ_i) cannot be sampled directly, so we adapt the permute-dim algorithm proposed in <cit.> to do the sampling, where the samples are drawn by randomly permuting across the batch for each latent dimension. Therefore, the input of our framework should be two randomly selected batches of latent variables, one of which is used to calculate the mutual information loss ℒ_I and sample from q(ẑ), another batch is used to sample from ∏_iq(ẑ_i) in order to estimate the total correlation loss ℒ_TC.
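The two ingredients described above — drawing samples from the product of marginals by permuting each latent dimension across the batch, and reading off the total correlation from the TC discriminator output via the density-ratio trick — can be sketched as follows (illustrative, simplified helpers; a single-logit TC discriminator is assumed):

```python
import torch

def reparametrize(mu, logvar):
    """Differentiable sample z_hat ~ q(z|x) from the Gaussian predicted by Q."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def permute_dims(z_hat):
    """Sample from prod_i q(z_i) by permuting each latent dimension
    independently across the batch (the permute-dim trick)."""
    batch, dim = z_hat.shape
    return torch.stack([z_hat[torch.randperm(batch), j] for j in range(dim)], dim=1)

def tc_estimate(tcd_logits):
    """Density-ratio estimate of TC(z_hat): E[log p - log(1 - p)] with
    p = TCD(z_hat); when TCD outputs raw logits, this is simply their mean."""
    return tcd_logits.mean()
```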
We perform three steps iteratively to train the framework: 1) Train the Discriminator D in GAN to ensure the generated images look real with other modules fixed; 2) Train the Generator G and Network Q by the loss defined in Equation <ref> with other modules fixed, where ℒ_TC is estimated by Equation <ref>; 3) Train the Total Correlation Discriminator TCD to distinguish q(ẑ) and ∏_iq(ẑ_i) with other modules fixed. These steps are illustrated in Algorithm <ref>.
§ EXPERIMENTS
§.§ Experiment Setting
We evaluate the performance of our method from two perspectives: (1) Quantitative Evaluation (Section <ref>), where we compare our method with several baseline methods on multiple commonly used metrics for Disentangled Representation Learning; and (2) Qualitative Evaluation (Section <ref>), where we conduct Latent Space Traversal Test and compare our results with the case without the enforcement of independence constraint, to observe the direct impact of our method on the generated images.
We follow the settings in previous works on the model architectures of the Generator, Discriminator <cit.> and the Total Correlation Discriminator <cit.>. We use the Adam optimizer for training, with a learning rate equal to 0.001 for the generator, 0.002 for the discriminator and TC discriminator. We use λ=0.1 and β=0.001 in Equation <ref>. The latent dimension equals 10. We used a batch size of 64 and trained the model for 30 epochs, then selected the checkpoint with the highest Explicitness score for evaluation on other metrics.
§.§ Quantitative Evaluation
Since the quantitative evaluation of disentanglement requires ground-truth labels of all generative factors, most datasets are not suitable for quantitative evaluation. Therefore, following previous works in this domain, we concentrate on dSprites <cit.> for quantitative evaluation, where all factors are well-defined.
§.§.§ Metrics
We evaluate the quality of disentanglement by several metrics proposed in recent literature, including 1) Explicitness Score <cit.> which evaluates the quality of disentanglement by training a classifier on latent code to predict the ground-truth factor classes, a higher Explicitness Score indicates that all the generative factors are decoded in the representation; 2) JEMMIG Score <cit.> which evaluates the quality of disentanglement by estimating the mutual information (MI) between the ground-truth generative factors and the latent variables; 3) Modularity Score <cit.> which measures whether one latent variable encodes no more than one generative factor, it estimates the mutual information (MI) between a certain latent variable and the factor with maximum MI and compares it with all other factors; 4) SAP Score <cit.>, which evaluates the quality of disentanglement by training a linear regression model for every pair of latent variables and ground-truth factor, and uses the R^2 score of the regression model to denote the disentanglement score; and 5) Z-diff <cit.> which first selects pairs of data points with the same value on a fixed latent variable, and then evaluates the quality of disentanglement by training a classifier to predict which factor was fixed. We used the code provided by <cit.> for all evaluation metrics.
§.§.§ Baselines
We compare our method with several baseline methods in the domain of Disentangled Representation Learning, which include VAE <cit.>, β-VAE <cit.>, AnnealedVAE <cit.>, Factor-VAE <cit.>, β-TCVAE <cit.>, InfoGAN <cit.>, IB-GAN <cit.>, and InfoGAN-CR <cit.>. For the VAE-based baseline methods <cit.>, we reproduce the results by the code provided in <cit.> with the suggested optimized parameters. In particular, we use a batch size of 64 for all experiments and train the model for 30 epochs then select the checkpoint with the highest Explicitness score for evaluation. We use the model architecture suggested in <cit.> and use a latent dimension equal to 10 for all methods. For β-VAE <cit.>, we train the model with β equal to 4. For Annealed VAE <cit.>, we set the capacity equal to 25. For factor VAE and β-TCVAE, we use the weight of 6.4 for the total correlation term. On the other hand, for baseline methods based on GAN <cit.>, we reproduce the result with the exact parameters and model architecture provided in the corresponding paper and code.
§.§.§ Results
The results are presented in Table <ref>, where we present
the average and standard deviation over 10 runs for all metrics, and highlight the highest score in bold, and the second-highest score by an underscore. Our method outperforms the existing methods from two perspectives:
1) Our method achieves the best performance across all the evaluation metrics. Specifically, our method outperforms baseline methods by a considerable margin on Explicitness, Modularity and SAP scores. And on the other two metrics JEMMIG and Z-diff scores, even though existing methods already performed well, our method also outperforms baseline methods by a reasonable margin.
2) Our method exhibits consistency across the evaluation metrics. Other methods, in contrast, often lack this consistency. For instance, IB-GAN performs well on Explicitness and Modularity Score but performs poorly on JEMMIG and SAP. This inconsistency arises from the fact that different metrics perform their evaluation from different perspectives as introduced in Section <ref> and the Metrics section above. Therefore, the consistency of our method shows the effectiveness of our method in enhancing the quality and generalizability of Disentangled Representation Learning algorithms.
§.§ Qualitative Evaluation by Latent Space Traversal Test
While the datasets suitable for quantitative evaluation are limited, we further evaluate our method on other datasets by conducting Latent Space Traversal Tests. Latent space traversal test is a commonly used technique to investigate the semantic meaning of latent variables. It traverses one latent variable while keeping all the other variables invariant, and generates a sequence of images with these features. The semantic meaning of the traversed variable can be obtained by inspecting the changes in the images.
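A traversal of this kind can be generated with a few lines of code; the sketch below assumes an InfoGAN-style generator that takes the concatenated latent code and noise as input (interface and names are illustrative):

```python
import torch

@torch.no_grad()
def latent_traversal(generator, z, noise, dim, values):
    """Generate images while varying latent coordinate `dim` over `values`,
    keeping all other latent variables and the incompressible noise fixed."""
    frames = []
    for v in values:
        z_mod = z.clone()
        z_mod[:, dim] = v
        frames.append(generator(torch.cat([z_mod, noise], dim=1)))
    return torch.stack(frames, dim=1)   # shape: (batch, n_values, ...)
```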
In our experiments, since we aim to improve the quality of disentanglement, we examine whether the changes in one variable affect more than one generative factor. If the traversed variable affects only one semantic meaningful factor, it means the quality of disentanglement is desirable. In contrast, if the traversed variable affects more than one semantic meaningful factor, it means the factors are still entangled in the latent space.
To this end, we make comparisons between the cases with/without the enforcement of the proposed independence constraint and directly observe the impact of implementing our method. To ensure fair comparisons, all the settings remain the same except for the total correlation loss ℒ_TC (Equation <ref>) in each pair of comparisons. These comparisons are conducted on three datasets: MNIST, FashionMNIST and dSprites. For all three datasets, the settings remain unchanged as introduced in Section <ref>, except for the dimension settings of the latent space, because the number of generative factors contained in each dataset is different. This will be further elaborated in each paragraph below.
§.§.§ MNIST
We trained the models with 1 ten-dimensional discrete variable, 2 continuous variables, and 62 noise variables. The ideal outcome of Disentangled Representation Learning is that the data encoder will map the discrete variable to the digit class, and the two continuous variables will correspond to the width and rotation of the digit. The results of our experiment are provided in Figure <ref>. In each row, we keep the discrete variable digit invariant, and traverse on the continuous variable rotation. When the model is trained without the independence constraint (Figure <ref>), we observe that: 1) The variables digit class and rotation are not fully disentangled in the latent space. For example, while we control the digit to be 5 on the fifth row, some 0 and 6 are generated when traversing on rotation; and 2) the variables width and rotation of the digit are not fully disentangled in the latent space. For example, as highlighted in the first row of Figure <ref>, while we only traverse the variable rotation, the width of the digit is also affected. We highlight this trend on the first row, and similar patterns can be observed on other rows. In contrast, with the enhancement of the independent constraint (Figure <ref>), both digit and width remain unchanged when traversing on rotation.
§.§.§ dSprites
We trained the models with 5 continuous variables, and 5 noise variables. Ideally, the five continuous variables will be mapped to the five generative factors of the dataset, which include Shape, Scale, Rotation, Pos X and Pos Y. The results of our experiment are provided in Figure <ref>. On each row of the images, we traverse one factor while keeping all other factors invariant as noted on the left side of each row. When the model is trained without the independence constraint (Figure <ref>), we observe that the factors are entangled in the latent space. For example, on the third row of the image, while traversing the factor of rotation, the shape of the figures are affected. And on the fourth row, rotation is affected while traversing the Position X. In contrast, with the enhancement of the independent constraint (Figure <ref>), these factors are not affected, as highlighted by red boxes in Figure <ref>.
§.§.§ FashionMNIST
We trained the models with 1 ten-dimensional discrete variable, 1 continuous variable, and 62 noise variables. Ideally, the discrete variable and the continuous variable will correspond to the item class and the thickness of the image. The results of our experiment are provided in Figure <ref>. A similar conclusion can be drawn as we did on other datasets above. On each row, we keep the discrete variable item class invariant, and traverse on the continuous variable thickness of the image. When the model is trained without the constraint of the independence between latent variables (Figure <ref>), we observe that the item is affected by variable thickness, as highlighted on the second row and the fourth row. In contrast, the item type remains unaffected when the model is trained with the enhancement of the independent constraint (Figure <ref>).
§.§.§ Summary
Based on these comparisons, we conclude that our method could consistently enhance the quality of disentanglement. Note that this does not mean our method could always disentangle all the factors perfectly, however, while some factors may still exhibit slight entanglement in the given images, the differences between the cases with/without the independence constraint are nontrivial, which validates the effectiveness of our method.
§ RELATED WORKS
§.§ Disentangled Representation Learning
Disentangled Representation Learning aims to learn a data encoder that can identify true latent variables that are semantically meaningful. <cit.> suggested that increasing the weight of the KL regularizer in VAE <cit.> can benefit the quality of disentangled representation learning. <cit.> proposed that disentanglement quality can be improved by progressively increasing the bottleneck capacity. FactorVAE <cit.> and β-TCVAE <cit.> both penalize the total correlation <cit.> between latent variables, while FactorVAE uses density-ration trick for total correlation estimation, β-TCVAE proposed a biased Monte-Carlo estimator to approximate total correlation. Several other methods were also proposed in the paradigm of VAE <cit.>. However, <cit.> claimed that unsupervised disentangled representation learning is impossible without inductive biases.
On the other hand, some methods are proposed to learn disentangled representation in the paradigm of GAN <cit.>. InfoGAN <cit.> proposed a method to learn disentangled representation by maximizing the mutual information between latent variables and the generated image. InfoGAN-CR <cit.> claimed that self-supervision techniques can be used to improve the quality of disentanglement. IB-GAN <cit.> utilized the Information Bottleneck framework for the optimization of GAN. Besides, some methods attempted to learn disentangled representations without utilizing generative models<cit.>.
Recently, diffusion models have been applied to the domain of disentangled representation learning <cit.>. Additionally, disentangled representation learning has found broad applications in areas such as graph representation learning <cit.>, graph neural architecture search <cit.>, recommendation systems <cit.>, and out-of-distribution generalization <cit.>.
§.§ Causal Representation Learning
Recent studies are aimed at connecting the field of causal inference and disentangled representation learning. <cit.> introduced a causal perspective of disentangled representation learning by modelling the data generation process as a Structural Causal Model (SCM) <cit.>, where they introduced a set of confounders that causally influence the generative factors of observable data. <cit.> further developed this idea and studied the role of intervention and counterfactual effects. CausalVAE <cit.> introduced a fully supervised method that builds a Causal Layer to transform independent exogenous factors into causal endogenous factors that correspond to causally related concepts in the observed data. And DEAR <cit.> proposed a weakly supervised framework, which learns causally disentangled representation with SCM as prior.
§ CONCLUSION
In this paper, we investigated the prior disagreement on the interrelationships between latent variables in Disentangled Representation Learning, and proposed a novel method to improve the quality of disentanglement. First, we build a conceptual bridge between epistemology and disentangled representation learning, thus clarifying what should and should not be independent in the latent space by introducing a two-level latent space framework based on interdisciplinary theories. Then, after clarifying the legitimacy of applying the independence constraint on the problem of Disentangled Representation Learning, we introduce a novel method that applies the mutual information constraint and independence constraint within the Generative Adversarial Network (GAN) framework. Experiments show that our method consistently achieves better disentanglement performance on multiple evaluation metrics, and Qualitative Evaluation results show that our method leads to an improved quality for controllable generation. Besides, our paper introduced a novel perspective to apply causal models to the field of representation learning, which facilitates the development of explainability of deep learning and holds potential for wide-ranging applications that value explainability, transparency and controllability.
|
http://arxiv.org/abs/2409.02203v1 | 20240903181341 | Automated inclusion of QED corrections in Monte Carlo event generators | [
"Lois Flower"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2409.02180v1 | 20240903180002 | The Axion is Going Dark | [
"Markus Dierigl",
"Dušan Novičić"
] | hep-th | [
"hep-th",
"hep-ph"
] |
The Axion is Going Dark
Markus Dierigl^1, Dušan Novičić^2
^1 Arnold-Sommerfeld-Center for Theoretical Physics,
Ludwig-Maximilians-Universität, 80333 München, Germany
^2 Max-Planck-Institut für Physik, Werner-Heisenberg-Institut,
85748 Garching bei München, Germany
In this work we explore the effect of a non-Abelian dark sector gauge group that couples to the axion field. In particular, we analyze effects that arise if the dark sector gauge group mixes topologically with the Standard Model. This is achieved by gauging a subgroup of the global center 1-form symmetries which embeds non-trivially in both the visible as well as the dark sector gauge groups. Leaving the local dynamics unchanged, this effect modifies the quantization conditions for the topological couplings of the axion, which enter the estimate of a lower bound on the axion-photon coupling. In the presence of a dark sector this lower bound can be reduced significantly, which might open up interesting new parameter regions for axion physics. We further determine the allowed exotic matter representations in the presence of topological mixing with a dark sector, explore the generalized categorical symmetries of such axion theories, and comment on other model-independent phenomenological consequences.
§ INTRODUCTION
Axions are one of the most promising candidates for particles beyond the Standard Model. They can provide a dynamical solution for the strong CP problem <cit.> and generically contribute to the dark matter density in the universe <cit.>, see also <cit.>. Axions further have a very rich structure of symmetries, including various generalized symmetries, see <cit.> for reviews and in particular <cit.>. These include the mixture of various higher-form symmetries into higher groups <cit.> as well as non-invertible symmetries <cit.>, making axions the ideal laboratory for studying these structures and their physical implications.
Since axions couple to the gauge dynamics only topologically via the instanton density they are sensitive to subtle differences in the realization of the gauge group. Because of this, they are able to go beyond the implications of the gauge algebra which dictates the local interactions. This is important even for the Standard Model for which the gauge algebra is known to be composed of strong, weak, and hypercharge interactions:
𝔤_SM = 𝔰𝔲(3) ⊕𝔰𝔲(2) ⊕𝔲(1) ,
but the actual gauge group is not fixed. The allowed realizations of the Standard Model gauge group can be parameterized by
G_SM = SU(3) ×SU(2) ×U(1)/ℤ_k , with k ∈{1 , 2 , 3 , 6 } ,
see, e.g., <cit.>. The value of k leads to quantization conditions for the axion coupling that affects its coupling to the electro-magnetic field after electro-weak symmetry breaking <cit.>. This quantization leads to a lower bound on the axion-photon coupling which enters various experimental searches and is therefore of great importance to define the experimentally interesting parameter space, see, e.g., <cit.>.
In this work we will demonstrate that the presence of a non-Abelian dark sector can drastically modify the conclusions about a lower bound on the axion-photon coupling. This happens in cases where it mixes topologically with the Standard Model fields, i.e., the total gauge group is given by[G_d denotes the simply connected group associated to the algebra 𝔤_d, e.g., SU(N) for 𝔰𝔲(N).]:
G = SU(3) ×SU(2) ×U(1) ×G_d/ℤ_k .
For non-trivial realizations of G the effective coupling of the axion to the electro-magnetic field g_a γγ will generically be reduced compared to its Standard Model value,
g^G_a γγ≤ g^G_SM_a γγ .
In certain cases this reduction is significant and might open up new parameter regimes. For example, we find that in the case of a dark sector with 𝔰𝔲(3) algebra and ℤ_6 quotient the minimal value of the axion photon coupling differs by an order of magnitude in the presence of topological mixing. Specifically,
g_a γγ^min,SM≃ (α/2 π f) · 0.72(4) ≫ g_a γγ^min,d≃ (α/2 π f) ( 0.07(6) + ρ) ,
where we expect ρ, modifying the kinetic mixing of axions with pions in the dark sector, to be sub-leading.
The topological mixing in the presence of a dark gauge sector further has implications on the allowed representations of exotic matter fields, which we determine for general dark sector gauge group. We further consider scenarios in which the dark sector is confining or broken, which leads to different restrictions. This influences other phenomenological applications of the axion field as well and we comment on the corresponding possible effects. These effects can also be constrained by the categorical symmetries of the system which we explore systematically in the presence of a dark sector including anomaly inflow onto the defects of the axion field given by axion domain walls and axion strings. This has the potential to open up interesting possibilities and new corners of the parameter space.
Restricting to a single non-Abelian dark sector gauge group and a single axion field we will quantify the consistency conditions and restrictions depending on the choices of the dark sector gauge algebra 𝔤_d and the quotient ℤ_k with a focus on universal, model-independent properties. For that we use that the quotient by ℤ_k in (<ref>) can be understood as gauging part of the global center 1-form symmetries of the theory <cit.>, which allows for more general gauge backgrounds carrying fractional instanton number <cit.>, see also <cit.>. The periodicity of the axion field leads to consistency conditions for these fractional instanton backgrounds which can be phrased in terms of a quantization condition on the topological coupling constants <cit.>. Fractional instanton configurations in the dark sector gauge group need to be included in the evaluation of these consistency conditions which modifies the result compared to those obtained from the Standard Model alone.
The rest of the manuscript is organized as follows. In Section <ref> we explain the notion of topological mixing, its effect on allowed matter representation, as well as its formulation in terms of the gauging of a subgroup of the center 1-form symmetries. We further explain how this gauging allows for gauge configurations with fractional instanton numbers. In Section <ref> we determine how the realization of the total gauge group leads to quantization of the topological axion coupling and explore the influence of the presence of a dark sector gauge group on the minimal value of the axion photon coupling. The generalized categorical symmetries and defects of the axion system are discussed in Section <ref> with a focus on the inclusion of a dark sector. In Section <ref> phenomenological consequences of coupling axions to the dark sector are discussed. Finally, we point out possible generalizations and conclude in Section <ref>. In Appendix <ref> we discuss the fractional instanton number in cases in which one only mods out a subgroup of the center symmetry.
§ TOPOLOGICAL MIXING
In this section we discuss the consequences of the topological mixing of gauge groups. For this we note that the gauge algebra of a theory alone does not specify the gauge group; fixing the group requires more detailed information, for example the allowed representations or the spectrum of line operators <cit.>. For a gauge algebra 𝔤 we denote by G the associated maximal form of the gauge group. For a non-Abelian Lie algebra this is given by the simply-connected realization, whereas for U(1) this fixes the charge quantization, see Table (<ref>). The group G has the maximal center subgroup 𝒵_G, i.e., the subgroup commuting with all group elements.
As an example consider the special unitary group SU(3), whose center elements form the discrete subgroup ℤ_3 and are represented by elements of the form
[ e^2 π i / 3 0 0; 0 e^2 π i/3 0; 0 0 e^2 π i/3 ] = e^2 π i /3 1∈SU(3) .
Being diagonal matrices, these elements clearly commute with all the SU(3) group elements.
For the groups relevant to us in later sections the Lie algebra 𝔤, the associated maximal Lie group G, and its center subgroup are given by[Our convention for labeling the symplectic groups is such that 𝔰𝔭(1) ≃𝔰𝔲(2).]
Lie algebra 𝔤      Lie group G       center 𝒵_G
𝔰𝔲(n)              SU(n)             ℤ_n
𝔰𝔬(2n+1)           Spin(2n+1)        ℤ_2
𝔰𝔬(4n)             Spin(4n)          ℤ_2 × ℤ_2
𝔰𝔬(4n+2)           Spin(4n+2)        ℤ_4
𝔰𝔭(n)              Sp(n)             ℤ_2
𝔢_6                E_6               ℤ_3
𝔢_7                E_7               ℤ_2
ℝ                  U(1)              U(1)
The remaining Lie groups E_8, G_2, and F_4 all have trivial center and will not be relevant for our discussion.
In general, a gauge group associated to a single Lie algebra summand 𝔤 takes the form
G = G/𝒵 , with 𝒵⊂𝒵_G ,
where we divide by 𝒵, a subgroup of the center. Whenever the gauge algebra contains more than one summand, the quotient by 𝒵 can affect each gauge group factor simultaneously. We will refer to this phenomenon as topological mixing, since the local dynamics of a theory with such a gauge group remains unchanged. For several gauge group factors G_i this can be written as
G = ∏_iG_i/𝒵 .
To specify the resulting gauge group G one further needs to fix the group homomorphisms from 𝒵 to each 𝒵_G_i. These can be inferred from the image of a set of generators of 𝒵, which also includes the possibility of 𝒵 having more than one factor. This data can be captured by an integer ℓ_i, which specifies the maps
𝒵⊃ℤ_n →ℤ_m_i⊂𝒵_G_i ,
by the image of the generator
1 mod n ↦ℓ_i mod m_i .
The subgroup in the image of ℤ_n is given by ℤ_m_i/gcd(ℓ_i, m_i), where gcd denotes the greatest common divisor. Moreover, for this to be well-defined the generated subgroup cannot be bigger than ℤ_n, which restricts the allowed values of ℓ_i and imposes that n is mapped to a multiple of m_i. In particular, it requires that
ℓ_i n/m_i∈ℤ ,
where we regard the ℓ_i parameter mod m_i, such that the identity element in ℤ_n is mapped to the identity element in ℤ_m_i. Note that, since for 𝔤 = 𝔰𝔬(4n) the center is ℤ_2 ×ℤ_2, one needs to specify maps into both ℤ_2 factors. If G_i = U(1) we can specify the group homomorphism
𝒵⊃ℤ_n →U(1) = ℤ_G_i ,
by the map
1 mod n ↦ e^2 π i ℓ_i / n∈U(1) ,
which produces a ℤ_n/gcd(ℓ_i,n) subgroup of U(1) = 𝒵_G_i.
Let us illustrate this in more detail for four different examples:
* 𝒵 = ℤ_6 and G_i = SU(3): The homomorphisms are specified by the maps of the generator of ℤ_6 into 𝒵_G_i = ℤ_3
1 mod 6 ↦ℓ_i mod 3 .
In both non-trivial cases the generated subgroup of 𝒵_G_i is the full ℤ_3, but the specific homomorphism differs, captured by ℓ_i ∈{ 1, 2}.
* 𝒵 = ℤ_6 and G_i = SU(12): With the group homomorphism into 𝒵_G_i = ℤ_12 specified by
1 mod 6 ↦ℓ_i mod 12 .
For this to be a group homomorphism ℓ_i needs to be even. And we find the generated subgroup and remaining parameters of the non-trivial homomorphism
ℓ_i     m_i/gcd(ℓ_i, m_i)     ℤ_m_i/gcd(ℓ_i,m_i) ⊂ 𝒵_G_i
2       6                     ℤ_6
4       3                     ℤ_3
6       2                     ℤ_2
8       3                     ℤ_3
10      6                     ℤ_6
We see that there are many different possibilities parametrized by the generated subgroup in 𝒵_G_i.
* 𝒵 = ℤ_2 and G_i = Spin(8): For this we need to specify two maps, since 𝒵_G_i = ℤ_2 ×ℤ_2 and we find
1 mod 2 ↦ (ℓ^1_i mod 2 , ℓ^2_i mod 2) ,
allowing for three different non-trivial embeddings, see also Appendix <ref>.
* 𝒵 = ℤ_6 and G_i = U(1): The map is specified by
1 mod 6 ↦ e^2 π i ℓ_i / 6 ,
which as for SU(12) leads to various subgroups of U(1) and group homomorphisms. To be specific, one finds ℤ_6 for ℓ_i ∈{1 ,5 }, ℤ_3 for ℓ_i ∈{ 2 , 4 }, and ℤ_2 for ℓ_i = 3.
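The allowed values of ℓ_i and the subgroups they generate, as listed in the examples above, can be enumerated with a short script (a sketch for cyclic center factors only; for U(1) one takes m equal to the order of the image subgroup, e.g., m = 6 for 𝒵 = ℤ_6):

```python
from math import gcd

def valid_embeddings(n, m):
    """Group homomorphisms Z_n -> Z_m, labelled by the image ell of the
    generator; ell is valid iff ell * n = 0 mod m, i.e. ell*n/m is an integer.
    Returns pairs (ell, order of the generated subgroup Z_{m/gcd(ell, m)})."""
    out = []
    for ell in range(1, m):                      # non-trivial maps only
        if (ell * n) % m == 0:
            out.append((ell, m // gcd(ell, m)))
    return out

print(valid_embeddings(6, 3))    # Z_6 -> Z_3  : [(1, 3), (2, 3)]
print(valid_embeddings(6, 12))   # Z_6 -> Z_12 : [(2, 6), (4, 3), (6, 2), (8, 3), (10, 6)]
print(valid_embeddings(6, 6))    # Z_6 -> U(1) : subgroups Z_6, Z_3, Z_2, Z_3, Z_6
```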
For the quotient in (<ref>) to lead to topological mixing we demand that 𝒵 embeds non-trivially in a product of at least two different gauge group factors G_i. This quotient by part of the center symmetry has various consequences:
* Restriction of the allowed representations
* More general gauge theory backgrounds
* Fractional instantons
We will discuss these in the following subsections and in particular choose
∏_i G_i ⊃G_SM = SU(3) ×SU (2) ×U(1) ,
the form of the Standard Model gauge group with the largest center symmetry.
First, let us recall that even without additional gauge factors in (<ref>) there can be topological mixing, parametrized by the different forms of the Standard Model gauge group G_SM in (<ref>). With the center of G_SM given by
𝒵_G_SM = ℤ_3 ×ℤ_2 ×U(1) .
The specific embedding of the possible discrete quotients ℤ_k with k ∈{1 , 2 , 3 , 6 }, where ℤ_1 is the trivial group, is given by
ℤ_3: 1 mod 3 ↦(1 mod 3 , 0 mod 2 , e^2 π i 2/3) ∈𝒵_G_SM , (ℓ_3 , ℓ_2 , ℓ_1) = (1 , 0 , 2) ,
ℤ_2: 1 mod 2 ↦(0 mod 3 , 1 mod 2 , e^2 π i 1/2) ∈𝒵_G_SM , (ℓ_3 , ℓ_2 , ℓ_1) = (0 , 1 , 1) ,
Here, ℓ_3, ℓ_2, and ℓ_1 specify the maps to 𝒵_SU(3), 𝒵_SU(2), and U(1) of 𝒵_G_SM, respectively. The transformation under ℤ_6 can be deduced by the fact that ℤ_6 = ℤ_3 ×ℤ_2 and is generated by, e.g.,
ℤ_6: 1 mod 6 ↦(1 mod 3 , 1 mod 2 , e^2 π i 1/6) ∈𝒵_G_SM , (ℓ_3 , ℓ_2 , ℓ_1) = (1 , 1 , 1) .
In the following we further include a single additional non-Abelian gauge group factor G_d and interpret it in terms of a dark sector gauge group. Here, we focus on a single non-Abelian dark sector in order to avoid the strong additional constraints for dark U(1) gauge fields imposed by kinetic mixing effects, see, e.g., <cit.> and <cit.>, and not to clutter notation by multiple additional gauge factors. The total gauge group is then given by
G = G_SM×G_d/𝒵 ,
where 𝒵 embeds non-trivially in both factors.
We will see that the presence of topological mixing with the dark sector has several important consequences for exotic matter representations as well as in the presence of an axion field. To allow for a sensible phenomenology we further assume a mass gap for the dark sector gauge fields. In particular we will distinguish between the following two scenarios:
§.§.§ Confining dark sector
In this scenario the non-Abelian dark sector gauge group is unbroken at low energies and a mass gap is generated dynamically via confinement. We leave the associated energy scale Λ_d unspecified, since it depends on the details of the model, but discuss several interesting consequences below. In order not to alter the interactions of Standard Model particles they need to be singlets under the dark sector gauge group. This constrains the allowed quotients 𝒵 and can be realized by demanding that 𝒵 embeds into 𝒵_G_SM as described in (<ref>) or (<ref>). Depending on the scale Λ_d there can be important modifications to the axion potential induced by the strong coupling dynamics in the dark sector.
Note that such models also lead to interesting dark matter candidates in terms of dark baryons and glueballs, see, e.g., <cit.>, and are very common in string theory compactifications <cit.>.
§.§.§ Broken dark sector
The second possibility is that the dark sector gauge group is broken completely at low energies, which means that in principle the Standard Model matter fields can transform in non-trivial representations under G_d. This, however, leads to a multiplicity of the Standard Model matter fields according to the dimension of the dark sector representation, which needs to be implemented carefully to allow for a reasonable phenomenology in the visible sector. All interactions coming from the dark sector would be suppressed by the breaking scale Γ_d, which can be very high and therefore does not change the measured interactions. In this case 𝒵 does not have to embed as a ℤ_k as in (<ref>) and (<ref>) discussed above and can be more general. Moreover, the modifications of the axion potential will be suppressed by the breaking scale.
This scenario is typical in Grand Unified theories where the Standard Model embeds into a larger gauge group at high energies, see, e.g., <cit.> for a recent discussion. It is also generically realized in phenomenological applications of the heterotic string <cit.> with its large 10d gauge groups.
In the following we do not analyze individual models with a specific realization of the matter fields, which would also be subject to further consistency constraints such as anomaly cancellation. Instead, we focus on the model-independent restrictions imposed by topological mixing and argue how they alter the general properties of the theory.
Before we go into the investigation of the axion dynamics we will explain what the topological mixing implies for the allowed representations of matter fields as well as consistent gauge theory backgrounds.
§.§ Representations and exotic matter
The quotient by a subgroup of the center symmetry in (<ref>) has consequences for the allowed representation of matter fields in the theory. In particular, only representations that are invariant under 𝒵 in (<ref>) survive the quotient. The fact that all matter representations of the Standard Model are invariant under ℤ_k, that acts as described in (<ref>) and (<ref>), allows for the different global realizations G_SM = G_SM / ℤ_k of the Standard Model gauge group.
For example consider k = 6, all the fields of the Standard Model, as well as the right-handed neutrinos, are invariant under ℤ_6, e.g., the left-handed quarks transform as
Q_L = ( 3 , 2 )_1/6: e^2 π i/3× (-1) × e^2 π i / 6 = 1 .
Here, the subscript denotes the hypercharge normalized to be in (1/6)ℤ as is common in the literature.[Note, however, that the ℤ_6 ⊂U(1) still acts on the charge 1/6 state as the phase e^2 π i/6 as is expected for an embedding with ℓ_1 = 1.]
Q_exotic = (3 , 1)_0: e^2 π i/3≠ 1 ,
are not invariant under ℤ_6 and therefore would not be allowed, even though they are well-defined for the maximal gauge group G_SM. Once the quotient by 𝒵 involves a dark sector gauge group the allowed representations change accordingly as we will discuss next.
Including the dark sector gauge group we specify the particle representations as
𝐑 = (𝐑_3 , 𝐑_2 , 𝐑_d)_q ,
where the charges of U(1)_Y are once more normalized to be elements in (1/6)ℤ. Next, we define the center charge r_i of a representation 𝐑_i as given by its phase under center transformations
𝒵_G_i⊃ℤ_m_i: 𝐑_i → e^2 π i/m_i r_i 𝐑_i ,
e.g., r_3 = 1 mod 3 for the fundamental representation 3 of SU(3). Thus, one obtains a consistency condition for each ℤ_n ⊂𝒵 factor, which can be phrased in terms of the ℓ_i as
𝒵⊃ℤ_n: 𝐑→exp( 2 π i ∑_i (ℓ_i/m_i) r_i + 2 π i (ℓ_1/n) · 6q ) 𝐑 = 𝐑 ,
which can be rephrased as
∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q ∈ℤ .
As usual these formulas must be modified if G_i = Spin(4n), since then 𝒵_G_i = ℤ_2 ×ℤ_2 and one needs to specify two charges (r_i^1 , r_i^2), and two embedding parameters (ℓ_i^1 , ℓ_i^2) as discussed above.
Let us exemplify that for the left-handed quarks Q_L that additionally behave as singlets under the dark sector and 𝒵 = ℤ_6 embedded as in (<ref>), i.e. (ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d) = (1 , 1 , 1 , 0). One finds
∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q = 1/3 + 1/2 + 1/6 + 0 = 1 ∈ℤ ,
which indeed satisfies the quantization condition. If instead we consider Q_exotic in (<ref>), again transforming as singlet under the dark sector gauge group, one finds
∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q = 1/3 + 0 + 0 + 0 = 1/3 ∉ℤ ,
which does not satisfy the quantization condition.
Next, we introduce topological mixing with the dark sector and see how this changes the allowed matter representations. In case the dark sector confines we demand that all Standard Model fields are singlets under G_d, i.e.,
𝐑_SM = (𝐑_3 , 𝐑_2 , 1_d)_q ,
since otherwise they might be confined in dark baryons. This is automatically satisfied if we demand that 𝒵 embeds into 𝒵_G_SM as the ℤ_k groups defined in (<ref>) and (<ref>), and there is a non-trivial homomorphism of ℤ_k to 𝒵_d to generate the topological mixing. This allows for the appearance of more general representations.
For example for 𝒵 = ℤ_6 and starting with Q_exotic in (<ref>) under the Standard Model one has
𝐑 = (3 , 1 , 𝐑_d)_0 : ∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q = 1/3 + 0 + 0 + (ℓ_d/m_d) r_d ∈ℤ ,
from which one finds the consistency constraint
(ℓ_d/m_d) r_d = 2/3 mod 1 .
This can be achieved for example for 𝔤_d = 𝔰𝔲(3) and the embedding specified by
(ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d) = (1 , 1 , 1 , ℓ_d) ,
for which we find
𝐑 = (3 , 1 , 3)_0: ∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q = 1/3 + 0 + 0 + (ℓ_d/3) × 1 ∈ℤ ,
which is allowed for ℓ_d = 2. Similarly,
𝐑 = (3 , 1 , 3̄)_0: ∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) · 6q = 1/3 + 0 + 0 + (ℓ_d/3) × 2 ∈ℤ ,
which is allowed for ℓ_d = 1.
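Invariance checks of this kind are mechanical; the following sketch (illustrative, treating a single ℤ_n factor of 𝒵 and entering the hypercharge as a fraction q) reproduces the examples above:

```python
from fractions import Fraction as Fr

def is_allowed(ells, ms, rs, ell1, n, q):
    """Check sum_i (ell_i/m_i) r_i + (ell_1/n) * 6q in Z for one Z_n factor of Z.
    ells, ms, rs: embedding parameters, center orders, and center charges of the
    non-Abelian factors; ell1, n: U(1) embedding; q: hypercharge as a fraction."""
    total = sum(Fr(l, m) * r for l, m, r in zip(ells, ms, rs)) + Fr(ell1, n) * 6 * q
    return total.denominator == 1

# Z = Z_6 embedded as (ell_3, ell_2, ell_1, ell_d) = (1, 1, 1, 0):
print(is_allowed([1, 1, 0], [3, 2, 3], [1, 1, 0], 1, 6, Fr(1, 6)))  # Q_L = (3,2,1)_{1/6}: True
print(is_allowed([1, 1, 0], [3, 2, 3], [1, 0, 0], 1, 6, Fr(0)))     # Q_exotic = (3,1,1)_0: False
# A dark SU(3) with ell_d = 2 rescues the exotic state as (3,1,3)_0:
print(is_allowed([1, 1, 2], [3, 2, 3], [1, 0, 1], 1, 6, Fr(0)))     # True
```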
For the broken dark sector scenario we do not have to demand that the Standard Model fields are singlets under G_d and one can take more general quotients. The allowed matter representations of the UV gauge group will still be influenced by the topological mixing as discussed above.
For that let us consider another example with 𝒵 = ℤ_6 and 𝔤_d = 𝔰𝔲(3), but this time the embedding differs from (<ref>) in the Standard Model factors and instead is given by
(ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d) = (1 , 1 , -1 , 1) ∼ (1 , 1 , 5 , 1) .
In this case the left-handed quarks need to transform non-trivially under G_d in order to absorb the phase
Q_L = (3, 2)_1/6: Q_L → e^2 π i 2/3 Q_L ,
under 𝒵 = ℤ_6. This can be ensured by transforming as 3 under G_d = SU(3), leading to the consistency condition
𝐑 = (3 , 2 , 3)_1/6: ∑_i (ℓ_i/m_i) r_i + (ℓ_1/n) 6q = 1/3 + 1/2 - 1/6 + 1/3 = 1 ∈ℤ ,
being satisfied. Note that the exotic matter representation (<ref>), realized via (3 , 1 , 3)_0 is also allowed for this particular embedding.
Similar conclusions about the realization of allowed exotic matter states can be applied for all the possible gauge groups and follow the same logic as explained above. This demonstrates that the global form of the gauge group leads to significant restrictions on the allowed exotic matter states.
§.§ Non-trivial 1-form symmetry backgrounds
One can interpret taking the quotient in (<ref>) as gauging part of the center 1-form symmetry <cit.>. This center 1-form symmetry acts on Wilson lines labeled by a representation 𝐑 of the gauge group with the charge r defined as in (<ref>)
𝒲_𝐑 = tr( P exp( i ∮ A_𝐑) ) ,
where P denotes a path-ordering in the exponential. Since the charged objects are one-dimensional, the associated symmetry is called a 1-form symmetry. The existence of dynamical particles transforming under the center breaks this symmetry explicitly, since now the Wilson lines can end on a charged local operator. For the Standard Model the discrete ℤ_6 is not broken and one can gauge (subgroups of) it. Once one includes the dark sector gauge group the gauging and breaking depends on the specific matter representations.[In quantum gravity it is expected that all global symmetries are either gauged or broken, e.g., <cit.>, so in our scenario this would mean that either there are exotic matter representation that break the 1-form symmetry or it necessarily needs to be gauged.]
To perform such a gauging we couple the 1-form center symmetry of G = G_SM×G_d given by
𝒵_G = ℤ_3 ×ℤ_2 ×U(1) ×𝒵_d ,
to background gauge fields, which are discrete 2-form fields B_i. We will further make the assumption that 𝒵 maximally embeds as ℤ_6 in the Abelian U(1)_Y of the hypercharge sector. The reason is that one otherwise generically ends up with fractionally charged particles that are additionally charged under the dark sector and are strongly constrained experimentally, see, e.g., <cit.>, similar to kinetic mixing with another Abelian factor that can also produce such states <cit.>. In this case the background fields are given by elements
(B_3 , B_2 , B_1 , B_d) ∈ H^2 (M; ℤ_3) × H^2 (M; ℤ_2) × H^2 (M; ℤ_6) × H^2 (M; 𝒵_d) ,
where M denotes the spacetime manifold. The gauging proceeds by promoting a linear combination of these background fields to be dynamical, which for a discrete field merely means that we sum over its configurations in the partition function. The specific linear combination which is gauged is determined by the embedding of 𝒵 into the individual factors of 𝒵_G, i.e., by the embedding parameters ℓ_i.
For 𝒵 = ℤ_6 with embedding parameters (ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d) we can parametrize the `dynamical' linear combination by summing over ω∈ H^2 (M; ℤ_6) and identifying
(B_3 , B_2 , B_1 , B_d) = (ℓ_3 ω_3 , ℓ_2 ω_2 , ℓ_1 ω , ℓ_d ω_d) .
Here, ω_i is the reduction of ω modulo the order of the different factors of 𝒵_i. For embedding of subgroups into U(1) and 𝒵_i we use the fact that we can rephrase a background in H^2(M; ℤ_k) as a background in H^2(M; ℤ_m k) by multiplication by m, see Appendix <ref>. Again, we note the usual caveats in case 𝔤_d = 𝔰𝔬(4n), for which one needs to specify two independent background gauge fields in H^2(M;ℤ_2).
For example, for (<ref>) one finds the linear combination
(B_3 , B_2 , B_1 , B_d) = (ω_3 , ω_2 , ω , ℓ_d ω_3) .
Similarly, for (<ref>) the appropriate linear combination is given by
(B_3 , B_2 , B_1 , B_d) = (ω_3 , ω_2 , - ω , ω_3) ,
where one could equally well use 5 ω instead of - ω, since 6 ω is trivial in H^2(M;ℤ_6).
Since for the maximal gauge group G we effectively need to set all ω_i to zero, we see that the quotient gauge group G/𝒵 has more gauge configurations than G.[This can also be seen from classifying the gauge configurations via transition functions: in G the co-cycle condition on triple overlaps needs to produce the identity, while for G/𝒵 it can be a non-trivial element in 𝒵, see e.g. <cit.>.] These new gauge field backgrounds, however, lead to a restriction on allowed matter representations, which have to be compatible with the gauging procedure and precisely lead to the restrictions discussed in Section <ref>. Moreover, after gauging part of the center 1-form symmetry the quantization of instanton numbers is modified as we will discuss next.
§.§ Instanton numbers
Instantons are topological gauge field configurations with a non-trivial value of the integral
n_i = 1/8 π^2∫tr(F_i ∧ F_i) ,
where F is the non-Abelian field strength of 𝔤_𝔦 in differential form notation, see, e.g., <cit.>. Since they are characteristic classes of the gauge theory their number, n_i, is quantized.[They are the pullbacks of non-trivial elements in group cohomology with integer coefficients, i.e., originate from H^4 (BG; ℤ), with classifying space BG.] In our conventions of the trace the instanton numbers for simply-connected non-Abelian Lie groups on
Spin manifolds are integers. For Abelian gauge groups the instanton number is given by
n_1 = 1/8 π^2∫ F_1 ∧ F_1 ,
with hypercharge U(1)_Y field strength F_1 = d A_1. Once more n_1 is an integer on Spin manifolds. It is important to stress that the spacetime is Spin, since four-dimensional Spin manifolds have an even intersection form for H_2(M;ℤ). This, together with the charge quantization condition [F_1/2π] ∈ H^2(M;ℤ), is why we can divide by an additional factor of 1/2 in (<ref>). For non-Spin manifolds, e.g., M = ℂℙ^2, this is generally not the case and (<ref>) can be half-integer. However, this is modified in the presence of non-trivial background fields B_i for the center 1-form symmetries.
In the case of a U(1) gauge symmetry and 𝒵 = ℤ_n this can be understood as follows. The background field B_1 ∈ H^2(M; ℤ_n) can be rephrased as a flat 2-form gauge field b_1 ∈ H^2(M; U(1)) which is roughly related to B_1 as
b_1 = (2 π/n) B_1 .
In particular this means that if B_1 integrates to q ∈ℤ mod n on a certain 2-cycle, b_1 integrates to 2 π q/n mod 2π. In the presence of this background field the instanton number is shifted to
n_1 = 1/8 π^2∫ (F_1 - b_1) ∧ (F_1 - b_1) .
In general this gives rise to fractional instanton contributions of the form
1/(4 π^2) ∫ F_1 ∧ b_1 ∈ (1/n) ℤ , and 1/(8 π^2) ∫ b_1 ∧ b_1 ∈ (1/n^2) ℤ .
At the same time these new configurations violate the quantization condition for the magnetic flux F_1 and would require electric charges to be multiples of n, which is precisely the U(1) version of the restriction of representations discussed in Section <ref>. Equivalently, since the action is quadratic in F_1 these results can be seen explicitly by noticing that gauging the ℤ_n 1-form symmetry effectively replaces F_1 with (1/n) F_1 in the Lagrangian.
A similar conclusion holds for non-Abelian gauge groups where the non-trivial 1-form symmetry backgrounds B_i lead to fractional contributions to the instanton numbers. To extract their fractionality one modifies the field strength to include contributions of B_i and evaluates their effect. This has been done in detail in <cit.> to which we refer for a detailed derivation.
Here, we briefly recall the derivation in the special case G = SU(n) and 𝒵 = ℤ_k and embedding specified by
1 mod k ↦ℓ mod n .
For the resulting gauge group to be SU(n)/ℤ_k we further have to demand the stronger condition compared to (<ref>) that k divides n to generate a proper subgroup, where in general we would allow for a subgroup of 𝒵 to map non-injectively to 𝒵_G_i. The non-trivial background field B ∈ H^2(M; ℤ_n) is defined by
B = ℓ ω mod n ,
where ω∈ H^2(M;ℤ_k), is the parameter we sum over in the partition function, and B is well defined because of (<ref>). One also has
n/gcd(ℓ,n) = k .
Because of this there is a way to interpret B as a background in H^2(M;ℤ_k) as one would expect from a quotient by ℤ_k, see Appendix <ref>. To see how this non-trivial background B modifies the instanton number we promote the SU(n) field strength F to a U(n) field strength F', see <cit.>, which can be done by introducing a U(1) gauge field f = da and defining
F' = F + (1/n) f 1 ,
with unit matrix 1. One then identifies the U(1) part with the background gauge field, which after using the identification (<ref>) in terms of b ∈ H^2(M;U(1)) takes the form
tr(F') = f = n b .
This can be implemented via a Lagrange multiplier in the action, see <cit.>. One then replaces the instanton density (<ref>) with
1/8 π^2∫tr( (F' - b 1) ∧ (F' - b 1) ) ,
which contains the U(n) field and the background b. Since the second Chern class of F' integrates to integer values
∫ c_2(F') = ∫( 1/8 π^2tr(F' ∧ F') - 1/8 π^2tr(F') ∧tr(F') ) ∈ℤ ,
we identify the fractional part of (<ref>) as the second term in
1/8 π^2∫tr( (F' - b 1) ∧ (F' - b 1) ) = ∫ c_2 (F') + 1/8 π^2∫ n(n-1) b ∧ b .
Going back to the discrete gauge field ω, which is summed over, using b=2π/n B=2π/nℓω this can be written as
n^frac = ((n-1)/(2n)) ∫𝒫(ℓω) .
Here, 𝒫 is the Pontryagin square, see, e.g., <cit.>, which is a cohomological operation
𝒫: H^2 (M; ℤ_n) → H^4 (M; ℤ_p) ,
where p=n for odd n and p=2n for even n. It refines the bilinear form given by the cup product
∪: H^2(M;ℤ_n) × H^2(M;ℤ_n) → H^4 (M;ℤ_n) .
of B with itself. It is only relevant for n even, since for n odd one has 𝒫(B) = B ∪ B ∈ H^4(M;ℤ_n). For even n it introduces an additional factor of 1/2 in the fractional part of the instantons. In cases where B ∈ H^2(M;ℤ_n) has a well-defined lift to integer cohomology B_ℤ∈ H^2(M;ℤ), which we will always assume and which is the case if H^3(M;ℤ) is torsion free[In case this lift does not exist, one can still perform the calculations above using the definition of the Pontryagin square on the co-chain level, see <cit.>. ], it is given by 𝒫(B) = B_ℤ∪ B_ℤ mod 2n.
For example for G = SU(2)/ℤ_2, where ℓ = 1 since we assume a non-trivial embedding, the fractional instanton number in the presence of B ∈ H^2(M;ℤ_2) is given by
n_2^frac = (1/4) ∫𝒫 (B) = (1/4) ∫ B_ℤ∪ B_ℤ mod 1 ∈{ 0 , 1/2} ,
where we once more used the evenness of the intersection form on Spin manifolds.
For other simple Lie algebras in the presence of the background B ∈ H^2(M; 𝒵_G) one can determine fractional instanton contributions by utilizing different embeddings of SU(n) groups <cit.>. The result is presented in the table:
[ gauge algebra 𝔤 fractional instanton contribution for G/𝒵_G;
𝔰𝔲(n) ((n-1)/(2n)) ∫𝒫(B);
𝔰𝔬(2n+1) (1/2) ∫𝒫(B);
𝔰𝔬(4n) (n/4) ∫𝒫(B_L + B_R) + (1/2) ∫ B_L ∪ B_R;
𝔰𝔬(4n+2) ((2n+1)/8) ∫𝒫(B);
𝔰𝔭(n) (n/4) ∫𝒫(B);
𝔢_6 (2/3) ∫𝒫(B);
𝔢_7 (3/4) ∫𝒫(B); ]
Indeed, setting ℓ =1, and therefore gcd(ℓ,n) = 1 and n = k, in (<ref>), we find
n^frac = n-1/2n∫𝒫(B) ,
which coincides with the table.
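For later use, the coefficients of the table can be collected in a small lookup; the Python sketch below is ours (the function name c_frac and the algebra labels are illustrative, and so(4n) is omitted since it requires two background fields B_L, B_R):

from fractions import Fraction as F

def c_frac(algebra, n=None):
    # fractional instanton coefficient per unit Pontryagin square, from the table above
    if algebra == "su":      return F(n - 1, 2 * n)
    if algebra == "so_odd":  return F(1, 2)            # so(2n+1)
    if algebra == "so_4n+2": return F(2 * n + 1, 8)    # so(4n+2)
    if algebra == "sp":      return F(n, 4)
    if algebra == "e6":      return F(2, 3)
    if algebra == "e7":      return F(3, 4)
    raise ValueError(algebra)

print(c_frac("su", 2), c_frac("su", 3), c_frac("e6"), c_frac("e7"))   # 1/4 1/3 2/3 3/4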
In case 𝒵 maps only to a proper subgroup of the center of the dark sector G_d, i.e., if gcd(ℓ_d, m_d) ≠ 1, this affects the fractionality condition. Note that this is only relevant for SU(n) with n not prime and Spin(2n), since otherwise the center does not have non-trivial subgroups. We discuss the various cases in Appendix <ref>.
In the models above the overall quotient 𝒵 embeds into several gauge factors. As we discussed in Section <ref> this correlates the various background fields for the individual 1-form center symmetries. It also implies that fractional instanton contributions in one gauge factor necessarily are accompanied by fractional contributions in a different sector. For 𝒵 = ℤ_6 and 𝔤_d = 𝔰𝔲(3) with embedding (<ref>) the fractional parts are given by
n_3^frac = (1/3) ∫𝒫(ω_3) , n_2^frac = (1/4) ∫𝒫(ω_2) , n_d^frac = (ℓ_d^2/3) ∫𝒫(ω_3) ,
n_1^frac = - (1/6) ∫ c_1 (F_1) ∪ω + (1/72) ∫𝒫(ω) ,
where c_1(F_1) ∈ H^2(M;ℤ) is the first Chern class of the hypercharge gauge field represented by [12π F_1]. The different embedding (<ref>) leads to positive sign for the first term for n_1^frac, but is otherwise identical after setting ℓ_d = 1.
In the next section we will see how the fractional instanton charges induced by the quotient 𝒵 enters the dynamics of axion physics.
§ AXIONS AND TOPOLOGICAL MIXING
Now that we have analyzed how a topological mixing influences the possible representations and leads to more general gauge backgrounds that can carry fractional instanton numbers we analyze how this influences the behavior of an axion field.
For us, an axion field a is a real, pseudoscalar particle that is periodic
a ∼ a + 2 π ,
Its kinetic term in differential form notation, see <cit.> for an introduction, is given by
ℒ^a_kin = (1/2) f^2 da ∧∗ da ,
with dimension-full parameter f known as the axion decay constant. The canonically quantized axion field, of mass dimension 1, would therefore be fa and have periodicity 2 π f. Importantly, the axion couples topologically to the instanton density of the gauge groups, for which we include the dark sector. This coupling is given by
ℒ^a_top = i a/8 π^2( κ_3 tr(F_3 ∧ F_3) + κ_2 tr (F_2 ∧ F_2) + κ_1 F_1 ∧ F_1 + κ_d tr(F_d∧ F_d) ) ,
with the topological coupling constants κ_i. This interaction must not violate the periodicity condition for the axion (<ref>), which can be interpreted as a gauged shift symmetry. More specifically, thinking of a as a real field it has ℝ shift symmetry and by gauging a discrete subgroup 2πℤ⊂ℝ one imposes a periodicity condition.
For G = G_SM×G_d, i.e., trivial quotient, this simply implies that the constants κ_i are all integers since all instantons are integer quantized. However, once there is non-trivial topological mixing of the gauge theory factors, there are fractional contributions to the instanton numbers, leading to a correlation of the quantization condition of the coupling constants κ_i. In <cit.> this was used to show that the axion-photon coupling takes a minimal value that depends on the topological mixing in the Standard Model. Here, we include the influence of mixing with a dark sector gauge group and find that the constraints are modified.
§.§ Quantization of topological terms
In the following we analyze the quantization condition for the constants κ_i in the presence of non-trivial topological mixing involving a non-Abelian dark sector gauge group. Thus, we assume a gauge group of the form
G = G_SM×G_d/𝒵 ,
with a choice of embedding specified by (ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d). As discussed in Section <ref> this embedding identifies a linear combination of the 1-form symmetry background fields which are summed over in the partition function, gauging the corresponding subgroup of the center 1-form symmetries. Assuming for simplicity that 𝒵 = ℤ_k has a single factor, which can be easily generalized, the dynamical gauge field takes the general form
(B_3 , B_2 , B_1 , B_d) = (ℓ_3 ω_3 , ℓ_2 ω_2 , ℓ_1 ω , ℓ_d ω_d) ,
with ω∈ H^2(M;𝒵) and ω_i its mod m_i reductions as discussed around (<ref>). The last entry ω_d is the reduction of ω mod the order of 𝒵_d, with the usual caveat for Spin(4n) with its two factors of the center. With this one can evaluate the fractional contribution to the instanton number and their coupling to the axion
ℒ^a_top,frac = a ( α_3 𝒫 (ω_3) + α_2 𝒫(ω_2) + α_1 𝒫(ω) - α̃_1 ω∪ c_1(F_1) + α_d 𝒫(ω_d) )
The fractional prefactors α_i depend on the specific form of the embedding and are given by
α_3 = (ℓ_3^2/3) κ_3 mod 1 , α_2 = (ℓ_2^2/4) κ_2 mod 1 , α_1 = (ℓ_1^2/(2 k^2)) κ_1 mod 1 , α̃_1 = (ℓ_1/k) κ_1 mod 1 ,
and for the dark sector
α_d = c^fracℓ_d^2 κ_d ,
where c^frac is the fractional coefficient in Table (<ref>), which depends on the choice of 𝔤_d. Since the axion periodicity is fixed to 2π the coupling to the topological quantities in (<ref>) has to respect that.[If this is not the case this can be interpreted as a mixed anomaly between the gauged 2πℤ shift symmetry of the axion and the gauged center 1-form symmetries presenting an inconsistency <cit.>. This is also common for topological terms involving higher form fields in larger dimensions, see, e.g., <cit.>.] This leads to the quantization condition
∫( α_3 𝒫 (ω_3) + α_2 𝒫(ω_2) + α_1 𝒫(ω) - α̃_1 ω∪ c_1 (F_1) + α_d 𝒫(ω_d) ) ∈ℤ ,
for all backgrounds of the form (<ref>) which are summed over. However, note that not the individual pieces but only their linear combination has to satisfy this constraint.
Let us first set α_d to zero and reproduce the known result for 𝒵 = ℤ_6, (ℓ_3 , ℓ_2 , ℓ_1) = (1 , 1 , 1), in the Standard Model. One has
∫( (1/3) κ_3 𝒫(ω_3) + (1/4) κ_2 𝒫(ω_2) + (1/72) κ_1 𝒫(ω) - (1/6) κ_1 ω∪ c_1(F_1) ) ∈ℤ .
Note that for this condition to be satisfied κ_1 needs to be a multiple of 6 and one has
κ_1 ∈ 6 ℤ ,
which makes the integral of the last term in (<ref>) an integer. Furthermore the term
(1/72) κ_1 ∫𝒫(ω) ,
is well-defined modulo integers since κ_1/72 ∈ (1/12) ℤ and ∫𝒫(ω) is defined mod 12. Integer instantons in the non-Abelian group factors, which are present for any value of 𝒵, further provide the conditions
κ_3 , κ_2 ∈ℤ .
The remaining condition can be deduced by taking an integer lift of ω which we always assume to exist, see however footnote <ref>, i.e., ω_ℤ∈ H^2 (M;ℤ) with
∫_C ω_ℤ mod 6 = ∫_C ω , for all C ∈ H_2 (M; ℤ) ,
and notice that this is also a good integer lift of ω_i
∫_C ω_ℤ mod n = ∫_C ω mod n = ∫_C ω_i , for all C ∈ H_2 (M; ℤ) .
With this we find
( (1/3) κ_3 + (1/4) κ_2 + (1/72) κ_1 ) ∫ω_ℤ∪ω_ℤ∈ℤ .
Recalling that ω_ℤ∪ω_ℤ always integrates to an integer number on Spin manifolds the remaining quantization condition for the topological couplings κ_i is given by
(1/3) κ_3 + (1/4) κ_2 + (1/72) κ_1 ∈ (1/2) ℤ ↔ 24 κ_3 + 18 κ_2 + κ_1 ∈ 36 ℤ .
This precisely reproduces the result of <cit.>.
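The equivalence of the two forms of this condition is easy to confirm by a brute-force scan over small couplings; the sketch below is ours and purely illustrative:

from fractions import Fraction as F

for k3 in range(-3, 4):
    for k2 in range(-3, 4):
        for k1 in range(-72, 73, 6):                       # k1 already a multiple of 6
            lhs = (F(k3, 3) + F(k2, 4) + F(k1, 72)) % F(1, 2) == 0
            rhs = (24 * k3 + 18 * k2 + k1) % 36 == 0
            assert lhs == rhs
print("both forms of the quantization condition agree")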
After the inclusion of 𝔤_d this quantization can change, which we will demonstrate for 𝒵 = ℤ_6 and general dark sector gauge algebra. The embedding is defined by the parameters (ℓ_3 , ℓ_2 , ℓ_1 , ℓ_d). As above, integer instantons in all of the non-Abelian gauge groups give rise to the condition
κ_3 , κ_2 , κ_d∈ℤ ,
leaving a single condition from fractional instantons. For cases with 𝒵 having several factors one gets a condition for each generator of 𝒵. It reads
∫( (ℓ_3^2/3) κ_3 𝒫(ω_3) + (ℓ_2^2/4) κ_2 𝒫(ω_2) + (ℓ_1^2/72) κ_1 𝒫(ω) - (ℓ_1/6) κ_1 ω∪ c_1 (F_1) + α_d 𝒫(ω_d) ) ∈ℤ .
For this to possibly be an integer we need to demand that
κ_1 ∈ (6/gcd(ℓ_1, 6)) ℤ ,
which also ensures that the κ_1 𝒫(ω) term is well-defined modulo integers. Lifting ω and ω_i to H^2(M; ℤ) the consistency condition is
( (ℓ_3^2/3) κ_3 + (ℓ_2^2/4) κ_2 + (ℓ_1^2/72) κ_1 + α_d) ∫ω_ℤ∪ω_ℤ∈ℤ ,
which we express as
(ℓ_3^2/3) κ_3 + (ℓ_2^2/4) κ_2 + (ℓ_1^2/72) κ_1 + α_d ∈ (1/2) ℤ ↔ 24 ℓ_3^2 κ_3 + 18 ℓ_2^2 κ_2 + ℓ_1^2 κ_1 + 72 α_d ∈ 36 ℤ .
To go through the various possibilities, we provide a list of α_d for all 𝔤_d and embeddings of ℤ_6 specified by ℓ_d on Spin manifolds
[ gauge algebra 𝔤_d gauged subgroup of 𝒵_d ℓ_d α_d;
𝔰𝔬(2n+1) ℤ_2 1 (1/2) κ_d;
𝔰𝔬(4n) ℤ_2 (1,0) (n/4) κ_d;
ℤ_2 (0,1) (n/4) κ_d;
ℤ_2 (1,1) (1/2) κ_d;
𝔰𝔬(4n+2) ℤ_2 2 (1/2) κ_d;
𝔰𝔭(n) ℤ_2 1 (n/4) κ_d;
𝔢_6 ℤ_3 1 or 2 (2/3) κ_d;
𝔢_7 ℤ_2 1 (3/4) κ_d; ]
Since α_d is only relevant modulo 1/2 we find that the only non-trivial embeddings happen for 𝔰𝔬(4n), 𝔰𝔭(n) for odd n as well as 𝔢_6 and 𝔢_7. The most flexible embeddings happen for 𝔰𝔲(n) dark sector gauge groups for which we find (up to 𝔰𝔲(6))
[ gauge algebra 𝔤_d gauged subgroup of 𝒵_d ℓ_d α_d;
𝔰𝔲(2) ℤ_2 1 (1/4) κ_d;
𝔰𝔲(3) ℤ_3 1 or 2 (1/3) κ_d;
𝔰𝔲(4) ℤ_2 2 (1/2) κ_d;
𝔰𝔲(6) ℤ_2 3 (3/4) κ_d;
ℤ_3 2 (2/3) κ_d;
ℤ_6 1 or 5 (5/12) κ_d; ]
One sees that the additional contribution of κ_d generated by the topological mixing with the dark sector changes the quantization conditions.
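As a small illustration of this shift, the sketch below (ours; it assumes the embedding (ℓ_3, ℓ_2, ℓ_1, ℓ_d) = (1, 1, 1, 1) and the 𝔰𝔲(3) value α_d = κ_d/3 from the table) lists the κ_1 compatible with the condition above for fixed κ_2, κ_3, κ_d:

from fractions import Fraction as F

def allowed_k1(k3, k2, k_d, alpha_d_coeff=F(1, 3)):
    alpha_d = alpha_d_coeff * k_d
    return [k1 for k1 in range(-36, 37, 6)
            if (24 * k3 + 18 * k2 + k1 + 72 * alpha_d) % 36 == 0]

print(allowed_k1(1, 1, 0))    # no dark contribution: [-6, 30]
print(allowed_k1(1, 1, -1))   # dark su(3) with kappa_d = -1: [-18, 18]

For κ_d = 0 one recovers κ_1 ≡ -6 mod 36, while κ_d = -1 shifts this to κ_1 ≡ 18 mod 36, which is what enters the axion-photon coupling below.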
In the next section we explore how this can influence the axion-photon interactions.
§.§ Axion-photon coupling
At energies below electro-weak symmetry breaking the weak SU(2) and the hypercharge U(1)_Y combine to the U(1) of electro-magnetism, whose field strength we denote by F. The topological part of the action below this breaking scale is given by
ℒ'_a,top = i a ( N/4 π^2 tr (F_3 ∧ F_3) + E/8 π^2 F ∧ F + κ_d/8 π^2 tr (F_d∧ F_d) ) ,
where the constants N and E are often used in phenomenological discussions <cit.> and are defined in terms of the κ_i as
N = (1/2) κ_3 , E = (1/36) (κ_1 + 18 κ_2) .
From this one can also determine the axion-photon coupling which is given by (α is the fine-structure constant)
g_aγγ = (α N/(π f)) ( E/N - 1.92(4) + ρ) .
Note that the numerical result 1.92 arises from the mixing of the axion with the neutral pion, π_0, see the derivation in <cit.>. The particular realization of the dark sector might alter this numerical coefficient. However, since at leading order this mixing proceeds via Standard Model fields and we do not alter the electro-weak symmetry breaking, these effects will be suppressed with respect to the result stated above. Since further we do not want to include sensibility of the particular dark sector spectrum, we simply parameterize this modification with the constant ρ, which we expect to be sub-leading.
The important conclusion of (<ref>), as pointed out in <cit.>, is that there is a minimal axion-photon coupling, since one has to cancel the mixing contribution (-1.92(4) + ρ) with a quantized E/N, which cannot be done with arbitrary precision. For example for the ℤ_6 quotient in the Standard Model from (<ref>) one has
E/N = (κ_1 + 18 κ_2)/(18 κ_3) = (- 4 κ_3 + 6 m)/(3 κ_3) , with m ∈ℤ .
Setting κ_3 = 1, which avoids potential domain wall problems, see Section <ref> below, the best cancellation with the pion mixing is achieved for m = 2, leading to E/N = 8/3 and
g_aγγ^min≃α/2 π f 0.72(4) ,
and one obtains a lower bound of this coupling.
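This minimization over the integer m is easy to reproduce explicitly; the sketch below (ours) scans m for κ_3 = 1 and returns the quoted E/N = 8/3:

from fractions import Fraction as F

k3, target = 1, F(192, 100)    # 1.92 as an exact fraction
best_m = min(range(-10, 11), key=lambda m: abs(F(6 * m - 4 * k3, 3 * k3) - target))
E_over_N = F(6 * best_m - 4 * k3, 3 * k3)
print(best_m, E_over_N, float(E_over_N - target))   # -> 2  8/3  ~0.747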
After the inclusion of the topological mixing with the dark sector, which we take to be induced by 𝒵 = ℤ_6 as above, one instead finds the relevant quantities to be
E/N = (κ_1 + 18 κ_2)/(18 κ_3) = (6 m - 12 α_d + 3(ℓ_1^2 - ℓ_2^2) κ_2 - 4 ℓ_3^2 κ_3)/(3 ℓ_1^2 κ_3) , with m ∈ℤ ,
where we used (<ref>) to eliminate κ_1. We already see that depending on the topological couplings κ_i and the embedding parameters ℓ_i this equation can be tuned more finely and the minimal value for the axion-photon coupling can in turn be reduced significantly.
We exemplify this for 𝔤_d = 𝔰𝔲(3) with the embedding given in (<ref>), in which case α_d = (ℓ_d^2/3) κ_d = (1/3) κ_d and one finds
E/N = (6m - 4 κ_d - 4 κ_3)/(3 κ_3) ,
again setting κ_3 = 1 one can obtain a significantly better cancellation in cases where κ_d = - 1 for which we can choose m = 1 to find
E/N = 2 ,
and the minimal value of the axion-photon coupling is given by
g_a γγ^min,d≃α/2 π f( 0.07(6) + ρ) .
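The improved cancellation can again be found by a small scan; the sketch below (ours) restricts to |κ_d| ≤ 1 purely for illustration and recovers E/N = 2:

from fractions import Fraction as F

k3, target = 1, F(192, 100)
candidates = [(m, kd) for m in range(-5, 6) for kd in (-1, 0, 1)]
m, kd = min(candidates,
            key=lambda mk: abs(F(6 * mk[0] - 4 * mk[1] - 4 * k3, 3 * k3) - target))
E_over_N = F(6 * m - 4 * kd - 4 * k3, 3 * k3)
print(m, kd, E_over_N, float(E_over_N - target))   # -> 1  -1  2  0.08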
We see that while the direct dynamical consequences of a topological mixing with the dark sector are rather mild, it can have interesting consequences for the axion sector, which is sensitive to these topological properties. In fact, assuming the corrections ρ are sub-leading, the axion-photon coupling can be reduced by an order of magnitude in the presence of a dark 𝔰𝔲(3) with non-trivial mixing with the Standard Model fields. Note further that even in situations where ρ modifies the Standard Model calculations the additional parameter κ_d leads to more freedom in tuning the parameters in order to achieve a better cancellation of the terms among each other.
§ SYMMETRIES AND DEFECTS
Systems with axionic degrees of freedom are very rich in terms of their symmetries. In particular they realize so-called higher-group structures, where various higher-form symmetries mix among each other <cit.>, and non-invertible symmetries, which are symmetries that do not follow a group law <cit.>. Some of these symmetries still exist after the coupling to the dark sector. Moreover, since the axion is a periodic scalar field there can be configurations with a non-trivial winding number, i.e., axion strings. Due to the topological coupling the axion string needs to host localized degrees of freedom accounting for the anomaly inflow from the bulk <cit.>. For non-trivial coupling to the dark sector these localized degrees of freedom are also charged under the dark gauge group.
We will briefly discuss these properties of the symmetries and defects in the system in the following.
§.§ The symmetries of the axion system
In the absence of the topological coupling (<ref>) there are four different types of symmetries. The axion (<ref>) possesses a U(1) 0-form shift symmetry
a → a + σ , with σ∈ [0, 2 π) .
and a winding U(1) 2-form symmetry, which measures the winding number of the axion string configuration.
The corresponding currents are
∗ j_shift = i f^2 ∗ da , ∗ j_wind = (1/2π) da
which are conserved, i.e., d ∗ j = 0, prior to coupling of the axion to the gauge fields. In the modern language we would say that the shift symmetry is related to a 3-dimensional topological operator, which can be written in terms of the conserved current
U_σ (Σ_3) = e^i σ∫_Σ_3∗ j_shift
where Σ_3 is the 3-dimensional submanifold of M on which the topological operator is located, see also <cit.>.
On the other hand, as discussed in <ref> the gauge theory sector contributes an electric center 1-form symmetry. It is the subgroup of
𝒵_G/𝒵 = (ℤ_3 ×ℤ_2 ×U(1) ×𝒵_d)/𝒵 ,
which is not broken by the presence of dynamical charged matter states, as for example provided by the Standard Model matter fields. On the other hand the gauging process introduces a dual magnetic 1-form symmetry 𝒵^∨, the Pontryagin dual of 𝒵, which acts non-trivially on 't Hooft lines that cannot be screened by dynamical magnetic monopoles <cit.>.
In the presence of the interaction (<ref>) the equations of motion are modified and the current for the shift symmetry is not conserved anymore, instead one has
d∗ j_shift = κ_3/8π^2tr(F_3∧ F_3) + κ_2/8π^2tr(F_2∧ F_2) + κ_1/8π^2 F_1∧ F_1 + κ_d/8π^2tr(F_d∧ F_d) ,
and the continuous symmetry is broken. However, it might not be broken completely. To see that, consider a shift (<ref>) and define the modified operator
U_σ (Σ_3) = e^i σ∫_Σ_3 (∗ j_shift - κ_3/2πΩ_3-κ_2/2πΩ_2 - κ_1/8π^2A_1∧ F_1 - κ_d/2πΩ_d) ,
with non-Abelian Chern-Simons 3-forms Ω_i = 1/4πtr(A_i∧ dA_i+2/3A_i^3), which satisfy
dΩ_i = (1/(4π)) tr(F_i∧ F_i) .
Since the Chern-Simons terms are not invariant under large gauge transformations the terms σ κ_i Ω_i are in general not well-defined, even though the resulting operator in (<ref>) is topological. An alternative way to define (<ref>) is to define a 4-dimensional submanifold Σ_4 of M with boundary ∂Σ_4 = Σ_3 and write
U_σ (Σ_3) = e^i σ( ∫_Σ_3∗ j_shift - ∫_Σ_4 ( κ_3/8π^2tr(F_3∧ F_3) + κ_2/8π^2tr(F_2∧ F_2) + κ_1/8π^2 F_1 ∧ F_1 + κ_d/8π^2tr(F_d∧ F_d)) ) .
For that to be a well-defined 3-dimensional topological operator, we need to demand that U_σ does not depend on Σ_4, which means that it should be trivial whenever Σ_4 does not have a boundary. The special values for σ for which this is the case, for given κ_i defines the unbroken part of the U(1) shift symmetry. The requirement reads
σ ∫_Σ_4( (κ_3/(8π^2)) tr(F_3∧ F_3) + (κ_2/(8π^2)) tr(F_2∧ F_2) + (κ_1/(8π^2)) F_1∧ F_1 + (κ_d/(8π^2)) tr(F_d∧ F_d)) ∈ 2πℤ .
This precisely corresponds to the discussion of fractional instantons in Section <ref> and the quantization condition in (<ref>). As we saw there, in the presence of the non-trivial 1-form background ω∈ H^2(M; 𝒵) we can rewrite condition (<ref>) as
σ( κ_3 n_3^int + κ_2 n_2^int + κ_1 n_1^int + κ_d n_d^int +
∫_Σ_4(α_3 𝒫 (ω_3) + α_2 𝒫(ω_2) + α_1 𝒫(ω) - α̃_1 ω∪ c_1(F_1) + α_d 𝒫(ω_d) )) ∈ 2πℤ ,
where we have split the expression into the integral instanton contributions n_i^int and the fractional contributions with α_i defined as in (<ref>). From the consistency of the axion we know that the second line has to be an integer. There is an unbroken shift symmetry in case this integer as well as the prefactors κ_3, κ_2, κ_1/6, and κ_d are divisible by a common integer. The remaining shift symmetry is given by ℤ_p with shifts σ = 2π/p, where p, in the case of 𝒵 = ℤ_6 defined in (<ref>), is given by
p = gcd( κ_3 , κ_2 , κ_1/6 , κ_d , (24 ℓ_3^2 κ_3 + 18 ℓ_2^2 κ_2 + ℓ_1^2 κ_1 + 72 α_d)/36 ) .
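For concrete couplings this gcd is easily evaluated; the sketch below (ours) assumes the embedding (1, 1, 1, 1) with a dark 𝔰𝔲(3), so α_d = κ_d/3, and the coupling choices shown are hypothetical and purely illustrative:

from fractions import Fraction as F
from functools import reduce
from math import gcd

def shift_order(k3, k2, k1, k_d, ells=(1, 1, 1, 1)):
    l3, l2, l1, l_d = ells
    alpha_d = F(l_d ** 2, 3) * k_d                          # dark su(3) value from the table
    last = F(24 * l3**2 * k3 + 18 * l2**2 * k2 + l1**2 * k1, 36) + 2 * alpha_d
    assert k1 % 6 == 0 and last.denominator == 1            # quantization conditions hold
    vals = (k3, k2, k1 // 6, k_d, int(last))
    return reduce(gcd, (abs(v) for v in vals))

print(shift_order(k3=2, k2=2, k1=-36, k_d=-2))   # hypothetical couplings -> p = 2
print(shift_order(k3=1, k2=1, k1=18, k_d=-1))    # the E/N = 2 example above -> p = 1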
As discussed in <cit.>, for other special values of shifts σ it is possible to resurrect part of the broken shift symmetries as non-invertible symmetries. For that one compensates the non-invariance under gauge transformation of the topological operators in (<ref>) by including a topological field theory sector, that couples to the background gauge fields to make the whole system gauge invariant. Consider a U(1) gauge field c living on the topological operators localized on Σ_3 with the topological level k Chern-Simons action
S=ik/4π∫_Σ_3( c∧ dc +2 c ∧ b ) ,
where b = (2 π/k) ω is the U(1) realization of ω∈ H^2 (M;ℤ_k) as discussed in (<ref>). Since c appears only quadratically in the effective action (<ref>), one can integrate it out using its equation of motion dc+b=0. Substituting this back in the action and using the extension to Σ_4 as in (<ref>) one finds
S = -(ik/4π) ∫_Σ_4 b∧ b = -(2π i/(2k)) ∫_Σ_4 𝒫(ω)
compensating precisely for a fractional instanton in one of the gauge sectors. In our case this allows the discussion of a non-invertible shift symmetry with shifts σ = 2 π/q, where now
q = gcd (κ_3 , κ_2 , κ_d) .
Note that the coefficient κ_1 for the Abelian gauge factor does not show up any more, since for U(1) one can recover similarly, utilizing (<ref>) as above, the full shift symmetry, or at least its rational part ℚ/ℤ, see <cit.>. Since the cancellation of the fractional instanton contribution happens separately for each gauge sector, we need Chern-Simons terms of the form above for each gauge group factor, including G_d that allows for fractional instanton contributions, whose gauge fields c_i couple to the non-trivial background ω_i, respectively. In general we notice that in the presence of topological mixing with a dark sector the unbroken invertible and non-invertible shift symmetries are smaller than what would be inferred from the Standard Model couplings alone.
The topological coupling of the axion further modifies the electric 1-form symmetries, as discussed in <cit.>, since it modifies the equation of motion for the gauge field as well. The specific form of the (non-invertible) electric 1-form symmetries, however, depends on the spectrum of dynamical particles, including the exotic matter representations discussed in Section <ref> and is therefore model dependent. For a detailed account of this in the context of the Standard Model we refer to <cit.>.
The 0-form, 1-form, and 2-form symmetries of the system mix into a higher group structure as was analyzed in <cit.>. Once more, the specific realization of this depends on the matter representations of the system and needs to be analyzed on a case by case basis. For the Standard Model the result was derived in <cit.>. This higher group structure is also reflected in the anomaly inflow on various defects of the axion system which we will briefly discuss in the following.
§.§ The defects of the axion system
Theories with axions have extended objects. Among them are codimension-one objects, i.e., domain walls, across which the value of the axion field jumps, and codimension-two objects, axion strings, around which the axion value winds a → a + 2π. Comparing this behavior of the axion field with the definition of the generalized symmetries in (<ref>) we see that dynamical axion strings break the 2-form winding symmetry.
As discussed above, if q = gcd(κ_2, κ_3, κ_d) > 1, there is an exact ℤ_q symmetry which, even though it could be non-invertible, still acts on axions as the standard shift symmetry a → a + 2π/q. This implies that the axion potential has to be 2π/q-periodic. Another way to argue for this 2π/q-periodicity of the potential is by considering instanton contributions to the computation of the axion potential: the axion potential is expected to come from summing over all instantons, weighted by phases e^iκ_i n_i a for integral instanton numbers n_i. The Abelian and fractional instantons do not contribute to the axion potential for Minkowski spacetime <cit.>, since they need non-trivial spacetime topology.[This changes once one includes topology changes that are expected to occur for quantum gravity.] Thus, below the electro-weak phase transition the only contributions come from the strong interactions, SU(3), and potentially the dark sector G_d in case it confines. If it does, one expects that there will be contributions of order Λ_d^4 to the potential and the potential has approximate 2π/gcd(κ_3,κ_d)-periodicity imposed by the non-invertible shift symmetry. Therefore there are degenerate vacua and domain walls in case gcd(κ_3, κ_d) > 1. If, on the other hand, the dark sector gauge group is broken the axion potential is approximately 2π/κ_3-periodic and one obtains |κ_3| vacua between which the axion domain walls interpolate. In both cases the vacuum expectation value of the axion spontaneously breaks the non-invertible shift symmetry <cit.>.
This further demonstrates the close relation between axion domain walls and the symmetry operators for the shift symmetry. As we have seen, in case the shift symmetry is non-invertible this requires the existence of a topological field theory sector on the worldvolume of the domain wall. These topological field theories, related to the fractional quantum Hall effect, contain anyons which feel the influence of the background fields of both the Standard Model and dark sector 1-form symmetries. This can also be interpreted in terms of an anomaly inflow of the 1-form symmetries, see, e.g., <cit.>.
There is also anomaly inflow, this time of the 0-form gauge symmetries, in the presence of axion strings <cit.>, see also <cit.> for a recent discussion. This implies that the string hosts local degrees of freedom that cancel this contribution. To see this inflow explicitly we can perform a 0-form gauge transformation and 1-form transformations in the presence of an axion string localized on a 2-dimensional manifold Σ_2. This means that the value of the axion field on paths winding around this surface undergoes the monodromy a → a + 2 π. Locally, one can write that as
d^2a= 2 π δ_Σ_2 ,
where δ_Σ_2 is a 2-form which is localized on Σ_2, i.e., the Poincaré dual of Σ_2, for a more explicit treatment with bump functions see <cit.>. Considering the topological interaction of the axion (<ref>), we find locally
ℒ^a_top = - i da ∧( (κ_3/2π) Ω_3 + (κ_2/2π) Ω_2 + (κ_1/8π^2) A_1 ∧ F_1 + (κ_d/2π) Ω_d) .
Under infinitesimal gauge transformations λ_i the action transforms as
Δ S = - ∫_M (i/8π^2) da ∧( κ_3 tr(d λ_3 ∧ F_3) + κ_2 tr(d λ_2 ∧ F_2) + κ_1 d λ_1 ∧ F_1 + κ_d tr(d λ_d∧ F_d) ) .
Integrating this expression by parts and using (<ref>) one finds a gauge variation localized to the worldvolume of the axion string Σ_2
Δ S = - ∫_Σ_2 (i/4 π) ( κ_3 tr(λ_3 F_3) + κ_2 tr(λ_2 F_2) + κ_1 λ_1 F_1 + κ_d tr(λ_d F_d) ) ,
due to anomaly inflow. This needs to be cancelled by the existence of charged chiral fields on Σ_2. Assuming the existence of two-dimensional chiral fermions ψ transforming in the representation 𝐑 = (𝐑_3 , 𝐑_2 , 𝐑_d)_q of G they contribute to the gauge variation due to their perturbative anomaly as
Δ S_f = ∫_Σ_2∑_𝐑 (i/4π) ( dim(𝐑_2,𝐑_d) I(𝐑_3) tr (λ_3 F_3) + dim(𝐑_3,𝐑_d) I(𝐑_2) tr(λ_2 F_2 ) + dim(𝐑_3,𝐑_2) I(𝐑_d) tr(λ_d F_d) + dim(𝐑_3, 𝐑_2, 𝐑_d) 36 q^2 λ_1 F_1 ) ,
where I(𝐑) is the Dynkin index such that tr (T^a_𝐑 T^b_𝐑) = (1/2) I(𝐑) δ^ab and the sum runs over representations of all chiral fermions on Σ_2. Note that these localized fermions can also arise as localized zero modes of Dirac fermions in four dimensions and therefore might be sensitive to even heavy exotic matter particles, see also <cit.>. Demanding that
Δ S + Δ S_f = 0 ,
one finds the consistency conditions
∑_𝐑dim(𝐑_3, 𝐑_2, 𝐑_d)/dim(𝐑_i) I(𝐑_i) = κ_i , 36 ∑_𝐑dim(𝐑_3, 𝐑_2, 𝐑_d) q^2 = κ_1 ,
for non-Abelian and Abelian variations, respectively. Therefore, we see that if κ_d does not vanish the axionic strings necessarily host chiral degrees of freedom that are charged under the dark sector.
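A toy example illustrates the bookkeeping. The sketch below (ours) assumes a hypothetical localized spectrum containing only singlets and fundamentals, with Dynkin indices I(fund) = 1 and I(singlet) = 0 in the normalization above, and evaluates the κ's such a spectrum would support:

from fractions import Fraction as F

# hypothetical chiral modes on the string: (dim3, dim2, dim_d, hypercharge q, (I3, I2, Id))
spectrum = [
    (3, 1, 1, F(1, 6), (1, 0, 0)),   # a colour-triplet mode
    (1, 1, 3, F(0),    (0, 0, 1)),   # a dark-triplet mode
]

def inflow_kappas(spec):
    k3 = sum(F(d3 * d2 * dd, d3) * I[0] for d3, d2, dd, q, I in spec)
    k2 = sum(F(d3 * d2 * dd, d2) * I[1] for d3, d2, dd, q, I in spec)
    kd = sum(F(d3 * d2 * dd, dd) * I[2] for d3, d2, dd, q, I in spec)
    k1 = 36 * sum(d3 * d2 * dd * q ** 2 for d3, d2, dd, q, I in spec)
    return k3, k2, kd, k1

k3, k2, kd, k1 = inflow_kappas(spectrum)
print(k3, k2, kd, k1)   # -> 1 0 1 3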
Similarly, we can perform a 1-form gauge transformation in the topological action. Let us perform this analysis explicitly with the example of a single gauge factor SU(n) on which 𝒵 = ℤ_k acts non-trivially. Expressing the H^2(M;𝒵) gauge field as continuous b, see (<ref>), one can rewrite the topological coupling using (<ref>) as
ℒ^a_top ⊃ (i κ/8 π^2) a tr((F'-b 1) ∧ (F'-b 1)) = (i κ/8 π^2) a ( tr(F' ∧ F') - n b∧ b ) ,
where F' is a U(n) field strength as in (<ref>). Moreover, locally we can express the closed 2-form field b in terms of a U(1) 1-form gauge field c as
k b = d c .
The 1-form gauge symmetry transformations are then implemented by
A' → A' + Λ1 , b → b + d Λ , c → c + k Λ ,
with 1-form gauge parameter Λ, see <cit.>. After expressing b in terms of c we can aim to use the same trick as for the 0-form symmetries above, i.e., integrating by parts, then doing the gauge variation and integrating by parts once more to find the localized anomaly. However, we find that after integration by parts
S ⊃ - (i κ/2π) ∫ da ∧( Ω - (n/(4 π k^2)) c ∧ dc )
the action is invariant under 1-form gauge transformations and there is no inflow of an anomaly for the 1-form symmetries at the local level. Similarly, with a product of multiple simple gauge groups as in our setup, one has a cancellation for each separate gauge sector, analogous to (<ref>).
Once more this analysis shows how rich the axion model is in terms of its symmetries, anomalies, and defects.
§ PHENOMENOLOGICAL CONSEQUENCES
The fact that the axion now couples to more than the Standard Model gauge groups also has consequences for the phenomenological implications, which we want to mention in the following. This is particularly important for the confining scenario, since it leads to major modifications of the axion potential of order Δ V_a∼Λ_d^4, where Λ_d is the model-dependent dynamical scale of the dark sector. In the scenario of a broken dark sector gauge group all corrections are suppressed by the symmetry breaking scale Γ_d and therefore can be made small.
§.§ Solution to the strong CP problem
Famously, the QCD axion offers a dynamical solution to the strong CP problem <cit.>, i.e., its vacuum expectation value minimizes the CP-violation generated by a bare QCD θ-angle, whose effective value is experimentally constrained to be
θ≲ 10^-10 .
This works if the axion potential is generated by the strong QCD dynamics but can be modified after the inclusion of further interactions, this is also known as the axion quality problem, see, e.g., <cit.>.
For a broken dark sector gauge group the corrections are suppressed by Γ_d and it depends on the details of the setup, whether the corrections are small enough to allow for a dynamical solution for the strong CP problem. For a confining dark sector the situation is significantly worse, since there will be corrections to the axion potential of order Λ_d^4, for Λ_d a strong coupling scale of the dark sector, which will generically shift the minimum of the potential away from θ = 0. The only scenario for which this does not seem to happen is if the contributions to the axion potential from both sectors align in such a way that the minimum remains at the CP preserving value for QCD. For example this would require an alignment in the bare theta angles θ and θ_d to a very high precision, which seems not to be natural without additional symmetries demanding that.
§.§ Axion domain wall problem
As discussed above, if the axion only couples to the Standard Model, there are stable axion domain walls for κ_3 > 1. This is due to the fact that the axion potential has periodicity 2 π/κ_3 and thus there are |κ_3| distinct minima. Axion profiles that interpolate between two of such minima form stable domain walls.[These domain walls become unstable once the total shift in the axion value is a multiple of 2π for which they can form holes bounded by axion strings.] The energy density of a single domain wall is so high that it would lead to inconsistencies in the cosmological evolution of our universe, which is called the axion domain wall problem <cit.>, which also can be phrased in terms of domain wall networks see, e.g., <cit.>. A solution to this problems is to set κ_3 = 1, which we have also done for the analysis in Section <ref>, in which case there are no stable domain walls. Alternatively, one can dilute the density of these objects by mechanisms like inflation, if they form early enough.
Once there is a confining dark sector there are important further contributions to the axion potential and one finds various parameter regimes according to the hierarchy between the two confinement scales Λ_QCD and Λ_d:
* Λ_QCD≫Λ_d: In this regime the axion potential is dominated by the dynamics of the strong force. This also applies for the number of (approximate) minima and we expect to have a potential domain wall problem for κ_3 > 1. Nevertheless the dark sector contributions to the potential generically lift the individual minima, which might extend the allowed parameter space for phenomenologically viable models.
* Λ_d≫Λ_QCD: The axion potential is dominated by the dark sector potential and therefore κ_3 > 1 does not necessarily lead to a domain wall problem. However, there can be a domain wall problem in case κ_d > 1.
* Λ_QCD∼Λ_d: In this regime the form of the potential depends on many of the details, such as the bare θ angles of the system, and one needs to analyse the specific models.
We see that the presence of a dark sector can weaken the axion domain wall problem and open up new parameter regimes. It even offers a solution to the axion domain wall problem without requiring κ_3 = 1.
§.§ Dark matter
Axions have also appeared as a viable dark matter candidate <cit.>, but it is not clear what fraction of dark matter is formed by axions. This contribution of axion dark matter will be highly sensitive to the specific scenario and therefore requires a model-dependent analysis; however, the presence of a dark sector typically opens up new interesting regions of parameter space. Moreover, in the case of a confining dark sector one further has natural other candidates for dark matter at mass scales set by Λ_d, which can correspond to dark baryons, or dark glueballs in the absence of matter states, <cit.>. Of course for that it is important that the confined dark objects making up dark matter are neutral under the Standard Model gauge group and are produced appropriately.
Summarizing, the presence of the dark sector poses a challenge for the dynamical solution of the strong CP problem, but might open up new phenomenologically interesting regimes for viable model building.
§ CONCLUSIONS
In this manuscript we explored several effects of having a non-Abelian dark sector gauge group mixing topologically with the Standard Model fields. The specification of the global form of the gauge group imposes restrictions on the allowed matter representations and therefore influences the spectrum of potential exotic matter particles. Moreover, the topological mixing extends the allowed gauge field configurations which need to be included in the partition function; these include fractional instantons that influence the dynamics of axion fields coupling to the instanton density. The periodicity of the axion leads to quantization conditions on these couplings, which in turn modifies the possible values of the axion-photon coupling. In this way the presence of a dark sector can lead to a significant reduction of such a coupling compared to the Standard Model analysis, which opens up a larger parameter space. Finally, the topological mixing also affects the symmetries and defects of the axion system, which in turn influences its phenomenological implications.
The setup we study allows for various generalizations. For example one could include several dark sector gauge groups that mix topologically with the Standard Model. This would further modify the quantization conditions and can even further reduce the axion-photon coupling. Nevertheless, since the fractional instanton numbers are rather constrained, we expect that mixing with a single gauge group already captures most of the allowed reduction. Of course the freedom to let part of the dark sector confine and break another part further increases the flexibility in phenomenologically interesting scenarios.
A more drastic modification is the inclusion of more than one axion field, which is well motivated from a string theory perspective, see, e.g., <cit.>. In this case the periodicity condition for each of the axion fields will lead to quantization conditions for its topological coupling. If the dark sector confines at a high scale, one can integrate out the heavy axion combination coupling to its instanton density, with the other light linear combinations only coupling to the Standard Model gauge fields. Since κ_d for the remaining axion vanishes, they are only sensitive to the topological mixing within the Standard Model gauge group, consistent with decoupling.[We thank Matt Reece for emphasizing this aspect.] Moreover, the now larger dimensional field space for the axion potential allows for a richer structure of minima and might save axions as a dynamical solution to the strong CP problem. Thus, it would be interesting to see the effect of topological mixing, which is very common at least in a large number of dimensions <cit.>, see also <cit.>, in explicit string theory constructions with axions, which also allows for a statistical analysis <cit.>. This might also shed some light on the fate of the rich variety of generalized symmetries of axion systems in the context of a quantum gravity theory.
While in this work we focused on the general properties and implications of topological mixing with a dark sector, the next step would be to analyze promising specific models in more detail. This includes an estimation of the dark matter abundance in a confined dark sector scenario <cit.> as well as estimates for the stability of axion domain walls <cit.>. Also a hierarchy of breaking scales, now involving the characteristic energy scales of the dark sector, along the lines of <cit.> offers the possibility of constraints on allowed generalized symmetries for axion systems with viable phenomenology.
Throughout our analysis we have restricted to spacetime manifolds that allow for a Spin structure. While for the Standard model this seems to be the natural choice, one might ask which conclusions change in more general setup. In particular, theories for which spacetime does not have to be orientable, see <cit.>, or in which the Spin structure is replaced by a more general tangential structure that allows for (charged) fermions <cit.> are natural places to look for extensions of the models with Spin structure considered here.
Finally, it would be very interesting to explore a top-down explanation of when the bare θ-angles of different gauge sectors have to be aligned, or whether there exist mechanisms to guarantee such an alignment. Such a mechanism might also point towards alternative solutions to the strong CP problem which does not need the introduction of dynamical matter fields, see, e.g., <cit.>.
§ ACKNOWLEDGEMENTS
MD thanks the ESI in Vienna, in particular the program “The Landscape vs. the Swampland”, for hosting him during part of the time in which this work was completed. MD would like to thank Jakob Moritz for fruitful discussions. We are especially grateful to Matt Reece, for very valuable comments on the draft.
§ FRACTIONAL INSTANTONS FOR EMBEDDING OF SUBGROUPS
In this appendix we investigate the fractional instanton number in situations in which the map from the quotient group 𝒵 into the center of the gauge group factor G_i only produces a subgroup of 𝒵_G_i.
If G_i is non-Abelian the only relevant groups for which 𝒵_G_i admits non-trivial subgroups are, see Table (<ref>),
𝒵_SU(n) = ℤ_n , 𝒵_Spin(4n) = ℤ_2 ×ℤ_2 , 𝒵_Spin(4n+2) = ℤ_4 ,
for which we analyse the fractional instanton numbers individually.
For G_i = Spin(4n) there are three different ℤ_2 subgroups of the center generated by
(1,0) , (0,1) , (1,1)
as elements in ℤ_2 ×ℤ_2. Identifying the background fields of the two ℤ_2 factors with B_L and B_R, and introducing ω∈ H^2 (M;ℤ_2) as a summation index in the partition function, we can use Table (<ref>) to find
(1,0): n^frac = (n/4) ∫𝒫(ω) mod 1 ,
(0,1): n^frac = (n/4) ∫𝒫 (ω) mod 1 ,
which on Spin manifolds can only be fractional, with non-trivial value 1/2, for n ∈{ 1 , 3} mod 4. For n = 1 this has a simple explanation, since Spin(4) ≅SU(2) ×SU(2) and the ℤ_2 factors appear as center symmetries for the SU(2)'s. For the last embedding we find
(1,1): n^frac = (1/2) ∫ω∪ω ,
which is not fractional on Spin manifolds.
For G_i = Spin(4n+2) one has non-trivial subgroup ℤ_2 and the summation is given by the embedding
B = 2 ω ,
where B ∈ H^2(M;ℤ_4) and ω∈ H^2(M;ℤ_2). Assuming ω has a lift to an element in H^2(M;ℤ), denoted by ω_ℤ one has
∫𝒫(2 ω) = 4 ∫ω_ℤ∪ω_ℤ mod 8 ,
which vanishes on Spin manifolds. So there are no fractional instanton numbers for gauge group Spin(4n+2)/ℤ_2.
Finally, we consider G_i = SU(n). For that to have non-trivial subgroup n cannot be prime and we want to consider fractional instanton numbers for gauge group SU(n)/ℤ_k. This happens in case the embedding parameter ℓ_i satisfies
gcd(ℓ_i, n) ≠ 1 ,
and we can identify
k = n/gcd(ℓ_i,n) .
We can define the 1-form symmetry background via an element ℓ_i ω∈ H^2(M; ℤ_n) with ω∈ H^2(M; ℤ_k).
With ℓ_i ω we can use the usual formula as indicated in Table (<ref>)
n^frac = ((n-1)/(2n)) ∫𝒫 (ℓ_i ω) = ℓ_i^2 ((n-1)/(2n)) ∫𝒫 (ω) = (ℓ_i^2/(gcd(ℓ_i,n))^2) (n (n-1)/(2 k^2)) ∫𝒫 (ω) ,
for which the fractional parts match. The first 𝒫 is considered with respect to a ℤ_n class, while the latter two are with respect to a ℤ_k class.[One should lift ℓ_i ω to its integral representative and use co-chains instead of co-cycles to make sense of the transition <cit.>.]
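These subgroup coefficients can be evaluated directly and reproduce the 𝔰𝔲(n) entries of the α_d table in the axion section above; the sketch below (ours) lists ℓ_i^2 (n-1)/(2n) mod 1 for the embeddings appearing there:

from fractions import Fraction as F
from math import gcd

for n, l in [(2, 1), (3, 1), (4, 2), (6, 3), (6, 2), (6, 1)]:
    k = n // gcd(l, n)
    print(f"su({n})/Z_{k}, l = {l}: {F(l * l * (n - 1), 2 * n) % 1}")
# -> 1/4, 1/3, 1/2, 3/4, 2/3, 5/12, matching the su(n) table entries above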
For Abelian G_i = U(1) gauge groups for which the center is the group itself, it is clear that the image of 𝒵 is just a subgroup defined by
𝒵⊃ℤ_n: 1 mod n ↦ e^2 π i ℓ_i/n .
The calculation follows that around (<ref>) and shows that the fractional part of the Abelian instanton is given by
n^frac = ∫( (ℓ/n) c_1(F) ∪ω + (ℓ^2/n^2) 𝒫(ω) ) .
|
http://arxiv.org/abs/2409.02638v1 | 20240904120633 | MADiff: Motion-Aware Mamba Diffusion Models for Hand Trajectory Prediction on Egocentric Videos | [
"Junyi Ma",
"Xieyuanli Chen",
"Wentao Bao",
"Jingyi Xu",
"Hesheng Wang"
] | cs.CV | [
"cs.CV"
] |
§ ABSTRACT
Understanding human intentions and actions through egocentric videos is important on the path to embodied artificial intelligence. As a branch of egocentric vision techniques, hand trajectory prediction plays a vital role in comprehending human motion patterns, benefiting downstream tasks in extended reality and robot manipulation. However, capturing high-level human intentions consistent with reasonable temporal causality is challenging when only egocentric videos are available. This difficulty is exacerbated under camera egomotion interference and the absence of affordance labels to explicitly guide the optimization of hand waypoint distribution.
In this work, we propose a novel hand trajectory prediction method dubbed MADiff, which forecasts future hand waypoints with diffusion models. The devised denoising operation in the latent space is achieved by our proposed motion-aware Mamba, where the camera wearer's egomotion is integrated to achieve motion-driven selective scan (MDSS). To discern the relationship between hands and scenarios without explicit affordance supervision, we leverage a foundation model that fuses visual and language features to capture high-level semantics from video clips. Comprehensive experiments conducted on five public datasets with the existing and our proposed new evaluation metrics demonstrate that MADiff predicts reasonable hand trajectories that are comparable with those of the state-of-the-art baselines, and achieves real-time performance. We will release our code and pretrained models of MADiff at the project page: <https://irmvlab.github.io/madiff.github.io>.
Hand Trajectory Prediction, Egocentric Vision, Mamba, Diffusion Models
MADiff: Motion-Aware Mamba Diffusion Models for Hand Trajectory Prediction on Egocentric Videos
Junyi Ma^†, Xieyuanli Chen^†, Wentao Bao, Jingyi Xu, Hesheng Wang^*
Junyi Ma and Hesheng Wang are with IRMV Lab, the Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China.
Xieyuanli Chen is with the College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China.
Wentao Bao is with ACTION Lab, the Department of Computer Science and Engineering, Michigan State University, MI 48824, U.S.A.
Jingyi Xu is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
^†Equal contribution
^*Corresponding author email: [email protected]
September 9, 2024
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Embodied artificial intelligence requires deep comprehension of human behaviors and flexible techniques, transferring general skills from daily human activities to robotics. Extracting reusable and transferable knowledge from internet-scale human videos is regarded as an efficient way to understand human intentions and actions. Many efforts have been made to achieve action recognition and anticipation <cit.>, temporal action localization <cit.>, gaze prediction <cit.>, hand trajectory prediction <cit.>, object affordance extraction <cit.>, and object interaction anticipation <cit.>. Among them, hand trajectory prediction (HTP) is a comparably challenging task that aims to anticipate how humans will behave in the near future, moving beyond just estimating action categories or gaze direction. This task is valuable for collecting offline data, predefining the action space for robot learning, and assisting human activities in extended reality applications <cit.>.
Considering that humans use egocentric vision to perceive the world and guide daily tasks, several notable convolution- and transformer-based HTP approaches <cit.> have been proposed in recent years to forecast upcoming hand positions with only egocentric videos as inputs.
Despite achieving acceptable prediction results, several challenging problems remain to be solved:
* Camera egomotion guidance has not been seamlessly integrated into the state transition of the HTP process to narrow the motion-related gaps we discovered: 1) Predicting the 3D trajectories of future hand movements directly projected onto the 2D egocentric image plane presents a challenging problem due to spatial ambiguities. There exists a noticeable disparity between the movements observed in 2D pixels and the corresponding 3D physical actions, which can be mitigated by camera egomotion. 2) With the past egocentric video as input, we predict future hand waypoints on a predefined “canvas” such as the image plane of the first observation. However, the past hand positions and scene information within the other frames are observed in different views with respect to the canvas view due to the existence of camera egomotion.
* HTP models are often optimized along with ground-truth object affordances besides hand waypoints <cit.>. This respects the fact that hand trajectories typically interact with active objects based on human intentions as an oracle. Understanding hand movements involves being aware of both hand positions and environmental situations concurrently. However, annotating object affordances is labor-intensive <cit.> compared to labeling hand trajectories. There is no off-the-shelf detector that can automatically and accurately identify the active objects interacted with a hand trajectory, attaining the quality of producing ground-truth.
The previous work <cit.> shows that the performance of the existing detectors varies significantly across the two tasks, next active object detection and hand detection. Therefore, ground-truth object affordances are not always available due to a lack of manual labeling and low-quality automatic annotation.
In the absence of object affordance labels to aid optimization, the inner correlation between hand motion and semantics in observations is hard for HTP models to extract in a manner that aligns with human intentions.
* Causality and motion continuity constraints are often overlooked when trendy convolutions or transformers are supervised only by waypoint displacement. Temporal causality is inherent in both hand motion and the parallel egomotion of the camera wearer, since the hand and body are simultaneously guided by high-level intentions and the movement patterns of the hand are closely linked to those of the body. However, convolution- and transformer-based models <cit.> model the state transition process through hard-to-interpret attention mechanisms and fail to selectively capture temporal causality across the two entangled movement patterns. Moreover, the existing loss functions for constraining trajectory prediction are insufficient to adequately determine the optimization direction of the model in line with the underlying physical model of human hand movements.
To address these existing gaps, we propose MADiff, a motion-aware Mamba diffusion model to predict future hand waypoints on egocentric videos. To overcome the challenge of extracting observation semantics without object affordance labels, we first exploit a foundation model in MADiff to fuse visual and language features in a generalizable manner, thereby capturing high-level semantics from 2D input images without the need for affordance labels. We demonstrate that using a visual grounding model with text guidance as the backbone to generate task-related features from observations significantly enhances hand trajectory prediction, compared to models that are task-agnostic or trained from scratch. Subsequently, we convert both semantic features and past trajectory features to sequential latents. Inspired by the strong generative capability of diffusion models <cit.> in predictive tasks <cit.>, we implement denoising diffusion within the above-mentioned latent space, using the devised Mamba model with motion-driven selective scan (MDSS) to recover future latents conditioned on past sequential features as shown in Fig. <ref>. These reconstructed latents are then transformed into the final predicted hand waypoints. Here, we extend the selective state space models with scan computation (S6) <cit.> by incorporating the camera wearer's egomotion (camera homography) to achieve motion-driven state transition. This helps to fill the motion-related gaps caused by the differing prediction canvas and 2D-3D aliasing, and enhances the explainability of the temporal causality between the entangled
movement patterns. We additionally design a continuous-discrete-continuous (CDC) operation for denoising diffusion combining the strengths of autoregressive (AR) models and iterative non-autoregressive (iter-NAR) models. Furthermore, we propose an effective angle/length supervision strategy for the training paradigm to improve the directionality and stability of predicted hand trajectories. This overcomes the challenge of optimizing HTP models with motion continuity constraints.
In summary, the main contributions of this paper are fourfold:
* We propose MADiff, the pioneering diffusion-based method for predicting hand trajectories, featuring a devised motion-aware Mamba as the denoising model. A novel motion-driven selective scan pattern is tailored to facilitate a suitable state transition in Mamba-based denoising, comprehensively considering both hand motion and camera egomotion patterns to capture temporal causality. Moreover, MADiff bridges autoregressive models and iterative non-autoregressive models, building a novel generative paradigm for hand trajectory prediction.
* We first propose using the fusion of visual and language prompts for semantics extraction on 2D video clips in the realm of hand trajectory prediction. This addresses the challenge of high-level scene understanding due to the absence of affordance labels. Besides, the consistency inherent in deep semantic features also naturally aligns with human intention consistency. By seamlessly integrating the multimodal cues, we lay the foundation for a new scheme of semantic richness in hand trajectory prediction.
* We first emphasize the importance of directionality and stability in the field of hand trajectory prediction. We accordingly design new loss functions for optimization implicitly constrained by physical models of hand motion, leading to more plausible prediction results.
* We conduct comprehensive experiments based on the existing and our proposed new evaluation metrics to demonstrate that MADiff predicts reasonable hand trajectories that are competitive with the state-of-the-art baselines. We also experimentally demonstrate that MADiff has the potential to provide flexible HTP solutions tailored to specific action verbs.
This paper is organized as follows. Sec. <ref> reviews the related works in egocentric vision and some cutting-edge techniques in diffusion models and Mamba. Sec. <ref> introduces the preliminaries of our work. Sec. <ref> details the design of our proposed MADiff. Sec. <ref> showcases the experimental results quantitatively and qualitatively. Finally, Sec. <ref> concludes the paper and provides our insights.
§ RELATED WORK
§.§ Understanding Hand-Object Interaction
Hand-object interaction (HOI) comprehension helps guide the downstream tasks in computer vision and robot systems.
In the early stage, Calway et al. <cit.> establish connections between specific human tasks and corresponding objects, which highlights an object-centric comprehension across diverse interaction modes. In contrast, Liu et al. <cit.> emphasize capturing the dynamic attributes of objects, underscoring the relationship between object-centric interactions and goal-directed human activities. Since then, a growing number of works have contributed to HOI understanding via pixel-wise semantic segmentation <cit.>, bounding-box-wise detection <cit.>, fine-grained hand/object pose estimation <cit.>, and contact field estimation <cit.>. Ego4D <cit.> further provides a standard benchmark that evaluates the understanding of hand-object interaction based on several predefined subtasks. However, only comprehending what has happened to humans and environments (objects) is not enough in many applications, where future possible hand positions or object states are required to plan downstream tasks.
§.§ Predicting Future Hand Trajectories
Given sequential egocentric observations, accurately forecasting future hand positions extends the understanding of human actions and intentions over future time horizons, which is valuable for AR/VR applications and robot manipulation. Although it is technically possible to predict fine-grained hand keypoints by extending existing hand keypoint estimation methods <cit.>, directly forecasting 2D hand waypoints in the near future focuses more on understanding high-level human intentions, which avoids large error accumulation and improves running efficiency compared to predicting multiple complicated keypoints.
FHOI <cit.> samples future hand waypoints through motor attention following a 3D convolutional network, using stochastic units to model the uncertainty. Following its task definition, the object-centric transformer (OCT) <cit.> is further proposed combined with conditional variational autoencoders <cit.>. VRB <cit.> designs an affordance model to simultaneously predict contact point heatmap and post-contact hand trajectories. To additionally capture the uncertainty of predicted trajectories, an uncertainty-aware state space transformer (USST) <cit.> is proposed to model the state transition in the unrolling process. More recently, Diff-IP2D <cit.> builds a new diffusion-based paradigm for hand-object interaction.
Although Diff-IP2D <cit.> attempts to mitigate the negative effect of camera motion, its denoising process, despite integrating motion features, does not follow the specific hand state transition process, leading to a weak awareness of causality in hand trajectory prediction. In contrast, in this work, we propose a motion-aware Mamba with a motion-driven selective scan to achieve a more reasonable denoising process. Moreover, most existing HTP approaches <cit.> need affordance labels such as object contact points to guide the optimization of hand waypoint distribution. We avoid this redundant requirement by utilizing a foundation model to semantically comprehend the relationships between hands and scenarios.
§.§ Generative Paradigm in Egocentric Vision
Generative models have been demonstrated to perform well across multiple subfields of egocentric vision. EgoGAN, proposed by Jia et al. <cit.>, utilizes a Generative Adversarial Network (GAN) to forecast future hand masks conditioned on the encoded video representation and predicted future head motion. Zhang et al. <cit.> also use a GAN-based model to generate future frames and predict their temporal saliency maps, which reveal the probability of gaze locations.
With the advent of diffusion models <cit.>, diffusion-based generative modeling generally beats discriminative and GAN-based modeling in the field of egocentric vision, including egocentric video prediction <cit.>, human mesh recovery <cit.>, 3D HOI reconstruction <cit.>, and 3D HOI synthesis <cit.>. Zhong et al. <cit.> propose a diffusion-based method, namely DiffAnt, for long-term action anticipation. It follows the query-based scheme <cit.> for decoding future embeddings into action labels. Li et al. <cit.> utilize a diffusion model conditioned on the estimated head pose to infer the full-body pose with only egocentric videos as inputs.
In this work, we also propose a diffusion-based generative paradigm for hand trajectory prediction on egocentric videos, combined with the devised Mamba as the denoising model.
§.§ Mamba in Time Series Forecasting
As a trendy state space model (SSM), Mamba <cit.> exhibits a competitive ability to model long-range dependencies while improving computational efficiency compared to transformers <cit.>. It is built upon a selection mechanism and thus has a context-aware ability to compress and propagate effective information in the state transition process. Moreover, Mamba also uses a hardware-aware algorithm for the parallel associative scan. Recently, several Mamba-based methods for time series forecasting have been proposed. For example, SiMBA by Patro et al. <cit.> uses EinFFT for channel modeling and Mamba for token mixing, presenting solid performance on multivariate long-term forecasting tasks. TimeMachine <cit.> combines an inner Mamba and an outer Mamba to address channel-mixing and channel-independence problems simultaneously while selecting global and local contexts at multiple scales.
S-Mamba <cit.> and Bi-Mamba+ <cit.> both consider the bidirectional scan pattern implemented on sequential tokens, breaking the limitation of incorporating antecedent variates.
Compared to these time series forecasting methods designed task-agnostically, in this work, we focus on the specific realm of hand trajectory prediction and develop a novel motion-aware Mamba regarding the characteristics of the hand movements and the camera wearer's egomotion. Moreover, we integrate the devised Mamba blocks into a diffusion process, which builds a novel paradigm bridging autoregressive and iterative
non-autoregressive models, and provides a basic framework for time series forecasting.
Our experiments show that our proposed motion-driven selective scan (MDSS) performs better than the recent bidirectional scan pattern <cit.> for hand trajectory prediction due to the unreasonable inversion of causality and human motion pattern inherent in the bidirectional mechanism (see Sec. <ref>).
§ PRELIMINARIES
§.§ Task Definition
Given the video clip of past egocentric observations ℐ={I_t}_t=-N_p+1^0 and sequential past 2D hand waypoints ℋ^p={H_t}_t=-N_p+1^0 (H_t ∈ℝ^2), our objective is to predict future hand trajectories ℋ^f={H_t}_t=1^N_f (H_t ∈ℝ^2), where N_p and N_f correspond to the number of frames in the past and future time horizons. It can be represented by modeling an unknown joint distribution of future hand waypoints p_Φ(ℋ^f|ℋ^p,Θ) where Φ denotes a predictive model and Θ encompasses additional conditions.
Following the previous works <cit.>, we predict the future positions of both hands on a fixed image plane of the input videos, e.g., the first observed image as the prediction canvas.
Here, we only focus on the 2D predictive task, since past 3D hand trajectories are not always available due to limited sensors. In contrast, 2D hand trajectories can be efficiently extracted using off-the-shelf hand detectors <cit.>. Besides, we argue that internet-scale 2D egocentric video data is more widely accessible than 3D data and is more likely to serve as a shortcut for achieving embodied intelligence.
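For concreteness, the minimal shape-only sketch below (illustrative Python, not part of MADiff itself) spells out the tensors implied by this task definition; the frame resolution and the N_p/N_f values from the Epic-Kitchens setup are assumptions used only for illustration.

```python
# Minimal shape-only sketch (not part of MADiff); resolution and the
# N_p / N_f values from the Epic-Kitchens setup are illustrative assumptions.
import torch

N_p, N_f = 10, 4                               # 2.5 s of past frames, 1.0 s of future waypoints at 4 FPS
past_frames = torch.zeros(N_p, 3, 256, 456)    # past egocentric RGB observations I_t (size assumed)
past_waypoints = torch.zeros(N_p, 2)           # past 2D hand waypoints H_t on the canvas plane
future_waypoints = torch.zeros(N_f, 2)         # targets H_t (t = 1..N_f) to be predicted
```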
§.§ Diffusion Models
The diffusion models <cit.> can progressively corrupt the inputs into noisy features and subsequently recover them based on a devised denoising model.
Here we use its generative capability for predicting future hand trajectories on 2D egocentric videos. We argue that diffusion models can well model highly dynamic patterns inherent in complex distributions of future hand motion. Besides, the HTP iteration limited in the time axis can be extended to a more flexible diffusion denoising process.
Initially, we map the input images and past hand waypoints into a latent space, denoted as 𝐳_0 ∼ q(𝐳_0). This latent representation is then corrupted into standard Gaussian noise, represented as 𝐳_S ∼𝒩(0, 𝐈). During the forward process, the perturbation operation is described by q(𝐳_s|𝐳_s-1) = 𝒩(𝐳_s; √(1-β_s)𝐳_s-1, β_s𝐈), where β_s is the predefined variance scale. In the reverse process, we employ a denoising diffusion model to gradually reconstruct the latent representation 𝐳_0 from the noisy 𝐳_S. The denoised features are then transformed into the predicted future hand trajectories. In this work, we will elaborate on solving the problems of generating reasonable latents, building a novel task-related denoising model, integrating effective denoising guidance, and designing suitable training and inference schemes for diffusion models in the hand trajectory prediction task.
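As a rough illustration of this forward corruption, the following sketch (assumed shapes and a toy linear schedule; the paper later uses a square-root schedule with S = 1000 steps) perturbs a latent step by step according to q(𝐳_s|𝐳_s-1).

```python
# A minimal sketch (not the authors' code) of the forward corruption process:
# each step adds Gaussian noise to the latent z according to a variance schedule beta.
import torch

def forward_diffuse(z0: torch.Tensor, betas: torch.Tensor) -> list[torch.Tensor]:
    """Corrupt a clean latent z0 step by step: q(z_s | z_{s-1}) = N(sqrt(1-b_s) z_{s-1}, b_s I)."""
    zs = [z0]
    z = z0
    for beta in betas:
        noise = torch.randn_like(z)
        z = torch.sqrt(1.0 - beta) * z + torch.sqrt(beta) * noise
        zs.append(z)
    return zs

# Example: a latent sequence of N_p + N_f = 14 tokens with 512 channels (shapes assumed),
# and a toy linear schedule standing in for the square-root schedule used in the paper.
z0 = torch.randn(14, 512)
betas = torch.linspace(1e-4, 2e-2, steps=50)
corrupted = forward_diffuse(z0, betas)
```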
§.§ State Space Models of Mamba
State space model (SSM) of Mamba <cit.>, built upon a selection mechanism, has a context-aware ability to compress and propagate effective information in the state transition process. It utilizes first-order differential equations to link the input and output sequences via hidden states.
Our approach utilizes the discrete version of the continuous-time SSM in Mamba:
A̅ = e^Δ A,
B̅ = (e^Δ A - I) A^-1 B,
h_k = A̅ h_k-1 + B̅ x_k,
y_k = C h_k ,
where A serves as the evolution parameter, B and C act as projection parameters, and Δ is a timescale parameter for the discretization. The structured state space model (S4) <cit.> initializes A
by HIPPO theory <cit.>. Mamba further extends S4 to S6 by forcing B, C, and Δ to be functions of the input.
In this work, we propose naturally utilizing the camera wearer's egomotion information (m_t-1 m_t), i.e., homography egomotion features, to drive the state transition process (h_t-1 h_t) in Mamba, and seamlessly integrate the state space model into a denoising diffusion process, bridging autoregressive and iterative non-autoregressive schemes in the hand trajectory prediction task.
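The discrete recurrence above can be illustrated with the following minimal sketch; it uses fixed A, B, C, Δ, an explicit Python loop, and assumes A is invertible, whereas S6 makes B, C, and Δ input-dependent and relies on a hardware-aware parallel scan, so this is only a conceptual illustration with assumed shapes.

```python
# Minimal sketch of the discretized SSM recurrence: h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k.
import torch

def ssm_scan(x, A, B, C, delta):
    """x: (L, d_in); A: (d_state, d_state), assumed invertible; B: (d_state, d_in); C: (d_out, d_state)."""
    A_bar = torch.matrix_exp(delta * A)                                # e^{Delta A}
    B_bar = (A_bar - torch.eye(A.shape[0])) @ torch.linalg.inv(A) @ B  # (e^{Delta A} - I) A^{-1} B
    h = torch.zeros(A.shape[0])
    ys = []
    for x_k in x:                       # sequential scan along the time axis
        h = A_bar @ h + B_bar @ x_k
        ys.append(C @ h)
    return torch.stack(ys)
```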
§ PROPOSED METHOD
§.§ System Overview
The overall pipeline of our proposed MADiff is illustrated in Fig. <ref>. The inputs for MADiff encompass past sequential egocentric images and 2D hand waypoints within the given video clip, as well as the language description as the proposed text prompt. Tokenizer first generates visual-language features through a foundation model, encodes past hand waypoints to sequential intermediate features with the trajectory encoder, and then fuses them by the fusion module (Sec. <ref>). The output of the tokenizer is the tokenized latents utilized by our proposed motion-aware Mamba (Sec. <ref>) in the devised Mamba-based denoising diffusion model (Sec. <ref>), where we design a motion-driven selective scan to recover the future latents conditioned on the past latents. Ultimately, the trajectory decoder transforms the reconstructed latent features to predicted future hand waypoints. We design new training loss functions and inference operations for MADiff, which can be found in Sec. <ref>.
§.§ Tokenizer
The devised tokenizer of our MADiff contains a foundation model, a trajectory encoder, and a fusion module. It exploits three types of input data: past egocentric video clips, language descriptions, and past 2D hand waypoints. We fuse multimodal cues to represent the observation at each timestamp by the tokenizer and enhance the prediction performance of MADiff, which can also serve as the foundation for a new scheme of semantic richness in the field of hand trajectory prediction.
Foundation Model: Our MADiff exploits a powerful foundation model, the widely-used GLIP <cit.> to generate visual-language fusion features from sequential past observations (as shown in Fig. <ref>). In contrast to existing works <cit.> only using visual inputs, we additionally consider the text prompt 𝚑𝚊𝚗𝚍 when MADiff captures past environment observations and predicts future hand states. The visual grounding ability of GLIP enables our MADiff to semi-implicitly capture hand poses and hand-scenario relationships within each 2D image frame. This guides the optimization of hand waypoint distribution, demonstrated in Sec. <ref>, without the need for affordance supervision required by previous works <cit.>.
We also discovered that the deepest features, averaged over the channel dimension at consecutive timestamps, exhibit potential consistency, as shown in Fig. <ref>, which aligns with the consistency of human intention during the interaction process.
The joint application of the foundation model and language description enhances MADiff's generalization ability and deployment efficiency compared to those using backbones trained on specified HOI datasets from scratch <cit.>, and concurrently holds HTP task specificity in contrast to those using off-the-shelf pretrained backbones <cit.>. Specifically, we extract the outputs of the deepest cross-modality multi-head attention module (X-MHA) in GLIP, which are denoted as the semantic features 𝒳^sem={X^sem_t}_t=-N_p+1^L for hand trajectory prediction. L equals N_f during training and is set to 0 during inference since future observations are unavailable in real deployment and are replaced by sampled noise in the subsequent diffusion models.
Trajectory Encoder and Fusion Module: We use multilayer perceptrons (MLPs) as the trajectory encoder, which converts the sequential 2D hand waypoints ℋ={H_t}_t=-N_p+1^L to intermediate trajectory features 𝒳^traj={X^traj_t}_t=-N_p+1^L in parallel. The fusion module illustrated in Fig. <ref> first adopts 1× 1 convolution as well as a linear projection to adjust the spatial and channel dimensions of 𝒳^sem to match 𝒳^traj, and subsequently uses MLP to fuse adapted 𝒳^sem and 𝒳^traj to ℱ={F_t}_t=-N_p+1^L as tokens for all timestamps t, also as latents for the following devised diffusion process.
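A minimal PyTorch sketch of these tokenizer components is given below; the 256×7×12 GLIP feature size and the 512-dimensional token size follow the configuration reported later in the paper, while the exact layer counts and activations are assumptions.

```python
# Minimal sketch (layer counts/activations assumed) of the trajectory encoder and fusion module.
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    def __init__(self, d_token=512):
        super().__init__()
        self.traj_encoder = nn.Sequential(nn.Linear(2, d_token), nn.GELU(),
                                          nn.Linear(d_token, d_token))
        self.sem_adapt_conv = nn.Conv2d(256, 256, kernel_size=1)   # 1x1 conv on the GLIP feature map
        self.sem_adapt_proj = nn.Linear(256 * 7 * 12, d_token)     # linear projection to the token size
        self.fusion = nn.Sequential(nn.Linear(2 * d_token, d_token), nn.GELU(),
                                    nn.Linear(d_token, d_token))

    def forward(self, waypoints, sem_maps):
        # waypoints: (T, 2) 2D hand waypoints; sem_maps: (T, 256, 7, 12) per-frame GLIP features
        x_traj = self.traj_encoder(waypoints)                      # (T, 512) trajectory features
        x_sem = self.sem_adapt_conv(sem_maps).flatten(1)           # (T, 256*7*12)
        x_sem = self.sem_adapt_proj(x_sem)                         # (T, 512) adapted semantic features
        return self.fusion(torch.cat([x_sem, x_traj], dim=-1))     # (T, 512) fused tokens F_t
```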
§.§ Motion-Aware Mamba
MLPs, convolutional layers, and transformers may struggle to capture the temporal causality inherent in hand movements due to the lack of a state transition with an explicit selective mechanism along the time axis. In MADiff, we instead integrate Mamba <cit.> into the continuous denoising steps to selectively capture the temporal causality. Due to the inherent motion interference/gaps related to prediction canvas and 2D-3D aliasing mentioned in Sec. <ref>, we further integrate egomotion features into the selective scan process of Mamba, leading to the proposed motion-driven selective scan (MDSS):
h_t = A̅ h_t-1 + B̅ [x_t^T, 0]^T + B̅ [0,m_t^T]^T,
y_t = C h_t ,
where x_t denotes the t th fusion tokens in the sequential latents. m_t is the t th egomotion feature transformed from the homography matrix between t th frame and the canvas frame by the homography encoder in Fig. <ref>. To calculate the homography matrix, we first extract SIFT descriptors <cit.> to determine pixel correspondences between two consecutive images from previous observations. Subsequently, we compute the homography matrix using RANSAC <cit.> which seeks a transformation that maximizes the number of inliers among the keypoint pairs.
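The egomotion homography described here can be estimated, for example, with OpenCV as in the following sketch; this is an illustrative pipeline rather than the authors' exact implementation, and the Lowe ratio test and reprojection threshold are assumptions.

```python
# Illustrative OpenCV sketch of SIFT + RANSAC homography estimation between consecutive frames.
import cv2
import numpy as np

def estimate_homography(img_prev: np.ndarray, img_curr: np.ndarray) -> np.ndarray:
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test (assumed)
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H   # 3x3 matrix, later encoded into an egomotion feature m_t by the homography encoder
```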
As can be seen in Eq. (<ref>), we introduce an additional term related to the homography feature m_t to achieve a shift to the original state transition in Eq. (<ref>). This operation corresponds to the intuition that the position of each hand waypoint projected to the fixed image plane (e.g., the one of the last observation) used as prediction canvas equals the position in its original image plane shifted by an additional displacement of egomotion homography, as shown in Fig. <ref>. Besides, it also implicitly transforms hand movement-related features into a more easily predictable latent space through egomotion features, analogous to predicting on the canvas image plane. We therefore concurrently consider the two entangled motion patterns, the hand motion pattern implicit in h_t and the camera egomotion pattern implicit in m_t during state transition following the fact that the hands and body move in a physically coordinated manner.
Eq. (<ref>) can be further rewritten as:
h_t = A̅ h_t-1 + B̅ [x_t, m_t],
where we denote the concatenation of x_t and m_t along the channel dimension as [x_t, m_t] for brevity. Note that we do not use the sum of x_t and m_t here because B̅ can adaptively reweight the two features. Besides, B and C in Eq. (<ref>) and Eq. (<ref>) are also projection functions of the input [x_t, m_t], and thus are also referred to as motion-aware projection matrices. The additional motion-related term in Eq. (<ref>) and matrices B and C being functions of egomotion jointly determine the motion-driven property in our proposed selective scan pattern. Here, we do not let matrix A be a function of egomotion, because it stably encapsulates historical information, solving long-range dependency inherent in sequential past egomotion and other fusion features following the HIPPO theory <cit.>.
Ultimately, the output signals can be computed in parallel by the discrete convolution of the input sequence:
K̅ = (CB̅, CA̅B̅, …, CA̅^N_p+N_f-1B̅),
y = [x, m]*K̅,
where N_p+N_f corresponds to the length of the holistic hand trajectory. Compared to the previous works <cit.>, our proposed motion-aware Mamba with MDSS maintains temporal causality (scanning along the time direction unidirectionally) and simultaneously exhibits reasonable explainability in the state transition process of hand movements while narrowing the inherent gaps caused by egomotion.
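A minimal sketch of MDSS is shown below; for clarity it uses fixed discretized matrices and a plain Python loop, whereas the actual S6 formulation makes B̅ and C input-dependent and computes the scan in parallel, so the shapes and interfaces are assumptions.

```python
# Minimal sketch of the motion-driven selective scan: the egomotion feature m_t is
# concatenated with the token x_t before the state update, as in Eq. above.
import torch

def motion_driven_scan(x, m, A_bar, B_bar, C):
    """x: (L, d_x) fusion tokens; m: (L, d_m) egomotion features;
    A_bar: (d_state, d_state); B_bar: (d_state, d_x + d_m); C: (d_y, d_state)."""
    h = torch.zeros(A_bar.shape[0])
    ys = []
    for x_t, m_t in zip(x, m):
        u_t = torch.cat([x_t, m_t], dim=-1)   # [x_t, m_t]
        h = A_bar @ h + B_bar @ u_t           # motion-driven state transition
        ys.append(C @ h)
    return torch.stack(ys)
```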
§.§ Mamba in Denoising Diffusion
We seamlessly integrate our devised motion-aware Mamba block into the continuous denoising diffusion process.
In each denoising step of MADiff, we utilize multiple stacked motion-aware Mamba blocks to recover future latents for better HTP performance. The forward process is only implemented during training and the reverse process is required for both the training and test pipeline, which will be extensively analyzed in Sec. <ref>.
Forward Process: We implement partial noising <cit.> in the forward process during training.
The output of the fusion module is first extended by a Markov transition q(𝐳_0|F_t)= 𝒩(F_t,β_0𝐈), where F_t ∈ℝ^(N_p+N_f)× a. In each following forward step of the diffusion model, we implement q(𝐳_s|𝐳_s-1) by adding noise to the future part of 𝐳_s-1, i.e., 𝐳_s-1[N_p+1:N_p+N_f].
Reverse Process: After 𝐳_S is derived from the forward process, our proposed motion-aware Mamba is exploited to denoise 𝐳_S to 𝐳_0. Considering the guidance of egomotion features m, the reverse process can be modeled as p_Mamba(𝐳_0:S):=p(𝐳_S)∏_s=1^Sp_Mamba(𝐳_s-1|𝐳_s,m). Our ℓ stacked Mamba blocks f_Mamba(𝐳_s,s,m) predict the injected noise for each forward step with p_Mamba(𝐳_s-1|𝐳_s,m)=𝒩(𝐳_s-1;μ_Mamba(𝐳_s,s,m),σ_Mamba(𝐳_s,s,m)). Specifically, for step s in the denoising process, the first Mamba block receives [𝐳_s, m] to calculate y_0,s by Eq. (<ref>). Then the feature values of y_0,s at the positions corresponding to the concatenated m are reset to m, and the result is fed to the following Mamba blocks to obtain y_0:ℓ-1,s iteratively. The final denoised result 𝐳_s-1 corresponds to the feature values of y_ℓ-1,s at the positions corresponding to 𝐳_s. We further design a continuous-discrete-continuous (CDC) operation (Fig. <ref>) to achieve explicit interaction on predicted hand waypoints in the reverse process of inference, rather than being limited to the latent space, which ignores the discrete nature of pixels in the 2D image plane (see Sec. <ref>). Ultimately, the denoised feature ℱ̂=f_Mamba(𝐳_1,1,m)={F̂_t}_t=1^N_f is fed to the trajectory decoder, which uses an MLP to generate the predicted hand trajectories in parallel.
Note that we anchor m in Eq. (<ref>) for the inputs of all consecutive motion-aware Mamba blocks for two reasons: 1) we respect the fact that egomotion is deterministic during the hand movement and should not be reconstructed as hand state features in the diffusion process (demonstrated in Sec. <ref>), and 2) anchoring deterministic conditional information while denoising features enhances the stability of the optimization process <cit.> and reduces the computation <cit.>. In addition, following the previous works <cit.>, we also anchor the past part of the latent features for each diffusion step to achieve conditional sequence modeling and apply both learnable positional embedding and temporal embedding before each denoising operation.
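The reverse process can be summarized by the simplified sketch below; the denoiser callable stands in for the stacked motion-aware Mamba blocks and is an assumption, and the re-anchoring of m between blocks is folded into it for brevity.

```python
# Minimal sketch (not the released implementation) of the reverse process with anchored
# egomotion features and anchored past latents; only the future part is denoised.
import torch

@torch.no_grad()
def reverse_denoise(z_S, z_past, m, denoiser, num_steps=1000):
    """z_S: (N_p+N_f, d) noisy latents; z_past: (N_p, d) clean past tokens;
    m: (N_p+N_f, d) egomotion features; denoiser(z, s, m) -> denoised latents (assumed interface)."""
    n_p = z_past.shape[0]
    z = z_S.clone()
    for s in range(num_steps, 0, -1):
        z[:n_p] = z_past               # anchor the conditional past latents at every step
        z = denoiser(z, s, m)          # stacked motion-aware Mamba blocks with anchored m
    return z[n_p:]                     # reconstructed future latents, passed to the trajectory decoder
```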
§.§ MADiff Training and Inference
Training with New Losses: We first use the same diffusion-related losses ℒ_VLB, trajectory displacement loss ℒ_dis, and regularization term ℒ_reg as the previous work <cit.>, which are also listed here:
ℒ_VLB = ∑_s=2^S||𝐳_0-f_Mamba(𝐳_s,s,m)||^2 + ||ℱ-ℱ̂||^2,
ℒ_dis =1/N_f∑_t=1^N_fD_dis(H_t,H_t^gt),
ℒ_reg =1/N_f∑_t=1^N_fD_dis(H̃_t,H_t^gt),
where D_dis(·) represents the Euclidean distance between predicted hand waypoints and ground-truth ones, and H̃_t denotes the output of the trajectory decoder with ℱ as input. Moreover, we design two new loss functions, angle loss and length loss, to supervise our MADiff during the training process. As depicted in Fig. <ref>, the two predicted hand trajectories have the same displacement error, while the right case seems to be worse than the left one since it has ambiguous directionality with large angle errors, and unreasonable stability with large length errors. We argue that directionality and stability jointly reveal the causality and underlying human intention in the hand trajectory prediction task. Besides, they implicitly correspond to the potential physical model of hand motion and continuity constraints, closely associated with human habits. To promote the model capturing directionality and stability better, we propose the trajectory angle loss and length loss as follows:
ℒ_angle =1/N_f∑_t=0^N_f-1D_cos(H_t+1-H_t,H_t+1^gt-H_t^gt),
ℒ_len =1/N_f∑_t=0^N_f-1D_L2(H_t+1-H_t,H_t+1^gt-H_t^gt),
where D_cos(·) and D_L2(·) represent the cosine similarity and L2 norm of the two input vectors respectively. The total loss function supervising the training process of MADiff is the weighted sum of all the above-mentioned losses, which is depicted in Sec. <ref> of the supplementary material. The significant effectiveness of our proposed new losses is experimentally demonstrated in Sec. <ref>.
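The two new losses can be written compactly as in the sketch below; note that D_cos is implemented here as 1 minus the cosine similarity so that smaller is better, and D_L2 is interpreted as the norm of the difference between step vectors, both of which are assumptions about the intended forms (batching is omitted).

```python
# Minimal PyTorch sketch of the angle and length losses for one trajectory of shape (N_f + 1, 2)
# (the current waypoint followed by N_f future waypoints).
import torch
import torch.nn.functional as F

def angle_length_losses(pred, gt):
    d_pred = pred[1:] - pred[:-1]          # predicted step vectors H_{t+1} - H_t
    d_gt = gt[1:] - gt[:-1]                # ground-truth step vectors
    l_angle = (1.0 - F.cosine_similarity(d_pred, d_gt, dim=-1)).mean()   # directionality
    l_len = torch.norm(d_pred - d_gt, dim=-1).mean()                     # stability of step lengths
    return l_angle, l_len
```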
Inference with CDC Operation: In the inference stage, we first sample noise ℱ_noise={F_t,noise}_t=1^N_f from a standard Gaussian distribution, and concatenate it with the past tokens ℱ={F_t}_t=-N_p+1^0 along the time dimension to generate 𝐳_S. Subsequently, the combination of motion-aware Mamba and our proposed CDC operation is adopted to predict future latent features by denoising 𝐳_S to 𝐳_0.
Specifically, prior to proceeding with the next denoising step s-1, the output of the stacked motion-aware Mamba blocks y_ℓ-1,s, lying in the continuous latent space, is first converted to discrete hand waypoints ℋ̌_s by the trajectory decoder in Fig. <ref>. We round the intermediate predictions ℋ̌_s following the fact that the coordinates of hand waypoints on the 2D image grid are discrete. Since the denoising diffusion is implemented on the continuous latents, we subsequently project the discrete waypoints back to trajectory features 𝒳̌^traj_s by the trajectory encoder in Fig. <ref>. They are further fused with the vanilla semantic features 𝒳^sem by the fusion module in Fig. <ref> to derive ℱ̌_s, which is ultimately transformed to 𝐳_s-1 for the following denoising steps. The overall pipeline of our proposed CDC operation for diffusion-based HTP and intermediate discrete HTP results after rounding are shown in Fig. <ref>.
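One CDC step can be sketched as follows; the trajectory decoder, trajectory encoder, and fusion module are assumed callables rather than the released implementation.

```python
# Minimal sketch of one continuous-discrete-continuous (CDC) step during inference.
import torch

def cdc_step(y_future, x_sem_future, traj_decoder, traj_encoder, fusion):
    waypoints = traj_decoder(y_future)          # continuous future waypoints (N_f, 2)
    waypoints = torch.round(waypoints)          # snap to the discrete pixel grid
    x_traj = traj_encoder(waypoints)            # project back to continuous trajectory features
    return fusion(torch.cat([x_sem_future, x_traj], dim=-1))   # latents for the next denoising step
```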
Here we further show how our proposed approach bridges the autoregressive (AR) models <cit.> and the iterative non-autoregressive (iter-NAR) models <cit.>, which builds a novel generative paradigm for the hand trajectory prediction task. It captures the temporal causality along the time direction and maintains sufficient iteration in the denoising direction. We denote 𝐟_* as {𝐟_S,…,𝐟_0} where 𝐟 is the future part of 𝐳, and ℋ^f_* as {ℋ^f_S,…,ℋ^f_1} for brevity. Considering egomotion guidance m, the diffusion-based inference process of MADiff along with the CDC operation can be formulated as follows:
p_MADiff(ℋ^f|ℋ^p,m)
= ∑_ℋ^f_*∫_𝐟_*p(ℋ^f|𝐟_0,ℋ^p,m)∏_s=S,…,1p(𝐟_s-1|ℋ^f_s)p(ℋ^f_s|𝐟_s,ℋ^p,m)
= ∑_ℋ^f_*∫_𝐟_*p(ℋ^f_S|𝐟_S,ℋ^p,m)∏_s=S-1,…,0p(ℋ^f_s|𝐟_s,ℋ^p,m)p(𝐟_s|ℋ^f_s+1)
= ∑_ℋ^f_*p(ℋ^f_S|𝐟_S,ℋ^p,m)∏_s=S-1,…,0∫_𝐟_sp(ℋ^f_s|𝐟_s,ℋ^p,m)p(𝐟_s|ℋ^f_s+1).
Then we marginalize over 𝐟, and align the step s with the general iteration number k reversely, obtaining the iter-NAR form of MADiff:
p_MADiff(ℋ^f|ℋ^p,m)
= ∑_ℋ^f_*p(ℋ^f_S|𝐟_S,ℋ^p,m)∏_s=S-1,…,0p(ℋ^f_s|ℋ^f_s+1,ℋ^p,m)
≡ ∑_ℋ^f_1,…,ℋ^f_K-1p(ℋ^f_1|ℋ^p,m)∏_k=1,…,K-1p(ℋ^f_k+1|ℋ^f_k,ℋ^p,m),
where p(ℋ^f_1|ℋ^p,m) and p(ℋ^f_k+1|ℋ^f_k,ℋ^p,m) correspond to the initial prediction and progressive full-context prediction of the general form of iter-NAR models respectively.
Note that we predict hand waypoints ℋ^f_k by the devised CDC operation in each step of the diffusion process rather than only denoised latents <cit.>, and thus Eq. (<ref>) holds explicitly. Subsequently, we consider Mamba-based state transition of MADiff in Eq. (<ref>), which can be an extension of the autoregressive scheme over y:
p_MADiff(ℋ^f|ℋ^p,m)
≡ ∑_ℋ^f_1,…,ℋ^f_K-1p(ℋ^f_1|ℋ^p,m)∏_k=1,…,K-1p(ℋ^f_k+1|ℋ^f_k,ℋ^p,m)
= ∑_ℋ^f_1,…,ℋ^f_K-1p(ℋ^f_1|ℋ^p,m)∏_k=1,…,K-1p(ℋ^f_k+1|y_k^1:N_p+N_f)
p(y_k^1|ℋ^f_k,ℋ^p,m_1)∏_i=1,…,N_p+N_f-1p(y_k^i+1|y_k^1:i,ℋ^f_k,ℋ^p,m_i+1),
where i represents the time horizon where MDSS has been progressively implemented, and p(y_k^1|ℋ^f_k,ℋ^p,m_1) and p(y_k^i+1|y_k^1:i,ℋ^f_k,ℋ^p,m_i+1) represent the initial prediction and progressive left-context prediction of the general form of AR models respectively. Here we only consider one Mamba block with a single scan in Eq. (<ref>) for brevity. y_k^i+1 is generated conditioned on both ℋ^f_k and ℋ^p because the projection functions in Eq. (<ref>) take the holistic latent sequence denoised by the previous steps as input, maintaining potential global-context constraints in the forward-only scan pattern. As the overall inference pipeline illustrated in Fig. <ref>, MADiff adopts the diffusion-based iter-NAR framework to keep sufficient iteration, and integrates motion-driven AR progress into each denoising step to capture temporal dependency orthogonal to the diffusion direction, which can serve as a foundation scheme for hand trajectory prediction and other time series forecasting tasks. Since the future egomotion is unavailable during inference, we simply let m_t (t>0) be m_0 for Eq. (<ref>) assuming that the future egomotion is subtle. This inevitably introduces artifacts but still performs better than the baseline without egomotion guidance due to the powerful generation capability of our diffusion-based approach, which is demonstrated in Sec. <ref>.
§ EXPERIMENTAL RESULTS
§.§ Datasets
We use five publicly available datasets to validate the superiority of our proposed MADiff, including Epic-Kitchens-55 (EK55) <cit.>, Epic-Kitchens-100 (EK100) <cit.>, EGTEA Gaze+ (EG) <cit.>, EgoPAT3D-DT <cit.>, and H2O-PT <cit.>. We use the EK55 and EK100 datasets following the setups of OCT <cit.> and Diff-IP2D <cit.>, where we sample past N_p= 10 frames (2.5 s) to forecast hand waypoints in future N_f= 4 frames (1.0 s), both at 4 FPS. As to the EG dataset, N_p= 9 frames (1.5 s) are used for N_f= 3 hand trajectory predictions (0.5 s) at 6 FPS. Following the setups of USST <cit.>, we use the fixed ratio 60% by default to split the past and future sequences for both EgoPAT3D-DT and H2O-PT at 30 FPS. Sec. <ref> in the supplementary material further presents the effects of different observation ratios in the two datasets. EgoPAT3D-DT contains both seen and unseen scenes, where the unseen scenes are only used for testing. The numbers of video clips in the training, validation, and testing splits for different datasets used in the following experiments are shown in Tab. <ref>. According to the specific annotations in different datasets, we use the image plane of the last observation as the prediction canvas on EK55, EK100, and EG datasets, and instead use the image plane of the first observation as the canvas on EgoPAT3D-DT and H2O-PT datasets.
§.§ MADiff Configurations
We use GLIP <cit.> as the foundation model to generate the semantic feature with a size of 256× 7× 12 for each frame, which is then transformed to a feature vector with a size of 512 in the fusion module. In this work, we use the GLIP version with a Swin-Large backbone <cit.> as well as BERT (base-uncased) <cit.> to encode the text prompt. The trajectory encoder embeds each 2D hand waypoint to a feature vector with a size of 512. The output token of the fusion module for each timestamp is a feature vector with a size of 512. The homography encoder converts each 3× 3 homography matrix to a feature vector with a size of 512. Although MADiff uses SIFT+RANSAC to calculate the homography matrix for the following experiments, we provide an additional study on its robustness to multiple homography estimation algorithms in Sec. <ref> of the
supplementary material. As to the diffusion process, the total number of steps is set to 1000. The square-root noise schedule in Diffusion-LM <cit.> is adopted here for the forward diffusion process. We use 6 stacked motion-aware Mamba blocks with convolutional kernel size d_conv=2, hidden state expansion expand=1, and hidden dimension d_state=16 as the denoising model.
The numbers of diffusion steps and Mamba blocks are both selected according to the ablation study in Sec. <ref>.
We train MADiff using AdamW optimizer <cit.> with a learning rate of 2e-4 for 20 epochs on Epic-Kitchens, and with a learning rate of 1e-4 for 400 epochs on both EgoPAT3D-DT and H2O-PT datasets. Training and inference are both operated on 2 NVIDIA A100 GPUs.
§.§ Baseline Selection
For the EK55, EK100, and EG datasets, we follow the previous work <cit.> and choose Constant Velocity Hand (CVH) <cit.>, Seq2Seq <cit.>, FHOI <cit.>, OCT <cit.>, USST <cit.>, and Diff-IP2D <cit.> as the baselines. For the EgoPAT3D-DT and H2O-PT datasets, we select the baselines including CVH <cit.>, DKF <cit.>, RVAE <cit.>, DSAE <cit.>, STORN <cit.>, VRNN <cit.>, SRNN <cit.>, EgoPAT3D <cit.>, AGF <cit.>, OCT <cit.>, ProTran <cit.>, USST <cit.>, and Diff-IP2D <cit.>, where we partially refer to the baselines of the previous work <cit.>. Note that we use the 2D version of USST since there is no available 3D information for the prediction task in this work. We borrow partial quantitative results for these baselines from the previous works <cit.> since we keep the same experimental configurations as them.
§.§ Evaluation on Hand Trajectory Prediction
We evaluate the weighted displacement error (WDE) and the final displacement error (FDE) of our MADiff and all the baselines on the EK55, EK100, and EG datasets following Diff-IP2D <cit.>, and report the average displacement error (ADE) and FDE on the EgoPAT3D-DT and H2O-PT datasets following USST <cit.>.
Moreover, we further design a new metric to better evaluate the interaction between the hand and the next active objects, which is showcased in Fig. <ref>(a). For each video clip, we generate 10 possible hand trajectory predictions {ℋ^f_n}_n=1^10, and select the waypoint closest to the affordance center O^f of the next active object as the “interaction point” of each trajectory by
H_n^ip=argmin_H_t ∈ℋ^f_n D_dis(H_t, O^f).
Then we calculate the mixture of Gaussians of the 10 interaction points {H_n^ip}_n=1^10 as affordance prediction. The similarity between affordance prediction and affordance ground-truth is ultimately evaluated by Similarity Metric (SIM) <cit.>, AUC-Judd (AUC-J) <cit.>, and Normalized Scanpath Saliency (NSS) <cit.>. Our proposed new metric can distinguish the quality of predictions with similar displacement errors shown in Fig. <ref>(b) based on the fact that the future hand movement always changes the state of an object by using or manipulating it <cit.>. Note that affordance similarity of predicted hand trajectories can only be evaluated on the datasets EK55, EK100, and EG which provide ground-truth affordance labels from annotated contact points <cit.>.
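The metric computation can be sketched as follows; the heatmap resolution and the Gaussian bandwidth are assumptions, and the resulting map would then be compared with the ground-truth affordance via SIM, AUC-J, and NSS.

```python
# Minimal NumPy sketch of the proposed interaction-point metric.
import numpy as np

def interaction_points(trajs, center):
    """trajs: (10, N_f, 2) sampled future trajectories; center: (2,) ground-truth affordance center."""
    d = np.linalg.norm(trajs - center, axis=-1)        # (10, N_f) distances to the affordance center
    idx = d.argmin(axis=1)                             # closest waypoint index per sampled trajectory
    return trajs[np.arange(len(trajs)), idx]           # (10, 2) interaction points

def affordance_map(points, hw=(256, 456), sigma=8.0):
    ys, xs = np.mgrid[0:hw[0], 0:hw[1]]
    heat = np.zeros(hw)
    for px, py in points:                              # place a Gaussian on each interaction point
        heat += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()                           # predicted affordance map for SIM/AUC-J/NSS
```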
We present the comparison results on the EK55, EK100, and EG datasets in Tab. <ref> and Tab. <ref>. Tab. <ref> shows the comparison results on the EgoPAT3D-DT and H2O-PT datasets. Note that we implement zero-shot transfer from Epic-Kitchens to the EG dataset, from EgoPAT3D-DT (seen) to EgoPAT3D-DT (unseen), to validate the generalization ability on diverse scenes across different datasets and within the same dataset respectively. As can be seen, our proposed MADiff outperforms all the baselines on the EK55, EK100, and EG datasets, and generates comparable (top 2) prediction results on the EgoPAT3D-DT and H2O-PT datasets, which suggests good hand trajectory prediction performance of MADiff. The comparison results on the EG and EgoPAT3D-DT (unseen) datasets also demonstrate the strong generalization ability of our MADiff while facing new human activity environments. As to the evaluation on our new metrics in Tab. <ref>, our MADiff without affordance supervision still generates the most reasonable interaction distribution against other baselines supervised by object affordance annotations. This indicates that our MADiff is capable of capturing potential relationships between hands and active objects. We provide the visualization of predicted hand trajectories from state-of-the-art baselines and MADiff on EgoPAT3D-DT in Fig. <ref>. More illustrations of MADiff predictions can be found in Fig. <ref>, Fig. <ref> and Fig. <ref> of the supplementary material.
§.§ Ablation Study on Motion-Driven Selective Scan
This experiment is conducted on the EgoPAT3D-DT and H2O-PT datasets to show the effectiveness of our proposed motion-driven selective scan. We directly remove the motion guidance m in Eq. (<ref>) to build the baseline MADiff agnostic to egomotion (version 1). MDSS in version 1 thus degrades to the vanilla selective scan in Mamba. Building upon version 1, we linearly merge the egomotion feature and the output of the fusion module in MADiff to replace motion guidance (version 2).
Moreover, we also provide a baseline that replaces the concatenation operation in Eq. (<ref>) of the vanilla MADiff with summation (version 3). To further validate the effectiveness of unidirectional temporal causality for hand trajectory prediction, we additionally build a baseline with bidirectional Mamba <cit.> (version 4), which implements the selective scan in two opposite directions over the sequential latents. The experimental results are shown in Tab. <ref>. When comparing version 1 with our vanilla MADiff, it can be seen that motion guidance helps to reduce ADE and FDE on both datasets, which indicates that our proposed motion-driven selective scan narrows the motion-related gaps and concurrently considers the entangled hand motion and egomotion patterns. The enhancement from MDSS is more significant on FDE than ADE, which corresponds to the fact that there is an accumulated motion gap between a later observation and the canvas observation (i.e., the first observation for EgoPAT3D-DT and H2O-PT). Version 2 has the worst prediction performance among all the baselines, revealing that egomotion can only be used as auxiliary information within the diffusion process rather than being naively fused with the semantic and trajectory features that need to be optimally reconstructed by the denoising model, as claimed in Sec. <ref>.
In addition, MADiff with concatenation for motion guidance outperforms version 3 with summation operation. This suggests that the feature update from egomotion homography should not be directly added to the original state transition process without reweighting by the input-dependent projection parameters in Eq. (<ref>). Version 4 has worse HTP performance than vanilla MADiff even though it applies bidirectional Mamba. The reason could be that traversing the latent sequence in the opposite direction with MDSS is analogous to strictly reversing the causal relationship and the human motion pattern, leading to unreasonable denoising during training and inference. Therefore, we advocate a forward-only scan with global-context constraints (Eq. (<ref>)) in our proposed motion-aware Mamba rather than the bidirectional one.
§.§ Ablation Study on the Number of Mamba Blocks and Diffusion Steps
We conduct the ablation on the number of Mamba blocks with EgoPAT3D-DT. We evaluate {0, 2, 4, 6, 8, 10} Mamba blocks in Fig. <ref>. The errors at 0 (MLP) represent the HTP performance of the baseline removing the state transition of SSM in MADiff, which is equivalent to an MLP-based diffusion model. The counterpart at 0 (trans) corresponds to the baseline Diff-IP2D <cit.> that uses denoising transformer rather than Mamba. Our proposed Mamba diffusion models significantly outperform the MLP- and transformer-based baselines, and MADiff with 4 and 6 Mamba blocks have similar predictive capabilities. The prediction performance slightly drops when the number of Mamba blocks increases to 8. The reason could be that more Mamba blocks require more data for optimization, and the model with 8 Mamba blocks tends to overfit to our training set.
In addition, we report the effect of different numbers of diffusion steps {500, 1000, 1500, 2000} in MADiff on EgoPAT3D-DT (EP) and H2O-PT (H2O). The experimental results illustrated in Fig. <ref> show that the setting of 1000 steps achieves relatively balanced performance in terms of ADE and FDE.
§.§ Ablation Study on Angle Loss and Length Loss
Here we ablate our proposed new loss functions, angle loss and length loss. As shown in Tab. <ref>, our proposed angle and length supervisions help to improve prediction accuracy quantitatively and qualitatively. This suggests that the directionality and stability captured by the new losses lead to a better understanding of the temporal causality and human intentions for hand trajectory prediction. It is also notable in Tab. <ref> that the improvement by angle loss is more significant than length loss, which suggests that directionality is more in line with the potential physical model and continuity constraints of hand motion. Fig. <ref> also illustrates that angle supervision significantly reduces the occurrence of trajectory prediction divergence.
§.§ Study on the Effect of Multiple Inputs
In this experiment, we present the contributions of different combinations of inputs for MADiff on EgoPAT3D-DT (seen) and EK55. As shown in Tab. <ref>, using only past hand waypoints as input prevents the model from semantically understanding the hand movement in specific scenes, leading to the worst prediction performance in terms of ADE/FDE on EgoPAT3D-DT and SIM on EK55. Once we exploit the visual prompt as an additional input, ADE and FDE of MADiff predictions drop by 11.4% and 17.6% respectively, and SIM increases by 15.6%. Moreover, after importing the text prompt 𝚑𝚊𝚗𝚍, ADE and FDE further decrease by 7.1% and 6.3% respectively on EgoPAT3D-DT. SIM of the predicted interaction points is also improved by an additional 4.8% on EK55. The experimental results validate the effectiveness of semantic features generated by our text-guided grounding model for hand trajectory prediction.
It is also notable in Tab. <ref> and Tab. <ref> that MADiff outperforms OCT <cit.> and Diff-IP2D <cit.> which require devised global/hand/object features as inputs and are both supervised by additional affordance labels.
We therefore argue that the foundation model can capture the relationships between hands and scenarios, avoiding the need for additional task-specific features and affordance labels in the hand trajectory prediction task.
We also present the effect of two additional text prompts, 𝚊𝚛𝚖 and 𝚋𝚘𝚍𝚢, in addition to 𝚑𝚊𝚗𝚍. Fig. <ref> illustrates the respective visual grounding patterns from these text prompts, which also lead to different semantic features. As shown in Tab. <ref>, the text prompt 𝚑𝚊𝚗𝚍 leads to better prediction than 𝚊𝚛𝚖 and 𝚋𝚘𝚍𝚢 on both datasets, and the reason could be that a model that intentionally concentrates more on hands gains a better understanding of hand movement patterns. Over-focusing on the arm may cause interference in the model optimization, since the closer the arm is to the body, the weaker the correlation between the arm's swing and the hand trajectories becomes.
§.§ GLIP vs. Other Backbones in MADiff
In this experiment, we build two other baselines with CLIP <cit.> and ResNet-18 <cit.> as image backbones. CLIP is pretrained on a large variety of image-text pairs. It is task-agnostically transferred to our model here to embed each image into a feature vector, which is further fused with the trajectory feature by an MLP as diffusion latents. In contrast, we integrate ResNet-18 into MADiff and train it from scratch. Note that both baselines lack a text prompt compared to the GLIP backbone of MADiff. The experiment is conducted on EgoPAT3D-DT, and the results in Tab. <ref> show that the utilization of GLIP in MADiff presents the best prediction performance in both previously seen and unseen scenes. The pretrained CLIP cannot generate task-specific semantic features due to the lack of text guidance, and ResNet-18 trained from scratch suffers from overfitting to the previously visited scenarios.
§.§ Displacement Errors on Different Action Verb Categories
This is the first work to report the correlation between displacement errors and multiple action verb categories in the realm of hand trajectory prediction. This analysis is justified by the fact that how a hand moves in a video clip can be concretely summarized by an action verb, and each verb category basically exhibits potential similarity within its corresponding set of hand trajectories.
As the comparison in Fig. <ref>, MADiff shows better prediction performance in most verb categories compared to the baseline Diff-IP2D <cit.>, exceptionally skilled at predicting fine-grained actions such as peel, cut, and shake. In addition, we also discover that actions that increase the uncertainty of object states (e.g., turn-on, take, open) tend to result in higher trajectory prediction errors compared to their opposite counterparts (e.g., turn-off, put, close). Our proposed MADiff generally outperforms the baseline even though there is high uncertainty in the ultimate state of the active object due to high-level scene understanding and temporal causality capture inherent in our paradigm.
Moreover, we also explore how to improve HTP performance for some specific action verbs on EK100. The utilized visual grounding model allows us to manually adapt verb prompts to generate specific semantic features. Tab. <ref> indicates that WDE and FDE for the specific verb both decrease significantly when a more expressive text prompt 𝚑𝚊𝚗𝚍, 𝚠𝚑𝚒𝚌𝚑 𝚒𝚜 {𝚟𝚎𝚛𝚋-𝚒𝚗𝚐} is given for both training and testing MADiff. This demonstrates that injecting specific verbs into text prompts helps to generate action-related semantic features, remarkably improving the corresponding HTP accuracy. Fig. <ref> also implies that the verb-specific prompt encourages the model to focus more on the hand that matches it, according to the changes in confidence.
This experiment overall suggests that MADiff offers a reasonable picture of more flexible HTP solutions than the existing methods, tailored to specific functions in the applications of care robots or other assistive devices.
§.§ Inference Time
We provide the inference time of our proposed MADiff on Epic-Kitchens datasets using the hardware mentioned in Sec. <ref>. Each prediction by our proposed MADiff costs an average of 0.15 s, with 0.13 s for tokenizer and 0.02 s for the Mamba diffusion process. Since we sample the keyframes in the EK55 and EK100 datasets both with the interval of 0.25 s, MADiff can predict all the future hand waypoints before the first future keyframe arrives, thus available for online operation.
§ CONCLUSION
In this paper, we propose a novel hand trajectory prediction method namely MADiff. We first propose using a foundation model to extract high-level semantic features with no need for affordance supervision. Moreover, we design a diffusion model with a devised motion-aware Mamba for denoising. Specifically, the motion-driven selective scan pattern is proposed to fill the motion-related gaps and capture the temporal causality in the continuous denoising step.
We further integrate a continuous-discrete-continuous operation into the diffusion denoising process, combining explicit trajectory iteration with implicit feature iteration.
In addition, we introduce the angle loss and length loss into the training process to facilitate the model capturing directionality and stability better. The experimental results on five publicly available datasets show that our motion-aware Mamba diffusion model MADiff is highly competitive among all the state-of-the-art HTP baselines and the proposed components help improve prediction accuracy effectively. We also present a detailed analysis of MADiff revealing the relationship between prediction errors and action verb categories, providing a critical resource for future research in the field of hand trajectory prediction.
Insights and Limitations: Firstly, our generative paradigm seamlessly integrates Mamba into the denoising diffusion process and bridges autoregressive models and iterative non-autoregressive models, which can serve as a foundation framework for the hand trajectory prediction or other time series forecasting tasks. Secondly, the consideration of egomotion in temporal causality capture provides new insights for diffusion-based techniques in the field of egocentric vision.
Moreover, our action-relevant analysis opens up a potential direction for future work in the realm of hand trajectory prediction, which is designing distinct prompts specifically for actions of interest.
Despite the encouraging HTP performance, our work still has the following limitations: 1) The specificity of the existing dataset annotations leads to different training and inference setups across different datasets. In the future, we will unify the training and test setups across multiple different datasets. 2) We demonstrate that MADiff can generate good interaction points according to our new evaluation metrics, but it currently cannot actively extract possible affordance maps. We will consider adding a new branch to MADiff, which can achieve affordance prediction for the next active object.
unsrt
Supplementary Material
§ LOSS FUNCTIONS
The total loss function to supervise MADiff is the weighted sum of all the losses in Eq. (10)∼Eq. (14) of the main text, denoted as
ℒ_total=λ_1ℒ_VLB+λ_2ℒ_dis+λ_3ℒ_reg+λ_4ℒ_angle+λ_5ℒ_len,
where the weights are initially set as λ_1=λ_2=1, λ_3=0.2, and λ_4=λ_5=0.01 in our experiments.
§ ADDITIONAL VISUALIZATION OF HAND TRAJECTORIES AND INTERACTION POINTS
Fig. 10 and Fig. 11 of the main text visualize hand trajectory prediction (HTP) by our proposed MADiff. Here we present more visualization of predicted hand trajectories on Epic-Kitchens datasets in Fig. <ref>. As can be seen, our MADiff forecasts plausible hand waypoints in both one-hand and two-hands cases. Moreover, we further show the interaction points of MADiff extracted on Epic-Kitchens datasets according to our new evaluation metrics mentioned in Sec. 5.4 of the main text. We also illustrate the predicted affordance points of two HOI baselines Diff-IP2D <cit.> and OCT <cit.> since they both have an additional head to directly predict specific object affordance without the need for extracting the waypoints closest to the annotated affordance center. We only illustrate the center of each interaction distribution and put a fixed Gaussian on it for clarity. As can be seen in Fig. <ref>, the hand trajectories predicted by MADiff interact well with active objects, which indicates that our proposed method comprehends human observation and object-centric intention better than the baselines. This visualization also suggests that MADiff overcomes the challenge of lacking object affordance annotations since it generates more plausible hand waypoint distributions than the HOI baselines additionally supervised by object affordance labels.
§ ABLATION STUDY ON OBSERVATION TIME
We further ablate different observation times (ratios) on the performance of hand trajectory prediction with EgoPAT3D-DT and H2O-PT datasets. The experimental results are shown in Fig. <ref> and Fig. <ref>. As can be seen, larger time horizons of observation lead to smaller trajectory errors when we fix the observation ratio for both the training set and test set. This suggests that longer past sequences provide more enriched semantic information that helps MADiff comprehend human intention and motion patterns better after sufficient optimization. However, the prediction performance generally drops across these datasets once we randomly select observation ratios ranging from 0% to 100% for the test set. This indicates that the HTP capability heavily depends on the observation time used during the training process. It is also notable that the model trained with the observation ratio 80% exhibits the most severe performance degradation when comparing Fig. <ref> and Fig. <ref>. The reason could be that this model tends to use a large time horizon to capture long-range dependence within the past sequence, but most input observation sequences (randomly sampled ratios <80%) cannot meet this requirement. This experiment suggests that we need to utilize the training observation time in the reference stage for better HTP results in real-world applications. If inference with arbitrary observation times is mandatory, we should avoid using a large observation ratio during training to ensure the model has a good “imagination” without the need for long-range dependence within each sequence.
§ STUDY ON THE ROBUSTNESS TO HOMOGRAPHY ESTIMATION
In Sec. 5.5 of the main text, we validate that the camera egomotion homography estimated by SIFT+RANSAC indeed helps to improve the HTP performance significantly. Here we further conduct an experiment to demonstrate the robustness of MADiff to different homography estimation methods. We select three types of descriptors for feature matching, including SIFT <cit.>, ORB <cit.>, and BRISK <cit.>. Two estimation algorithms, RANSAC <cit.> and MAGSAC <cit.>, are adopted following the above feature matching to solve for the specific homography matrix. We train all the 6 baselines including SIFT descriptors with RANSAC (SIFT+RAN), SIFT descriptors with MAGSAC (SIFT+MAG), ORB descriptors with RANSAC (ORB+RAN), ORB descriptors with MAGSAC (ORB+MAG), BRISK descriptors with RANSAC (BRISK+RAN), and BRISK descriptors with MAGSAC (BRISK+MAG) for 400 epochs with a learning rate of 1e-4 on the EgoPAT3D-DT dataset (the same configuration as training the vanilla MADiff in the main text). We report their ADE and FDE on both seen and unseen scenarios of the EgoPAT3D-DT dataset. We also present HTP performance of the baseline of version 1 proposed in Sec. 5.5 of the main text, which is agnostic to camera egomotion. As can be seen in Tab. <ref>, all the baselines with egomotion guidance show better prediction performance than the counterpart agnostic to egomotion. This experiment demonstrates the robustness of MADiff to the utilization of different homography estimation methods. In future work, we will consider integrating learning-based homography estimation algorithms into MADiff.
|
http://arxiv.org/abs/2409.03610v1 | 20240905151547 | A Dual-Path Framework with Frequency-and-Time Excited Network for Anomalous Sound Detection | ["Yucong Zhang", "Juan Liu", "Yao Tian", "Haifeng Liu", "Ming Li"] | eess.AS | ["eess.AS"] |
§ ABSTRACT
In contrast to human speech, machine-generated sounds of the same type often exhibit consistent frequency characteristics and discernible temporal periodicity. However, leveraging these dual attributes in anomaly detection remains relatively under-explored. In this paper, we propose an automated dual-path framework that learns prominent frequency and temporal patterns for diverse machine types. One pathway uses a novel Frequency-and-Time Excited Network (FTE-Net) to learn the salient features across frequency and time axes of the spectrogram. It incorporates a Frequency-and-Time Chunkwise Encoder (FTC-Encoder) and an excitation network. The other pathway uses a 1D convolutional network for utterance-level spectrum. Experimental results on the DCASE 2023 task 2 dataset show the state-of-the-art performance of our proposed method. Moreover, visualizations of the intermediate feature maps in the excitation network are provided to illustrate the effectiveness of our method.
Anomalous sound detection, squeeze and excitation, frequency pattern analysis, temporal periodicity analysis
§ INTRODUCTION
Anomalous sound detection (ASD) is a task to distinguish anomalous sounds from normal ones. It is useful to monitor a machine's condition and detect malfunctions of an operating machine before it is damaged. ASD is a challenging task and is often regarded as an unsupervised learning problem <cit.>, given the rare occurrence and high diversity of anomalous events. Furthermore, in real-world scenarios, machines may operate under different settings and environmental conditions, leading to potential domain shifts <cit.>, thereby increasing the difficulty of the ASD task.
To address the lack of anomalous data, conventional ASD systems adopt a generative method <cit.> to model the distribution of normal data. Recently, self-supervised methods <cit.> have been gaining more attention and are widely adopted by top-ranked teams <cit.> in recent DCASE[DCASE: Detection and Classification of Acoustic Scenes and Events, <https://dcase.community>] challenges. These systems train a feature extractor on normal data to obtain expressive embeddings, and use distance metrics to assess the abnormality by comparing test embeddings with normal ones. Despite the success of these systems, the frequency patterns and temporal periodicity remain relatively under-explored when modeling machine sounds.
Some recent studies have investigated the efficacy of frequency patterns in machine-generated sounds. In DCASE 2022 Challenge, the first-ranking team <cit.> builds customized high-pass filters for individual machine types, enhancing ASD performance by applying them before the Mel filters. Additionally, experiments conducted by <cit.> demonstrate notable high-frequency characteristics produced by certain machine types. Nevertheless, these approaches rely on manually constructed filters to leverage frequency patterns, limiting their adaptability to new machine types.
To automatically explore the frequency patterns, one possible solution is to learn the patterns with deep learning. Recently, researchers in <cit.> have explored automated analysis of frequency patterns on top of their prior work <cit.>. They introduce a multi-head self-attention <cit.> to adaptively filter the log-Mel spectrogram. Their experimental results demonstrate the feasibility of integrating frequency pattern analysis into the training process of ASD.
In this paper, we propose a novel framework that leverages both the frequency and temporal characteristics. We use the framework from <cit.> as the backbone, dealing with both frame-level spectrogram and utterance-level spectrum. Different from <cit.>, we employ a Frequency-and-Time Excited Network (FTE-Net) in the spectrogram pathway to enrich the learnt representation by capturing salient patterns in both the frequency and time domains. To the best of our knowledge, our work is the first to integrate both frequency and temporal pattern analysis of a spectrogram within a deep-learning framework for machine ASD.
§ METHODS
Our proposed framework uses <cit.> as the backbone, integrating a 1D convolutional network for utterance-level spectrum and an FTE-Net for frame-level spectrogram. The FTE-Net incorporates a Frequency-and-Time Chunkwise Encoder (FTC-Encoder) and an excitation network. The overall structure of our method is depicted in Fig. <ref>. In section <ref>, we introduce the backbone framework <cit.> and briefly explain the difference between theirs <cit.> and ours. In section <ref>, we introduce the proposed FTE-Net module and explain in detail the FTC-Encoder and the excitation network in the module.
§.§ Backbone framework
The backbone framework is a dual-path ASD framework <cit.>, designed to process both the frame-level spectrogram and utterance-level spectrum of machine-generated sounds in separate paths. The spectrum is processed by three 1D convolutional layers and five dense layers, and the spectrogram is processed by four ResNet <cit.> layers. Compared to using only the spectrogram, empirical results from top-ranked teams <cit.> show that adopting such a dual-path structure, which handles the spectrogram and spectrum separately, produces better results.
In this work, we replace the network used in the spectrogram pathway of <cit.> with a novel FTE-Net, aiming to learn frequency and temporal patterns.
§.§ Frequency-and-Time Excited Network (FTE-Net)
The FTE-Net is a two-branch network. One branch employs an FTC-Encoder, and the other branch uses an excitation network. The FTC-Encoder allows the network to learn the potential patterns within small intervals of frequency or time, while the excitation network is used to filter out unrelated information and enhance the useful patterns in a global context.
§.§.§ Frequency-and-Time Chunkwise Encoder (FTC-Encoder)
The FTC-Encoder is designed to process spectrogram data in a chunkwise manner, with separate pathways dedicated to handle frequency chunks and time chunks respectively. The goal of this module is to capture potential patterns within short frequency bands and time intervals.
In the frequency pathway, the input spectrogram X∈ℝ^F × T is equally segmented into N overlapping frequency bands, denoted as f_i ∈ℝ^F/N× T. These frequency bands f_1, f_2, ⋯, f_N are subsequently merged to create a band-wise 3D feature matrix M_f ∈ℝ^N ×F/N× T. Finally, M_f is passed through a 2D convolution network (as shown in Table <ref>) to get the embedding z_f∈ℝ^d, with the number of chunks serving as the number of input channels. The first Conv2D and MaxPooling layer uses large kernel size, aiming to reduce the dimension of the input. The last MaxPooling layer is used to flatten the feature maps.
Similar strategies are applied to the dual pathway along the time axis, using the same structure after splitting the spectrogram into small time segments.
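A minimal NumPy sketch of this chunkwise segmentation is given below; the number of chunks and the amount of overlap are illustrative assumptions, since the text only specifies that the N bands overlap and are stacked as input channels of the subsequent 2D convolution network.

```python
import numpy as np

def chunk_spectrogram(spec, n_chunks=8, axis=0):
    """Split a (F x T) spectrogram into n_chunks equally sized, overlapping chunks
    along `axis` and stack them as channels: output shape (n_chunks, ...)."""
    length = spec.shape[axis]
    chunk = min(length, int(np.ceil(1.5 * length / n_chunks)))  # ~50% overlap between neighbours
    hop = (length - chunk) / max(1, n_chunks - 1)
    chunks = []
    for i in range(n_chunks):
        start = int(round(i * hop))
        sl = [slice(None)] * spec.ndim
        sl[axis] = slice(start, start + chunk)
        chunks.append(spec[tuple(sl)])
    return np.stack(chunks, axis=0)

spec = np.abs(np.random.randn(513, 563))         # linear-magnitude spectrogram, F x T
freq_bands = chunk_spectrogram(spec, axis=0)     # input to the frequency pathway
time_segments = chunk_spectrogram(spec, axis=1)  # input to the time pathway
```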
§.§.§ Excitation network
The detailed structure of the excitation network is shown in Table <ref>. Modified squeeze-and-excitation (SE) <cit.> modules are integrated between ResNet blocks to form the excited block. While the conventional SE generates a mask (w_c) to adjust channel-wise feature maps, we introduce two additional masks, namely the frequency excitation mask (w_f) and the time excitation mask (w_t). As shown in Fig. <ref>, given an input x∈ℝ^C× H× W, where H and W are the dimensions along the frequency and time axis, the excitation map is formulated as follows:
w_i = 1/(1+exp(-(a_i· W^T+b))), a_i=S_i(x)
where S_i is a 2D average pooling operation, cancelling out the dimensions other than i, and W and b are learnable parameters. The output is aggregated using the excitation maps as follows:
y = x + ∑_i∈{c,f,t} w_i(x)· x,
where w_c, w_f, and w_t represent the excitation masks for channel, frequency, and time respectively.
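A minimal NumPy sketch of this tri-axis excitation (channel, frequency, and time) is given below; the weight shapes and the toy initialization are illustrative assumptions, only the squeeze, sigmoid, and residual aggregation follow the two equations above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def excite(x, weights):
    """Apply channel/frequency/time excitation maps to x of shape (C, H, W),
    following y = x + sum_i w_i(x) * x, broadcasting each mask over the other axes."""
    C, H, W = x.shape
    out = x.copy()
    # Squeeze: average-pool away every axis except the one being excited.
    squeezes = {
        "c": x.mean(axis=(1, 2)),   # (C,) channel statistics
        "f": x.mean(axis=(0, 2)),   # (H,) frequency statistics
        "t": x.mean(axis=(0, 1)),   # (W,) time statistics
    }
    shapes = {"c": (C, 1, 1), "f": (1, H, 1), "t": (1, 1, W)}
    for key, a in squeezes.items():
        Wm, b = weights[key]                       # learned projection per axis
        w = sigmoid(a @ Wm.T + b)                  # excitation mask, first equation
        out = out + w.reshape(shapes[key]) * x     # excited residual, second equation
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 32, 32)).astype(np.float32)
weights = {k: (rng.standard_normal((d, d)) * 0.01, np.zeros(d))
           for k, d in {"c": 64, "f": 32, "t": 32}.items()}
y = excite(x, weights)
```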
As a result, the output embeddings (z_f,z_t) of the FTC-Encoder, the embedding (z_s) of the excitation network are concatenated before passing to a linear layer to get the spectrogram embedding (z_gram). Meanwhile, the spectrum embedding (z_trum) is generated by the 1D convolutional network. To train the embeddings, z_gram and z_trum are stacked together, used as an input to a linear classifier to classify different machines.
§ EXPERIMENTS
§.§ Dataset
The experiments are conducted on the DCASE 2023 Task 2 dataset <cit.>, which comprises audio clips from seven distinct machine types. Each machine type has roughly 1,000 audio clips, including 990 clips of source data and 10 clips of target data. Each audio clip lasts 6 to 18 seconds with a sampling rate of 16 kHz. The dataset includes a development dataset, an additional dataset, and an evaluation dataset. To compare with other systems in the challenge, the model is trained using the training portion of the development dataset and the additional dataset, while performance evaluation is conducted on the evaluation dataset. It is important to note that only normal machine sounds are used for training.
§.§ Implementation details
For data processing, we use linear magnitude spectrograms and spectrum as the inputs. The spectrogram is obtained by Short-time Fourier Transform, with the sampling window size and hop length set to 1024 and 512 respectively. The entire signal is used to obtain the utterance-level spectrum. In our experiments, we repeat and clip the audio to force its length to be 18 seconds.
In terms of the training strategy, we use the sub-cluster AdaCos <cit.> as the loss function to train the model. The wave-level mixup <cit.> strategy is adopted for data augmentation. We set the number of classes to match the joint categories of machine types and attributes. The model is optimized with the ADAM optimizer <cit.> with a learning rate of 0.001. We set the batch size to 64 and train the model for 100 epochs.
The ASD results are generated by measuring the cosine distance between the prototypes of normal embeddings and the test embeddings for each machine type. Each machine type has 26 prototypes, including 16 center embeddings generated by K-Means on the source domain, and all the 10 embeddings from the target domain.
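A minimal sketch of this prototype-based scoring, assuming scikit-learn's KMeans for the 16 source-domain centers, is given below; function and variable names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(source_emb, target_emb, n_source_centers=16):
    """source_emb: (N, d) normal source embeddings; target_emb: (10, d) target embeddings."""
    centers = KMeans(n_clusters=n_source_centers, n_init=10,
                     random_state=0).fit(source_emb).cluster_centers_
    return np.vstack([centers, target_emb])        # 26 prototypes per machine type

def anomaly_score(test_emb, prototypes):
    """Minimum cosine distance between a test embedding and the prototypes."""
    a = test_emb / np.linalg.norm(test_emb)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return 1.0 - float((p @ a).max())
```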
The results are evaluated using the official scripts[Official scripts available at <https://github.com/nttcslab/dcase2023_task2_evaluator>].
Three commonly used metrics are adopted for evaluating the ASD performance in this paper: AUC, pAUC and the integrated scores. AUC is divided into source AUC and target AUC for the data in separate domains. pAUC is calculated as the AUC over a low false-positive-rate (FPR) range [0, 0.1]. The integrated score is the harmonic mean of AUC and pAUC across all machine types, which is the official score used for ranking.
§.§ Performance comparison and ablation studies
We compare the performance of our proposed framework with the top 4 teams <cit.> in the DCASE 2023 challenge. As presented in Table <ref>, our method surpasses all teams across all evaluation metrics. Notably, our approach exhibits a superior performance with a 4.3% absolute improvement over the first-ranking team <cit.> and a substantial 10.22% absolute improvement over the official system <cit.> in terms of the integrated score. The self-implemented baseline shown in the table is a re-implementation of <cit.> with more ResNet blocks added to the spectrogram branch. The results indicate that the proposed FTE-Net leads to improvements in the overall ASD performance.
Moreover, we find that the proposed framework exhibits a noteworthy capacity for domain generalization. As observed in Table <ref>, despite a moderate reduction in the source AUC compared to the baseline system, our framework demonstrates a substantial improvement in terms of the target AUC. We argue that the inferior performance of the source AUC is likely attributed to overfitting of the baseline system to the source data, given that the source and target domains feature a highly imbalanced ratio. An indicator of the overfitting phenomenon in the baseline system is the significant disparity between the source and target AUC values presented in the table. In contrast, the proposed FTE-Net exhibits a relatively minor difference, showing its generalization ability.
To show the effectiveness of the individual modules in FTE-Net, we conduct some ablation studies. In Table <ref>, we show that the best performance is achieved by using all the modules. In Table <ref>, we conduct an excitation mechanism ablation study. Our findings demonstrate that employing more excitation maps results in improved performance. Notably, frequency excitation maps outperform time excitation maps in terms of ASD performance.
§.§ Visualization analysis
To illustrate the impact of excitation mechanism in the excitation network, we present spectrogram comparisons before and after applying the excitation maps. In this example featuring a fan shown in Fig. <ref>, we observe that the original spectrogram undergoes enhancement both in terms of frequency and time, highlighting the effectiveness of our method. Particularly, in the frequency excitation map shown in Fig. <ref> (a), our network predominantly focuses on the high-frequency band, in accordance with the results given by recent discoveries <cit.>. This indicates that our method effectively generates excitation maps conducive to machine sound modeling.
From Fig. <ref> (a) and (b), frequency and temporal patterns can be highlighted. For example, despite the enhancement of the high frequency, some prominent frequency patterns in the middle range of the spectrogram are highlighted while some of them are filtered out. Additionally, despite the simple temporal periodicity, much more complicated temporal patterns within tiny time segments are shown. These patterns hold potential as features for analyzing sounds emitted by specific machine types in future research.
§ CONCLUSION
In our paper, we introduce a novel dual-path framework for anomaly detection in machine-generated sounds, which has the ability to leverage distinctive frequency and temporal patterns found in machine sounds. One pathway employs the Frequency-and-Time Excited Network (FTE-Net) to capture features across both frequency and time axes of the spectrogram. The other pathway utilizes a 1D convolutional network for the utterance-level spectrum. The experiments on the DCASE 2023 task 2 dataset show that our framework achieves state-of-the-art performance, demonstrating the effectiveness of leveraging dual attributes for machine ASD.
|
http://arxiv.org/abs/2409.02522v1 | 20240904083003 | Cog-GA: A Large Language Models-based Generative Agent for Vision-Language Navigation in Continuous Environments | ["Zhiyuan Li", "Yanfeng Lu", "Yao Mu", "Hong Qiao"] | cs.AI | ["cs.AI", "cs.RO"] |
§ ABSTRACT
Vision Language Navigation in Continuous Environments (VLN-CE) represents a frontier in embodied AI, demanding agents to navigate freely in unbounded 3D spaces solely guided by natural language instructions. This task introduces distinct challenges in multimodal comprehension, spatial reasoning, and decision-making. To address these challenges, we introduce Cog-GA, a generative agent founded on large language models (LLMs) tailored for VLN-CE tasks.
Cog-GA employs a dual-pronged strategy to emulate human-like cognitive processes. Firstly, it constructs a cognitive map, integrating temporal, spatial, and semantic elements, thereby facilitating the development of spatial memory within LLMs. Secondly, Cog-GA employs a predictive mechanism for waypoints, strategically optimizing the exploration trajectory to maximize navigational efficiency.
Each waypoint is accompanied by a dual-channel scene description, categorizing environmental cues into 'what' and 'where' streams as the brain. This segregation enhances the agent's attentional focus, enabling it to discern pertinent spatial information for navigation. A reflective mechanism complements these strategies by capturing feedback from prior navigation experiences, facilitating continual learning and adaptive replanning.
Extensive evaluations conducted on VLN-CE benchmarks validate Cog-GA's state-of-the-art performance and ability to simulate human-like navigation behaviors. This research significantly contributes to the development of strategic and interpretable VLN-CE agents.
§ INTRODUCTION
Vision Language Navigation (VLN) plays a pivotal role in robotics, where an embodied agent carries out natural language instructions inside real 3D environments based on visual observations. Traditionally, the movements of agents in VLN environments are processed by a pre-prepared navigation graph that the agent traverses. Recognizing this, Krantz et al.<cit.> introduced an alternative approach known as Vision-Language Navigation in Continuous Environments (VLN-CE). Unlike traditional methods, VLN-CE eliminates the need for navigation graphs, enabling agents to move freely in 3D spaces. This framework has gained prominence for its realistic and adaptable approach to robotic navigation, allowing agents to respond effectively to verbal commands. Previous works such as ERG<cit.>, VLN-Bridge<cit.>, and CKR model<cit.> primarily focus on reinforcement learning methods. However, reinforcement learning requires lots of interactive data.
Large language models (LLMs) have recently illustrated remarkable performance in various fields. Several recent studies have explored the versatility of LLMs in interpreting and navigating complex digital environments, demonstrating their remarkable performance in various fields. For instance, Velma<cit.> adopts the LLM in Street View VLN tasks. Esc<cit.> and LFG<cit.> focus on zero-shot object navigation(ZSON) tasks. ProbES<cit.> further enhances the generalization of LLMs in REVERIE tasks. We aim to leverage the wealth of prior knowledge stored in LLMs to construct an agent with better generalization abilities for VLN-CE tasks. This agent receives dual input from visual and language modalities. It summarizes the key information from the two modalities through its abstract knowledge structures powered by LLMs, bridging sensory modalities and establishing abstract concepts and knowledge structures.
To this end, we propose Cog-GA (Cognitive-Generative Agent), an LLM-based generative agent for
vision-language navigation in continuous environments. One of the key challenges in building an efficient VLN agent with LLMs is their lack of inherent spatial memory abilities, as LLMs are trained on flattened text input, lacking the ability to model 3D spatial environments natively. To address this, we introduce the cognitive map, which maintains spatial information related to scene descriptions and landmark objects at each navigation step as a graph. These recorded spatial memories are then retrieved and utilized in subsequent navigation steps. Another core challenge is that valuable waypoints for decision-making by LLMs are often sparsely distributed in the environment. To construct a more reasonable and efficient search space for the agent, we employ the waypoints predictor<cit.>. For each waypoint, we adopt the dual-channel theory<cit.> to describe the observed scene efficiently, which divides scene descriptions into the "what" stream related to landmark objects and the "where" stream concerning spatial characteristics of indoor environments. This division aligns well with the navigation task of reaching objects in different environmental contexts. Since the instructions received by the agent can be separated into sub-instructions corresponding to reaching objects and switching environments, the LLM can effectively focus on the current target. We further introduce a reflection mechanism with a waypoint instruction method to enable the agent to abstract new knowledge from interactions with the environment.
The LLM then combines these past experiences with the spatial information from the cognitive map to perform more informed navigation planning and facilitate continuous learning and adaptation. Cog-GA employs the LLM to fuse perception results and historical information by maintaining temporal, spatial, and descriptive memories in a cognitive map.
Each navigation step optimizes the search space using predicted waypoints, abstracting scene descriptions through dual "what" and "where" channels to emphasize relevant objects and spatial contexts. The system learns from experience and adapts its policy, with a reflection mechanism capturing navigation feedback via the LLM.
Extensive experiments on the VLN-CE dataset confirm that our Cog-GA agent achieves promising performance with psychologically human-like behavioral simulation. This work lays a foundation for developing more intelligent, human-like vision-language navigation agents that can strategically adapt to new environments while leveraging prior knowledge from language models. Our key contributions can be summarized as follows:
* We propose the Cog-GA framework, a generative agent based on large language models (LLMs) for vision-language navigation in continuous environments (VLN-CE), simulating human-like cognitive processes, including cognitive map construction, memory retrieval, and navigation reflection. Experiments demonstrate that Cog-GA achieves a 48% success rate comparable to the state-of-the-art on the VLN-CE dataset.
* We introduce a cognitive map-based memory stream mechanism that stores spatial, temporal, and semantic information, providing contextual knowledge to the LLM to facilitate navigation planning and decision-making.
* We introduce a waypoints predictor and a dual-channel ("what" and "where") scene description approach that optimizes the search space, enabling the LLM to focus on current goals. This method significantly improves the navigation success rate.
§ RELATED WORK
§.§ Vision-Language Navigation
Visual language navigation (VLN) is an emerging interdisciplinary field that has garnered significant attention from the natural language processing, computer vision, and robotics communities. Anderson et al. first proposed the VLN task in 2018, introducing the Room-to-Room (R2R) dataset based on real-world environments <cit.>. Following the release of the R2R dataset, various expansions of VLN tasks emerged, such as the outdoor visual language navigation Touchdown <cit.> and Remote Embodied Visual Referring Expression in Real Indoor Environments (REVERIE) <cit.>. Numerous methods have been proposed to address these tasks. A notable advancement is the Reinforced Cross-Modal Matching (RCM) approach, which has outperformed baseline methods by 10%.
The History Aware Multimodal Transformer (HAMT) has set a new benchmark in long-term navigation, demonstrating the importance of incorporating historical context in navigation tasks <cit.>.
§.§ VLN in Continuous Environments
In traditional VLN tasks, agents navigate through a restricted graph, an unrealistic assumption for real-world navigation robots. Krantz et al. <cit.> extended the discrete R2R VLN task setup to continuous environments, where agents make navigation decisions in freely traversable 3D spaces. The VLN-CE framework introduces new challenges and brings the task closer to real-world navigation scenarios. Zhang et al. <cit.> highlighted the significant impact of visual appearance features on agent performance, underscoring the need for models that generalize better across diverse environments. Similarly, Wang et al. <cit.> introduced the Reinforced Cross-Modal Matching (RCM) approach, achieving notable improvements in navigation performance. Guhur et al. <cit.> demonstrated that in-domain pretraining using the BnB1 dataset significantly enhances generalization to unseen environments.
To address the complexities of the VLN-CE task, Hong et al. <cit.> showed that agents navigating with predicted waypoints perform significantly better, reducing the discrete-to-continuous gap. Wang et al. <cit.> proposed an Environment Representation Graph (ERG) that strengthens the relationship between language and environment, leading to improved performance in VLN-CE tasks. Chen et al. <cit.> introduced DNA, a direction-guided navigator agent that integrates directional cues from instructions into the encoder-decoder framework. However, current VLN-CE methods require massive training to obtain prior knowledge.
§.§ Large Language Models Guided Navigation
Leveraging their powerful information processing capabilities and extensive prior knowledge, large language models (LLMs) have emerged as a novel approach for reasoning in navigation tasks. The VELMA model <cit.> uses LLMs to predict movement directions based on landmark information from language instructions. The Esc model <cit.> considers correlations with target objects at both the object and room levels. For zero-shot object navigation, LFG <cit.> introduced chain-of-thought (CoT) reasoning in LLMs to avoid navigating to irrelevant areas. Cai et al. <cit.> extended the use of CoT by clustering panoramic images into scene nodes and employing CoT to decide whether to explore or exploit, selecting images most likely to contain the target object and navigating accordingly. The Prompt-based Environmental Self-exploration (ProbES) <cit.> significantly enhances LLMs' generalization capabilities in tasks like VLN and REVERIE, indicating a shift towards more adaptive and context-aware LLMs in navigation. Co-NavGPT <cit.> applies LLMs' decision-making processes in multi-robot collaborative navigation, centrally planning midterm goals for each robot based on online map information. However, more cognitive processes in the brain can also be introduced to this process.
§ MATERIAL AND METHODS
We leverage the LLM to stimulate the cognitive process of navigation, including creating the cognitive map, instruction understanding, and the reflection mechanism. By introducing LLM, the VLN agent can obtain tremendous prior knowledge, which enables the agent to process tasks effectively. We construct a graph-based cognitive map as external memory to address the LLM lack of long-term and spatial memory. That allows the LLM-based agent to understand and remember the continuous environment.
§.§ Generative Agent for VLN-CE Tasks
We divide VLN-CE tasks into three phases: generating the search space, high-level target planning, and low-level motion generation. Constructing the search space, a crucial preprocessing step, segments the continuous environment into waypoints, simplifying navigation to point selection and improving efficiency. We use a planner based on large language models (LLMs) for high-level target planning. It selects a waypoint as the next target based on current sub-instructions using spatial memories from the cognitive map of the memory stream. The target waypoint is then sent to the motion generator for action execution.
We propose a generative agent for VLN-CE tasks, including the waypoint predictor, memory stream, instruction processing, high-level planner, and reflection module. The instruction processing module divides the task into shorter-range sub-instructions. The waypoint predictor uses panoramic observations to construct the waypoint search space at each step. A scene describer characterizes the observed scene, dividing descriptions into the "what" stream (landmark objects) and the "where" stream (spatial characteristics). These descriptions adjust the sub-instruction for alignment with the environment, allowing the planner to focus on the current target: cognitive maps and reflection memories from the memory stream guide the high-level planner. The planner uses scene descriptions, memories, and the sub-instruction to compose a prompt for the LLM. The target waypoint index is conveyed to the low-level actuator. During movement execution, a reflection generator assesses the navigation results, constructing feedback to evaluate each navigation impact of step.
§.§ Cognitive Map based Target Inference
Humans and animals create cognitive maps to code, store, and retrieve information about their environments' relative locations and attributes. Introduced by Edward Tolman in 1948 <cit.>, this concept explains how rats learn maze layouts and apply them to humans for navigation and spatial awareness. We use the cognitive map for LLM-based agents and VLN-CE tasks.
A significant challenge for LLMs is the lack of long-term and spatial memory, making external memory crucial <cit.>. We address this by introducing a graph-based cognitive map as external memory, which builds and stores spatial memory to help the LLM understand and remember the environment.
The cognitive map starts as an undirected graph 𝒢(ℰ, 𝒩) with nodes 𝒩_p for traversed spaces and 𝒩_o for observed objects. 𝒩_o nodes connect to their corresponding 𝒩_p nodes with 1-weight edges ℰ_p,o. Connections between 𝒩_p nodes (ℰ_p) are weighted to represent distance and angle between waypoints, ranging from 0.25 to 3 for distance and 1 to 8 for direction. Each 𝒩_p node also has a time step label t. The cognitive map graph is represented as:
𝒢({ℰ_p,o, ℰ_p}, {𝒩_p, 𝒩_o})
Retrieving the cognitive map is crucial for target inference. As shown in Figure <ref>, we define two retrieval methods: the history and observation chains. The history chain focuses primarily on navigated nodes, providing planners an abstract view of the current path. In contrast, the observation chain focuses on potential targets between the current and previous positions, offering a broader view of past decisions.
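A minimal sketch of the cognitive map and its two retrieval chains, assuming a NetworkX graph as the underlying data structure, is given below; the attribute names and the length of the history chain are illustrative assumptions.

```python
import networkx as nx

class CognitiveMap:
    """Graph-based spatial memory: waypoint nodes N_p and object nodes N_o."""
    def __init__(self):
        self.g = nx.Graph()
        self.history = []                      # visited waypoint ids in temporal order

    def add_waypoint(self, wp_id, t, objects, prev_wp=None, distance=None, direction=None):
        self.g.add_node(wp_id, kind="waypoint", t=t)
        self.history.append(wp_id)
        for obj in objects:                    # object nodes attach with 1-weight edges
            self.g.add_node((wp_id, obj), kind="object", name=obj)
            self.g.add_edge(wp_id, (wp_id, obj), weight=1)
        if prev_wp is not None:                # waypoint-waypoint edges carry distance/angle bins
            self.g.add_edge(prev_wp, wp_id, distance=distance, direction=direction)

    def history_chain(self, k=5):
        """Abstract view of the current path: the last k navigated waypoints."""
        return self.history[-k:]

    def observation_chain(self):
        """Objects observed around the current and previous waypoints."""
        recent = self.history[-2:]
        return [self.g.nodes[n]["name"] for wp in recent for n in self.g.neighbors(wp)
                if self.g.nodes[n]["kind"] == "object"]
```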
Figure <ref> outlines the target inference process. After the waypoint predictor segments the panorama, the LLama-based scene describer processes the waypoint image into 'where' and 'what' related words. These waypoints update the cognitive map in the memory stream. The history chain, environment descriptions, reflection memory, and sub-instruction form a unified prompt input to the LLM-based planner. The planner outputs the target waypoint index, which is stored in the memory stream for the cognitive map. The actuator then extracts distance and angle information for the agent's action.
§.§ Instruction Rationalization based Instruction Processor
For VLN-CE agents, handling instructions is nontrivial. Using unprocessed instructions directly confuses the planner, causing it to perform meaningless actions. To solve this, we propose an instruction rationalization mechanism. We break the instruction into several sub-instructions using LLMs to guide the agent. However, the original sub-instructions often lack context. For example, the sub-instruction "Exit the living room." might confuse the agent about its current target, causing it to repeat routes. Therefore, we include current environment information and the unprocessed instruction to adjust the sub-instruction.
This process can be expressed as
I_i,1 = R(I_i,0|D, ℐ) → ... → I_i,n = R(I_i,n-1|D, ℐ)
where I_i,0 is the original sub-instruction, and ℐ is the unprocessed instruction.
As the agent moves through the environment, sub-instructions are continuously rationalized. If a sub-instruction is completed, the agent moves to the next one until all sub-instructions are finished. For example, the rationalized sub-instruction "Find the door of the living room and look for the sign to the kitchen" is more effective for the agent. At the start of the navigation task, the agent breaks down the natural language instruction into multiple sub-instructions. Each sub-instruction is updated based on observations at each time step as the agent moves, a process we call instruction rationalization.
Detailed discussions of instruction rationalization will be provided in the appendix.
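A minimal sketch of one rationalization step is given below; `llm` stands for a hypothetical text-completion callable and the prompt wording is illustrative, not the exact template used by Cog-GA.

```python
def rationalize(sub_instruction, scene_description, full_instruction, llm):
    """One instruction-rationalization step, I_{i,n} = R(I_{i,n-1} | D, I).
    `llm` is a hypothetical text-completion callable."""
    prompt = (
        f"Full instruction: {full_instruction}\n"
        f"Current sub-instruction: {sub_instruction}\n"
        f"Current observation: {scene_description}\n"
        "Rewrite the sub-instruction so it names the concrete target "
        "('where' room or 'what' object) visible or expected from here."
    )
    return llm(prompt)

# per navigation step, the sub-instruction is re-grounded in the latest observation:
# sub_instr = rationalize(sub_instr, describe(waypoint_images), instruction, llm)
```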
§.§ Generative Agent with Reflection Mechanism
The concept of a generative agent, which blends AI with human-like simulation, represents a significant advancement. These agents mimic human behaviors based on interactions with the environment and past experiences. VLN-CE closely mimics real-world navigation, making it an ideal application for simulating psychological processes during human navigation.
While navigating, the agent receives panoramic waypoint inputs as observations. The scene describer provides a structural description of each waypoint to the planner, along with the history chain from the cognitive map (section <ref>) and reflection memories from the memory stream. The planner uses this information to determine the next navigation waypoint based on the sub-instruction.
After the planner identifies the target, the angle and distance to the waypoint are sent to the low-level actuator to move the agent. The agent then reflects on its movements to gather valuable experiences for future tasks. This reflection helps the agent understand why it succeeded or failed, gaining general knowledge about the environment. However, LLMs can be overwhelmed by too many experiences, disrupting their decision-making. To address this issue, we introduce a forgetting mechanism and an evaluation metric for reflection memory that manage the stored information. The agent calculates the score of each reflection memory, and the bottom 10% of reflections are eliminated.
§ EXPERIMENTS
To verify the performance of our agent, we deployed our method in VLN-CE environments. This section outlines our experimental setup and implementation details and compares our performance against standard VLN-CE methods. We also highlight several notable features of LLM agents that could inform future research directions. Finally, we assess the impacts of our core methods and provide visual analyses.
§.§ Experimental Setup
We conducted experiments on the VLN-CE dataset <cit.>, which includes 90 Matterport3D <cit.> scenes. Due to the extended response time of LLaMA, we randomly selected 200 tasks in unseen validation environments for our experiments. Following the methodologies of <cit.>, we used five evaluation metrics <cit.>: Navigation Error (NE), Trajectory Length (TL), Success Rate (SR), Oracle Success Rate (OSR), and Success Rate weighted by Path Length (SPL), with SR being the primary metric.
§.§ Implementation Details
We utilize Vicuna-7b <cit.> as the scene describer to align visual modality information with natural language information. For path planning, considering the balance between performance and response time, we adopted GPT-3.5. As used in <cit.>, the Waypoint Predictor is employed with a candidate waypoint number set to 7. Our experiment is implemented in PyTorch, utilizing the Habitat simulator <cit.>, LangChain, and trained on two NVIDIA RTX 4090 GPUs.
§.§ Comparison with Previous VLN-CE Methods
In line with previous research, we compare our agent with five previously published VLN-CE methods: Waypoint <cit.>, CMA <cit.>, BridgingGap <cit.>, LAW <cit.>, and Sim2Sim <cit.>. All experiments were conducted using the same setup. The results are presented in Table <ref>. Our Cog-GA demonstrated a notable advantage in Success Rate (SR) and Oracle Success Rate (OSR), indicating that the LLM-based agent performs better and effectively transfers its prior knowledge. However, it is essential to note that the trajectory length is significantly higher than other methods. That is attributed to the agent's conservative stopping mechanism, which prefers to get as close to the target point as possible.
§.§ Ablation Experiments
To verify the effectiveness of each component of our method, we conducted ablation experiments based on the validation setup in unseen environments. These experiments focused on the influence of each element on Trajectory Length (TL), Success Rate (SR), and Oracle Success Rate (OSR). Specifically, we examined the reflection mechanism, the instruction rationalization mechanism, and the cognitive map. The results of the ablation experiments are presented in Table <ref>.
The results demonstrate that the instruction rationalization mechanism and the cognitive map significantly influence the agent's performance, while the reflection mechanism has a relatively lower impact. However, all components contribute to the overall effectiveness of the agent. The reflection mechanism, in particular, is primarily used for experience accumulation, suggesting that its importance will grow over the long term as more reflective memory is accumulated.
§ CONCLUSION
In this paper, we introduce a generative agent for VLN-CE that demonstrates the powerful representational ability of natural language. By mimicking human navigation processes, the agent achieves strong performance. Simulating the brain's navigation processes brings an advantage for VLN-CE tasks. The cognitive map-based external memory enables the LLM agent to memorize spatial information. However, communication speed with LLMs is a significant hurdle when using these agents in robotic systems. Future efforts will aim to create a more efficient, high-performing generative agent and improve multimodal large models for vision-language navigation.
§ ACKNOWLEDGEMENTS
This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under (Grants XDA0450200, XDA0450202), Beijing Natural Science Foundation (Grant L211023), and National Natural Science Foundation of China (Grants 91948303, 61627808)
§ ALGORITHM PSEUDOCODE
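The pseudocode itself did not survive extraction; the following Python-style sketch reconstructs the navigation loop from the method section. It is not runnable as-is: every function other than Python built-ins is a placeholder standing in for a module described in the main text.

```python
# High-level sketch of one Cog-GA episode (placeholders, not a runnable implementation).
def cog_ga_episode(instruction, env, llm, max_steps=20):
    sub_instrs = split_into_sub_instructions(instruction, llm)        # instruction processor
    cog_map, reflections = CognitiveMap(), []                         # memory stream
    k, done = 0, False
    for t in range(max_steps):
        if done:
            break
        panorama = env.observe()
        waypoints = waypoint_predictor(panorama)                       # search space
        descs = [describe_scene(w) for w in waypoints]                 # 'what' / 'where' streams
        sub_instrs[k] = rationalize(sub_instrs[k], descs, instruction, llm)
        prompt = build_prompt(sub_instrs[k], descs,
                              cog_map.history_chain(), retrieve(reflections))
        target = llm(prompt)                                           # index of next waypoint
        done = env.move_to(waypoints[target])                          # low-level actuator
        cog_map.add_waypoint(target, t, descs[target])                 # update spatial memory
        reflections.append(reflect(llm, sub_instrs[k], descs, target)) # navigation feedback
        if sub_goal_reached(sub_instrs[k], descs, llm):                # advance to next sub-goal
            k = min(k + 1, len(sub_instrs) - 1)
```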
§ TASK SETUP
In Vision-Language Navigation in Continuous Environments (VLN-CE) <cit.>, agents must navigate through unseen 3D environments to specific target positions based on language instructions. These environments are considered as continuous open spaces. The agent selects a low-level action from an action sequence library at each step, given the instruction ℐ and a 360^∘ panoramic RGB-D observation 𝒴. Navigation is successful only if the agent selects a stop within 3 meters of the target location.
Recent VLN-CE solutions <cit.> have adopted a high-level waypoint search space approach. During navigation, the agent utilizes a Waypoint Predictor to generate a heatmap covering 120 angles and 12 distances, highlighting navigable waypoints. Each angle increment is 3 degrees, and the distances range from 0.25 meters to 3.00 meters, with 0.25-meter intervals corresponding to the turning angle and forward step size in the low-level action space. This approach translates the problem of inferring low-level controls into selecting an appropriate waypoint.
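A minimal NumPy sketch of turning the 120x12 waypoint heatmap into candidate (heading, distance) commands is given below; the top-k selection is an illustrative assumption, while the bin sizes follow the text.

```python
import numpy as np

def select_waypoints(heatmap, k=7):
    """heatmap: (120, 12) scores over 3-degree heading bins and 0.25 m range bins.
    Returns the top-k candidate waypoints as (heading_deg, distance_m) tuples."""
    flat = heatmap.ravel()
    top = np.argsort(flat)[::-1][:k]
    angle_idx, dist_idx = np.unravel_index(top, heatmap.shape)
    headings = angle_idx * 3.0                 # 120 bins x 3 deg = 360 deg
    distances = 0.25 * (dist_idx + 1)          # 12 bins from 0.25 m to 3.00 m
    return list(zip(headings, distances))

candidates = select_waypoints(np.random.rand(120, 12))
```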
§ OPTIMAL PROMPT MECHANISM FOR LLMS IN NAVIGATION TASKS
During the development of the agent, we observed several intriguing features. The structural context is crucial for navigation tasks. To direct the LLM's focus toward navigation-related information, we categorized the information into three distinct types: objects, room types, and directions. Consequently, the context should be structured in the format 'Go (direction), Is (room type), See (objects).' Maintaining concise context is essential, as complex and miscellaneous contexts can disrupt the LLM's performance.
For waypoint selection, the clarity of surrounding environment descriptions also plays a significant role. We format the environment descriptions as 'In (direction), See (objects), Is (room type).' Clear and concise information reduces unnecessary processing burdens for LLMs and minimizes the risk of irrational outputs due to redundant input. However, a fully structured prompt alone is insufficient for VLN-CE tasks. Original instructions often involve multiple steps, and structural division of sub-instructions can lead to information loss and misdirection. Therefore, we introduced a guidance mechanism to provide structural information for the current target while supplementing sub-instructions. As detailed in section <ref>, we divided sub-targets into 'where' and 'what.' The 'where' targets involve switching environments based on room type, and the 'what' targets involve finding specific objects in the current environment. Thus, we constructed the structural guidance as 'You should try to go (where)' and 'You should try to find (what).' This guidance updates simultaneously with the rationalized sub-instruction to ensure coherence.
§ THE INFLUENCE OF INSTRUCTION QUALITY AND CONSTRUCTING BETTER SUB-INSTRUCTIONS
Our experiments highlighted the critical importance of instruction quality on navigation outcomes. This section analyzes how to enhance instruction quality during navigation and explores its implications for future work. As described in section <ref>, splitting instructions into multiple steps and continuously rationalizing each step has proven effective. For example, the rationalized sub-instruction 'Find the living room door and look for a sign to the kitchen.' yielded better results than the unprocessed sub-instruction 'Exit the living room.'. This finding reveals an interesting phenomenon: for an agent performing a task, the sequence of separated steps should maintain instructional coherence and constantly adapt the description of the target to the practical environment. Similar phenomena may also occur in human cognitive processes. Furthermore, performance improvements observed before and after splitting the original instruction demonstrate that LLMs have limited capacity to process long-term descriptive instructions. Long-term instructions can easily confuse the LLM by presenting multiple potential targets.
§ THE EVALUATION METRIC OF REFLECTION MEMORY
We define three parameters for each reflection memory: optimal distance, proximity, and repeatability. Optimal distance is the dynamic time warping (DTW) between the current and ground-truth navigation sequences. Repeatability counts how often similar memories occur, and proximity is the time between the memory and the current step. If a new memory is identical to an existing one, it won't be stored, and the current memory's proximity and repeatability will be updated. The score of each reflection memory is defined as follows:
Score_m = |d_m - δ|/δ + t_m/T + r_m/max_{r_n ∈ R} r_n
where d_m is the optimal distance, δ is the threshold parameter of the optimal distance, t_m is proximity, T is the current time step, r_m is repeatability, and R is the set of repeatability of reflection memories. The forgetting process will eliminate reflection memories with scores in the bottom 10%.
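A minimal NumPy sketch of this score and of the forgetting step is given below; the handling of ties at the 10% cutoff is an illustrative assumption.

```python
import numpy as np

def reflection_scores(d, t, r, delta, T):
    """d, t, r: arrays of optimal distance (DTW), proximity, and repeatability
    for each stored reflection; delta: distance threshold; T: current time step."""
    d, t, r = map(np.asarray, (d, t, r))
    return np.abs(d - delta) / delta + t / T + r / r.max()

def forget(memories, scores, drop_ratio=0.1):
    """Eliminate the reflections whose score falls in the bottom 10%."""
    cutoff = np.quantile(scores, drop_ratio)
    return [m for m, s in zip(memories, scores) if s >= cutoff]
```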
§ VISION-LANGUAGE NAVIGATION TASK SAMPLE
For the following figures, the left part is the first view image chosen by the agent, and the right part is the map for the task environment. The 'Action,' 'In,' and 'See' are the linguistic observations and movements for the first view image chosen by the agent.
TASK INSTRUCTION: Exit the living room and turn right into the kitchen. Turn left at the end of the counter and wait in the room across the hallway slightly to the left.
Step: 1 Action: go Left Front for 0.75 meters
In: living room
See: ['sofa', ' picture frame', ' lamp', ' bookshelf', ' window', ' clock', ' rug', ' painting', ' curtains', ' table']
Step: 2 Action: go Left Front for 1.25 meters
In: living room
See: ['grey wall', ' white couch', ' table', ' flat screen tv', ' white desk', ' window', ' fireplace', ' carpet', ' black tiles', ' lamp', ' mirror', ' plants']
Step: 3 Action: go Left Front for 2.5 meters
In: living room
See: ['couch', ' chairs', ' window', ' mirror', ' floor', ' walls', ' table', ' chairs', ' window', ' table']
Step: 4 Action: go Front for 1.75 meters
In: living room
See: ['mirror', ' furniture', ' living room', ' spacious', ' white tiles', ' large mirror']
Step: 5 Action: go Right Rear for 1.75 meters
In: living room
See: ['sofa', ' chandelier', ' fireplace', ' bookshelf', ' painting', ' rug', ' coffee table', ' curtains', ' lamp', ' mirror']
Step: 6 Action: go Right Front for 1.5 meters
In: kitchen
See: ['marble floor', ' blue tiles', ' grey walls', ' white refrigerator', ' silver handle', ' black stove', ' silver microwave', ' silver cabinet doors', ' white handles', ' white dishwasher', ' large mirror', ' silver frame', ' window']
Step: 7 Action: go Left Rear for 2.0 meters
In: living room
See: ['couch', ' coffee table', ' windows', ' wall', ' pictures', ' sink', ' counter', ' cabinets', ' floor', ' rug']
Step: 8 Action: go Front for 1.25 meters
In: kitchen
See: ['clock', ' hallway', ' kitchen', ' tiles', ' floor', ' curtains', ' walls', ' countertop', ' appliances', ' stove']
Step: 9 Action: go Front for 2.0 meters
In: kitchen
See: ['kitchen', ' island counter', ' appliances', ' white walls', ' floor tiles', ' living area']
Step: 10 Action: go Right Front for 1.0 meters
In: living room
See: ['sofa', ' door', ' window', ' table', ' chair', ' lamp', ' bookshelf', ' painting', ' rug', ' clock']
Step: 11 Action: go Right Rear for 2.0 meters
In: kitchen
See: ['kitchen', ' center island, ' counter top', ' stools', ' refrigerator', ' oven', ' dishwasher', ' sink', ' table', ' lamp']
Step: 12 Action: go Behind for 1.75 meters
In: living room
See: ['cabinets', ' countertops', ' windows', ' sofa', ' coffee table', ' dining table']
Step: 13 Action: go Front for 1.5 meters
In: kitchen
See: ['windows', ' door', ' counter', ' mirror', ' sink', ' refrigerator', ' chairs', ' coffee table', ' bright', ' spacious']
Step: 14 Action: go Right Side for 1.75 meters
In: kitchen
See: ['kitchen island', ' marble countertops', ' induction cooktop', ' chairs', ' paintings', ' walls', ' flooring', ' tile']
Step: 15 Action: go Behind for 2.25 meters
In: living room
See: ['sofa', ' door', ' window', ' table', ' chair', ' lamp', ' painting', ' bookshelf', ' rug', ' clock']
Step: 16 Action: go Right Front for 2.0 meters
In: living room
See: ['sofa', ' door', ' window', ' table', ' chair', ' lamp', ' bookshelf', ' painting', ' rug', ' clock']
Step: 17 Action: go Right Rear for 2.0 meters
In: kitchen
See: ['kitchen counters', ' sink', ' stools', ' refrigerator', ' oven', ' stove', ' curtains', ' tiles', ' dining table', ' chairs']
Step: 18 Action: go Behind for 2.25 meters
In: kitchen
See: ['kitchen', ' counter top', ' subway tiles', ' island', ' stools', ' fridge', ' oven', ' microwave', ' dishwasher', ' cabinets']
Step: 19 Action: go Front for 1.25 meters
In: living room
See: ['kitchen', ' dining area', ' living area', ' sofa', ' door', ' window', ' table', ' chair', ' stove', ' refrigerator']
Step: 20 Action: go Right Rear for 2.25 meters
In: living room
See: ['sofa', ' door', ' window', ' table', ' chair', ' lamp', ' bookshelf', ' painting', ' rug', ' clock']
|
http://arxiv.org/abs/2409.02676v1 | 20240904130640 | Improved Single Camera BEV Perception Using Multi-Camera Training | ["Daniel Busch", "Ido Freeman", "Richard Meyes", "Tobias Meisen"] | cs.CV | ["cs.CV"] |
§ ABSTRACT
BEV map prediction is essential for downstream autonomous driving tasks like trajectory prediction. In the past, this was accomplished through the use of a sophisticated sensor configuration that captured a surround view from multiple cameras. However, in large-scale production, cost efficiency is an optimization goal, so that using fewer cameras becomes more relevant. Fewer input images, however, come with a performance drop. This raises the problem of developing a BEV perception model that provides a sufficient performance on a low-cost sensor setup.
Although primarily relevant at inference time on production cars, this cost restriction is less problematic on a test vehicle during training. Therefore, the objective of our approach is to reduce the aforementioned performance drop as much as possible using a modern multi-camera surround view model reduced for single-camera inference.
The approach includes three features: a modern masking technique, a cyclic LR schedule, and a feature reconstruction loss for supervising the transition from six-camera inputs to one-camera input during training.
Our method outperforms versions trained strictly with one camera or strictly with six-camera surround view for single-camera inference, resulting in reduced hallucination and better quality of the BEV map.
Single Camera BEV Perception, Masking Method, Vision Transformers
§ INTRODUCTION
BEV map prediction delivers easily interpretable traffic scene information. It implicitly includes objects and their positions in world coordinates. Many modern methods can extract the needed semantic information and predict the BEV, e.g. <cit.>. With the use of such state-of-the-art methods, it is now feasible to generate full scenes from just a few seconds of recorded footage captured by a sophisticated camera setup. However, a problem with these methods for such environmental perception is their need for multiple cameras to cover a 360-degree surround view during training and inference. Some even require additional sensors like radar or lidar <cit.>. On the other hand, methods using only a single front camera come with a significant drop in quality. For example, in <cit.> a Pseudo-LiDAR model is developed that loses performance along with two benchmark models due to the reduction from stereo to single camera. Moreover, in <cit.> several different approaches were compared on the nuScenes dataset <cit.>, with a single camera method performing second worst. This is understandable up to a certain extent, as they receive less input information. Apart from highly equipped research vehicles, the bulk of production vehicles just have a front camera. Even though some low-volume premium vehicles already have more cameras, adding a comparably low-priced camera will have a large financial impact on higher production volumes. Accordingly, bringing single-camera models as close as possible to the performance of a modern surround-view model is beneficial for mass-production vehicles. As stated in <cit.>, a multi-camera setup is needed for sufficient perception of the whole scene. This also underlines the performance drop by the reduction just from stereo to single camera input reported in <cit.>.
This paper presents a method to reduce the performance drop between training with a full environment view using a multi-camera setup and inference that can be performed with only one camera. The method intelligently reduces the information of the multi-camera setup during the training phase. More precisely, it combines the advantages of BEVFormer <cit.> as a modern surround view model, with a single front camera limitation during inference. In this way, our trained model benefits from the different camera angles of the surround view and handles aspects such as object shadows and occlusion more robustly. To do that, we present the following three contributions: First, we utilized a state-of-the-art masking technique known as inverse block masking <cit.> from a modern self-monitoring approach. The ratio of this masking is stepwise increased over the training epochs. The increase ends at the limit of the single front view. Additionally, we ignore GT bounding boxes in the loss computation if their corresponding input images are completely masked. Secondly, a cyclic LR schedule is introduced to align with the masking method. Due to the different masking ratios, the input data distribution changes. Therefore, the LR is aligned to enable the model to transition between the changing data distributions. Lastly, the full sample containing all six camera inputs is used to supervise the masked sample. To achieve this, we introduce a BEV feature reconstruction loss that is targeted at the performance of the surround view BEVFormer model. Combining these features, we propose our final training method that increases the performance of the BEVFormer for single-camera inference. Compared to a single camera training, the mIoU of our model has increased by 19% and the mAP by 414%. These numbers reflect a better quality in the BEV map and a drastic decrease in the number of false positive detections, since the baseline was trained on objects that lie outside the single camera's view.
§ RELATED WORK
§.§ Inputs for single camera BEV models
Depending on the point of view, reducing input information of a surround-view model or adding input information to a single camera model leads to the same approach. Utilizing additional inputs from other cameras, other time steps or even other sensor types for better performance is not new for BEV prediction models <cit.>. The method in <cit.> from the robotics domain performs a camera rotation to get a surround-view input instead of utilizing multiple cameras. Moreover, in <cit.> an optional dynamics module can exploit additional temporal information by using the same sensor setup. BEV-MODNet <cit.> exploits two sequential images to improve the 3D detection of moving objects. Besides the utilization of temporal information, the models presented in <cit.> show an increase in performance from mono to stereo camera training for 3D object detection. In <cit.>, they explain the need for a full surround view to perceive a whole traffic scene and provide a method that fuses the BEV feature maps from different camera views. In this way, it extended to a full surround view model. However, even though the previous methods benefit from their extended sensor inputs, the setups stay the same for training and inference. In contrast, in LPCG <cit.>, more inputs are used during training than on inference by introducing a lidar sensor for label guidance. Thus, it benefits from the lidar data but still just needs the single camera setup for inference.
§.§ Inputs for multi camera BEV models
Instead of reducing inputs in multi-view BEV perception models, extending inputs for better performance is often done following the same principle of additional training input: In <cit.> and <cit.> long-term temporal fusion strategies are developed to extract more information from past frames. In BEVStereo <cit.>, a combination of mono and temporal stereo depth estimation is used as an iterative optimization process. In addition, the authors utilize lidar data during training. Lidar is also used in BEV-LGKD <cit.>, a knowledge distillation framework that is extended by lidar guidance for better performance. Furthermore, in BEVDepth lidar is applied for GT data <cit.>. The PETRv2 <cit.> model extends the base PETR <cit.> model by a history input. Moreover, the time horizon differs for training and inference. During training time, it is sampled flexibly from between 3 and 27 full lidar rotations in the past whereas on inference a sample of 15 rotations in the past is selected. Thus, the model has a greater variety of time horizons and time steps which makes the model more robust for different vehicle speeds.
The purely camera-based BEVFormer <cit.> similarly exploits past frames with its temporal self-attention. In addition, the input is extended by an extra time step during training. In total, it uses three random samples from a two-second time horizon, whereas during inference, this is reduced to two consecutive samples.
The above-mentioned methods like <cit.> are still considered full surround view methods, but with additional inputs in the form of time steps or lidar inputs that were not considered during inference.
§ METHOD
Our approach is based on the modern BEVFormer <cit.> for predicting a BEV map, which we combine with a ResNet50 <cit.> backbone. To reduce the BEVFormer from a surround view to a single camera inference we combined three approaches:
* Firstly, we implement the inverse block masking <cit.>.
* Secondly, we adapt the cyclic LR schedule in response to the change in the input data distribution due to different masking ratios.
* Lastly, we introduce a loss called BEV feature reconstruction loss to rate how well the BEV features are reconstructed out of partially masked image parts.
§.§ Model Architecture
The BEVFormer architecture is visualized in <ref>. It uses two deformable attention mechanisms based on deformable DETR <cit.>, named spatial cross-attention and temporal self-attention <cit.>. Grid-shaped BEV queries are expanded into the vertical dimension by uniformly distributed reference points. These are projected into the 2D image feature maps that are predicted by the CNN backbone. The spatial cross-attention takes place only in the 2D image feature maps into which the point is reprojected and the features are sampled around their corresponding reference point. The temporal attention exploits the history BEV features by first aligning them with the current time step to compensate for object motions. Then the self-attention takes place. In total, it has three transformer layers, which corresponds to a mid-size version provided by <cit.>. This version is chosen to reduce the time and computational effort. Afterwards, two heads are added, one detection head responsible for the 3D bounding box prediction and one segmentation head for the BEV segmentation of lane markings.
§.§ Approach
§.§.§ Masking Methods
The first part of our algorithm relies on the stepwise reduction of usable camera input using the inverse block masking method <cit.>. Since we limit ourselves to the front camera, the masking is applied only to the five non-front-facing cameras. The step height and step width of the schedule are balanced such that the input information is reduced only by a small portion (20%) at a time and the network is trained for four epochs before the masking ratio is increased further. Thus, the network can use these four epochs to learn to handle the set ratio of missing information by attending to hints from the visible portions. Using masks for this purpose is a common practice in self-supervised learning methods, as discussed for example in <cit.>. The graph of the mean masking ratio is visualized in <ref>. To give the masking method more variety during training, the masking ratio is sampled from a Gaussian distribution with a fixed mean (μ) for every reduction step. A masked input sample with a ratio of μ=0.4 is shown in <ref>. The inverse block masking was originally designed to mask images while leaving rectangular contiguous regions visible, providing enough context for a reconstruction of the masked parts. In this way, the model can learn to predict features in hidden regions based on reliable data from visible regions.
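The sketch below illustrates the masking pattern just described, assuming image dimensions divisible by the block size; the block size, the contiguous placement of the visible blocks and the clipping of the sampled ratio are simplifications of the original inverse block masking and only meant to convey the idea.

```python
import torch

def inverse_block_mask(img_hw, mean_ratio, std_ratio=0.2, block_hw=(64, 64)):
    """Binary mask (1 = visible) that keeps contiguous blocks visible and hides the rest.

    The overall masking ratio is drawn from N(mean_ratio, std_ratio) and clipped
    to [0, 1]; a ratio of 1 corresponds to a completely blind camera view.
    """
    H, W = img_hw
    bh, bw = block_hw
    ratio = float(torch.clamp(torch.randn(1) * std_ratio + mean_ratio, 0.0, 1.0))
    gh, gw = H // bh, W // bw                       # grid of candidate blocks
    n_blocks = gh * gw
    n_visible = int(round((1.0 - ratio) * n_blocks))
    mask = torch.zeros(n_blocks)
    if n_visible > 0:
        # keep a contiguous run of blocks visible so enough context remains
        start = int(torch.randint(0, n_blocks - n_visible + 1, (1,)))
        mask[start:start + n_visible] = 1.0
    mask = mask.view(gh, gw)
    # upsample the block grid back to pixel resolution
    mask = mask.repeat_interleave(bh, dim=0).repeat_interleave(bw, dim=1)
    return mask          # multiply a non-front camera image by this mask
```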
Additionally, a GT bounding box filter is implemented. It filters the GT boxes by the camera view angle to force the model to completely neglect the blind views produced by the masking method. The GT filtering is used during training in the last epochs, where the model only receives the front-view input. There, the GT bounding boxes of all completely blind camera views are removed, keeping only those of the visible front view. In this context, the front view angle is extended on both sides by a tolerance angle. This tolerance area lies just outside the field of view, so history information could still be meaningful there; boxes in it are kept as long as the performance metrics do not drop significantly due to the angle extension.
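A minimal sketch of such a filter is given below; it assumes box centres in the nuScenes ego frame (x pointing forward, y to the left) and reduces the filtering to a simple azimuth test, while the angle value and the exact criterion are placeholders.

```python
import numpy as np

def filter_gt_boxes_by_view(gt_centers, visible_half_angle_deg=45.0):
    """Keep only GT boxes whose centre lies inside the (extended) front field of view.

    gt_centers: (N, 3) box centres in ego coordinates, x forward, y left.
    visible_half_angle_deg: half opening angle of the kept sector, e.g. half the
    camera aperture plus the tolerance angle discussed above.
    """
    azimuth = np.degrees(np.arctan2(gt_centers[:, 1], gt_centers[:, 0]))
    return np.abs(azimuth) <= visible_half_angle_deg
```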
§.§.§ LR Schedule
The second feature of our approach deals with the adjustment of the LR. As described in <cit.>, the LR is a crucial hyper-parameter: a poorly chosen value can slow down the training or even cause the loss to diverge. The BEVFormer uses a cosine annealing LR scheme, which does not take a change of the data distribution during training into account. Therefore, we align the LR with the stepwise increasing masking ratio using the cyclic LR scheme depicted in <ref>. The idea is that at the beginning of every cycle the LR is large enough to give the network the chance to react to the new data distribution. During the cycle, the LR is slowly decreased for tuning. During the last epochs at 100% masking ratio, the LR is further reduced to small values for fine-tuning.
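The following function sketches such a schedule; the concrete learning-rate values, the cycle length of four epochs and the linear decay within a cycle are illustrative assumptions rather than the exact schedule used in our experiments.

```python
def cyclic_lr(epoch, epochs_per_cycle=4, lr_max=2e-4, lr_min=2e-5,
              fine_tune_start=20, lr_fine=1e-5):
    """Cyclic LR aligned with the stepwise masking schedule (illustrative values).

    At the start of every masking cycle the LR jumps back to lr_max so the network
    can adapt to the new input distribution, then decays within the cycle; once the
    masking ratio has reached 100% the LR is dropped to a small constant for
    fine-tuning.
    """
    if epoch >= fine_tune_start:
        return lr_fine
    phase = (epoch % epochs_per_cycle) / max(epochs_per_cycle - 1, 1)
    return lr_max - phase * (lr_max - lr_min)
```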
§.§.§ Reconstruction Loss
The third feature of our approach introduces a BEV feature reconstruction loss which considers the masked input modified by <ref> as a second sample. The procedure is visualized in <ref>.
Each training sample is fed to the network twice. In the first step it is used without any masking and the BEV features are kept in memory. The sample is then fed to the network again, now with the mask applied. After the second step, the BEV feature reconstruction loss is computed as an L2 loss which is used for a similar purpose in <cit.>. It is computed between the features obtained with and without masking, constraining the features from masked inputs to be close to the ones from the original input.
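A condensed training-step sketch of this procedure is shown below. The model interface (returning the BEV features together with the task predictions, and exposing a combined detection/segmentation loss) and the choice to detach the unmasked features from the gradient computation are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, mask_fn, lambda_rec=1.0):
    """One training step with the BEV feature reconstruction loss.

    The sample is passed through the network twice: once unmasked (the BEV features
    are kept as reconstruction target) and once with the non-front cameras masked.
    An L2 penalty pulls the masked BEV features towards the unmasked ones, on top
    of the usual detection and segmentation losses of the masked pass.
    """
    with torch.no_grad():
        bev_clean, _ = model(batch["imgs"])            # first, unmasked pass
    bev_masked, preds = model(mask_fn(batch["imgs"]))  # second, masked pass
    task_loss = model.loss(preds, batch["targets"])    # detection + segmentation
    rec_loss = F.mse_loss(bev_masked, bev_clean)       # BEV feature reconstruction
    return task_loss + lambda_rec * rec_loss
```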
§.§ Dataset
The features are trained and tested on the public nuScenes dataset <cit.>. It contains 1000 traffic scenes of 20 s in length. The recording vehicles were equipped with one lidar, five radars and a six-camera surround view. It has annotations for 23 object classes as well as HD maps of the road layout around the ego-vehicle <cit.>. The nuScenes developers have defined several validation metrics. To quantify detection quality, they compute the mean average precision (mAP), which is averaged over all classes and uses the BEV bounding box center distance for the matching thresholds. Furthermore, five true-positive (TP) error scores are defined: the average translation (ATE), scale (ASE), orientation (AOE), velocity (AVE) and attribute (AAE) errors. The nuScenes detection score (NDS) takes all previous metrics into account in the following way: NDS = 1/10 [5 mAP + ∑_mTP∈𝕋ℙ (1 - min(1, mTP))] <cit.>. Thus, the mAP is weighted with 50% against the true-positive scores. Lastly, the mean Intersection over Union (mIoU) is used to rate the BEV map segmentation. Each metric is computed both as a mean over all classes and individually.
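For reference, the NDS formula above can be evaluated directly as in the short sketch below (the metric values in the usage comment are placeholders).

```python
def nuscenes_detection_score(mAP, tp_errors):
    """NDS from the mAP and the five TP error metrics (mATE, mASE, mAOE, mAVE, mAAE)."""
    tp_scores = [1.0 - min(1.0, err) for err in tp_errors]
    return (5.0 * mAP + sum(tp_scores)) / 10.0

# example: nuscenes_detection_score(0.30, [0.7, 0.3, 0.5, 0.9, 0.2]) -> 0.39
```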
§.§ Training and Experimental Setup
A ResNet50 <cit.> backbone is used and pre-trained on the ImageNet dataset <cit.>. It is chosen as a trade-off between training time and quality. The model is trained on one A100 GPU for 30 epochs. The implementations are based on BEVFormer as published in <cit.>.
Our experiments can be divided into three main sections. Firstly, we evaluate the reduction in false-positive detections for masked image regions achieved by filtering GT bounding boxes. To isolate the effect of the GT bounding box filter, the model is trained on the front camera only, once with the GT filter and once without. In this case, the evaluation considers all GT boxes of the whole 360-degree view in order to also capture false-positive detections in the camera views that are masked. Secondly, the combination of all three approaches is compared against two baselines: one trained with a single front camera and one trained with the full surround view. Lastly, a detailed ablation study is done to isolate and compare each approach. For all runs with the inverse block masking technique, the standard deviation of the masking ratio is set to σ=0.2, except for the first and last cycle where it is set to σ=0. The mean (μ) is increased stepwise in 20% steps as described in <ref>.
§.§ Validation
To focus on the actual effect of our approach, the GT bounding boxes are only considered within a 90-degree opening angle facing in the driving direction. The camera has an aperture angle of 64.5 degrees, leaving a tolerance angle of 12.75 degrees on each side. In this area, temporal attention could still deliver meaningful output from the history BEV features. Therefore, GT bounding box filtering is performed everywhere outside the 90-degree front-facing field of view. For comparability, this field of view is kept consistent for all approaches and baselines.
§ RESULTS
§.§ GT bounding box filter
To suppress false-positive detections, we apply the GT bounding box filter during training in the last ten epochs, where all non-front-facing cameras are fully masked (100% masking ratio). The effect of this GT filter is shown in Table <ref>. The mAP score is the most meaningful metric here, as it decreases when the number of false positives rises. We observe that all metrics for object detection and semantic segmentation improve.
§.§ Evaluation of combined features
The combination of all three features is compared against our baselines in Table <ref>.
One baseline is trained with all six cameras and one baseline is trained only on the front camera.
The evaluation is done only on the front camera with the GT bounding box filter applied as described in <ref> for all three runs. Our method outperforms both baselines in the two most important object detection metrics, NDS and mAP, by 20% and 25% respectively compared to the second-best value. The NDS is a weighted sum of the mAP and the five TP scores, and the mAP accounts for false positives. Additionally, the mIoU, the only measured indicator for the semantic segmentation of the BEV map, is improved by 19%.
Apart from the quantitative results, <ref> shows the results qualitatively on one representative sample. The model trained on one camera (<ref>) shows the highest false-positive rate in the blind areas and in the visible front view compared to the other two runs. The semantic segmentation also appears the most hallucinated and inaccurate for the single-camera run. Even though it shows much lane and object information in the blind areas, it looks less precise and differs most from the corresponding GT map. For example, the merging street on the left, which is just out of view, is missing. The baseline trained on six cameras (<ref>) looks closer to our approach in the visible front view. Beyond that, it only predicts segmentation artifacts in the blind area. Additionally, it provides almost no hint of the semantic segmentation map in the area behind the ego vehicle. Our approach (<ref>) shows a more accurate BEV map also in areas that are just out of view. For example, it shows the corner of the left intake even though it is no longer seen by the front view. Moreover, it predicts the highly occluded pedestrian on the left side of the view. It shows fewer false-positive detections compared to the single-camera baseline but also predicts some information that is out of view.
§.§ Ablation Study
Table <ref> shows the results of the detailed ablation study. Each feature, including inverse block masking, cyclic LR, and feature reconstruction loss, was tested individually as well as in combination to determine its effectiveness. The baseline without any of our features is trained on all six cameras, and all runs use the GT bounding box filter as tested in <ref>. Additionally, all runs have only the front view as input information during inference. Each of the isolated feature runs shows an improvement in at least one metric, but at the cost of a decrease in another. The isolated feature reconstruction loss shows the most significant improvement in the mIoU. Considering only the NDS, the feature reconstruction loss in combination with the inverse block masking shows the most significant improvement. Besides this, the mAP improves most for the cyclic LR in combination with the feature reconstruction loss. However, the combination of all three approaches delivers the best results for the NDS, mAP and mIoU. Additionally, its five true-positive errors rank among the top three in their categories.
§ DISCUSSION
In this paper, we presented our enhanced training method, which combines the inverse block masking technique with an aligned cyclic LR schedule and a feature reconstruction loss to supervise the transition from six camera inputs during training to a single front camera at inference. Our method outperforms the two baselines in the important metrics.
§.§ Effects in latent space
The effect of our approach in the latent space of the model is visualized in <ref>. It shows two of the 256 BEV feature embeddings. The BEV feature embeddings already include the information from the spatial and temporal attention. The features are visualized during inference for both the six-camera baseline (<ref>) and our method (<ref>). Each training is shown once with six-camera inference and once with single-camera inference. Even though the feature visualizations are limited in their interpretability, some differences stand out: the BEV embeddings of our method (<ref>) show a hint of the traffic scene even in blind areas, as the shape of the street and some objects are more visible compared to the baseline embeddings (<ref>). Moreover, our method (<ref>) shows more similarity to its six-camera equivalent (<ref>) than the baseline (<ref>) to its equivalent (<ref>). Since both our run and the baseline run receive the same single-camera input, the model can only produce richer feature information for blind areas by attending to past frames. This richer feature information underlines the more precise results of our run in <ref>. Additionally, the baseline (<ref>) shows artificial star-shaped rays, which might originate from the reprojection function. In this case, the reprojection might simply propagate the masked (noise) regions of the backbone features into the BEV plane. These rays are also discussed in <cit.>.
§.§ Effects of features and combinations
As shown in <ref> the NDS is improved most significantly when the feature reconstruction loss is introduced. The mIoU behaves in the same way. It might be the case that learning what is behind the mask from a completely noise-free sample helps to focus more on the temporal information. The provided sample (<ref>) underlines the behavior of the values in <ref>. In more detail, the bounding boxes appear more accurate and show fewer false positives compared to the one-camera baseline.
The mAP, which is more strongly influenced by false positives, drops when the inverse block masking or the cyclic LR is used in isolation, likely because of the change in the data distribution and the reduced input information. This is remedied by the unmasked sample used in the feature reconstruction loss and by the larger training steps enabled by the cyclic LR. The GT for computing the mIoU of the semantic segmentation map is not masked in the blind areas. Since it describes only the prediction of static classes in the BEV map, it theoretically has the chance of predicting things like lanes behind the vehicle purely from past-frame information.
As the results show, this is harder than mere guessing, which appears to be what the single-camera baseline in <ref> does: it already achieves a better mIoU but produces the most hallucinated visible results, as the representative example (<ref>) underlines. Again, the feature reconstruction loss, i.e. having explicit guidance, yields the largest increase in mIoU. This is underlined by the latent visualization in <ref>, since the feature reconstruction loss directly acts on the BEV feature embedding, which changes visibly and has to rely more on temporal information.
§.§ Limits
Due to time and computational constraints, we developed and tested our training method only on the BEVFormer, trained solely on the nuScenes dataset. In addition, our tests focused on quality improvement, but there is potential for a reduction in computational overhead, as the backbone only needs to be run for one image rather than six at inference time. Even though the method requires only the front camera view during inference, it still needs the GT data of the complete sensor setup during training. Further investigation is required to determine how the method generalizes to other models and datasets, and to quantify the computational effort and the cost of the GT data.
§.§ Conclusion
To summarize, our method gradually reduces the number of input images during training so that the BEVFormer model can be run with a single camera at inference. It reduces the resulting performance degradation, yielding fewer false-positive detections and more accurate BEV segmentation compared to the presented baselines. Additionally, it improves the three most important metrics by 20% (NDS), 25% (mAP) and 19% (mIoU).
|
http://arxiv.org/abs/2409.03390v1 | 20240905095159 | Doping the spin-polarized Graphene minicone on Ni(111) | [
"Cesare Tresca",
"Gianni Profeta",
"Federico Bisti"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
[][email protected]
CNR-SPIN c/o Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, Via Vetoio 10, I-67100 L’Aquila, Italy
CNR-SPIN c/o Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, Via Vetoio 10, I-67100 L’Aquila, Italy
Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, Via Vetoio 10, I-67100 L'Aquila, Italy
[][email protected]
Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, Via Vetoio 10, I-67100 L'Aquila, Italy
§ ABSTRACT
In the attempt to induce spin-polarized states in graphene, rare-earth deposition on Gr/Co(0001) has been demonstrated to be a successful strategy: the coupling of graphene with the cobalt substrate provides spin-polarized conical-shaped states (minicone) and the rare-earth deposition brings these states to the Fermi level.
In this manuscript we theoretically explore the feasibility of an analogous approach applied to Gr/Ni(111) doped with rare-earth ions. Although rarely mentioned in the literature, this system also hosts a minicone, similar to the cobalt case. By testing different rare-earth ions, we not only suggest which one can provide the required doping but also explain the effect behind this charge transfer.
Doping the spin-polarized Graphene minicone on Ni(111)
Federico Bisti
=======================================================================================================================================================================================================================================================================
§ INTRODUCTION
The exceptional properties of graphene<cit.> have stimulated studies aimed at engineering the Dirac bands, giving rise to the stabilization of new and interesting electronic phases.
Notable examples are the superconducting phase<cit.>, magnetic systems<cit.> for spintronic applications, heterostructures showing new properties, twisted and Moiré configurations<cit.>, nanostructures<cit.> and photonic applications in linear and non-linear optics<cit.>, opening new routes for technological applications<cit.>.
At the moment, the most promising ways to tailor the physical properties of graphene are hetero-atom deposition, substrate-induced interaction through electronic and structural modifications, and twisted and/or heterostructures formed with other 2D systems<cit.>.
In particular, its growth or transfer onto different substrates has been the subject of massive investigations in the past, with the aim of understanding how its electronic properties change from the isolated picture ('free-standing') as a result of interaction with the substrate<cit.> or following atomic intercalation and doping<cit.>.
The most important parameters defining the final graphene electronic properties are the lattice matching and the degree of hybridization of π bands of graphene with the substrate.
For example, graphene grown on substrates with a large lattice mismatch, such as Ir(111) or Ru(0001), retains the linear π-band dispersion close to the Fermi level without appreciable doping.
On the contrary, the lattice-matched Ni(111) and Co(0001) substrates interact strongly with the graphene electrons while exhibiting spin-polarized bands<cit.>. In such a strongly interacting scenario, the peculiar high charge-carrier mobility arising from the almost linear dispersion of the Dirac bands could be compromised if they are completely destroyed by the interaction with the substrate.
This is not the case for graphene grown on Co(0001). Indeed, even though a gap of ∼0.4 eV opens at the Dirac point due to the interaction with the substrate, the carbon π-bands remain highly dispersive (commonly called the "minicone")<cit.>.
These features can lead to relevant spintronics applications once combined with the well-known capability of graphene to sustain spin currents injected by spin-polarized electrodes<cit.>, provided that low spin-orbit coupling, negligible hyperfine interaction, and gate tunability are preserved.
A fundamental element to consider is that the minicone in graphene/Co(0001) turns out to have the lower part of the split cones fully occupied and the upper part fully unoccupied, resulting in a null contribution to electrical conduction. Finding a process that partially occupies these minicones is therefore necessary in order to take advantage of them. One such mechanism is the deposition of dopant adatoms, which increases the electron charge on the carbon layer.
Although this procedure has been efficiently carried out for quasi-free-standing graphene<cit.>, in the present case it is fundamental to avoid intercalation of the dopant adatoms between the substrate and the graphene adlayer. In fact, intercalation tends to detach graphene from the substrate<cit.>, destroying the spin-polarized states induced by the strong hybridization with the magnetic substrate.
Very recently, low-temperature deposition of dopants, namely europium adatoms on graphene/Co(0001), was demonstrated to be an effective technique for adsorbing the dopants on the graphene sheet in an ordered √(3)×√(3)-R30^∘ reconstruction, without intercalation, thus heavily doping the minicone while keeping its peculiar spin polarization<cit.>.
This finding paves the way for the search of other magnetic substrates where to observe an analogue mechanism.
A good candidate is Ni(111): the electronic band dispersion of graphene grown on Ni(111) shows the presence of the spin-polarized minicone<cit.> in analogy with graphene on Co(0001), guaranteeing a magnetic ordering above room temperature<cit.>.
At the same time, if the intercalation is precluded (as by deposition at low temperature), RE adatoms are expected to reconstruct into an ordered surface on top of graphene transferring electronic charge to it, as demonstrated in the case of Eu on graphene/Co(0001)<cit.>.
The scope of the present manuscript is precisely to examine the doping mechanism induced by rare-earth (RE) deposition on the graphene/Ni(111) minicone using first-principles density functional theory (DFT) calculations.
The considered RE (RE = La, Eu, Gd, Yb) adatoms are expected to be in a +2 configuration and adsorbed on graphene in a √(3)×√(3)-R30^∘ reconstruction.
The chosen RE adatoms provide a comprehensive picture of the influence of the different adatom electronic configurations on the minicone doping and dispersion, highlighting the relevant role of the RE d-states.
Finally, the calculations are extended outside the domain of the rare earths to illustrate that analogous mechanisms can also be traced in Lu and Y, demonstrating the wider validity of the presented concepts.
§ COMPUTATIONAL METHODS
Theoretical calculations were performed using the Vienna ab-initio simulation package (VASP)<cit.>, using the generalized gradient approximation in the revised Perdew-Burke-Ernzerhof version (PBEsol)<cit.> for the exchange-correlation energy.
We used projected augmented-wave pseudopotentials<cit.> for all the atomic species involved, with an energy cutoff up to 500 eV.
The surfaces were simulated within a supercell approach which considers 6 Ni layers along the [111] direction and about 20 Å of vacuum.
Graphene was adsorbed on the topmost Ni surface layer at the experimental lattice parameter 2.49 Å, in the 1×1 reconstruction with the top-fcc stacking.
The ferromagnetic (FM) solution was considered for the Ni atoms in the calculations, while different spin-configurations were considered for the magnetic adatoms adsorbed on the hollow site of graphene<cit.> in a
√(3)×√(3)-R30^∘ reconstruction.
Integration of the charge density over the two-dimensional Brillouin zone (BZ) was performed using a uniform 6×6 Monkhorst-Pack grid<cit.> with a Gaussian smearing parameter σ=0.05 eV.
Total energy minimization was performed for all the atoms except for the bottommost four Ni layers, which were fixed at their Ni bulk positions.
The DFT+U approximation was adopted for the treatment of the f-orbitals, with the U and J parameters chosen in agreement with literature (for Eu and Gd we adopted U=5.9 eV, J=0.9 eV<cit.>; for Yb U=2.0 eV, J=0.7 eV<cit.>, while for La, Lu and Y no correction is needed).
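As a rough illustration of the settings listed above, the sketch below uses ASE's VASP interface to collect them in one place; the pseudopotential choice, the LDAUTYPE value and the overall interface are assumptions made for illustration (the Eu case is shown for the DFT+U block), not a reproduction of our actual input files.

```python
from ase.calculators.vasp import Vasp

# Illustrative collection of the parameters quoted above (Eu adsorption case).
calc = Vasp(
    pp="PBE",              # PAW datasets (assumed)
    gga="PS",              # PBEsol exchange-correlation
    encut=500,             # plane-wave cutoff (eV)
    ismear=0, sigma=0.05,  # Gaussian smearing (eV)
    kpts=(6, 6, 1),        # 6x6 Monkhorst-Pack grid for the surface cell
    ispin=2,               # spin-polarised calculation
    ldau=True, ldautype=2, # DFT+U on the RE f shell (U=5.9 eV, J=0.9 eV for Eu)
    ldau_luj={"Eu": {"L": 3, "U": 5.9, "J": 0.9},
              "Ni": {"L": -1, "U": 0.0, "J": 0.0},
              "C":  {"L": -1, "U": 0.0, "J": 0.0}},
)
# slab.calc = calc   # attach to an ASE Atoms object of the Gr/Ni(111)+RE slab
```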
§ RESULTS
We start our study with the graphene/Ni(111) system, taken as a reference. The adsorption distance between graphene and the Ni surface is predicted to be 2.05 Å, in agreement with previous studies<cit.>. The nickel substrate induces small magnetic moments on the carbon atoms, with an antiferromagnetic ordering between the nonequivalent carbon sites (the carbon in the on-top position has a magnetic moment aligned with those of the Ni substrate, see Tab.<ref>).
Once the RE adatom is included in the calculation, its adsorption causes an increase of the vertical distance of graphene from the Ni(111) substrate with respect to the undoped case, regardless of the RE atom involved (see Tab.<ref>). This effect is the natural consequence of the electronic charge transfer from the RE atom to graphene, as will be shown in the electronic band structure (see below). The adsorption distance between the RE atoms and the carbon layer is larger for La and Gd (about 2.2 Å) than for Eu and Yb (about 2.1 Å).
The structural properties correlate with the magnetic interaction between the RE and the substrate: the large (small) graphene-substrate distance of La and Gd (Eu and Yb) corresponds to a ferromagnetic (antiferromagnetic) arrangement of the adatoms with respect to the Ni substrate (even though Yb shows only a very small residual magnetization). This ordering of the adatoms influences the magnetic moment of the topmost Ni layer at the surface, which is enhanced (reduced) in the ferromagnetic (antiferromagnetic) configuration (see Tab.<ref>).
In all the considered cases, the fragile magnetic ordering present on the carbon atoms in the Gr/Ni(111) system is practically destroyed by the presence of the RE adatoms, due to the increased graphene-substrate distance.
The analysis of the electronic properties reveals the origin of these different behaviours. In Fig.<ref> we report the spin polarized band structures for the considered systems, unfolded on the graphene BZ 1×1 cell and projected on the C-p_z orbitals to facilitate the recognition of the most dispersive graphene bands.
In line with previous studies<cit.>, both majority and minority spin components exhibit a gap opening at the Dirac point, resulting in the so-called "minicone" shape at the K-point. Only the majority spin-component has occupied valence states, separated by an energy gap of ∼0.35 eV from the conduction valley (see red dots in Fig.<ref>).
The minicone gap opening originates from two main effects: the sublattice asymmetry induced by the Ni(111) substrate and the exchange field due to the strong p-d hybridization coming from the spin-splitted Ni d-orbitals.
The adsorption of RE atoms drastically changes the electronic properties of the system: it induces electron doping, shifting the spin-polarized carbon bands downwards, and it strongly modifies the graphene minicone, producing an overall flattening of the band (in particular along the K-M direction, see below).
Similarities in the behaviour of La and Gd, as opposed to the Eu and Yb cases, are recognizable. First of all, the former provide a higher doping regime (downward shift of -0.6 eV) than the latter (-0.2 eV). This higher doping for La and Gd is accompanied by a strong flattening of the majority-spin band dispersion along the K-M direction, placing it below the Fermi level at about -0.25 eV. In the case of Eu and Yb, instead, the graphene majority-spin conduction band is almost unaltered in shape. All these effects can be traced back to the valence configuration of the RE atom involved.
In fact, both La and Gd have d states in valence (their electronic configurations are [Xe] 5d^1 6s^2 and [Xe] 4f^7 5d^1 6s^2, respectively), and these states tend to bond with the graphene p_z orbitals.
The hybridization of the RE-d_xy/x^2-y^2 orbitals with the C-p_z ones can completely disrupt the minicone structure, giving rise to a semi-parabolic dispersion along the Γ-K path. In addition, the band becomes extremely flat from K to M and, via a super-exchange mechanism, the RE acquires a magnetic moment aligned parallel to that of the Ni(111) substrate.
To better clarify this effect, in Fig.<ref> we report the surface-Ni and RE d-projected states. From the first row of Fig.<ref> it is evident that the hybridization between the C-p_z and Ni-d_z^2-r^2 orbitals underlies the spin-polarized state at the Fermi level both in the pristine system and in the RE adsorption cases.
At the same time, from the second row of Fig.<ref>, we note the presence of the RE-5d_xy,x^2-y^2 states hybridized with the C-p_z (and Ni-d_z^2-r^2) ones only for La and Gd. Therefore, the spin-polarized electronic state at the Fermi level turns out to be extremely extended in real space, from the RE down to the surface Ni layer in the out-of-plane direction.
A counter-proof of the hybridization between the RE-d_xy,x^2-y^2 states and the C-p_z orbitals comes from the analysis of the projected densities of states reported in Fig.<ref>. As shown, only for La and Gd adsorption do we find a perfect superposition of the C-p_z and RE-d_xy,x^2-y^2 states in the "up"-spin channel. Only a negligible contribution from the other d-states is present.
To further expand the investigation of this effect, we consider the last of the lanthanides (lutetium) and the first of the transition metals (yttrium), having respectively a filled f shell with a 5d^1 6s^2 valence configuration (Lu) and a similar 4d^1 5s^2 environment for Y (without f-states).
In agreement with the behaviour observed above, Lu and Y are also capable of "detaching" graphene from the Ni(111) surface, which moves to a distance of 2.20 Å. A residual magnetization of 0.05 and 0.10 μ_B, respectively, is present on the adatoms, ferromagnetically aligned with the Ni(111) substrate. The topmost Ni surface atoms exhibit a magnetization of 0.54 μ_B in both the Lu and Y cases, in complete analogy with the previously considered RE-5d cases (La and Gd).
The structural details for these last systems are summarized in Tab.<ref>.
As shown in Fig.<ref>, the electronic properties of the systems are essentially indistinguishable between the Lu and Y adsorption cases. The adatom d-states interact with the C-p_z ones, destroying the minicone, in perfect agreement with what happens in the already considered cases of La and Gd adsorption. We thus conclude that the presence of d-states in valence is detrimental to the conservation of the graphene minicone.
§ CONCLUSIONS
In conclusion, this work confirms the possibility to induce, modify and dope the minicone state in graphene grown on Ni(111) by different kinds of adatoms, revealing the microscopic mechanisms that determine the hybridization between the Ni d-states, the graphene p_z Dirac orbitals and the RE valence electrons.
Our predictions for Eu adsorption on graphene are in line with what was recently observed experimentally for Eu adatoms on the graphene/Co(0001) system<cit.>, the main difference consisting in the reduced doping of graphene due to the Ni(111) substrate.
Our extensive first-principles calculations reveal that successful doping of the spin-polarized minicone is realized when the adatoms do not have d-states in valence, while their presence leads to the formation of a single-spin electron-like state crossing the Fermi level around the K-point of the graphene BZ, which flattens at ∼-0.25 eV along the K-M direction.
Our work proposes a feasible way to engineer the minicone band present in graphene on the Ni(111) substrate, which could serve as a material platform for spintronic applications, transport experiments and Kondo physics<cit.>.
§ ACKNOWLEDGEMENTS
C. T. acknowledges financial support under the National Recovery and Resilience Plan (NRRP), Mission 4, Component 2, Investment 1.1,
funded by the European Union – NextGenerationEU– Project Title “DARk-mattEr DEVIces for Low energy detection - DAREDEVIL” – CUP D53D23002960001 - Grant Assignment Decree No. 104 adopted on 02-02-2022 by the Italian Ministry of Ministry of University and Research (MUR).
C. T. and F. B. acknowledge financial support under the National Recovery and Resilience Plan (NRRP), Mission 4, Component 2, Investment 1.1, funded by the European Union – NextGenerationEU– Project Title "Symmetry-broken HEterostructurEs for Photovoltaic applications - SHEEP" – CUP B53D23028580001 - Grant Assignment Decree No. 1409 adopted on 14-09-2022 by the Italian Ministry of Ministry of University and Research (MUR).
Research at SPIN-CNR has been funded by the
European Union - NextGenerationEU under the Italian Ministry of University and Research (MUR) National Innovation Ecosystem grant ECS00000041 - VITALITY, C. T. acknowledges Università degli Studi
di Perugia and MUR for support within the project
Vitality.
G.P. acknowledges the European Union-NextGenerationEU under the Italian Ministry of University and Research (MUR) National Innovation Ecosystem Grant No. ECS00000041 VITALITY-CUP E13C22001060006 for funding the project.
C. T and G.P. acknowledge support from CINECA Supercomputing Center through the ISCRA project and Laboratori Nazionali del Gran Sasso for computational resources.
|
http://arxiv.org/abs/2409.03754v1 | 20240905175932 | Foundation Model or Finetune? Evaluation of few-shot semantic segmentation for river pollution | [
"Marga Don",
"Stijn Pinson",
"Blanca Guillen Cebrian",
"Yuki M. Asano"
] | cs.CV | [
"cs.CV"
] |
Evaluation of few-shot semantic segmentation for river pollution
M. Don et al.
University of Amsterdam, Amsterdam, The Netherlands The Ocean Cleanup, Coolsingel 6 Rotterdam, The Netherlands Currently at Technical University of Nuremberg
[email protected], [email protected],
{stijn.pinson, b.guillencebrian}@theoceancleanup.com
Foundation Model or Finetune? Evaluation of few-shot semantic segmentation for river pollution
Marga Don10009-0001-5435-8935
Stijn Pinson 2 Blanca Guillen Cebrian 2
Yuki M. Asano 1,3
September 9, 2024
===============================================================================================
§ ABSTRACT
Foundation models (FMs) are a popular topic of research in AI. Their ability to generalize to new tasks and datasets without retraining or needing an abundance of data makes them an appealing candidate for applications on specialist datasets. In this work, we compare the performance of FMs to finetuned pre-trained supervised models in the task of semantic segmentation on an entirely new dataset. We see that finetuned models consistently outperform the FMs tested, even in cases where data is scarce. We release the code and dataset for this work at https://github.com/TheOceanCleanup/RiverTrashSegmentation.
content/1_intro
content/2_relatedwork
content/3_dataset
content/4_methods
content/5_experiments
content/6_discussion
content/7_conclusion
content/99_supplementary
|
http://arxiv.org/abs/2409.02551v1 | 20240904091816 | Deep Learning for Multi-Country GDP Prediction: A Study of Model Performance and Data Impact | [
"Huaqing Xie",
"Xingcheng Xu",
"Fangjia Yan",
"Xun Qian",
"Yanqing Yang"
] | econ.GN | [
"econ.GN",
"q-fin.EC"
] |
§ ABSTRACT
GDP is a vital measure of a country's economic health, reflecting the total value of goods and services produced. Forecasting GDP growth is essential for economic planning, as it helps governments, businesses, and investors anticipate trends, make informed decisions, and promote stability and growth. While most previous works focus on predicting the GDP growth rate for a single country or rely on machine learning methods, in this paper we present a comprehensive study of GDP growth forecasting in the multi-country scenario using deep learning algorithms. For the prediction of GDP growth where only GDP growth values are used, linear regression is generally better than the deep learning algorithms. However, for the regression and the prediction of GDP growth with selected economic indicators, deep learning algorithms can be superior to linear regression. We also investigate the influence of a novel data source – the light intensity data – on the prediction of GDP growth, and numerical experiments indicate that it does not necessarily improve the prediction performance. Code is provided at https://github.com/Sariel2018/Multi-Country-GDP-Prediction.git.
§ INTRODUCTION
Gross Domestic Product (GDP) is a critical measure of a country's economic health, reflecting the total value of all goods and services produced over a specific period. It serves as a comprehensive indicator of a nation's economic activity, influencing government policy, investment decisions, and international comparisons. GDP is a key metric for assessing economic performance, guiding fiscal and monetary policy, and shaping economic development strategies.
Forecasting GDP growth rate is crucial for economic planning and decision-making. It provides valuable insights into the future direction of an economy, helping governments, businesses, and investors make informed decisions. Accurate GDP growth predictions allow policymakers to anticipate economic trends, adjust fiscal and monetary policies accordingly, and implement measures to promote stability and growth. For businesses, understanding expected economic conditions helps in strategic planning, resource allocation, and risk management. Investors rely on GDP growth forecasts to assess market conditions and make investment choices. Thus, GDP growth rate predictions are essential for shaping economic strategies and fostering sustainable development.
Machine learning has advanced GDP forecasting by building on Dynamic Factor Models (DFMs), first applied by <cit.> for real-time GDP forecasting, influencing Federal Reserve policy. Central banks like the ECB <cit.> and institutions such as the World Bank <cit.> have since adopted similar models.
The integration of machine learning has further enhanced forecasting accuracy. <cit.> showed that a machine learning model using over 600 variables outperformed traditional AR and DFM models in forecasting New Zealand's GDP, particularly during the COVID-19 pandemic. <cit.> also found that machine learning models using Google data significantly improved GDP growth rate forecasts for OECD countries. <cit.> highlighted these benefits by applying machine learning models like gradient boosting trees, random forests, and kernel ridge regression to forecast China's GDP, demonstrating their superior performance over traditional methods.
There are many works on the prediction of GDP growth by deep learning methods as well. In <cit.>, four key economic indicators were utilized to predict Indonesia's quarterly GDP growth using Multiple Linear Regression (MLR), K-Nearest Neighbours (K-NN), and Artificial Neural Network (ANN) models, with MLR outperforming the others based on RMSE values. Similarly, <cit.> employed ANN regression models to forecast GDP growth rates of 15 industrialized economies between 1996 and 2016, demonstrating that ANN provided more accurate and flexible predictions compared to linear models, especially in capturing time trends. In <cit.>, various models, including SARIMA, Holt-Winters, dynamic linear models, and neural networks, were compared for Brazilian GDP forecasting. The Multilayer Perceptron (MLP) outperformed others in both in-sample and out-sample predictions, effectively capturing GDP growth rates.
To address both linear and nonlinear components in GDP forecasting, <cit.> introduced hybrid models combining ARIMA and ANN techniques for Nepal's GDP prediction, where the hybrid models achieved superior accuracy over standalone approaches. Focusing on deep learning methods, <cit.> applied Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) architectures to predict Indonesia's GDP fluctuations during the COVID-19 pandemic, achieving accuracy rates between 80% and 90%, thus highlighting their effectiveness in handling sudden economic changes.
In <cit.>, autoregressive deep learning networks were developed to model Türkiye's GDP and per capita income from 1960 to 2021 using past values of the target variables, resulting in high accuracy as evidenced by performance metrics like R^2, MAE, MAPE, and RMSE. Exploring machine learning algorithms further, <cit.> conducted a comprehensive comparison between models such as BART, GLMNET, GBM, and XGBoost against traditional time series methods like ARIMA and VAR across multiple economies, concluding that multivariate VAR models were generally superior, though XGBoost provided valuable insights by emphasizing different influential variables.
For scenarios with limited data, <cit.> demonstrated that K-Nearest Neighbour regression could effectively predict Indonesia's GDP during the 1998 economic crisis, outperforming both backpropagation neural networks and MLR models. Addressing variable selection, <cit.> utilized stepwise regression techniques on 14 sub-sectors of Pakistan's economy to construct an appropriate time series model for GDP growth prediction, resulting in a model with nine significant predictors validated through various diagnostic tests.
<cit.> applied a multi-layer ANN model to forecast the COVID-19 pandemic's impact on the GDP of eight major economies for the April–June 2020 quarter, achieving forecasting errors of less than 2% and revealing substantial GDP declines necessitating urgent policy responses. In <cit.>, a novel multimodal approach combined historical GDP data with Twitter activity using a two-stage architecture involving a multi-task autoencoder and a multimodal network, providing timely and accurate regional GDP predictions in Spain and effectively capturing the economic effects of the COVID-19 pandemic.
Among these works, most aim to predict the GDP growth rate of a single country. For the multi-country case, <cit.>, <cit.>, and <cit.> adopted machine learning algorithms, while <cit.> and <cit.> predicted GDP growth with an MLP, but only previous GDP growth rates were used there.
There are also some works on the time series forecasting. Some of them are the variants of the Transformer model, such as Informer <cit.>, Autoformer <cit.>, FEDformer <cit.>, PatchTST <cit.> et al. There are also some time series forecasting works based on large language models (LLMs). For instance, LLMTime <cit.>, Time-LLM <cit.>, GPT4TS <cit.>. There are also some foundation models for time series forecasting which are pretrained on a large corpus of time series data, such as Lag-Llama <cit.>, TimesFM <cit.>, Chronos <cit.>, and UniTS <cit.>.
In this paper, we study the prediction of GDP growth in the multi-country regime by deep learning algorithms with multiple economic indicators and novel data. We propose the representation transformer model for the regression of GDP growth, which combines LLM representations with a transformer. For the regression of annual GDP growth, the MLP has comparable performance with linear regression, but for the regression of quarterly GDP growth, the MLP is better than linear regression. For the regression of annual GDP growth, the representation transformer is worse than the MLP. However, the representation transformer can deal with the case where the number of economic indicators in the data is variable, while the MLP and linear regression cannot.
For the prediction of quarterly GDP growth by autoregression, where only GDP growth values are used, the LSTM has comparable performance with linear regression on the data filtered by some selected economic indicators, but linear regression is better than the LSTM on the full GDP growth data. TimesFM, Time-LLM, and PatchTST are generally worse than linear regression.
For the prediction of quarterly GDP growth with multi-indicator data, the LSTM is generally better than linear regression. Time-LLM and PatchTST are comparable and are generally both better than the LSTM, but they cannot characterize the impacts of these economic indicators at inference time.
In all cases, the light intensity data do not necessarily improve the prediction performance.
§ PROBLEM FORMULATION
Denote the i-th economic indicator at time t as x_i^t ∈ℝ, and the GDP growth at time t as y^t. There are two regimes for the prediction of the GDP growth rate. One is using { x_i^t }_i=1^n to predict y^t, which is also called regression. The other is predicting the GDP growth y^t from previous economic indicators x_i^k and/or previous GDP growth y^k for k<t.
In this paper, we consider these two regimes respectively. In Section <ref>, we study the regression of both annual and quarterly GDP growth. In Section <ref>, we study the prediction of the quarterly GDP growth y^t by using (y^t-h, ..., y^t-1), where h is the sequence length. Denote
z^t ≜ (x_1^t, ..., x_n^t, y^t)^⊤∈ℝ^n+1.
In Section <ref>, we study the prediction of the quarterly GDP growth y^t by utilizing (z^t-h, ..., z^t-1). In all these cases, we also study the influence of the novel data – the light intensity. The nighttime lights data is derived from monthly night-time remote sensing images captured by the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the NPP satellite. This data has a spatial resolution of 0.004 degrees and covers the period from the launch of the NPP satellite in 2012 to the present. The data has undergone stray light correction, which allows for an accurate representation of temporal changes in local lighting brightness. Nighttime lights data encompasses comprehensive information such as population, transportation, and economic development levels, making it suitable for various development economics research. The raw data is stored in the widely used GeoTIFF format for geographic information and is converted to brightness values using the Rasterio library of Python.
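A minimal sketch of how such country-level light statistics can be extracted from a monthly GeoTIFF with Rasterio is given below; the source of the country geometry, the handling of nodata and negative radiance values, and the function interface are illustrative assumptions.

```python
import numpy as np
import rasterio
from rasterio.mask import mask

def light_stats(geotiff_path, country_geometry):
    """Aggregate VIIRS night-time light brightness over one country.

    country_geometry: a GeoJSON-like polygon (e.g. from a shapefile of national
    borders) in the same CRS as the raster. Returns the sum, mean and standard
    deviation of the pixel brightness values used as additional model inputs.
    """
    with rasterio.open(geotiff_path) as src:
        clipped, _ = mask(src, [country_geometry], crop=True)
        values = clipped[0].astype(float)
        if src.nodata is not None:
            values = values[values != src.nodata]
        values = values[values >= 0]   # drop negative radiances (a modelling choice)
    return {"sum": values.sum(), "mean": values.mean(), "std": values.std()}
```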
For all the experiments, we divide the train and test sets by time. The test set for the period 2013–2019 consists of the data of the last year, and for the other periods it consists of the data of the last two years. For all the deep learning algorithms, we use the k-fold cross-validation method and obtain two checkpoints. One checkpoint is the best validation checkpoint, i.e., the best of the k fold checkpoints obtained with the hyperparameters that achieve the best average validation loss. The other checkpoint is trained on the full train set with the hyperparameters used by the best validation checkpoint. We use k=5 in this paper. For all the deep learning algorithms, we search for the best hyperparameters by grid search. For all models, the input data are normalized by the minimum and maximum values of the train and test sets.
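The following sketch summarizes this two-checkpoint protocol; the model interface (build_model, fit, score_loss) is an assumed placeholder rather than any specific library API.

```python
from itertools import product
import numpy as np
from sklearn.model_selection import KFold

def grid_search_cv(train_X, train_y, build_model, param_grid, k=5):
    """Grid search with k-fold cross-validation and retraining on the full train set.

    For every hyperparameter combination a k-fold cross-validation is run; the
    combination with the best mean validation loss defines both (i) the best
    validation checkpoint and (ii) the model retrained on the full train set.
    """
    best_params, best_loss = None, np.inf
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        losses = []
        for tr_idx, va_idx in KFold(n_splits=k).split(train_X):
            model = build_model(params)
            model.fit(train_X[tr_idx], train_y[tr_idx])
            losses.append(model.score_loss(train_X[va_idx], train_y[va_idx]))
        if np.mean(losses) < best_loss:
            best_loss, best_params = np.mean(losses), params
    full_model = build_model(best_params)    # second checkpoint: full train set
    full_model.fit(train_X, train_y)
    return best_params, full_model
```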
§ REGRESSION FOR THE GDP GROWTH
In this section, we study the regression for the annual and quarterly GDP growth, i.e., predicting y^t from (x_1^t, ..., x_n^t). We first describe the data we used.
Based on the 2023 GDP rankings from the World Bank, we selected 21 countries: the United States, China, Germany, Japan, India, the United Kingdom, France, Brazil, Italy, Canada, Russia, Mexico, Australia, South Korea, Spain, Indonesia, the Netherlands, Turkey, Saudi Arabia, Switzerland, and Poland.
We manually selected 13 annual economic indicators from the WEO database of the IMF. These indicators are: `Rural population growth (annual %)', `General government final consumption expenditure (annual % growth)', `Consumer price index (2010 = 100)', `Exports of goods and services (annual % growth)', `Urban population growth (annual %)', `GDP growth (annual %)', `Population growth (annual %)', `Inflation, GDP deflator (annual %)', `Imports of goods and services (annual % growth)', `Final consumption expenditure (annual % growth)', `Unemployment, total (% of total labor force) (national estimate)', `Inflation, consumer prices (annual %)', `Gross fixed capital formation (annual % growth)' and `Households and NPISHs Final consumption expenditure (annual % growth)'.
For the quarterly data, we selected 20 economic indicators, including `Export Value', `Industrial Added Value', `Stock Market Capitalization', `Balance of Payments - Financial Account Balance', `Balance of Payments - Current Account Balance', `Balance of Payments - Current Account Credit', `Balance of Payments - Current Account Debit', `Balance of Payments - Capital Account Balance', `Balance of Payments - Capital Account Credit', `Balance of Payments - Capital Account Debit', `Overall Balance of Payments', `International Investment Position - Assets', `International Investment Position - Liabilities', `Net International Investment Position', `Import Value', `Nominal Effective Exchange Rate', `Retail Sales', `CPI (Consumer Price Index)', `Unemployment Rate' and `Central Bank Policy Rate'. The data was sourced from the financial institution WIND, with the original sources traceable to the World Bank, IMF, and national statistical offices.
We use three models for the regression of the GDP growth rate, i.e., linear regression, the MLP, and the representation transformer (RT). We introduce the representation transformer model next.
Representation Transformer.
Large language models have had a significant influence on the field of artificial intelligence since the emergence of ChatGPT <cit.>. LLMs can provide good representations of text. Given the limited data available for the prediction of GDP growth, the information contained in the meaning of the variable names can be exploited through LLMs. Hence, we first write a description for every selected economic indicator, and then input the text description into an LLM to get a representation vector. Since the representation produced by LLMs is usually a high-dimensional vector, we use a projection layer to reduce its dimension. We then concatenate the value of the corresponding variable, repeated multiple times, to the projected vector. Finally, we use a transformer <cit.> to obtain the predicted GDP growth. The detailed neural network architecture of RT is described as follows.
For each economic indicator x_i^t, we write a text description Text_i^t and input it to the InternVL-Chat-V1-5[https://huggingface.co./OpenGVLab/InternVL-Chat-V1-5. We use VLM since it can handle images as well in case we may need to deal with vision data.] model, whose dimension of representation vectors is 6144. For instance, the description for `Exports of goods and services' is: The index “Volume of exports of goods and services" with the unit “Percent change" measures the percentage change in the quantity of goods and services that a country sells to other nations over a specific period, usually a year, compared to the previous period. In this year, the index is {value} percent change. The “{value}" in the description is replaced by x_i^t. The descriptions of all the economic indicators are in the Appendix. We choose the representation vector of the last token in Text_i^t before the last fully-connected layer as Rep_i^t. Then,
v_i^t = W_1 Rep_i^t + b_1,
u_i^t = (x_i^t, ..., x_i^t)^⊤∈^dim,
c_i^t = concat(v_i^t, u_i^t) + PositionEmbedding,
(o_1^t, ..., o_n^t) = TransformerEncoder(c_1^t, ..., c_n^t),
O^t = mean(o_1^t, ..., o_n^t),
y^t = W_2 O^t + b_2,
where dim is a hyperparameter.
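The equations above translate directly into the following PyTorch sketch; the hidden sizes and the number of heads and layers are illustrative assumptions, and the learned position embedding plus mean pooling follow the description given here.

```python
import torch
import torch.nn as nn

class RepresentationTransformer(nn.Module):
    """Sketch of the representation transformer (RT) regression head."""
    def __init__(self, n_indicators, rep_dim=6144, proj_dim=64, dim=16,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.dim = dim
        d_model = proj_dim + dim
        self.proj = nn.Linear(rep_dim, proj_dim)                      # W_1, b_1
        self.pos = nn.Parameter(torch.zeros(n_indicators, d_model))   # position embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                             # W_2, b_2

    def forward(self, reps, values):
        # reps: (batch, n_indicators, rep_dim) LLM representations of the descriptions
        # values: (batch, n_indicators) normalized indicator values x_i^t
        v = self.proj(reps)                                  # v_i^t
        u = values.unsqueeze(-1).expand(-1, -1, self.dim)    # u_i^t, value repeated dim times
        c = torch.cat([v, u], dim=-1) + self.pos             # c_i^t
        o = self.encoder(c)                                  # (o_1^t, ..., o_n^t)
        return self.head(o.mean(dim=1)).squeeze(-1)          # y^t from the pooled O^t
```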
The regression results of the GDP growth rates for linear regression, the MLP, and RT are reported in Tables <ref> and <ref>. In the light column, × denotes that the light intensity data were not used. The “sum" refers to the total light intensity of all pixels within a specific country or region, the “mean" indicates the average light intensity, and the “std" represents the standard deviation of the light intensity values. For each time period, we calculate the average values of these light indicators. Therefore, “every month mean" refers to the average light intensity for every month.
From Table <ref>, we can see that for the regression of annual GDP growth, the MLP has comparable results with linear regression, but for the regression of quarterly GDP growth, the MLP is better than linear regression. Furthermore, using the light intensity data does not always improve the results. Table <ref> shows that for the regression of annual GDP growth, the representation transformer is worse than the MLP. However, the representation transformer can deal with the case where the number of economic indicators in the data is variable, while the MLP and linear regression cannot. Moreover, the performance of RT could be improved by the development of LLMs, and by finetuning the input part of the LLM as in Time-LLM <cit.>, which could be future work.
§ PREDICTION OF THE QUARTERLY GDP GROWTH BY AUTOREGRESSION
In this section, we predict the quarterly GDP growth y^t from the previous GDP growth rates (y^t-h, ..., y^t-1). For the period between 2013 and 2019, we also add the light intensity dimension to investigate the influence of the light data. We use five models in this section, i.e., linear regression, LSTM, TimesFM <cit.>, Time-LLM[We use GPT-2 for the LLM in Time-LLM in our experiments.] <cit.>, and PatchTST <cit.>. For the GDP growth sequence data, we consider two scenarios. The first scenario directly uses the GDP data from the multidimensional dataset, which allows for comparison with the results obtained using multidimensional data. The second scenario uses all historical GDP data to capture the cyclical changes and long-term trends inherent in GDP itself. Due to the limited availability of the data from the other dimensions, the data used in the first scenario are a subset of the data used in the second scenario. When the light data are used, the data type is the same as that in Section <ref>, and the setups are the same as in Section <ref> as well. For simplicity, we omit the setups here.
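Constructing the autoregressive samples per country amounts to a simple sliding window, as in the following sketch (the sequence length h and the array layout are placeholders).

```python
import numpy as np

def make_autoregressive_samples(series, h):
    """Turn one country's quarterly GDP growth series into (input, label) pairs:
    the h previous growth values are used to predict the next one."""
    X, y = [], []
    for t in range(h, len(series)):
        X.append(series[t - h:t])
        y.append(series[t])
    return np.asarray(X), np.asarray(y)
```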
Table <ref> shows the quarterly GDP growth prediction results for linear regression and the LSTM. We can see that in the first scenario the LSTM has comparable performance with linear regression, but linear regression is better than the LSTM in the second scenario. Table <ref> reports the zero-shot performance of TimesFM. TimesFM works on univariate time series, hence the light data were not used. In the data type column, “LSTM data" refers to using the same input data as the LSTM, while “Continuous data" refers to using all historical data before the label as the input. We can see that in all cases TimesFM works best with all historical data before the label, but it is still worse than linear regression.
Table <ref> shows the prediction of the quarterly GDP growth results for Time-LLM and PatchTST. We can see that generally Time-LLM and PatchTST are worse than linear regression. Tables <ref>, <ref>, and <ref> also show that the light data do not necessarily improve the prediction performance.
§ PREDICTION OF THE QUARTERLY GDP GROWTH WITH MULTI-INDICATOR DATA
In Section <ref>, we predicted the quarterly GDP growth by autoregression, but the models there could not characterize the impacts of previous economic indicators on GDP growth. Hence, in this section, we predict the quarterly GDP growth from previous economic indicators, i.e., predict y^t from (z^t-h, ..., z^t-1), where z^t is defined in (<ref>). We use four models in this section, i.e., linear regression, LSTM, Time-LLM <cit.>, and PatchTST <cit.>. For linear regression, the LSTM, and PatchTST, we actually predict z^t by vector autoregression. Since the goal is to predict y^t, we give more weight to the GDP growth term in the loss function for the LSTM and PatchTST. The loss function has the following form
f_loss = ∑_i=1^n (x_i^t - x̂_i^t)^2 + W_GDP (y^t - ŷ^t)^2,
where (x̂_1^t, ..., x̂_n^t, ŷ^t) is the prediction for z^t and W_GDP > 0 is the weight for the GDP growth, which is a hyperparameter. For the validation loss of the LSTM, Time-LLM, and PatchTST, we only calculate the GDP growth part.
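In code, this weighted loss can be written as in the short sketch below, assuming the GDP growth is stored as the last component of each predicted vector.

```python
import torch

def weighted_var_loss(pred, target, w_gdp, gdp_index=-1):
    """Squared-error loss over all predicted indicators with extra weight on GDP.

    pred, target: (batch, n+1) tensors ordered as (x_1, ..., x_n, y); the GDP growth
    component (assumed to be the last entry) is weighted by w_gdp > 0.
    """
    se = (pred - target) ** 2
    weights = torch.ones_like(se)
    weights[..., gdp_index] = w_gdp
    return (weights * se).sum(dim=-1).mean()
```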
Table <ref> shows the prediction of the quarterly GDP growth with multi-indicator data results for linear regression and LSTM. It indicates that LSTM is generally better than linear regression in the multi-indicator case.
Table <ref> shows the prediction results for the quarterly GDP growth with multi-indicator data for Time-LLM and PatchTST. The two are comparable and both are generally better than LSTM. It should be noted that Time-LLM deals with each channel independently, and PatchTST has an individual head for each channel and also deals with each channel independently; hence they cannot actually characterize the impact of the economic indicators on GDP growth during the inference phase. Tables <ref> and <ref> also indicate that the light data do not necessarily improve the prediction performance.
Appendix
§ DESCRIPTIONS USED BY RT AND TIME-LLM
|
http://arxiv.org/abs/2409.03615v1 | 20240905151946 | Triple trouble with PSR J1618-3921: Mass measurements and orbital dynamics of an eccentric millisecond pulsar | ["K. Grunthal", "V. Venkatraman Krishnan", "P. C. C. Freire", "M. Kramer", "M. Bailes", "S. Buchner", "M. Burgay", "A. D. Cameron", "C. -H. R. Chen", "I. Cognard", "L. Guillemot", "M. E. Lower", "A. Possenti", "G. Theureau"] | astro-ph.HE | ["astro-ph.HE"] |
Mass measurements and orbital dynamics of PSR J1618-3921
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Laboratoire de Physique et Chimie de l'Environnement et de l'Espace, Université d'Orléans, CNRS, F-45071 Orléans, France
Nançay Radio Astronomy Observatory, Observatoire de Paris, Université PSL, CNRS, Université d’Orléans, 18330 Nançay, France,
Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, Université de Paris Cité, CNRS, F-92190 Meudon, France
INAF-Osservatorio Astronomico di Cagliari, via della Scienza 5, I-09047 Selargius, Italy
Australia Telescope National Facility, CSIRO, Space and Astronomy, PO Box 76, Epping, NSW 1710, Australia
South African Radio Astronomy Observatory, 2 Fir Street, Black River Park, Observatory 7925, South Africa
Centre for Astrophysics and Supercomputing, Swinburne University of Technology, PO Box 218, VIC 3122, Australia
ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Swinburne University of Technology, PO Box 218, VIC 3122, Australia
PSR J1618-3921 is one of five known millisecond pulsars (MSPs) in eccentric orbits (eMSPs) located in the Galactic plane, whose formation is poorly understood. Earlier studies of these objects revealed significant discrepancies between observation and predictions from standard binary evolution scenarios of pulsar-Helium white dwarf (HeWD) binaries, especially in the case of PSR J0955-6150, for which mass measurements ruled out most eMSP formation models.
We aim to measure the masses of the pulsar and its companion, as well as constraining the orbital configuration of PSR J1618-3921. This facilitates understanding similarities among eMSPs and could offer hints on their formation mechanism.
We conducted observations with the L-band receiver of the MeerKAT radio telescope and the UWL receiver of the Parkes Murriyang radio telescope between 2019 and 2021. These data were added to archival Parkes and Nançay observations. We perform a full analysis on this joint dataset with a timing baseline of 23 years. We also use the data from recent observations to give a brief account of the emission properties of J1618-3921, including a Rotating Vector model (RVM) fit of the linear polarisation position angle of the pulsar.
From the timing analysis, we measure a small but significant proper motion of the pulsar. The long timing baseline allowed for a highly significant measurement of the rate of advance of periastron of ω̇ = 0.00145(10).
Despite the tenfold improvement in timing precision from MeerKAT observations, we can only report a low significance detection of the orthometric Shapiro delay parameters h_3 = 2.70^+2.07_-1.47 and ς = 0.68^+0.13_-0.09. Under the assumption of the validity of General Relativity (GR), the self-consistent combination of these three parameters leads to estimates of the total and individual masses in the binary of M_tot= 1.42^+0.20_-0.19, M_c = 0.20^+0.11_-0.03, and M_p = 1.20^+0.19_-0.20. We detect an unexpected change in the orbital period of Ṗ_ b =-2.26^+0.35_-0.33 × 10^-12, which is an order of magnitude larger and carries an opposite sign to what is expected from the Galactic acceleration and the Shklovskii effect, which are a priori the only non-negligible contributions expected for Ṗ_ b. We also detect a significant second derivative of the spin frequency, f̈. The RVM fit revealed a viewing angle of ζ = 111(1). Furthermore, we report an unexpected, abrupt change of the mean pulse profile in June 2021 of unknown origin.
We propose that the anomalous Ṗ_b and f̈ we measure for J1618-3921 indicate an additional varying acceleration due to a nearby mass, i.e., the J1618-3921 binary system is likely part of a hierarchical triple, but with the third component much farther away than the outer component of the MSP in a triple star system, PSR J0337+1715. This finding suggests that at least some eMSPs might have formed in triple star systems.
Although the uncertainties are large, the binary companion mass is consistent with the P_b-M_WD relation, which has been verified for circular HeWD binaries and also for the two HeWDs in the PSR J0337+1715 system. Future regular observations with the MeerKAT telescope will, due to the further extension of the timing baseline, improve the measurement of Ṗ_ b and f̈. This will help us further understand the nature of this system, and perhaps improve our understanding of eMSPs in general.
Triple trouble with PSR J1618-3921: Mass measurements and orbital dynamics of an eccentric millisecond pulsar
K. Grunthal [email protected]
V. Venkatraman Krishnan 1
P. C. C. Freire 1
M. Kramer 1
M. Bailes 8,9
S. Buchner 7
M. Burgay 5
A. D. Cameron 8,9
C.-H.R. Chen 1
I. Cognard 2,3
L. Guillemot 2,3
M. E. Lower 6
A. Possenti 5
G. Theureau 2,3,4
September 9, 2024
§ INTRODUCTION
As so-called lighthouses in the sky, pulsars are a peerless species of astronomical objects. These highly magnetised neutron stars emit a beam of electromagnetic radiation along their magnetic poles, which is visible as a steady train of pulses at a radio telescope as the beam periodically sweeps across the observer's line-of-sight. Due to the high accuracy of atomic reference clocks and low-noise receivers in modern radio telescopes, the times-of-arrival (ToAs) of the pulses at the telescope's location are precisely recorded. The motion of the pulsar, the propagation of the radio emission through the interstellar medium (ISM), as well as the motion of the radio telescope through the Solar System cause the ToAs to deviate from a purely periodic behaviour. Measuring the ToAs and fitting a model to them which accounts for all these possible effects is known as pulsar timing. In particular, the timing of millisecond pulsars (MSPs), a sub-population of pulsars (cf. Sec. <ref>), allows uniquely precise measurements of the spin, astrometric and orbital parameters because these pulsars exhibit an exceptionally stable rotational behaviour <cit.>.
Timing of pulsars in the Southern hemisphere experienced a step change in precision with the arrival of the MeerKAT telescope: the low system temperature (∼18) of the L-band receiver, its wide spectral coverage (856 to 1712, thus a bandwidth of 856 MHz) and the high aperture efficiency of its 64×13.5 offset Gregorian dishes (which improve upon the gain of the Parkes Murriyang radio telescope by a factor of four) make MeerKAT a powerful addition to other existing radio observatories, significantly increasing the radio sensitivity in the Southern hemisphere <cit.>. Furthermore, the ultra-wideband low (UWL) receiver of the Murriyang radio telescope has also significantly increased its spectral coverage and sensitivity.
This work was conducted as part of the "RelBin" project <cit.>, which is one of the core sub-projects of the MeerTime project, a five-year Large Survey Project <cit.> aiming to use the precision of the MeerKAT telescope to explore fundamental physics via pulsar timing. As outlined in <cit.>, the main aim of "RelBin" is detecting or improving on the measurement of timing parameters related to relativistic effects in the orbital motion of binary systems. Due to the high precision of observations with MeerKAT, this project not only offers a wide range of tests of gravity theories <cit.>, but also benefits pulsar population studies by yielding a continuously growing catalogue of precise NS mass measurements and by constraining binary evolution theories <cit.>.
The known pulsar population can be split into two large sub-groups based on their rotational behaviour and spin evolution. The so-called millisecond pulsars exhibit a rotational period of less than 30, as well as a relatively low inferred magnetic field strength (∼ 10^8-9). Additionally, about 80% of MSPs are found in binary systems, with main sequence (MS) stars, other neutron stars (NS) or white dwarfs (WD) as their companions, among the latter the Helium white dwarfs (HeWDs) are the most numerous.
In the current binary evolution models, these systems originate from a stellar binary, in which the more massive star already evolved into a NS. As the companion star leaves the MS and becomes a red giant, it fills its Roche Lobe and overflows it. Some of the matter in this so-called Roche-Lobe-overflow (RLO) accretes onto the NS. During this period of 𝒪(), the system is detectable as a low-mass X-ray binary (LMXB). The mass transfer from the red giant to the NS also transfers orbital angular momentum to the NS, leading to a significant spin-up of the NS, such that it becomes a MSP <cit.>. At the end of this stage, the binary consists of a MSP and a stripped stellar core, which depending on its mass evolves either into a NS or a WD <cit.>.
In most observed cases, the companion is a WD; the pulsars in these systems have significantly shorter spin periods, owing to the slower evolution and longer accretion episodes associated with lighter companions. By means of detailed numerical simulations, <cit.> derived a relation between the binary orbital period P_b and the mass of a HeWD companion M_WD (which we will refer to as the TS99 relation). Using catalogues of known MSP-HeWD system masses and comparing them to the latest stage of simulation results, this relation has been reviewed intensely over past decades (see e.g. <cit.> <cit.>) and usually holds for these binaries.
The tidal interactions accompanying the RLO lead to a circularization of the binary orbit <cit.>, as well as to an alignment of both the pulsar's spin axis with the angular momentum axis of the orbit <cit.>. Since the companion then evolves slowly into a WD, this low-eccentricity orbit and the spin alignment should be retained at later stages. In systems where the companion becomes a NS, the mass loss and the kick associated with the supernova event that forms the second NS will cause a significant increase in the eccentricity of the orbit (e), if not outright disruption, and in many cases a misalignment of the spin of the recycled pulsar with the angular momentum of the post-SN orbit <cit.>.
In globular clusters, interactions with passing-by external stars can disturb the circular orbits of MSP - HeWDs, which is confirmed by the large number of eccentric binary MSPs in globular clusters[For a list of pulsars in globular clusters, see <https://www3.mpifr-bonn.mpg.de/staff/pfreire/GCpsr.html>.].
Apart from these cases, the majority of MSP - WD systems in the Galactic disk exhibit the expected small residual eccentricities <cit.>: there are no nearby stars to perturb them, and the evolution of the companion to a WD does not increase e. Nevertheless, over the last decade, six systems with low-mass companions (in one case confirmed as a HeWD), with 0.027 < e < 0.13 and 22 < P_b < 32 d, have been discovered <cit.>. These systems clearly do not follow the e-P_b relation predicted by <cit.> and became known as eccentric millisecond pulsar binaries (eMSPs). These systems are puzzling; their formation mechanism has not yet been fully understood <cit.>.
A possibility could be the formation in a triple system which became unstable, ejecting one of the components, as proposed for PSR J1903+0327 <cit.> by <cit.> and <cit.>.
Intuitively, such a chaotic process should lead to a diversity of orbital configurations and companion types. However, eMSPs do not only have similar orbits, but also similar companion masses (all consistent with being HeWDs), which is seen by <cit.> and <cit.> as a strong indicator in favour of a deterministic process with a fixed outcome.
For this reason, five competing theories were put forward in order to explain the formation of Galactic eMSPs. They commonly rely on the TS99 relation, but describe various perturbative mechanisms capable of introducing an eccentricity into the binary orbit. A broader introduction to these can be found in <cit.>. Lately, the timing analysis of J0955-6150 <cit.> revealed that this system violates the TS99 relation, which is incompatible with all five theories.
The following analysis of the eMSP PSR J1618-3921 aims to broaden the knowledge about these systems, to find any similarities that could pave the way towards new formation models.
The discovery of PSR J1618-3921 (henceforth J1618-3921; similarly, all other J2000 object names refer to pulsars if not indicated otherwise) was reported by <cit.> as part of a 1.4-GHz survey of the intermediate Galactic latitudes with the Parkes radio telescope. It is a recycled Galactic-disk pulsar in a binary orbit with a period of 22.7 around a low-mass companion, presumably a HeWD. With a rotational period of 11.98, but an unmeasured period derivative, it was suspected of being an MSP. As a result of the first observations, J1618-3921 stood out from the pulsar population in the Galactic Plane due to its anomalously large orbital eccentricity of 0.027 <cit.>. It is now thought to belong to the eMSP class <cit.>; it is however the pulsar with by far the lowest eccentricity and longest spin period within that sub-population.
After a decade of sporadic observations with Parkes, <cit.> aimed to precisely measure the pulsar's spin, astrometric and orbital parameters via a set of dense observations of the pulsar with the Nançay radio telescope (NRT): 51 h of regular observations spread over three observing campaigns. This resulted in the first ever timing solution for this system; its parameters are given in Tab. 3 of <cit.> and, for completeness, also shown in the second column of Tab. <ref>. This shows that the pulsar is an MSP (from the small period derivative) and confirms the unusual orbital eccentricity. Due to the limited precision (i.e., a comparatively large mean uncertainty of the Nançay ToAs) and timing baseline, the observations were not sufficient to reveal additional timing parameters such as the pulsar's proper motion, the rate of advance of periastron or the Shapiro delay.
After the addition of J1618-3921 to the RelBin program, it has been regularly observed with the MeerKAT radio telescope. In addition, we have also started observing it regularly with the Parkes radio telescope and continued observations at the NRT. Using all extant data on this pulsar - adding up to a total baseline of more than 23 years - we derived an updated timing solution that improves on both the numerical precision and the number of measured relativistic effects of the binary orbit, including the first estimates of the component masses.
In the course of the paper, we will start with a brief summary of the observations of J1618-3921 in Section <ref>. Section <ref> will cover the profile analysis; Section <ref> contains the timing analysis, where we report our new timing solution, including constraints on additional parameters compared to those reported by <cit.>, among them the constraints on the masses of the system. This is followed by a thorough discussion of the current state of knowledge on eMSPs in Section <ref>, with special focus on the combined results from the timing of other eMSPs and our J1618-3921 timing parameters. Finally, we conclude by summarising our results in Section <ref>.
§ OBSERVATIONS AND DATA PROCESSING
§.§ Parkes
The first observations of J1618-3921 at the Parkes Radio telescope date back to the 1999 project P309 <cit.>, followed by observations in 2001 during P360. In total, the pulsar was observed on six days in August 1999 and on three days in 2001, spanning the orbital phase from 0 to 0.3 and 0.5 to 0.7 respectively. Both runs use the central beam of the 13-beam 21 "multi-beam" receiver <cit.>, with a central frequency
of 1374 and a bandwidth of 288. After a change to the CPSR-2 (Caltech-Parkes-Swinburne-Recorder) backend, J1618-3921 was monitored again in the first half of 2003 with a monthly cadence (covering the orbital phase between 0.2 and 0.7) and twice in 2005 with a gap of five days. These observations were now made simultaneously in two different 64 bands, with central frequencies at 1341 and 1405 respectively. Further technical details of these observations are described in <cit.>.
Making use of the ultra-wideband receiver together with the Medusa backend <cit.>, observations of J1618-3921 with Parkes resumed in 2019, and continue on a regular basis at the time of writing. The UWL receiver has a bandwidth of 3328 centred around a frequency of 2368. When used in pulsar folding mode, the data have a typical sub-integration length of 30 and a resolution of 128 channels in each of the 26 sub-bands (i.e. each channel has a bandwidth of 1), 1024 phase bins and full polarisation information <cit.>.
§.§ Nançay
As pointed out in <cit.>, J1618-3921 was first observed at Nançay in May 2009 with the Berkeley-Orléans-Nançay (BON) instrument. Due to a lack of detailed information on the spin and orbital parameters and the dispersion measure (DM), these first observations were conducted using the "survey" mode. The incoherent de-dispersion and coarse time resolution associated with this mode lead to very large ToA uncertainties. After the change of the Nançay instrumentation to NUPPI, a clone of the Green Bank Ultimate Pulsar Processing Instrument (GUPPI), in August 2011, J1618-3921 was still observed in survey mode, with a total bandwidth of 512 divided into 1024 channels with 64 sampling. Once a coherent timing solution for J1618-3921 was found, observations were continued using the "timing" mode of NUPPI from December 2014 on. In this mode, NUPPI is able to coherently de-disperse the data and also samples with higher time resolution. This leads to a significant improvement in the quality of the observations, which is visible in the decrease of the mean ToA uncertainty. The observation lengths vary between 1500 and 3400, with sub-integrations that vary between 15 and 30. In the other axes, all data files have the same resolution of 128 frequency bins, 2048 phase bins and full polarisation information.
§.§ MeerKAT
As part the RelBin programme <cit.> at the MeerKAT telescope, J1618-3921 has been observed since March 2019, yielding a total observation time of 28.85 hours. All observations use the L-band receiver (central frequency of 1284 and an effective bandwidth of 776) together with the PTUSE backend. All technical set-up details can be found in <cit.>, <cit.> give a thorough description of the polarisation and flux calibration. The typical sub-integration length is 8, and each observation contains usually 2048 sub-integrations at a frequency resolution of 1024 channels over the full bandwidth, 1024 phase bins and the full polarisation information.
Comparing the details of the MeerKAT observations with the Parkes UWL observations, clearly the former have exceptionally low noise, resulting in outstanding quality of profile measurements. This is evident from the mean ToA uncertainty, which is almost a factor of six lower for the MeerKAT observations than for the Parkes (a full discussion of the timing procedure and ToA derivation will be given in Sec. <ref>). However, the Parkes observations do reveal the structure of the pulse profile at higher frequencies. A summary of all observations is presented in Table <ref>.
§.§ Data processing
Following standard data reduction procedures in pulsar timing, we used the psrchive <cit.> software package. If not explicitly indicated otherwise, all programs or commands referred to in this section are part of this package.
The early Parkes data sets were manually cleaned of radio frequency interference (RFI) using pazi and psrzap. We used the psrpype pipeline[publicly available under <https://github.com/vivekvenkris/psrpype>] for the data reduction of the UWL observations, which have observing lengths between 2048 and 14402 seconds. psrpype uses the clfd software package[publicly available under <https://github.com/v-morello/clfd>] <cit.> for RFI cleaning and flux calibration measurements of the Hydra A radio galaxy, returning cleaned and flux-calibrated pulsar archives. In order to polarisation calibrate the observations, METM (Measurement Equation Template Matching) <cit.> was performed on the observations, using off-target calibration observations with injected pulses from a noise diode. The calibrated and cleaned UWL data were folded into 13 frequency sub-bands.
By default, all pulsar fold-mode observations conducted with MeerKAT as part of the RelBin program are put through the meerpipe pipeline, which performs the RFI excision and polarisation calibration. meerpipe is a modified version of coastguard <cit.>. For the polarisation calibration, a calibration observation is performed before each pulsar observation session, from which the Jones matrices used to calibrate the pulsar observations are obtained. For more details, see <cit.>. The cleaned and calibrated files are then decimated in time, frequency and polarisation to the desired resolution, which in the case of this work means a scrunching factor of 116 in frequency, 128 in time and a full scrunch in polarisation. This leaves observations containing 8 frequency channels across the 775.
The NRT data archives went through the full data reduction scheme described in <cit.>. For the final analysis, we re-installed our latest ephemeris to the data and folded each observation completely in time and polarisation. These archives had a sufficient S/N to keep a resolution of four frequency channels across all observations. We used frequency-resolved templates to account for the strong profile evolution across frequency. These were generated by iteratively running paas on the four frequency channels. Then we obtain frequency resolved ToAs via the pat command.
§ RADIO EMISSION PROPERTIES
§.§ Change of profile with frequency
If not otherwise indicated, for all analyses of the pulse profile, the integrated profile was obtained by summing up all observations of J1618-3921 on a backend-wise basis and summing them along the time, frequency and polarisation axes.
The left part of Fig. <ref> shows the profile as seen by MeerKAT's L-band receiver after ∼ 26 hours of integration, the middle part shows the equivalent for Parkes with the UWL receiver after ∼ 29 observing hours and the right part corresponds to the ∼ 50 hours of observations with the Nançay radio telescope. The pulse profile shows a main pulse with a duty cycle of roughly 20%. It consists of two sharper peaks, where the first one exhibits a small sub-peak on its right side. For the MeerKAT observations, the first sub-pulse peaks at ∼6/7 of the peak intensity of the second pulse. The main pulse is preceded by a low-intensity pulse with 1/7 of the main pulse amplitude, which is located at ∼110 beforehand. The shape of that secondary pulse is somewhat different than that of the main pulse, with a plateau-like feature on its left-hand side and a wider peak. Although it has a duty cycle of only around 15%, due to its low amplitude and shape plateau it appears more smeared out than the main pulse.
In all plots in Fig. <ref>, the heat map in the lower sub-figure resolves the pulse into the different frequency bands, a brighter colour indicating a larger intensity. Clearly, the intensity of the pulse decreases with increasing frequency, meaning that the pulsar has a steep spectrum. <cit.> found a spectral index of -2.28(4). At the same time the profile is broader at lower frequencies. For the main pulse this means that the two sharp peaks almost merge into one single broad peak at the lowest frequencies. In light of the template matching used in pulsar timing to create the ToAs, this might be a significant impairment of the ToA precision in the lower frequency bands.
§.§ Polarisation properties
Fig. <ref> shows the polarisation profile of J1618-3921 as recorded with the MeerKAT L-band receiver and corrected for the Rotation Measure given in <cit.>, as well as the evolution of the position angle (PA) across the pulsar's phase. The PAs are measured in the so-called "observer's convention". The PA displays sudden jumps at the edges of the main pulse that are coincident with the sharp drops in the total linear polarisation. These features are consistent with arising from orthogonal polarisation modes <cit.>, a phenomenon that is either intrinsic to the emission of the pulsar <cit.> or results from propagation effects in the pulsar magnetosphere <cit.>.
At the right edge of the pulse we find a jump of clearly less than 90, with an offset of only 60–70 from the nominal PA swing. This indicates that these jumps do not originate purely from linear modes, but most likely from magnetospheric propagation effects creating circular modes as well <cit.>.
We can draw information about the geometry of J1618-3921 from the highly resolved swing of the polarisation angle across the main pulse. This can be explained by means of the Rotating Vector Model (RVM) <cit.>: the emitted electromagnetic waves are polarised along the magnetic field lines, which point radially outwards within the emission cone. As the beam moves across the line of sight, the observer sees these field lines at an ever-changing angle <cit.>.
Exploiting basic geometric considerations, the RVM yields, for the position angle ψ:
tan(ψ-ψ_0) = sinαsin(ϕ-ϕ_0)/sin(α+β)cosα - cos(α+β)sinαcos(ϕ-ϕ_0)
where α is the inclination angle of the magnetic axis relative to the spin axis and ζ is the angle between the line of sight and the spin axis of the pulsar. The latter is connected to β (the minimum distance between the magnetic axis and the line of sight) via ζ = α+β <cit.>. This minimum distance happens at spin phase ϕ_0; this is where ψ has the steepest slope, and the corresponding PA of the linear polarisation is ψ=ψ_0. The angles in Eq. <ref> are defined as in <cit.>, i.e. the "RVM/DT92" convention <cit.>. With the polarisation angle measurements from the MeerKAT observations (all data points in Fig. <ref>), we determine the RVM parameter posteriors in their joint parameter space following the method outlined in <cit.>. The model also accounts for the possibility of OPM jumps and includes the corrected values in the fit. Keeping in mind the caveats associated with the RVM model, see e.g. <cit.>, the results from the best fit model are shown in terms of corner plots in Fig. <ref>. Following <cit.>, the results are presented using the RVM/DT92 convention.
We obtain α=62.27^+0.26_-0.25 and ζ = 110.63^+1.02_-0.93, quoting the 68% confidence levels on the posteriors.
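For illustration, the following minimal Python sketch evaluates the PA swing of Eq. <ref> for the best-fit geometry; the reference phase ϕ_0 and PA ψ_0 are set to zero here because their fitted values are not quoted in this excerpt.

import numpy as np

def rvm_pa(phi_deg, alpha_deg, zeta_deg, phi0_deg=0.0, psi0_deg=0.0):
    # Position angle (deg) predicted by the rotating vector model,
    # with zeta = alpha + beta the viewing angle.
    a, z = np.radians(alpha_deg), np.radians(zeta_deg)
    dphi = np.radians(np.asarray(phi_deg) - phi0_deg)
    num = np.sin(a) * np.sin(dphi)
    den = np.sin(z) * np.cos(a) - np.cos(z) * np.sin(a) * np.cos(dphi)
    return psi0_deg + np.degrees(np.arctan2(num, den))

# PA swing across the main pulse for alpha ~ 62.3 deg and zeta ~ 110.6 deg.
print(rvm_pa(np.linspace(-30.0, 30.0, 7), 62.27, 110.63))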
§.§ Change of the profile with time
While inspecting the timing residuals we encountered an intriguing feature in the MeerKAT observations: starting with the observation from 2021-07-06, all residuals are offset by about 1 with respect to all residuals before that in the data set, while the MeerKAT residuals from observations before July 2021 align with the residuals from the other telescopes after fitting for a jump between them.
We found a change of the mean pulse profile to be the reason for the jump in the residuals. In Fig. <ref>, we show the summed profiles from all MeerKAT observations before the jump occurred, with a total of 26 hours, and from the 7 hours of observations since July 2021 that led to the offset ToAs. In the following we will refer to the former as the "pre-change profile" and to the latter as the "post-change profile". Both profiles are generated by integrating the respective archives in time, frequency and polarisation. The first panel in Fig. <ref> contains their difference ("residual profile"), calculated by matching the pre- and post-change profiles with the subroutine from the python interface of psrchive <cit.> and subtracting the re-scaled version of the latter from the former[This numerical output and graphical display is similar to running the command.]. The underlying method of alignment is a χ^2-fit of the Fourier-transformed profiles to each other to determine the respective phase shift and scale offset. The re-scaling process consists of applying the phase rotation and overall intensity scaling of the fitting process to the latter profile. It is clearly visible that the profiles differ significantly from each other.
To reassure ourselves that the change actually occurred in mid-2021, we performed a set of control analyses. To this end, we split the frequency- and polarisation-scrunched data from the pre- and post-change archives into two observations each. Then we repeated the subtraction procedure with these observations for all possible combinations. As expected, the fitting within each data set (pre with pre and post with post) yielded flat residual profiles in both cases. When cross-matched (pre with post and vice versa), the shape of the deviation was reproduced. These results indicate that we are dealing with a genuine change in the mean pulse profile from July 2021.
A few such profile changes have been reported in the literature over the past years. One prominent example of a DM-related profile change is found in the observations of J1713+0747 <cit.>, which was originally associated with a DM change. A characteristic of a DM-related profile change is an f^-2 frequency dependence, i.e. this effect should dominate in the lower frequency bands. In contrast to that, the frequency dependence of the profile change of J1643-1224 <cit.> excluded a DM origin. Here, <cit.> point to changes in the emission region of the pulsar as being responsible for the change in the emission profile. As a DM-related and a magnetospheric origin of the change are difficult to distinguish, we investigated the MeerKAT observations further.
We performed a qualitative analysis of the frequency dependence by repeating the fitting and subtraction procedure on a per-sub-band basis. In doing so, we are unfortunately limited by the S/N of the observations. As we split all MeerKAT observations into eight sub-bands, we chose to display the frequency dependence at the same resolution as in Fig. <ref>. Evidently the deviation dominates in the lower frequency bands, but the nature of the change and the available S/N prevent us from confirming or refuting an f^-2 dependence.
The maximum frequency resolution feasible was sixteen sub-bands, where the deviations were most strongly visible in bands 0 to 2, weaker in bands 3 and 4, and absent from band 5 onward.
If the profile change were purely DM-related, we should be able to reproduce it by suitably altering the DM of the total pre-change archive with the highest frequency resolution (928 channels). After we scrunch this archive in frequency, it should give a residual profile similar to that seen in Fig. <ref> when compared to the pre-change profile with the original DM. By fitting for DM and spin frequency on a per-observation basis, we retrieve the effective change induced by the variations in the profile. A visual inspection of the resulting DM evolution shows that the profile change caused an alteration of around -0.01 in the dispersion measure. We interpret this change not as physical, but as caused by the impact of the profile change on the fit. Surprisingly, a reduction of the DM in the archive header by 0.01 in the reverse-engineering scheme laid out above did not reproduce the profile change we show in Fig. <ref>. This is a strong indicator that the profile change is caused by magnetospheric changes, rather than by the ISM.
A change in the magnetosphere or the viewing geometry might also alter the polarisation properties of the radio beam. Thus, we assessed the difference of the PA across the total profile prior to and after the jump. We did not find any indications of a change.
Putting everything together, the frequency-resolved analysis of the jump points towards a non-ISM-related profile change, as we were not able to reproduce the profile change by introducing an artificial DM change for the pre-change observations (before July 2021). We point out that we could not investigate whether the change could be caused by a strong scattering event, as our spectral analysis is limited by the steep spectral index and the consequently low S/N in the upper bands.
Since July 2021, observations of J1618-3921 have been conducted not only with MeerKAT, but also with the Parkes and Nançay radio telescopes; we therefore inspected the other data sets for further traces of the profile change. With only one observation from the NRT in that time span, we cannot make a meaningful statement concerning any impact of the profile change. In contrast, we have several observations with the Parkes radio telescope before and after the profile change. The summed profile resulting from the Parkes observations after July 2021 does not show any significant differences to the summed profile of the observations before that date. However, the mean ToA uncertainty of the Parkes observations is much larger than the size of the respective jump needed for the MeerKAT data set. Thus we will treat these ToAs jointly.
§ TIMING ANALYSIS
§.§ Generating Times-of-Arrival
We produced the ToAs for all data sets using the standard template matching technique employed in pulsar timing: the ToAs are calculated by correlating a standard template against the profiles of each observation archive, integrated over polarisation and a suitable number of time and frequency channels. The time and frequency resolution for each telescope is chosen in a trade-off against the resulting ToA precision, resulting in the number of frequency channels specified in Tab. <ref>. For frequency-resolved ToAs we created a frequency-resolved standard template by iteratively running paas on the integrated profile in each frequency channel. The ToAs were obtained via the pat command. The significant decrease of intensity in the higher frequency channels for the MeerKAT and Parkes observations results in large ToA uncertainties in these bands. For the timing analysis, we carefully discarded these ToAs in order to reduce the computational load of the analysis without altering the fit results. At most MJDs we are still left with a frequency resolution of up to 9 (7) channels for the Parkes (MeerKAT) data, which is a large improvement over the previous work <cit.>. Due to the low S/N of the earlier data from Parkes and the NRT, those ToAs were generated using the fully integrated observations, i.e. one frequency channel per observation.
§.§ Fitting timing models
To analyse the final data set containing 1535 ToAs we use the timing software package tempo2 <cit.>, which performs a least-squares minimisation of the residuals based on the χ^2 statistic; in addition, we performed a Bayesian timing and noise analysis using the temponest plugin, described below.
The different data sets were combined by introducing a jump between each of them, with the MeerKAT data set before July 2021 as the reference data set. These jumps were treated as free fitting parameters in the tempo2 fit, while usually being marginalised over in the temponest analysis. Additionally, parts of the MeerKAT data set were corrected for known jumps.
By default, all ToA timestamps were recorded with an on-site reference clock. To be able to combine measurements from different telescopes, these are then converted to Coordinated Universal Time (UTC). Furthermore, UTC is converted to the main realisation of the terrestrial time (TT), the high-precision coordinate time standard called "International Atomic Time" (TAI, temps atomique international). It is defined via the theoretically elapsed proper time on the Earth's geoid and thus not prone to Earth's rotational variations as UTC is. Finally the ToAs are transformed to the Solar System Barycentre (SSB), by accounting for the relative motion between each telescope and the SSB with JPL's Solar System Ephemeris DE436.
For the binary orbit, tempo2 provides several models based on the calculations by <cit.>, which provided a standard orbital model (henceforth "DD"). In this model, the orbital motion is parameterised by five Keplerian parameters (binary orbital period P_b, longitude of periastron ω, time of periastron passage T_0, orbital eccentricity e and orbital semi-major axis projected along the line-of-sight x) and a few additional "post-Keplerian" parameters that quantify, in a theory-independent way, the relativistic deviations from the Keplerian orbital motion. Relevant here are: the rate of change of the orbital period Ṗ_b, the rate of periastron advance ω̇, the Einstein delay γ (which quantifies the effects of the varying gravitational redshift and special-relativistic time dilation) as well as the Shapiro delay, which affects the propagation time of the radio waves to Earth. In the DD model, the latter effect is parameterised using the "range" r and "shape" s parameters. In GR, these are related to the companion mass M_c and the sine of the orbital inclination angle ι respectively <cit.>.
Upon deriving a timing solution for J1618-3921 we analysed the ToAs with the theory-independent DDH model developed by <cit.>, which differs from the DD model only in the parameterisation of the Shapiro delay: the new PK parameters (h_3 and ς) are less correlated than r and s, especially for systems with small orbital inclinations like J1618-3921.
In addition, in a later stage of the analysis, we used the "DDGR" model, which, unlike "DD" and "DDH", is not theory-independent but assumes that general relativity is the correct theory of gravity; here no PK parameters are fitted, only the total mass of the system and the companion mass. Due to the geometry of the system, the DDH model allowed for a more stable fit than the DDGR model.
After obtaining a first timing solution, which phase-connected the ToAs across the complete timing baseline, we updated the ephemeris in all available observations. With the new ephemeris installed, we repeated the entire process to obtain better profiles and standard templates. With these updated standards we then re-calculated the ToAs.
§.§ Bayesian timing and noise models
After deriving a final stable fit in tempo2 with the DDH model, we performed a Bayesian non-linear fit of the timing model by means of the temponest software package. This plugin relies on Bayesian parameter estimation, which (among other features) enables the fit for stochastic noise processes such as white noise, red timing noise and changes in dispersion measure using power-law based models <cit.>. Using the parameters from the tempo2 output ephemeris as the input for temponest, we derived a timing solution which additionally accounted for the commonly used noise parameters: unrecognised systematics in the ToA uncertainties are modelled by the white noise parameters EFAC F and EQUAD Q on a per-backend basis. The uncertainty σ_ToA, old of each ToA is thereby re-scaled as σ_ToA,new = √(Q^2 + F^2σ_ToA, old^2) <cit.>. For the chromatic models, we obtained an amplitude A and a spectral index γ <cit.>.
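A minimal sketch of this white-noise re-scaling is given below; the EFAC and EQUAD values are purely illustrative, as the fitted per-backend values are listed in the timing tables and not reproduced here.

import numpy as np

def rescale_toa_sigma(sigma_old, efac, equad):
    # Per-backend white-noise re-scaling: sigma_new = sqrt(EQUAD^2 + EFAC^2 * sigma_old^2).
    sigma_old = np.asarray(sigma_old)
    return np.sqrt(equad**2 + efac**2 * sigma_old**2)

# Illustrative ToA uncertainties (in microseconds) and noise parameters.
print(rescale_toa_sigma([1.0, 5.0, 20.0], efac=1.1, equad=0.5))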
In order to find the best-fitting chromatic noise model, we proceeded in a two-fold way: on the one hand, we compared the evidence returned by the sampler Multinest <cit.> for different combinations of noise models (red noise (RN) only, DM noise only, red and DM noise). On the other hand, we also varied the number of noise model coefficients between 45, 60 and 100, and compared the resulting time-domain realisations between the different models. The realisations were produced using the methods of the La Forge github repository[Freely accessible via <https://github.com/nanograv/la_forge>] <cit.> adapted for the relevant models at hand. The most favoured models were the 60- and 100-coefficient DM-only models, with a difference in the log-evidences of 29. From comparing 100 averaged realisations of both noise models to the ToAs, we chose the 100-coefficient model, as it visibly reflected the ToA changes more precisely than the 60-coefficient model. The respective time-domain noise realisations are shown as the blue lines in the lower plot of Fig. <ref>. temponest accounts for the DM noise in terms of a power-law model <cit.>, for which the chosen model has an amplitude of A_DM=-10.37 and a slope of γ_DM=0.94. This slope is exceptionally shallow for a noise process whose slope is usually expected to be of the order of 2. From Fig. <ref> we can deduce that the residuals exhibit some significant small-timescale variations which might give rise to the shallow slope. Nevertheless, the time-domain noise realisation in the lower plot of Fig. <ref> shows that the noise model matches the visible trends in the data, hence we regard the noise model as satisfactory.
As the data set exhibits large gaps in the beginning of the observations, we also investigated the covariance between the jumps and the timing parameters by setting up a temponest analysis in which the jumps are also treated as free parameters. We did not find a significant change in any of the timing parameters.
The timing parameters of the best-fit solution from temponest using the DDH model are presented in the third column of Tab. <ref>. Each parameter is quoted as the maximum of the marginalised posterior together with the respective left and right 39% confidence limits. The timing residuals achieved from this solution are shown in Fig <ref>. Tab. <ref> also shows the corresponding parameters reported by <cit.>, with blank entries when the parameter was fitted for the first time in the scope of this work. In Fig. <ref> we show both the 2D correlation contours and the 1D posterior distributions resulting from the temponest analysis for a chosen subset of fitted parameters to which we attribute a higher relevance in this work.
In the following, we will present the individual timing parameters in greater detail and discuss their implications for the binary system based on the numeric values derived from the best-fit temponest solution.
§.§ Position and proper motion
As usual, the timing solution provides the pulsar's position with very high accuracy.
With a location at RA (J2000) 16h 18' 18.824940(38)” and DEC (J2000) -39 21' 01.815(10)”, we searched the second data release of the DECam Plane Survey (DECaPS2) <cit.>, a five-band optical and near-infrared survey of the southern Galactic plane, using the Aladin Lite web interface[<https://aladin.cds.unistra.fr/>] <cit.>. The corresponding excerpt from the survey image, with a field of view of about 17 around the pulsar's position, is shown in Fig. <ref>. At the position of the pulsar (indicated by the purple hair-cross on the image), we cannot identify any counterpart for either the pulsar or its companion. This implies that the electromagnetic emission of both bodies is below the detection thresholds of this survey, which are quoted as 23.7, 22.7, 22.2, 21.7, and 20.9 mag in the grizY bands <cit.>.
We are able to measure both the proper motion in Right Ascension, μ_α=1.24^+0.14_-0.13, and in Declination, μ_δ=-2.37(35). This leads to a total proper motion of 2.7(3).
Furthermore, combining the timing model value of the dispersion measure DM with models of the electron distribution of the Galaxy, we infer a distance to the pulsar of 2.7 to 5.5 . For the lower boundary of the distance window we apply the NE2001 model <cit.>, the upper boundary is based on the YMW16 model <cit.>. Using the distance from the NE2001 model, we translate the measured proper motion into a heliocentric transverse velocity of the binary system of v_T = 33(4).
§.§.§ Spin-down and higher frequency derivatives
An important quantity describing a pulsar's properties is the intrinsic spin-down Ṗ_int. For a pulsar at a distance d moving with a relative proper motion μ, any time-related measurement is influenced by the change in the Doppler shift. Thus we correct the precisely measured period derivative Ṗ = 5.37620(68)e-20 to
Ṗ_int/P = Ṗ_obs/P + Ḋ/D,
where D is the Doppler factor caused by the unknown radial velocity of the pulsar and Ḋ its derivative. Although neither D nor Ḋ is known, their ratio can be estimated as:
Ḋ/D = - 1/c[K⃗_0·(a⃗_PSR - a⃗_SSB) + V_T^2/d] = - a/c - μ^2d/c,
where the first term holds the contribution of the line-of-sight acceleration a by projecting the difference between the Galactic acceleration at the position of the pulsar a⃗_PSR and the solar system barycenter (SSB) a⃗_SSB onto the unit vector K⃗_0 pointing from the Earth to the pulsar. The second term, which depends on the transverse velocity V_T and the distance to the pulsar d, is the Shklovskii term <cit.>.
We obtain suitable values of the Galactic acceleration at the SSB and the position of the pulsar using the Milky Way mass model presented by <cit.>. For the position and velocity of the solar system barycenter we assumed R_⊙ = 8.275±0.034 and V_⊙ = 240.5±4.1<cit.>.
Using the NE2001 distance estimate, the Shklovskii effect contributes Pμ^2d/c=6.2e-22. The Galactic acceleration field partly compensates this effect with an excess period change of Pa/c=-1.4e-22. We therefore arrive at an intrinsic spin-down of Ṗ_int=5.33326e-20, which is only slightly smaller than Ṗ_obs (Ṗ_int = 0.991 Ṗ_obs).
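As an illustration, the Shklovskii term can be evaluated with a few lines of Python; the numbers below are the rounded values quoted in the text (P = 11.988 ms, μ ≈ 2.7 mas/yr, NE2001 distance of 2.7 kpc), and the exact values used in the fit may differ slightly.

import numpy as np

C = 2.99792458e8                                        # m/s
KPC = 3.0857e19                                         # m
MASYR = np.radians(1.0 / 3.6e6) / (365.25 * 86400.0)    # mas/yr -> rad/s

def shklovskii_pdot(p_s, mu_masyr, d_kpc):
    # Apparent period derivative from transverse motion: P * mu^2 * d / c.
    mu = mu_masyr * MASYR
    return p_s * mu**2 * (d_kpc * KPC) / C

# ~6e-22, of the same order as the quoted 6.2e-22.
print(shklovskii_pdot(11.988e-3, 2.7, 2.7))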
Furthermore, with f̈= -1.0(2)e-27 we find a non-zero value of the second derivative of the spin frequency. This value is multiple orders of magnitude larger than what is expected from a pure spin-down (𝒪(10^-33), assuming a characteristic age of 10 and a braking index of 3) and among the very few values of f̈ measured for the 333 pulsars with P < 30. Outside of globular clusters, only 9 measurements of f̈ have been made <cit.>, mostly for highly energetic gamma-ray MSPs, where timing noise could be present; additionally, some of these systems are in "black widow" binaries with strong outgassing.
In one case (J1024-0719), the pulsar is known to have a distant companion, a K dwarf <cit.>; in another case, J1903+0327, the system is thought to have formed in a triple system that later became unstable <cit.>; perhaps the third object was not fully ejected and is still somewhere in the vicinity of the system. A comparison of the timing residuals for the timing models with and without this parameter is shown in the upper plot of Fig. <ref>. Higher derivatives of the spin frequency are likely to originate from a varying acceleration along the line of sight of the binary system. The implications of the measurement of f̈ for the nature of the system and other timing parameters will be discussed in more detail in Sec. <ref>.
§.§ Post-Keplerian parameters
§.§.§ Rate of advance of periastron
The orbital eccentricity of the system and the long timing baseline allow a highly significant measurement of the rate of advance of periastron, despite the wide orbit: ω̇ =0.00142^+0.00008_-0.00010.
If this effect is purely relativistic, it yields a direct measurement of the total mass of the system, M_ tot.
In order to gauge the reliability and meaning of the measurement of ω̇, we have to consider the possibility of additional non-relativistic effects. The most important of these is a proper motion contribution ω̇_μ. This contribution is given by <cit.>
ω̇_μ = μ/sinιcos(Θ_μ - Ω),
where Θ_μ is the proper motion position angle and Ω the position angle of the line of nodes. Assuming an optimal alignment (cos(Θ_μ - Ω)=1), it contributes at the order of ω̇_μ∼8e-7.
As discussed in <cit.>, a third body in the system can add a contribution to the observed periastron advance:
ω̇_triple = (ẋ/x)_triple2[sin^2θ_3 (5cos^2Φ_3 - 1) -1]/ιsin2θ_3cos(ω+Φ_3).
Including ẋ in the timing model fit yields ẋ=(2±8)×10^-15, which is consistent with a non-detection. Considering that the geometric terms in Eq. <ref> contribute at 𝒪(1), the fitted value of ẋ gives an upper limit to the contribution of the putative third body to the periastron advance of ω̇_triple < 3e-7.
Compared to the measured rate of advance of periastron, both contributions are negligibly small, so we conclude that the measured value of ω̇ is within measurement precision, relativistic. The relativistic ω̇ relates to the total mass of the system as
M_tot = 1/T_⊙[(ω̇/3)(1-e^2)]^3/2(P_b/2π)^5/2,
where T_⊙≡ GM_⊙^N/c^3= 4.9254909476412675… μ s is an exact quantity that follows from the exact definitions of the speed of light c and the solar mass parameter GM_⊙^N<cit.>. From the best-fit parameters, we derive a total mass estimate of 1.42^+0.20_-0.19. Comparing this result with the mass measurements for similar NS-WD binaries [e.g. those listed under <https://www3.mpifr-bonn.mpg.de/staff/pfreire/NS_masses.html>], we find that our measurement lies well within the expected mass range.
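For illustration, the following short Python sketch inverts this relation; evaluating it with values close to those quoted in this paper (the exact fitted P_b and e are not reproduced in this excerpt) yields a total mass in the 1.4-1.5 range, consistent with the quoted estimate within its uncertainties.

import numpy as np

T_SUN = 4.9254909476412675e-6                          # GM_sun^N / c^3 in s
DEGYR = np.radians(1.0) / (365.25 * 86400.0)           # deg/yr -> rad/s

def total_mass_from_omdot(omdot_degyr, pb_days, ecc):
    # Invert omdot = 3 (Pb/2pi)^(-5/3) (T_sun M)^(2/3) / (1 - e^2).
    omdot = omdot_degyr * DEGYR
    pb = pb_days * 86400.0
    return ((omdot / 3.0) * (1.0 - ecc**2))**1.5 * (pb / (2.0 * np.pi))**2.5 / T_SUN

# Approximate values: omdot ~ 0.00142 deg/yr, Pb ~ 22.7 d, e ~ 0.027.
print(total_mass_from_omdot(0.00142, 22.7, 0.027))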
§.§.§ Shapiro delay
As a pulsar in the RelBin programme, one of the main aims of this work for J1618-3921 is a significant Shapiro delay measurement, by means of the high timing precision that comes with the MeerKAT observations. The rather low flux density, combined with a low inclination angle, made a precise measurement of the Shapiro delay difficult. ToAs obtained from a dedicated observation campaign around superior conjunction allowed us to stabilise the DDH-based tempo2 fit and arrive at a low-significance detection of the Shapiro delay. From the temponest analysis we found h_3 = 2.70^+2.07_-1.47 and ς = 0.68^+0.13_-0.09. In order to convert these measurements and the measurement of ω̇ into constraints on the masses and the inclination angle of the system, we perform a χ^2-grid analysis of the M_PSR-cosι space (cf. Sec. <ref>). The unconstrained inclination angle in the right plot of Fig. <ref> resulting from this analysis demonstrates that we did not arrive at a significant measurement of the Shapiro delay.
§.§.§ Mass measurement
We now estimate the masses with the highly significant detection of ω̇ and the weak Shapiro delay constraints using the analysis technique outlined in <cit.>. At each grid point corresponding to a (M_PSR,M_c)-pair we fix the respective values of M_tot and M_c in a DDGR ephemeris adapted from the actual temponest results, which is then used in a tempo2 fit. With the two mass values, the DDGR model self-consistently accounts for all observed relativistic parameters except for the orbital decay, where we know there are large contributions from other causes. The goodness of the fit is quantified by the χ^2 value of the tempo2 fit, where a lower χ^2 value describes a better fit. The result is a map of χ^2 values across the M_PSR-M_c-grid, which can be translated into credibility contours by subtracting the global minimum value across the map from all map points. The result is displayed in the mass-mass diagram in Fig. <ref>, together with the credibility band from the rate of advance of periastron. With this method we constrain the companion mass to 0.20^+0.11_-0.03, the pulsar mass to 1.20^+0.19_-0.20 and the total mass to 1.42^+0.20_-0.19 (68.3 % confidence limits).
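As a simple consistency check, the orthometric parameters can be predicted from the derived masses under GR; a minimal Python sketch is given below. With M_c ≈ 0.20 and the fitted ς ≈ 0.68 it gives an inclination of ≈68 and an h_3 of ≈3e-7 s, i.e. a Shapiro delay amplitude well below a microsecond, which illustrates why the detection is of low significance (note that the units of the h_3 value quoted above are not preserved in this plain-text version).

import numpy as np

T_SUN = 4.9254909476412675e-6   # s

def shapiro_consistency(m_c, varsigma):
    # Orthometric Shapiro-delay parameters under GR:
    # varsigma = tan(i/2), h3 = T_sun * M_c * varsigma^3 (in seconds).
    h3 = T_SUN * m_c * varsigma**3
    incl_deg = 2.0 * np.degrees(np.arctan(varsigma))
    return h3, incl_deg

print(shapiro_consistency(0.20, 0.68))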
§.§.§ Change of orbital period
The impact of the change of the orbital period on the timing residuals is shown in the lower plot of Fig. <ref>. Similar to the measurement of the spin period derivative, the observed rate of change of the orbital period is the sum of various effects,
( Ṗ_b/P_b)^obs = ( Ṗ_b/P_b)^GW + ( Ṗ_b/P_b)^ṁ - Ḋ/D,
where apart from the kinematic contributions (Ḋ/D), also emission of gravitational waves (GW) and mass loss from the system (ṁ) might significantly contribute to the measured value. Evaluating the expressions given in <cit.> for the latter two effects, we find (Ṗ_b/P_b)^GW∼-1e-23 and (Ṗ_b/P_b)^ṁ∼4e-28. Compared to our measured value, these contributions are negligible.
Thus, the only significant term comes from -Ḋ/D. Using the value calculated in section <ref>, we obtain Ṗ_b = -Ḋ/D P_b∼ + 0.05 × 10^-12. Surprisingly, the best-fit timing model reveals a measured orbital period change of -2.2^+0.35_-0.33×10^-11. This is not only two orders of magnitude larger than expected, but also carries an opposite sign.
All considered effects are multiple orders of magnitude too small to provide an explanation for the large observed value of Ṗ_b. A possible solution to this tension is the presence of an additional acceleration caused by a third body in the vicinity of the binary, as discussed in Sec. <ref>.
§.§.§ Other parameters
As for similar systems, we are not able to obtain a significant measurement of the Einstein delay amplitude γ or any variation of the projected semi-major axis ẋ, since their contributions to the residuals are beyond the current precision of our ToAs; furthermore, given the orbital periods of these pulsars, the timing effects of ẋ and γ are strongly correlated <cit.>. Moreover, we do not detect derivatives of the spin frequency higher than f̈.
§ DISCUSSION
§.§ Comparison to Octau et al.
In comparison to the work by <cit.>, we use data not only from the NRT, but also from Parkes and MeerKAT, including observations that reach back to 1999. These early observations were available previously, but only the high quality of the MeerKAT observations, together with the observation density achieved by combining three radio telescopes, guaranteed a timing solution robust enough to extend the timing model back to 1999, through a very sparse set of observations. This long timing baseline, plus the precise recent timing, allows for the measurement of timing parameters that were not previously available: the proper motion, higher-order spin and DM derivatives, and post-Keplerian parameters. We are also able to significantly improve on the measurement of the DM and its variation. In comparison to the four frequency channels obtained from the third NRT observation run, the large-bandwidth observations with the Parkes UWL receiver have a S/N that allows us to separate them into 13 frequency channels, often with reasonable ToA precision. Although we have to discard the ToAs from the high frequency channels, we still achieved a significant refinement in the frequency resolution compared to the previous work.
§.§ Orbital geometry
If the spin of the pulsar in a binary is aligned with the orbital angular momentum, the inclination angle ι coincides with the viewing angle ζ. But upon comparing the timing result for the Shapiro delay parameter to the RVM fit results, there are two major caveats: first, in fitting for the Shapiro delay, we determine sinι. Hence, we cannot distinguish whether the corresponding inclination angle is ι or 180-ι. In case of a reliable RVM fit, this ambiguity can be solved by comparing ι to ζ. This can also not be done directly, since the above RVM equation assumes that ψ increases clockwise on the sky, opposite to the astronomical convention, in which ψ increases counter-clockwise <cit.>. Hence we have to identify the RVM fit value for ζ with 180-ι, or ι with 180-ζ respectively <cit.>.
Taking both these aspects into account, we find that, with the reference angle from the RVM fit being 180-ζ = 69.37^+1.02_-0.93, the measured sinι translates into ι=66(14). This is also confirmed by performing two further RVM fits in which we restricted the variation of ζ to one of the ranges allowed by the timing results (cf. Sec. <ref>) on sinι respectively.
Although the viewing angle from the RVM fit is consistent with the inclination angle from the timing solution (Tab. <ref>), we cannot make any reasonable statement about an alignment or misalignment of both axes due to the highly unconstrained Shapiro delay.
§.§ What causes the anomalous Ṗ_b and f̈?
The significant deviation between measurement and prediction shows that there is another contribution to the pulsar's acceleration. This additional acceleration completely dominates the expected Galactic gravitational acceleration. Such a strong gravitational field could be produced by a massive nearby object. We can test this hypothesis in a simple way. If the observed Ṗ_b is caused by an unexpected acceleration (and therefore implying a larger than assumed -Ḋ/D term), then we should be able to re-compute the spin-down of the pulsar using this term, as measured by Ṗ_b, and still obtain a positive value. Subtracting Eq. <ref> from Eq. <ref>, and neglecting the GW emission terms, we obtain:
Ṗ_ int = Ṗ_ obs - P ( Ṗ_ b/P_ b)_ obs,
since Ṗ_ b, obs is negative, this has the effect of increasing our estimate of Ṗ_ int to ∼ 1.8(3) × 10^-19, which is ∼3.4 times larger than the observed Ṗ. From this value, we estimate the characteristic age, the spin-down luminosity, as well as the surface magnetic field of the pulsar <cit.>. These values can be seen in Table <ref>, where we see how the change in the value of Ṗ_ int between this work and the work of <cit.> leads to significant differences in the values of τ_c and Ė.
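As an illustration of this correction, the short calculation below reproduces the order of magnitude of Ṗ_int and the derived spin-down quantities. The spin period, observed Ṗ and orbital period are placeholder values of the right order of magnitude, not the entries of the timing table:

```python
import numpy as np

# Illustrative values only -- the actual numbers live in the timing-solution table.
P        = 11.98e-3          # spin period [s] (assumed)
Pdot_obs = 5.4e-20           # observed spin period derivative [s/s] (assumed)
Pb       = 22.7 * 86400.0    # orbital period [s] (assumed)
Pbdot    = -2.2e-11          # observed orbital period derivative (from the text)

# Pdot_int = Pdot_obs - P * (Pbdot / Pb)_obs  (equation above)
Pdot_int = Pdot_obs - P * (Pbdot / Pb)

# Standard magnetic-dipole spin-down quantities (cgs, I = 1e45 g cm^2)
I_cgs  = 1.0e45
tau_c  = P / (2.0 * Pdot_int) / (3.156e7 * 1e9)      # characteristic age [Gyr]
Edot   = 4.0 * np.pi**2 * I_cgs * Pdot_int / P**3    # spin-down luminosity [erg/s]
B_surf = 3.2e19 * np.sqrt(P * Pdot_int)              # surface dipole field [G]

print(f"Pdot_int ~ {Pdot_int:.2e}, tau_c ~ {tau_c:.2f} Gyr, "
      f"Edot ~ {Edot:.1e} erg/s, B ~ {B_surf:.1e} G")
```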
For pulsars at low Galactic latitudes, this additional acceleration might be caused by massive molecular clouds in their vicinity. With J1618-3921 located at b = 7.9°, this is unlikely, but not impossible. Another option is that a third body is in a wide orbit around the PSR-WD binary.
The measurement of the second derivative of the spin frequency helps to distinguish between these two scenarios. A molecular cloud would be located at a large distance from the binary, so its acceleration would appear constant; in this case, we would not expect large variations in the line-of-sight acceleration, and thus in ḟ. Instead, we measure a large f̈ of -1.0(2) × 10^-27, which is very likely caused by a variation of the external acceleration. This is a strong indicator that the source of the acceleration is in the vicinity of the binary. We therefore propose that the system is a hierarchical triple system.
This line of argument is strongly motivated by a similar discussion of the J1024-0719 system <cit.>. Upon its discovery, it was regarded as an isolated pulsar, but the measurement of higher-order spin frequency derivatives led <cit.> to propose a companion in an extremely wide orbit (P_b > 200). This was confirmed by the detection of a nearby star with the same proper motion.
Comparing the measured value of f̈ for both pulsars, we find that the value for J1618-3921 is a factor of two smaller than for J1024-0719, and thus of a very similar order of magnitude. With the measurement of Ṗ_b we even have the advantage of being able to estimate the acceleration of the inner binary system; this is not possible for J1024-0719, because that pulsar is not in a binary system.
Keeping in mind that most stars are part of multiple systems, it is no surprise that on rare occasions, binaries with a pulsar are actually part of a higher-order stellar system. Due to stability arguments <cit.>, most of these systems are hierarchical triple systems, i.e. they consist of an inner binary, which is in a wider orbit around a third object.
An example is the well-known triple system consisting of the MSP J0337+1715 <cit.>. Detailed timing of this system <cit.> revealed that both orbits of the system are co-planar and circular and that the WD masses are as predicted by the TS99 relation, as expected from adopting the previously discussed WD-MSP formation scenario <cit.>. On the other hand, <cit.> showed in a broad study of triple systems that the unique dynamics of these systems also allow for a stable eccentric inner binary. They also point out that mechanisms such as Lidov-Kozai cycles prevent synchronisation and circularisation of the binary, leading to MSP systems that stand in complete contrast to the formation scenario described by <cit.>.
If PSR J1618-3921 really has a stellar companion, all derivatives of f are expected to eventually converge on a Keplerian orbit for the outer component <cit.>. Here, J1024-0719 again serves as a precedent; we should expect such MSP companions to be settled in exceptionally wide orbits. Any associated parameter derivative is therefore expected to show up only in data sets combining a long timing baseline with significant timing precision. Determining the orbital configuration of the outer companion would require the knowledge of at least the first five derivatives of f <cit.>[The first derivative of f generally cannot be used as intended by these authors, because of the a priori unknowable pulsar spin-down, but also because, in the system studied in these works (PSR B1620-26), the acceleration caused by the host globular cluster (M4) is also hard to estimate, given the lack of a precise 3-D position of the pulsar relative to M4. However, as mentioned before, in the case of PSR J1618-3921 we have direct access to the acceleration of the system via Ṗ_b, which means that the equations of <cit.> can indeed be used.]. With the knowledge of fewer derivatives, we can only place a few constraints on the orbit <cit.>: Ṗ_b relates to the corresponding acceleration a from the third body as a/c ∼ Ṗ_b/P_b. Similarly, f̈ relates to the change of the acceleration as ȧ/c ∼ f̈/f. From the acceleration and its change, we can place an order-of-magnitude estimate on the orbital period of the third body of P_b,3 ∼ a/ȧ ∼ 300, given the values from our best-fitting timing solution. This is not unexpected, and is in line with the findings of <cit.> for J1024-0719.
§.§.§ Optical counterpart
We consulted the DECaPS2 <cit.> catalogue to search for a spatially resolved object that could be associated with the PSR J1618-3921 system, and thus be identified as the binary companion or the putative third body. As mentioned in Section <ref>, no counterparts are identified near the position of PSR J1618-3921.
The upper mass limit of any companions (either the binary companion to the pulsar, or the more distant object) can thus be estimated with the depth of the catalogue through comparisons with the expected colours and magnitudes from stellar evolutionary models.
We have used the PAdova TRieste Stellar Evolutionary Code <cit.> to obtain the grizY magnitudes in the ABmag system to facilitate comparisons with the DECaPS2 catalogue.
Applying an extinction A_V∼0.2 mag [estimated via Galactic Dust Reddening and Extinction <https://irsa.ipac.caltech.edu/applications/DUST/>.] and adopting a distance of 5.5 kpc, a 0.56 M_⊙ dwarf star (∼M0V)
would have grizY = 23.9, 22.5, 21.7, 21.3, 21.2 mag, respectively.
Such a star would be near the detectability limit of the riz bands in the DECaPS2 survey, given its limiting magnitudes of 23.7, 22.7, 22.2, 21.7, and 20.9 mag in the grizY bands respectively, and would have been detected in all 5 bands if a smaller distance of 2.7 kpc is adopted. To summarise, any companion at the location of PSR J1618-3921 would have a limiting detection threshold of 23.5 mag in the G-band, which at the pulsar distance of 5.5 kpc corresponds to an absolute magnitude >9.79. This could be an M-dwarf of mass < 0.56 M_⊙ or a compact object.
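The quoted limit follows directly from the distance modulus; a quick consistency check (neglecting extinction) under the adopted 5.5 kpc distance:

```python
import numpy as np

# Apparent G-band detection threshold of 23.5 mag at 5.5 kpc corresponds to
# an absolute magnitude of ~9.8 (extinction of ~0.2 mag would lower this slightly).
m_lim = 23.5                          # detection threshold [mag]
d_pc  = 5.5e3                         # adopted distance [pc]
mu    = 5.0 * np.log10(d_pc / 10.0)   # distance modulus
M_lim = m_lim - mu
print(f"distance modulus = {mu:.2f} mag, limiting absolute magnitude = {M_lim:.2f} mag")
```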
§.§.§ Nearby stars, their motions and their gravitational accelerations
We consulted the Gaia DR3 <cit.>[<https://gea.esac.esa.int/archive/>] to search for objects that might have a proper motion similar to that of PSR J1618-3921, i.e., within the ± 3σ error ellipse.
This was, incidentally, how the distant binary companion of PSR J1024-0719 (a K7V star) was identified <cit.>.
No objects with such a proper motion are detected within a radius of 1.4 around PSR J1618-3921. Using the NE2001 distance for a lower limit, this corresponds to a minimum distance of 0.8.
In the deeper DECaPS2 catalogue <cit.> we find three nearby stars, shown in Fig. <ref>, at distances of 2″, 4″ and 2″ (following the labels 1 to 3) from PSR J1618-3921. Given the depth of this catalogue, these faint stars are not in the Gaia DR3 catalogue, so an association with PSR J1618-3921 cannot be excluded based on proper motion measurements.
Under the assumption that the three objects are stellar-type objects and that they are at the same distance as the pulsar, we have extracted their grizY magnitudes from the DECaPS2 catalogue to estimate their masses.
We use Star 1 as an example, as it has measurements in all 5 bands: 23.3, 22.0, 21.3, 20.7, 20.5 mag, respectively.
These magnitudes are in agreement with those of a 0.6 M_⊙ (∼K9V) star with A_V∼0.2 at a distance of 5.5 kpc: 23.4, 22.0, 21.3, 21.0, and 20.9 mag, respectively.
For each of the three stars, we make an order-of-magnitude estimate of the line-of-sight (LOS) acceleration a_LOS they exert on the pulsar. Assuming a typical mass of 0.6 M_⊙ for these stars, we derive the estimate via Newton's law, a_LOS = GM/d^2 sinα, where M denotes the mass of the star, G is the gravitational constant, d is the distance between the pulsar and the star, and α is the angle between the vector pointing from the star to the nearest point on the LoS and the vector pointing from the star to the pulsar. To convert the angular separations taken from Aladin into physical separations, we use the NE2001 distance, as it gives an upper limit on the acceleration. The inferred separation is the projected distance r between the pulsar and the star, so d = r/cosα. The resulting acceleration curves, calculated under the previously outlined assumptions for the three objects marked in Fig. <ref>, are shown in Fig. <ref>.
All these objects cause accelerations that are roughly two orders of magnitude smaller than the acceleration a_LoS,Ṗ_b = -3.45 × 10^-9 m s^-2 obtained from Ṗ_b.
Hence the putative wide-orbit companion of J1618-3921 must be closer than these objects, and must have a luminosity below the DECaPS2 <cit.> limit: as mentioned above, it could be an M-dwarf or a compact object.
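For reference, the order-of-magnitude comparison described above can be reproduced with the short sketch below; the 0.6 M_⊙ mass, the 2″ projected separation of Star 1, and the NE2001 distance (taken here to be 2.7 kpc) are assumed values:

```python
import numpy as np

G     = 6.674e-11   # [m^3 kg^-1 s^-2]
M_sun = 1.989e30    # [kg]
AU    = 1.496e11    # [m]

def a_los(M_star_msun, sep_arcsec, dist_pc, alpha_rad):
    """LOS acceleration a_LOS = G M cos^2(alpha) sin(alpha) / r^2 on the pulsar.

    r is the projected separation (1 arcsec at 1 pc subtends 1 au) and
    d = r / cos(alpha), as defined in the text.
    """
    r = sep_arcsec * dist_pc * AU
    return G * M_star_msun * M_sun * np.cos(alpha_rad)**2 * np.sin(alpha_rad) / r**2

alpha = np.linspace(0.01, np.pi / 2 - 0.01, 500)
a_max = a_los(0.6, 2.0, 2.7e3, alpha).max()      # Star 1, assumed NE2001 distance
print(a_max, a_max / 3.45e-9)                     # ~two orders of magnitude below a_LoS,Pbdot
```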
§ SUMMARY
This paper presents a comprehensive overview of the latest knowledge about the eccentric millisecond pulsar J1618-3921 using the combined data set from 23 years of observations with Parkes, NRT and MeerKAT radio telescopes and their respective different back-ends.
We present a detailed study of the pulsar's emission properties with two notable results. First, we recorded a profile change that happened around June 2021 in the MeerKAT observations. Our analysis favours an intrinsic profile change over an ISM-related influence, but due to the limited S/N in the upper MeerKAT frequency bands, we cannot conclusively determine the origin of this change. Furthermore, we analysed the behaviour of the position angle of the linear polarisation. Assuming purely dipolar radio emission, with the PA perfectly following the RVM, we constrained the position of the spin axis of the pulsar to 111(1)°. The uncertainty in the orbital inclination precludes any conclusions on the alignment of the spin axis of the pulsar with the orbital angular momentum.
While orbital and then phase-coherent timing solutions were already published in previous works <cit.>, here we not only improve on the precision of the earlier timing, but also provide the first solution that includes a binary model with an increased number of post-Keplerian parameters. The stability of the solution is mainly provided by the dense accumulation of data points from joint MeerKAT, Parkes and NRT observations in the recent past. This allowed us to include all available observations back to the very first ones from 1999. This long timing baseline significantly improved the measurement of the rate of advance of periastron.
Although the ToAs obtained from monthly observations with the MeerKAT L-band receiver exhibit outstanding precision compared to ToAs resulting from concurrent observations at the Parkes and Nançay radio telescopes, the low S/N of the pulsar as well as the shallow inclination angle impeded a high-significance detection of the Shapiro delay. Nevertheless, we are able to present a first constraint on the orthometric parameters h_3 and ς. Combining the low-significance Shapiro delay detection with the precise measurement of the rate of advance of periastron, we are able to present the first mass estimates for this system. Unfortunately, the steep spectral index prevents us from obtaining more precise ToAs using the S-band (1.75 to 3.5 GHz) receiver at MeerKAT. With a factor of two to three improvement in timing precision, the Shapiro delay should be measurable with useful precision, but this will only be possible with future radio telescopes of even higher sensitivity.
The most remarkable result of the timing analysis is the amount of change of the orbital period and the large second derivative of the spin frequency, which indicate that the pulsar is actually part of a triple system.
The possibility of the evolution of J1618-3921 as a triple system opens the door for similar evolution of other eMSPs. However, there is at the moment no clear evidence that other eMSPs are part of hierarchical triple systems.
Our long-term plan for this pulsar consists of regular observations of J1618-3921 with the L-band receiver at MeerKAT and the UWL receiver at the Murriyang Parkes radio telescope. We expect that the increased timing baseline will significantly improve all currently measured parameters, but also enable the detection of additional parameters such as ẋ (which will constrain the orbital orientation of the system) or higher derivatives of f, which will provide additional information on the companion mass and its orbit.
The authors thank Aurélien Chalumeau, Michael Keith and Aditya Parthasarathy for the fruitful discussions and support regarding the noise model comparison and time domain realisation. All authors affiliated with the Max-Planck-Gesellschaft (MPG) acknowledge its constant support. VVK acknowledges financial support provided under the European Union's Horizon Europe 2022 Starting Grant “COMPACT" (Grant agreement number: 101078094, PI: Vivek Venkatraman Krishnan). The MeerKAT telescope is operated by the South African Radio Astronomy Observatory (SARAO), a facility of the National Research Foundation, which is an agency of the Department of Science and Innovation. SARAO acknowledges the ongoing advice and calibration of GPS
systems by the National Metrology Institute of South Africa (NMISA), as well as the
time space reference systems department of the Paris Observatory. MeerTime data is housed on the OzSTAR supercomputer at Swinburne University of Technology, on which significant parts of the data reduction was performed. Parts of this research were supported by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004.
The Parkes radio telescope (Murriyang) is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. We acknowledge the Wiradjuri people as the traditional owners of the Observatory site.
The Nançay Radio Observatory is operated by the Paris Observatory, associated with the French Centre National de la Recherche Scientifique (CNRS). We acknowledge financial support from the “Programme National de Cosmologie et Galaxies” (PNCG) and
“Programme National Hautes Energies” (PNHE) of CNRS/INSU, France.
Parts of the data set used in this work include archived data obtained through the CSIRO Data Access Portal (http://data.csiro.au). It also made broad use of the NASA Astrophysics Data System (https://ui.adsabs.harvard.edu/).
The analysis done in this publication made use of the open source pulsar analysis packages psrchive <cit.>, tempo2 <cit.> and temponest <cit.>, as well as open source Python libraries including Numpy, Matplotlib, Astropy and Chainconsumer.
APo and MBu acknowledge the support from the research grant "iPeska" (P.I. Andrea Possenti) funded under the INAF national call Prin-SKA/CTA approved with the Presidential Decree 70/2016. APo and MBu acknowledge that part of this work has been funded using resources from the INAF Large Grant 2022 "GCjewels" (P.I. Andrea Possenti) approved with the Presidential Decree 30/2022.
|
http://arxiv.org/abs/2409.02382v1 | 20240904021835 | GGS: Generalizable Gaussian Splatting for Lane Switching in Autonomous Driving | ["Huasong Han", "Kaixuan Zhou", "Xiaoxiao Long", "Yusen Wang", "Chunxia Xiao"] | cs.CV | ["cs.CV"] |
§ ABSTRACT
We propose GGS, a Generalizable Gaussian Splatting method for Autonomous Driving which can achieve realistic rendering under large viewpoint changes. Previous generalizable 3D Gaussian splatting methods are limited to rendering novel views that are very close to the original pair of images, and cannot handle large differences in viewpoint. Especially in autonomous driving scenarios, images are typically collected from a single lane, and the limited training perspective makes rendering images of a different lane very challenging. To further improve the rendering capability of GGS under large viewpoint changes,
we introduce a novel virtual lane generation module into the GGS method
that enables high-quality lane switching even without a multi-lane dataset. Besides, we design a diffusion loss to supervise the generation of virtual lane images to further address the lack of data in the virtual lanes.
Finally, we also propose a depth refinement module to optimize depth estimation in the GGS model. Extensive validation of our method, compared to existing approaches, demonstrates state-of-the-art performance.
§ INTRODUCTION
Novel view synthesis is an essential task in the field of computer vision, with significant potential applications in autonomous driving <cit.>, object detection, and digital human representation. To enhance the robustness of autonomous driving systems, it is imperative to establish a simulation environment for testing these systems effectively. However, the majority of existing datasets are limited to single-lane scenarios. This limitation presents significant challenges in inferring adjacent-lane scenarios from the current viewpoint. If lane switching is not supported, the test samples provided to the autonomous driving simulation system will be incomplete, preventing thorough simulation testing and incurring significant data collection costs.
Methods based on NeRF <cit.> often rely on LiDAR to better generate novel views in autonomous driving scenarios. READ <cit.> introduces a rendering method that adopts a neural rendering approach different from NeRF: it learns neural descriptors of the original point cloud with explicit geometry to render images, instead of learning the implicit geometry of NeRF methods. However, the training and rendering efficiency of these methods is very low.
The efficiency in training and rendering speed, coupled with the high reconstruction quality of 3D Gaussian Splatting <cit.>, contributes to its widespread use for novel view synthesis in autonomous driving. GaussianPro <cit.> introduces multi-view stereo to improve the geometry of generated Gaussian splats. DC-Gaussian <cit.> introduces an adaptive image decomposition module to address the impact of glass reflections on the quality of novel view synthesis. However, these methods still cannot synthesize effective novel views when switching lanes, as they do not address the main problem that only a single lane of data is collected.
To address the problem of sparse view synthesis, many methods have sought to optimize this process using generative models <cit.>. Generative models are trained across a large number of scenes to enhance performance in sparse-view scenarios. However, a generative model still lacks multi-lane data from which to learn how to synthesize novel views of other lanes from single-lane data.
Therefore, we propose a virtual lane module for generative Gaussian splatting to address the synthesis of new views involving lane changes, despite the lack of multi-lane training datasets for supervision. In this module, we first use 3D Gaussians generated from single-lane images by a generative model to predict images from virtual lanes, and then use 3D Gaussians generated from the virtual images to predict back the images collected in the single lane. In this way, the generative model learns how to generate the best possible images in the other lanes even with only single-lane data. In addition, we apply a diffusion loss from a latent diffusion model <cit.> to the virtual generated images to further improve the lane switching of our GGS. Finally, as improving the geometry of the generated 3D Gaussians also improves novel view synthesis from sparse view collections, we employ points from traditional multi-view stereo reconstruction to refine the depth estimated in GGS.
The main contributions of this paper can be summarized as follows:
* We propose a novel virtual lane module for generative Gaussian splatting that improves the quality of lane-switching novel views using only single-lane data.
* We introduce a diffusion loss to directly supervise the images from virtual lanes predicted by GGS, further improving novel view synthesis from limited collected views.
* We propose fusing MVS geometry into generative 3D Gaussian splatting to improve geometry estimation.
* We conduct extensive experiments on a wide range of scenarios to validate the effectiveness of our algorithm, and achieve state-of-the-art street novel view synthesis even without LiDAR.
§ RELATED WORK
§.§ 3D Gaussian Splatting
3D Gaussian Splatting <cit.> employs a point-cloud-based 3D reconstruction method, which combines the position information of each point with a Gaussian distribution to convert point cloud data into a 3D surface. However, the quality of street novel view synthesis is still problematic due to the limited views collected along the street.
GaussianPro <cit.> improves 3D Gaussian Splatting by introducing a novel progressive propagation strategy to guide Gaussian densification based on the scene's surface structure. Although improving geometry helps to mitigate novel view synthesis in sparse views, the quality of novel views in other lanes is still low. Deformable 3D Gaussians <cit.> employs a framework for extending 3D Gaussian Splatting to dynamic scenes using a deformation field, enabling the learning of 3D Gaussians in a normalized space. There are also other methods based on 3D Gaussian Splatting, such as <cit.>. However, street view synthesis has only been improved on the collected lanes; these methods have not solved the problem of sparse views, leaving low-quality novel view synthesis when changing lanes.
§.§ Generalized model
To solve the problem of novel view synthesis from sparse views, some methods propose a generalized model-based approach. PixelNeRF <cit.> employs a generalized model for novel view synthesis based on volume rendering, which can be trained directly from images without explicit 3D supervision. However, the generation quality is not high and the training efficiency is low.
MVSplat <cit.> introduces an efficient feed-forward 3D Gaussian splatting model learned from sparse multi-view images, and constructs a cost volume to represent the cross-view feature similarity at different candidate depths, providing valuable clues for depth estimation. MVSGaussian <cit.> employs a hybrid Gaussian rendering method that integrates an efficient volume rendering design for novel view synthesis. Compared with the original 3D Gaussian Splatting, MVSGaussian achieves better view synthesis results while reducing training computational costs. However, it cannot handle scenes with obstacles well.
§.§ Diffusion Model
For occluded scenes, generalized models alone cannot generate good results, so some algorithms introduce diffusion models <cit.> to imagine unknown regions. ReconFusion <cit.> further utilizes the generative capacity of large models to infer unknown areas, and integrates a diffusion prior into NeRF's 3D reconstruction process. DrivingDiffusion <cit.> introduces a spatiotemporally consistent diffusion framework, incorporating multi-view attention to generate realistic multi-view videos controlled by 3D layouts. These diffusion-model methods only consider a single lane and do not utilize multi-lane features for better completion.
§ METHODOLOGY
Although generalized models can assist in synthesizing novel views from sparse views, insufficient view information leads to inaccurate depth estimation. Our method further optimizes the generalized model. The overall framework of our GGS method is shown in Figure <ref>. We input four different frame images and introduce neighborhood features in the Multi-View Depth Refinement Module to better address scenes with occlusions, and we introduce more global information to optimize the predicted depth map by using MVS. In the Virtual Lane Generation Module, we introduce the concept of virtual lanes and solve the problem of not having a multi-lane dataset by switching back after switching, allowing the model to flexibly switch lanes. In addition, we introduce the Multi-Lane Diffusion Loss to supervise the novel view synthesis.
§.§ Background
MVSplat <cit.> is a generalizable 3D Gaussian Splatting method which can synthesize novel views from sparse inputs. MVSplat adopts a transformer-based structure with a cross-view attention strategy to build a cost volume for each input view, followed by a U-Net that predicts the depth and the parameters of the Gaussian primitives for each pixel.
The 3D Gaussian parameters consist of the Gaussian center position x, scale s, rotation angle q, opacity α, and color c. Given the predicted depth map D and the projection matrix P with camera parameters K, pixels located at p_x are back-projected from the image plane to 3D space as follows:
x_p_x = Π_P^-1(p_x, D),
where Π represents the back-projection operation, and p_x and D represent the pixel coordinates and estimated depth, respectively. The opacity α is represented directly by the matching confidence.
The remaining Gaussian parameters scale s, rotation angle q, and color c are decoded from the encoded features as follows:
s_p_x = Softplus(h_s(Γ(p_x))),
q_p_x = Norm(h_q(Γ(p_x))),
c_p_x = Sigmoid(h_c(Γ(p_x))),
where Γ represents the high-dimensional feature vector, p_x represents pixel coordinates, and h_s, h_q, and h_c represent the scaling head, rotation head, and color head, respectively.
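To make the mapping from predicted depth to Gaussian primitives concrete, the following PyTorch-style sketch back-projects each pixel with the equation above and decodes scale, rotation and color with the listed activations. The tensor shapes, the `heads` dictionary and the pose convention are illustrative assumptions, not the authors' actual implementation:

```python
import torch
import torch.nn.functional as F

def backproject(depth, K, cam2world):
    """Lift each pixel to a 3D Gaussian centre: x = Pi_P^{-1}(p_x, D)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()   # (H, W, 3) homogeneous pixels
    rays = pix @ torch.linalg.inv(K).T                               # normalized camera rays
    pts_cam = rays * depth.unsqueeze(-1)                             # scale by predicted depth
    pts_h = torch.cat([pts_cam, torch.ones(H, W, 1)], dim=-1)
    return (pts_h @ cam2world.T)[..., :3]                            # Gaussian centres in world frame

def decode_gaussians(feat, heads):
    """Decode per-pixel scale/rotation/colour from the high-dimensional feature Γ."""
    s = F.softplus(heads["scale"](feat))          # s = Softplus(h_s(Γ))
    q = F.normalize(heads["rot"](feat), dim=-1)   # q = Norm(h_q(Γ))
    c = torch.sigmoid(heads["color"](feat))       # c = Sigmoid(h_c(Γ))
    return s, q, c
```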
§.§ Multi-View Depth Refinement Module
We enhance MVSplat with our Multi-View Depth Refinement Module, i.e., a modified MVSplat, which produces more accurate 3D Gaussian primitives and improves the quality of novel view synthesis. To better infer unknown regions, we incorporate the color feature information of the neighborhood of each view. We use the back-projected point cloud map reconstructed with Agisoft Metashape as an additional input color feature for the U-Net. The feature representation of the neighborhood is:
F_neighbor_i = {F_m | m ∈ [i-k, i+k]},
where i represents the i-th frame in the video, and F_i represents the color feature of the i-th frame. k represents neighborhood distance.
Neighborhood color features are merged into the depth features through concatenation; high-dimensional Gaussian parameter features are then output by the U-Net, decoded using a Gaussian parameter decoder, and finally turned into the Gaussian parameter representations.
dep_ref = 𝒰(F_neighbor_i, dep_i),
where 𝒰 represents the U-Net. By introducing color information from multiple neighborhood perspectives in this way, the synthesis ability of the generalized model under obstacle occlusion is enhanced.
In addition, to refine the depth, we introduce a confidence-based method. The lower the opacity of a predicted 3D Gaussian, the lower the confidence of its predicted depth. When the confidence is high, the predicted depth remains unchanged; when the confidence is low, we correct the predicted depth using the depth map back-projected from the Agisoft Metashape <cit.> reconstruction. The refined depth value is:
dep_i = { β dep̂_i + (1-β) D_i, if α_i < α
          dep̂_i,               if α_i ≥ α,
where D_i represents the depth of the projected depth map, dep̂_i represents the predicted depth, and α and β represent the opacity (confidence) threshold and the depth blending weight, respectively.
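A minimal PyTorch sketch of this blending, assuming per-pixel tensors and illustrative threshold/weight values (the paper does not specify them here):

```python
import torch

def refine_depth(pred_depth, mvs_depth, opacity, alpha_thresh=0.5, beta=0.7):
    """Blend the predicted depth with the MVS (Metashape) depth where confidence is low.

    Pixels whose predicted Gaussian opacity falls below `alpha_thresh` are treated as
    low-confidence and pulled towards the back-projected MVS depth D_i.
    `alpha_thresh` and `beta` are illustrative values, not those used by the authors.
    """
    low_conf = opacity < alpha_thresh
    blended = beta * pred_depth + (1.0 - beta) * mvs_depth
    return torch.where(low_conf, blended, pred_depth)
```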
§.§ Virtual Lane Generation Module
Previous generalizable 3D Gaussian splatting methods are limited to rendering novel views that are very close to the original pair of images and cannot handle large differences in viewpoint. Especially in autonomous driving scenarios, images are typically collected from a single lane, and the limited training perspective makes rendering images of a different lane very challenging. Having obtained 3D Gaussians from our depth refinement module, we further improve the rendering capability of GGS under large viewpoint changes by introducing a virtual lane approach that enables high-quality lane switching even without a multi-lane dataset, inspired by <cit.>.
The virtual lane converter selects an appropriate virtual lane, so that the switching amplitude is not so large that no information remains visible from the virtual viewpoint. It then generates a pose for the virtual lane by translating the camera pose sideways relative to the lane, and finally renders a virtual view based on the pose of the virtual lane. After introducing the virtual lane module, our GGS pipeline mainly consists of two stages.
In the first stage, we input a set of N images:
ISet_1 = {I_1, I_2, ...I_N},
then we output the target images through the model:
Î^1 = 𝒢(ISet_1),
where 𝒢 represents the GGS module and Î^1 is the set of images rendered without shifting the viewpoint, so the rendered views are consistent with the ground truth. The current lane then generates a set of virtual-lane rendered images through the lane converter. The rendered images of the virtual lane are represented as:
ISet_2 = {𝒱(Î^1_k, γ sinθ) | k_f ≤ k ≤ k_l, θ = ω k},
where 𝒱 represents the virtual lane converter, γ represents the translation coefficient, k_f and k_l represent the indices of the first and last input frames, respectively, and ω represents the switching period angle; the switching angle of each frame changes periodically in order.
In the second stage, we use the virtual-lane images generated in the first stage as input. Using our model, we switch back from the virtual lane to the real lane and output rendered images of the real lane:
Î^2 = 𝒢(ISet_2),
where 𝒢 represents the GGS module. This forms a closed-loop process of switching to a new lane and then switching back. The advantage of doing so is that, even without ground truth for the left and right lanes, we can still enhance the quality of the model's renderings of those lanes by establishing virtual lanes, allowing the model to improve the quality of lane switching, as shown in Figure <ref>.
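The two-stage cycle can be summarised with the following PyTorch-style sketch; `ggs.render`, `shift_lane` and the hyper-parameters are hypothetical names used only to illustrate the closed loop, not the authors' API:

```python
import math

def virtual_lane_cycle(ggs, images, poses, gamma=1.0, omega=0.2):
    """Two-stage virtual-lane training pass (schematic).

    Stage 1: render the input views, then shift them sideways onto a virtual lane.
    Stage 2: feed the virtual-lane renders back through the model and render the
    original views again, so the collected images can supervise the round trip.
    """
    # Stage 1: render at the input poses, then apply the lane converter V
    stage1 = ggs.render(images, poses, target_poses=poses)                  # \hat{I}^1
    virtual_poses = [shift_lane(p, gamma * math.sin(omega * k))             # periodic lateral shift
                     for k, p in enumerate(poses)]
    virtual_views = ggs.render(images, poses, target_poses=virtual_poses)   # ISet_2

    # Stage 2: switch back from the virtual lane to the real lane
    stage2 = ggs.render(virtual_views, virtual_poses, target_poses=poses)   # \hat{I}^2
    return ((stage2 - images) ** 2).mean()                                   # L_switch
```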
§.§ Multi-Lane Diffusion Loss
There is no ground truth available for training when switching lanes. When the lane switching amplitude is large, obstacles can obstruct the view during lane changes, making it impossible to collect information about the new lane from the current lane, as shown in Figure <ref>. Therefore, in order to better address this issue, we use diffusion prior knowledge to imagine color information from a novel lane view.
Traditional diffusion-model denoising directly completes the generated image, but due to the diversity of generative outputs, it can lead to inconsistent results between frames. Our method instead computes a loss between the denoised image and the image before denoising, and generates the new view under diffusion supervision. Additionally, we construct multi-lane novel-view images, instead of using only the current-lane image as input for U-Net denoising. This approach helps ensure that the driving lane remains visible in the image following a change in viewpoint.
Specifically, we adapt the Stable Diffusion framework <cit.> and use the Variational AutoEncoder <cit.> to encode the multi-lane images, including the left lane, middle lane and right lane, into latent codes. Then, we perform several denoising steps on the latent code, which serves as the initialization for the denoising U-Net, fixing the input text to the autonomous-driving label. The text is encoded with CLIP <cit.>, the latent is denoised over several steps, and it is then decoded into images using the Variational AutoEncoder. These images serve as supervision to guide the synthesis of novel views.
§.§ Loss Function
Our model is trained on a single lane dataset and introduces a method of constructing virtual lanes to generate unknown domains through diffusion models. Therefore, our method mainly includes reconstruction loss, depth loss, virtual lane switching loss, and diffusion loss. The overall loss function is represented as follows:
ℒ=ℒ_recon +ℒ_depth +ℒ_switch +ℒ_diffusion .
Reconstruction loss. Our GGS model is a generative model for novel views on autonomous driving. During the training process, we construct a reconstruction loss function by comparing the rendered image with the ground truth using mean square error loss.
ℒ_recon =1/n∑_i=1^n(y_i-ŷ_i)^2,
where y_i represents the color value of a pixel in the ground truth, and ŷ_i represents the color value of the same pixel in the rendered image.
Depth loss. In most autonomous driving scenarios, lanes are regular and even, so the depth of adjacent pixels should be smooth to avoid abrupt changes. Therefore, we construct the depth loss function as follows:
ℒ_depth =1/n∑_i=1^n (dD_i/dx +dD_i/dy + λ(d^2D_i/dx^2 +d^2D_i/dy^2)),
where dD_i/dx, dD_i/dy, d^2D_i/dx^2 and d^2D_i/dy^2 represents the first and second derivatives of the depth in the x and y-axis directions of the image, respectively. And λ is the depth smoothing adjustment factor.
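A possible PyTorch implementation of the first- and second-order smoothness terms above (the value of λ is illustrative):

```python
def depth_smoothness_loss(depth, lam=0.1):
    """First- and second-order depth smoothness over a (H, W) depth tensor."""
    dx  = (depth[:, 1:] - depth[:, :-1]).abs()                              # |dD/dx|
    dy  = (depth[1:, :] - depth[:-1, :]).abs()                              # |dD/dy|
    dxx = (depth[:, 2:] - 2 * depth[:, 1:-1] + depth[:, :-2]).abs()         # |d^2 D/dx^2|
    dyy = (depth[2:, :] - 2 * depth[1:-1, :] + depth[:-2, :]).abs()         # |d^2 D/dy^2|
    return dx.mean() + dy.mean() + lam * (dxx.mean() + dyy.mean())
```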
Lane switching loss. Due to the lack of lane switching data, we train the model by constructing virtual lanes and switching back, and construct a lane switching loss.
ℒ_switch =1/n∑_i=1^n(y_i-Ψ(Φ (ŷ_i)))^2,
where Φ represents constructing virtual lanes and Ψ represents switching from the virtual lane to the current lane.
Multi-lane diffusion loss. When we switch lanes in autonomous driving, changes in view can cause artifacts, so we use denoising methods to eliminate this noise.
ℒ_diffusion = 𝔼_π, t[β(t) (‖y-ŷ_π‖_1 + ℒ_lpips(y, ŷ_π))],
where π represents the camera pose of the selected views, y represents the multi-lane images, ŷ_π represents the output images from the denoising model, β(t) is a weight function related to the noise level, and ℒ_lpips represents perceptual loss, which aims to emulate human perception of image similarity to better capture visual differences between images.
§ EXPERIMENTS
We compare GGS with ADOP <cit.>, READ <cit.>, 3D Gaussian <cit.>, GaussianPro <cit.>, UC-NeRF <cit.> and DC-Gaussian <cit.>. We use Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), perceptual loss (VGG loss), and Learned Perceptual Image Patch Similarity (LPIPS) as evaluation metrics.
§.§ Evaluation on KITTI and BrnoUrban
Table <ref> shows that methods based on 3D Gaussian Splatting, such as GaussianPro and DC-Gaussian, generate slightly better quality than methods based on neural radiance fields. However, in some scenes their rendering quality is inferior, and our model performs better.
As illustrated in Figure <ref>, GaussianPro and DC-Gaussian fail to capture details such as tree leaves and utility poles. The rendering quality of READ is inadequate, and UC-NeRF does not render the white lines in the middle of the road. Comparisons of different models for lane switching are shown in Figure <ref>. Compared to other models, our method demonstrates excellent overall rendering quality and lane-switching quality.
§.§ Assessing Cross-dataset Generalization
Our method GGS has the advantage of generalization in extending to new scenarios outside the distribution. To evaluate the generalization of our model, we conduct two cross-dataset evaluations. Specifically, we train the model on KITTI dataset and test it on Brno Urban dataset <cit.>. Conversely, we train the model on Brno Urban and test it on KITTI, as shown in Figure <ref>.
§.§ Ablation Study
Effect of the Virtual Lane Generation Module. To demonstrate the effectiveness of the virtual lane generation module, we use FID <cit.> to conduct lane-switching experiments on different models, as shown in Table <ref>. FID@LEFT and FID@RIGHT represent the distances between the rendered images of the left and right lanes and the ground truth. The qualitative results are illustrated in Figure <ref>. Our model achieves high rendering quality while ensuring that quality remains unaffected during lane switching, with quantitative results shown in Table <ref> and qualitative results shown in Figure <ref>.
Effect of Multi-Lane Diffusion Loss. Due to limited input view information, some unknown areas cannot be synthesized after lane switching. Therefore, a diffusion model is used to imagine the unknown areas and optimize the generation quality, as shown in Table <ref>.
Effect of the Depth Refinement Module. The depth refinement module introduces neighborhood feature information to optimize depth estimation in the presence of occluded objects, as shown in Table <ref>. After removing the depth refinement module, each metric is slightly affected.
§ CONCLUSIONS
In this paper, we have proposed a generative framework based on the fusion of MVS and 3D Gaussian Splatting, which can repair unknown regions to optimize generation quality. By simulating virtual lanes, our method effectively switches driving lanes in autonomous driving scenarios, making it suitable for simulation testing of autonomous driving systems. Our method has some limitations: the quality of lane-switching generation needs to be improved in dynamic scenes with complex road conditions and many pedestrians and vehicles.
|
http://arxiv.org/abs/2409.02196v1 | 20240903180527 | Gravitational instability in a planet-forming disk | ["Jessica Speedie", "Ruobing Dong", "Cassandra Hall", "Cristiano Longarini", "Benedetta Veronesi", "Teresa Paneque-Carreño", "Giuseppe Lodato", "Ya-Wen Tang", "Richard Teague", "Jun Hashimoto"] | astro-ph.EP | ["astro-ph.EP"] |
* Department of Physics & Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada
* Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People’s Republic of China
* Department of Physics and Astronomy, The University of Georgia, Athens, GA 30602, USA
* Center for Simulational Physics, The University of Georgia, Athens, GA 30602, USA
* Università degli Studi di Milano, Via Celoria 16, 20133, Milano, Italy
* Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom
* Univ Lyon, Univ Lyon1, Ens de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis,-Laval, France
* Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden, the Netherlands
* European Southern Observatory, Karl-Schwarzschild-Str 2, 85748 Garching, Germany
* Academia Sinica, Institute of Astronomy and Astrophysics, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd., Taipei, Taiwan
* Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
* Astrobiology Center, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
* Subaru Telescope, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan
* Department of Astronomy, School of Science, Graduate University for Advanced Studies (SOKENDAI), Mitaka, Tokyo 181-8588, Japan
§ ABSTRACT
The canonical theory for planet formation in circumstellar disks proposes that planets are grown from initially much smaller seeds<cit.>.
The long-considered alternative theory
proposes that
giant protoplanets can be formed directly from collapsing fragments of vast spiral arms<cit.> induced by gravitational instability (GI)<cit.> – if the disk is gravitationally unstable.
For this to be possible, the disk must be massive compared to the central star: a disk-to-star mass ratio of 1/10 is widely held as the rough threshold for triggering GI, inciting significant non-Keplerian dynamics and generating prominent spiral arms<cit.>.
While estimating disk masses has historically been challenging<cit.>, the motion of the gas can reveal the presence of GI through its effect on the disk velocity structure<cit.>.
Here we present kinematic evidence of gravitational instability in the disk around AB Aurigae,
using deep observations of ^13CO and C^18O line emission
with the Atacama Large Millimeter/submillimeter Array (ALMA).
The observed kinematic signals strongly resemble
predictions from simulations
and analytic modeling.
From quantitative comparisons, we infer a disk mass of up to 1/3 the stellar mass enclosed within
1” to 5” on the sky.
We targeted the disk around AB Aurigae (AB Aur), a
2.5-4.4 Myr old<cit.>
Herbig Ae<cit.>
star of intermediate mass (M_⋆ = 2.4 M_⊙)<cit.>
at a distance of 155.9 ± 0.9 pc<cit.>.
AB Aur is at a relatively late stage of
protostellar evolution,
classified as a Class II Young Stellar Object<cit.> (YSO).
To probe the velocity structure of the disk,
we obtained deep ALMA Band 6 observations of molecular emission lines
^13CO (J=2-1) and C^18O (J=2-1)
with high velocity resolution (channel widths of v_ chan=42 m/s and 84 m/s respectively).
The observations were taken in two array configurations with baselines ranging from 14 to 2,216 m,
reaching a total
on-source integration time
of 5.75 hours.
Imaging with a Briggs robust value of 0.5 provided image cubes with
a spatial resolution or beam size of 0.237”× 0.175” (beam position angle, PA=1.2^∘) equivalent to 37 × 27 au.
We collapse the 3D image cubes into 2D moment maps to expose the velocity-integrated intensity (moment 0), intensity-weighted line-of-sight velocity (v_ los, moment 1) and emission line width (moment 2). This collection is shown in Extended Data Figure <ref>.
To reveal the
spiral arms in the disk, we apply a high-pass filter<cit.> (see Methods) to the ALMA ^13CO moment maps (Figure <ref>bcd).
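One common way to implement such a high-pass filter is to subtract a heavily smoothed copy of the map (an unsharp mask); the sketch below assumes a Gaussian kernel and an illustrative cutoff scale, whereas the exact filter used for the AB Aur maps is described in the Methods:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(moment_map, sigma_pix=20.0):
    """Unsharp-mask style high-pass filter applied to a 2D moment map.

    sigma_pix sets the smoothing (cutoff) scale in pixels; the value here is
    illustrative only. NaNs outside the disk are zeroed before smoothing.
    """
    smooth = gaussian_filter(np.nan_to_num(moment_map), sigma=sigma_pix)
    return moment_map - smooth
```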
In the filtered
line-of-sight velocity (moment 1) map, we observe spiral-shaped disturbances in the gas velocity field throughout the disk (Figure <ref>b).
With the filtered velocity-integrated intensity (moment 0) and line width (moment 2) maps, we visually highlight regions of peak density and temperature (Figure <ref>cd).
Compression and shock-heating are expected to lead to temperature enhancements (and thus localized line broadening) within GI-induced density spirals in self-regulating disks<cit.>.
The VLT/SPHERE H-band scattered light image of AB Aur originally presented in Boccaletti et al. (2020)<cit.>
is shown for comparison (Figure <ref>a).
Scattered light comes from the disk surface, probing the distribution of (sub-)micron-sized dust usually well-coupled with the gas. Previous simulations have shown that GI-induced density spirals are prominent in scattered light<cit.>.
At least seven spiral structures (S1-S7) have been previously identified in the H-band image<cit.>,
though not all occupy the same radial region and
some may be branches of adjacent arms<cit.>.
The disk rotates counter-clockwise (the spiral arms are trailing), and the south side is the near side, tilted toward us<cit.>.
To provide a qualitative comparison to the ALMA observations, we run 3D smoothed-particle hydrodynamic (SPH) simulations of a gravitationally unstable disk (see Methods).
The simulations were post-processed with radiative transfer and then further processed to have the same viewing angle, sensitivity, spectral and angular resolution as the AB Aur data.
To place the disk comfortably within the gravitationally unstable regime (M_ disk/M_⋆≳ 0.1), we set the total gas mass to 0.3× the mass of the star.
For sustained spiral arms, we set the cooling timescale to 10× the local dynamical timescale (β = 10).
The simulated GI disk shows spiral structures in all three moment maps, resembling those in the AB Aur disk
(Extended Data Figures <ref> and <ref>).
Overall, the AB Aur disk hosts a global architecture
of spiral arms at 100 to 1,000 au scales across all azimuths in multi-wavelength observations tracing different disk components and quantities, strongly indicating ongoing gravitational instability.
One characteristic kinematic feature in the AB Aur disk can be found in the isovelocity curve at the systemic velocity v_ sys
in the moment 1 map —
Figure <ref>a shows
a sinusoidal pattern at v_ los=v_ sys (along the minor axis; white color), more prominent towards the south.
This signature, known as a “minor axis GI wiggle”<cit.>,
has been predicted in hydrodynamic simulations<cit.> and analytic theory<cit.> as a clear kinematic signature of gravitational instability (Figure <ref>bc).
It is one of a global set of GI wiggles in isovelocity curves
we observe
throughout the AB Aur disk (Extended Data Figure <ref>).
These wiggles are generated by self-gravitating spiral arms, which constitute local minima in the gravitational potential field and induce corresponding oscillations in the gas velocity field.
The
synthetic moment 1 map of the SPH GI disk simulation shows
a minor axis GI wiggle
with similar morphology as the observed one (Figure <ref>c),
completely distinct from the linear pattern found in
a disk undergoing Keplerian rotation with no radial motions (Figure <ref>bc insets).
Among all GI wiggles, the minor axis GI wiggle has been known and targeted in past studies for its convenience in quantitative analysis<cit.>.
Due to projection effects, only the radial and vertical components of the disk velocity field (v_r or v_z)
contribute to v_ los at the systemic velocity traced by this wiggle.
In the case of GI-induced velocity perturbations, the v_r contribution is expected to dominate<cit.>.
As we show with 2D analytic calculations of gravitationally unstable disks (see Methods),
a self-gravitating spiral arm induces radial motion convergent on itself, appearing as a wiggle in the moment 1 map at v_ sys
where the spiral crosses the minor axis (c.f. Extended Data Figure <ref>).
The filtered moment 1 map in Figure <ref>b displays red- and blue-shift patterns corresponding to convergent flows toward spiral S5 (visible in both scattered light and ^13CO moment 0 and 2; Figure <ref>acd), supporting the interpretation that the GI wiggle along the southern minor axis in Figure <ref>a is generated by a self-gravitating spiral arm.
Having identified evidence of gravitational instability in disk kinematics and in the detections of spirals across multiple tracers and moment maps, we now quantitatively analyze the GI wiggle along the southern minor axis to constrain the disk mass.
We extract the ^13CO and C^18O emission spectra along the southern disk minor axis (Figure <ref>ab) and
detect the wiggle in position-velocity space (hereafter referred to as the “PV wiggle”), which is a different view of the position-position wiggle in Figure <ref>a. Slicing the 3D image cubes this way more comprehensively exposes the gas velocity structure and enables us to quantify the perturbation
in units of velocity.
We measure the emission line centers by performing a quadratic fit to the spectrum in each spatial pixel of the image cube<cit.>.
This method achieves sub-spectral resolution precision on the line center and yields statistically meaningful and robust uncertainties<cit.>.
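The quadratic (parabolic) line-centre estimate at the heart of this method can be sketched as follows; the published implementation also propagates the statistical uncertainties on the fitted centre, which this minimal version omits:

```python
import numpy as np

def quadratic_line_center(velax, spectrum):
    """Sub-channel line centre from a parabola through the peak channel and its neighbours."""
    i = int(np.argmax(spectrum))
    i = int(np.clip(i, 1, len(spectrum) - 2))        # keep both neighbours in range
    y0, y1, y2 = spectrum[i - 1], spectrum[i], spectrum[i + 1]
    dv = velax[1] - velax[0]                          # channel width
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex, in channel units
    return velax[i] + offset * dv
```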
We find remarkably similar sinusoidal
morphology between the PV wiggles in ^13CO and C^18O emission (Figure <ref>a).
Theoretical studies have shown that the dynamical response of a disk
to its own self-gravity is sensitive to the disk-to-star mass ratio and the cooling rate<cit.>.
Specifically, the amplitude of the induced radial velocity perturbations is proportional to (M_ disk/M_⋆)^2 and β^-1/2 (Eqns. <ref> & <ref> in Methods).
This allows us to use the observed minor axis PV wiggle to infer the disk mass once we make assumptions on the disk cooling rates.
Following Longarini et al. (2021)<cit.>, we employ a statistical metric to quantify the `magnitude' of the minor axis PV wiggle,
defined as the standard deviation of the line center velocities over a radial range. Bounded by the inner central cavity and outer edge of recovered C^18O emission, our radial range spans 1” to 5” (155 to 780 au).
We find a magnitude of 37.4 ± 2.9 m/s
for the southern minor axis PV wiggle in ^13CO and
44.2 ± 1.3 m/s
in C^18O (Figure <ref>b).
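The wiggle "magnitude" statistic is simply the standard deviation of the line-centre velocities over the chosen radial range. A minimal sketch is shown below, with the quoted uncertainty estimated by Monte Carlo resampling of the line centres (an assumption on our part about how the error is propagated):

```python
import numpy as np

def wiggle_magnitude(radius_arcsec, v_center, v_center_err,
                     rmin=1.0, rmax=5.0, n_draws=1000):
    """Std of minor-axis line-centre velocities over rmin-rmax (arcsec), with MC error."""
    sel = (radius_arcsec >= rmin) & (radius_arcsec <= rmax)
    draws = np.random.normal(v_center[sel], v_center_err[sel],
                             size=(n_draws, int(sel.sum())))
    stds = draws.std(axis=1)
    return stds.mean(), stds.std()
```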
For comparison, the gravitationally unstable disk in the SPH simulation has a southern minor axis PV wiggle in ^13CO emission
with quantitatively similar amplitude and sinusoidal morphology (Figure <ref>c),
and a magnitude of 39.1± 1.8 m/s (Extended Data Figure <ref>a).
Quantifying the minor axis PV wiggle magnitude as above, we perform
comparisons against analytic models to identify the combinations of disk mass (M_ disk/M_⋆) and cooling timescale (β)
that satisfy the AB Aur observations. A proof of concept of this technique with the SPH simulation is shown in Extended Data Figure <ref>b.
Using the analytic modeling code [<http://doi.org/10.5281/zenodo.10205110>] of Longarini et al. (2021)<cit.> (Methods), we calculate the minor axis PV wiggle magnitude
in gravitationally unstable disk models
for 60×60 combinations of M_ disk/M_⋆ and β, letting each vary within the ranges
0.0 ≤ M_ disk/M_⋆≤0.4 and 10^-2≤β≤ 10^2.
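Schematically, the comparison amounts to evaluating the analytic wiggle magnitude on the 60×60 grid and contouring it at the measured values. The sketch below substitutes a toy scaling proportional to (M_disk/M_⋆)^2 β^-1/2 (the dependence quoted above) for the full analytic calculation of the giggle code, so the normalisation A is arbitrary and for illustration only:

```python
import numpy as np

def wiggle_magnitude_model(q, beta, A=1.0):
    """Toy stand-in for the analytic model: magnitude ~ A * (M_disk/M_star)^2 * beta^(-1/2).

    The real calculation uses the public `giggle` code (see Methods); A [km/s] is arbitrary.
    """
    return A * q**2 / np.sqrt(beta)

q_grid    = np.linspace(0.0, 0.4, 60)        # M_disk / M_star
beta_grid = np.logspace(-2.0, 2.0, 60)       # cooling timescale beta
Q, B = np.meshgrid(q_grid, beta_grid, indexing="ij")
mag = wiggle_magnitude_model(Q, B)           # 60 x 60 map of model wiggle magnitudes [km/s]

# Grid cells bracketed by the observed 13CO and C18O magnitudes (37.4 and 44.2 m/s)
allowed = (mag >= 0.0374) & (mag <= 0.0442)
```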
A demonstrative analytic curve for the minor axis PV wiggle
from the same
model shown in Figure <ref>b
is underlaid in Figure <ref>a for qualitative comparison.
Figure <ref>c shows the resulting map of 60×60 analytic minor axis PV wiggle magnitudes.
Overlaying contours in this map at the magnitude values measured for the AB Aur ^13CO and C^18O southern minor axis PV wiggles,
we find a disk mass in the gravitationally unstable regime with 0.1 ≲ M_ disk/M_⋆≲ 0.3 for a cooling timescale of 0.1<β<10.
This result is robust to plausible variations in the analytic model parameter choices (Extended Data Figure Figure <ref>).
This disk mass range is broadly consistent with the observed spiral morphology — a lower disk mass
may result in a large number of more tightly wound spirals
than we observe, and vice versa<cit.>.
To demonstrate that the implied cooling timescales are compatible with the constrained disk mass values, Figure <ref>c also
displays
ranges of β derived from independent radiative cooling prescriptions
(see Methods).
The detection of GI in the disk around AB Aur, a
Class II YSO<cit.>, demonstrates that gravitational instability can take place during later evolutionary stages.
This result, together with previous reports of multiple protoplanet candidates in and amongst spiral arms
in the system<cit.> (Extended Data Figure <ref>),
provides a direct observational connection between gravitational instability and planet formation.
Looking forward, the AB Aur system can be an ideal testbed for understanding how planet formation is facilitated by GI-induced spiral arms – whether by fragmentation into gas clumps enabled by rapid cooling<cit.> (β≲ 3), or by dust collapse of solids concentrated within spiral arms sustained by slow cooling<cit.> (β≳ 5).
§ REFERENCES
1. Chiang, E. & Youdin, A. N. Forming Planetesimals in Solar and Extrasolar Nebulae. Annual Review of Earth and Planetary Sciences 38, 493–522 (2010).
2. Johansen, A. & Lambrechts, M. Forming Planets via Pebble Accretion. Annual Review of Earth and Planetary Sciences 45, 359–387 (2017).
3. Ormel, C. W. The Emerging Paradigm of Pebble Accretion. In Pessah, M. & Gressel, O. (eds) Formation, Evolution, and Dynamics of Young Solar Systems, Vol. 445 of Astrophysics and Space Science Library, 197 (2017).
4. Liu, B. & Ji, J. A tale of planet formation: from dust to planets. Research in Astronomy and Astrophysics 20, 164 (2020).
5. Drążkowska, J. et al. Planet Formation Theory in the Era of ALMA and Kepler: from Pebbles to Exoplanets. In Inutsuka, S., Aikawa, Y., Muto, T., Tomida, K. & Tamura, M. (eds) Protostars and Planets VII, Vol. 534 of Astronomical Society of the Pacific Conference Series, 717 (2023). arXiv:2203.09759.
6. Boss, A. P. Giant planet formation by gravitational instability. Science 276, 1836–1839 (1997).
7. Gammie, C. F. Nonlinear Outcome of Gravitational Instability in Cooling, Gaseous Disks. ApJ 553, 174–183 (2001).
8. Rice, W. K. M. et al. Substellar companions and isolated planetary-mass objects from protostellar disc fragmentation. MNRAS 346, L36–L40 (2003).
9. Zhu, Z., Hartmann, L., Nelson, R. P. & Gammie, C. F. Challenges in Forming Planets by Gravitational Instability: Disk Irradiation and Clump Migration, Accretion, and Tidal Destruction. ApJ 746, 110 (2012).
10. Deng, H., Mayer, L. & Helled, R. Formation of intermediate-mass planets via magnetically controlled disk fragmentation. Nature Astronomy 5, 440–444 (2021).
11. Cadman, J., Rice, K. & Hall, C. AB Aurigae: possible evidence of planet formation through the gravitational instability. MNRAS 504, 2877–2888 (2021).
12. Lodato, G. & Rice, W. K. M. Testing the locality of transport in self-gravitating accretion discs. MNRAS 351, 630–642 (2004).
13. Cossins, P., Lodato, G. & Clarke, C. J. Characterizing the gravitational instability in cooling accretion discs. MNRAS 393, 1157–1173 (2009).
14. Dipierro, G., Lodato, G., Testi, L. & de Gregorio Monsalvo, I. How to detect the signatures of self-gravitating circumstellar discs with the Atacama Large Millimeter/sub-millimeter Array. MNRAS 444, 1919–1929 (2014).
15. Kratter, K. & Lodato, G. Gravitational Instabilities in Circumstellar Disks. ARA&A 54, 271–311 (2016).
16. Dong, R., Hall, C., Rice, K. & Chiang, E. Spiral Arms in Gravitationally Unstable Protoplanetary Disks as Imaged in Scattered Light. ApJL 812, L32 (2015).
17. Hall, C. et al. Directly observing continuum emission from self-gravitating spiral waves. MNRAS 458, 306–318 (2016).
18. Hall, C. et al. The Temporal Requirements of Directly Observing Self-gravitating Spiral Waves in Protoplanetary Disks with ALMA. ApJ 871, 228 (2019).
19. Paneque-Carreño, T. et al. Spiral Arms and a Massive Dust Disk with Non-Keplerian Kinematics: Possible Evidence for Gravitational Instability in the Disk of Elias 2-27. ApJ 914, 88 (2021).
20. Veronesi, B. et al. A Dynamical Measurement of the Disk Mass in Elias 2-27. ApJL 914, L27 (2021).
21. Stapper, L. M. et al. Constraining the gas mass of Herbig disks using CO isotopologues. arXiv e-prints, arXiv:2312.03835 (2023).
22. Hall, C. et al. Predicting the Kinematic Evidence of Gravitational Instability. ApJ 904, 148 (2020).
23. Longarini, C. et al. Investigating Protoplanetary Disk Cooling through Kinematics: Analytical GI Wiggle. ApJL 920, L41 (2021).
24. Terry, J. P. et al. Constraining protoplanetary disc mass using the GI wiggle. MNRAS 510, 1671–1679 (2022).
25. van den Ancker, M. E. et al. HIPPARCOS data on Herbig Ae/Be stars: an evolutionary scenario. A&A 324, L33–L36 (1997).
26. DeWarf, L. E., Sepinsky, J. F., Guinan, E. F., Ribas, I. & Nadalin, I. Intrinsic Properties of the Young Stellar Object SU Aurigae. ApJ 590, 357–367 (2003).
27. Beck, T. L. & Bary, J. S. A Search for Spatially Resolved Infrared Rovibrational Molecular Hydrogen Emission from the Disks of Young Stars. ApJ 884, 159 (2019).
28. Garufi, A. et al. The SPHERE view of the Taurus star-forming region. arXiv e-prints, arXiv:2403.02158 (2024).
29. Rodríguez, L. F. et al. An Ionized Outflow from AB Aur, a Herbig Ae Star with a Transitional Disk. ApJL 793, L21 (2014).
30. Guzmán-Díaz, J. et al. Homogeneous study of Herbig Ae/Be stars from spectral energy distributions and Gaia EDR3. A&A 650, A182 (2021).
31. Gaia Collaboration et al. Gaia Data Release 3. Summary of the content and survey properties. A&A 674, A1 (2023).
32. Henning, T., Burkert, A., Launhardt, R., Leinert, C. & Stecklum, B. Infrared imaging and millimetre continuum mapping of Herbig Ae/Be and FU Orionis stars. A&A 336, 565–586 (1998).
33. Bouwman, J., de Koter, A., van den Ancker, M. E. & Waters, L. B. F. M. The composition of the circumstellar dust around the Herbig Ae stars AB Aur and HD 163296. A&A 360, 213–226 (2000).
34. Pérez, L. M. et al. Spiral density waves in a young protoplanetary disk. Science 353, 1519–1521 (2016).
35. Boccaletti, A. et al. Possible evidence of ongoing planet formation in AB Aurigae. A showcase of the SPHERE/ALMA synergy. A&A 637, L5 (2020).
36. Dong, R., Vorobyov, E., Pavlyuchenkov, Y., Chiang, E. & Liu, H. B. Signatures of Gravitational Instability in Resolved Images of Protostellar Disks. ApJ 823, 141 (2016).
37. Hashimoto, J. et al. Direct Imaging of Fine Structures in Giant Planet-forming Regions of the Protoplanetary Disk Around AB Aurigae. ApJL 729, L17 (2011).
38. Fukagawa, M. et al. Spiral Structure in the Circumstellar Disk around AB Aurigae. ApJL 605, L53–L56 (2004).
39. Lin, S.-Y. et al. Possible Molecular Spiral Arms in the Protoplanetary Disk of AB Aurigae. ApJ 645, 1297–1304 (2006).
40. Perrin, M. D. et al. The Case of AB Aurigae's Disk in Polarized Light: Is there Truly a Gap? ApJL 707, L132–L136 (2009).
41. Teague, R. & Foreman-Mackey, D. A Robust Method to Measure Centroids of Spectral Lines. Research Notes of the American Astronomical Society 2, 173 (2018).
teague2019-statistical-uncertainties
authorTeague, R.
titleStatistical Uncertainties in Moment Maps of Line
Emission.
journalResearch Notes of the American Astronomical
Society volume3, pages74
(year2019).
lodato-rice-2005
authorLodato, G. & authorRice, W. K. M.
titleTesting the locality of transport in
self-gravitating accretion discs - II. The massive disc case.
journal volume358,
pages1489–1500 (year2005).
oppenheimer2008-abaur
authorOppenheimer, B. R. et al.
titleThe Solar-System-Scale Disk around AB Aurigae.
journal volume679,
pages1574–1581 (year2008).
tang2017-abaur12COspirals
authorTang, Y.-W. et al.
titlePlanet Formation in AB Aurigae: Imaging of the Inner
Gaseous Spirals Observed inside the Dust Cavity.
journal volume840,
pages32 (year2017).
currie2022-abaurb
authorCurrie, T. et al.
titleImages of embedded Jovian planet formation at a wide
separation around AB Aurigae.
journalNature Astronomy
volume6, pages751–759
(year2022).
rice2004
authorRice, W. K. M., authorLodato, G.,
authorPringle, J. E., authorArmitage, P. J. &
authorBonnell, I. A.
titleAccelerated planetesimal growth in self-gravitating
protoplanetary discs.
journal volume355,
pages543–552 (year2004).
longarini2023b
authorLongarini, C., authorArmitage, P. J.,
authorLodato, G., authorPrice, D. J. &
authorCeppi, S.
titleThe role of the drag force in the gravitational
stability of dusty planet-forming disc - II. Numerical simulations.
journal volume522,
pages6217–6235 (year2023).
booth-clarke2016-dustySG
authorBooth, R. A. & authorClarke, C. J.
titleCollision velocity of dust grains in
self-gravitating protoplanetary discs.
journal volume458,
pages2676–2693 (year2016).
rowther2024-dustconcentration-GI
authorRowther, S. et al.
titleThe role of drag and gravity on dust concentration
in a gravitationally unstable disc.
journal volume528,
pages2490–2500 (year2024).
§ METHODS
Additional information on the source.
AB Aur is accreting from the disk at a rate Ṁ∼10^-7 M_⊙ yr^-1 (ref.<cit.>),
within the range
expected for modest GI-driven accretion (10^-7 - 10^-6 M_⊙ yr^-1)<cit.>.
This accretion rate, taken together with the current age t_0=2.5-4.4 Myr<cit.>, implies a high “latent disk mass”: M_ disk^ latent=Ṁ(t_0)× t_0=0.25-0.44 M_⊙,
or M_ disk^ latent/M_ star∼0.1-0.2.
M_ disk^ latent provides an accretion rate-based
assessment of disk mass, assuming
a constant stellar accretion rate Ṁ and we are observing the system mid-way through the disk's lifetime<cit.>. This is a conservative estimate as the accretion rate at earlier epochs is likely higher<cit.>.
In millimeter continuum observations, the disk shows a dust ring at ∼1” and a cavity inside<cit.>, likely caused by the trapping of millimeter-sized dust at a pressure bump. The dust ring is located inside the main spirals in both the scattered light and gas emission.
Late infall from above or below the main disk plane<cit.>
is likely encouraging GI by providing a source of mass to maintain a high M_ disk/M_⋆ value<cit.>.
ALMA observations.
We observed AB Aur with ALMA in April, May and September 2022 under ALMA program ID 2021.1.00690.S (PI: R. Dong). Measurements were taken with the Band 6 receivers<cit.> in array configurations C-3 (2 execution blocks) and C-6 (6 execution blocks).
In total, the 8 execution blocks reached an on-source integration time of 5.75 hours, making this the longest fine-kinematics (v_ chan<100 m/s) program toward a single protoplanetary disk to date. Extended Data Table <ref> provides details of the observations.
We centered one spectral window (SPW) at the ^13CO J=2-1 molecular emission line transition rest frequency (220.3986 GHz),
covering a bandwidth of 58.594 MHz with 1920 channels, resulting in the highest achievable spectral resolution of 41.510 m/s after default spectral averaging with N=2 by Hanning smoothing within the correlator data processor. A second SPW was centered at the C^18O J=2-1 rest frequency (219.5603 GHz)
covering the same bandwidth with half as many channels (960 channels; due to sharing a baseband with another SPW), achieving a 83.336 m/s spectral resolution.
To enable self-calibration, our correlator setup sampled the continuum in another SPW centered at 233.012 GHz with 128 channels each 15.625 MHz in width, obtaining the full available 2.0 GHz bandwidth.
Using the continuum data, all execution blocks were aligned to a common phase center in the uv-plane.
We performed a series of phase-only self-calibration iterations,
and avoided combining by SPW in the first two rounds to remove any potential per-SPW phase offsets.
We also carried out one round of amplitude and phase self-calibration.
Finally, we applied the phase center realignments and calibration gain tables (that we generated with the continuum data) to the line data. We performed continuum subtraction in the uv-plane using the task.
All imaging was performed with the CASA task.
We used the multiscale
deconvolution algorithm<cit.> with
(Gaussian) deconvolution scales [0.02”, 0.1”, 0.3”, 0.6”, 1.0”].
We did not image with a Keplerian mask so as not to restrict our ability to observe non-Keplerian emission.
After experimentation with CASA's auto-multithresh masking algorithm<cit.>, we adopted
an imaging strategy similar to PHANGS-ALMA<cit.>,
in which we clean conservatively, with a broad mask ( and ), forcing frequent major cycles[The ^13CO robust 0.5 cube underwent 198 major cycles and the C^18O cube underwent 76.].
To achieve frequent major cycles we set
the maximum number of minor cycle iterations per channel to ,
the minor cycle threshold to and ,
and the maximum assigned clean component to times the peak residual.
We adopted a Briggs robust weighting scheme, and generated two sets of image cubes; one with a robust value of 0.5 and a second with robust 1.5.
The corresponding beam sizes for ^13CO are 237×175 mas, 1.2^∘ for robust 0.5 and 390×274 mas, -1.4^∘ for robust 1.5.
We imaged with a FOV out to the primary beam FWHM (38”) with 0.02” pixels (9 or 12 pixels per synthesized beam minor or major axis, respectively).
We imaged in LSRK velocity channels at 42 m/s for ^13CO and 84 m/s for C^18O respectively (nearly native channel spacing).
The threshold was set to 5× the rms noise measured in 20 line-free channels
of the dirty image cube.
We applied JvM correction<cit.> and primary beam correction. The rms noise in the resulting ^13CO cubes imaged with robust 0.5 and robust 1.5 is 2.0 mJy/beam and 1.2 mJy/beam respectively, and 0.6 mJy/beam in the C^18O cube imaged with robust 1.5.
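For orientation, a minimal CASA sketch of the cube-imaging call described above follows. Only the parameters quoted in the text (rest frequency, channel width, multiscale scales, Briggs robust, cell size, field of view, threshold criterion) are taken from this work; the measurement-set and mask file names and the niter/gain/cycleniter values are placeholders, since the exact values are not quoted here.

```python
from casatasks import tclean   # modular CASA

tclean(vis='abaur_13co_selfcal.ms',             # hypothetical measurement set name
       imagename='ABAur_13CO_robust0.5',
       specmode='cube',
       restfreq='220.3986GHz',                  # 13CO J=2-1 rest frequency
       outframe='LSRK',
       width='0.042km/s',                       # ~native 13CO channel spacing
       deconvolver='multiscale',
       scales=[1, 5, 15, 30, 50],               # 0.02", 0.1", 0.3", 0.6", 1.0" in 0.02" pixels
       weighting='briggs', robust=0.5,
       cell='0.02arcsec', imsize=1920,          # ~38" FOV at 0.02" per pixel
       usemask='user', mask='broad_mask.crtf',  # broad, conservative mask (hypothetical file)
       gain=0.05, cycleniter=80,                # placeholders: force frequent major cycles
       niter=500000,                            # placeholder; stopping governed by threshold
       threshold='10mJy',                       # placeholder: 5x rms of line-free dirty channels
       interactive=False, pbcor=False)
```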
We used the robust 0.5 image cubes for our position-position analysis (moment maps; Figures <ref> & <ref>) and the robust 1.5 cubes for our position-velocity analysis (PV diagrams and line centers; Figures <ref> & <ref>).
We made the moment 0, 1 and 2 maps
using the <cit.> methods `collapse_zeroth', `collapse_first', and `collapse_percentiles', respectively. We note that we calculate our “moment 2” maps as the average of the red- and blue-shifted line widths about the intensity-weighted median line center (i.e., as the average of the and maps). Mathematically this is a different approach to find the line width than the classic moment 2 approach, though in
our case
we find the two yield nearly identical outcomes.
We applied sigma-clipping at 5× the rms noise and performed no spectral smoothing.
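A minimal sketch of this moment-map step with the bettermoments Python package is shown below; the call signatures are written from the package's documented interface and should be checked against the installed version, the file name is hypothetical, and the 5σ clipping is applied manually for transparency.

```python
import numpy as np
import bettermoments as bm

# Load the (JvM- and primary-beam-corrected) 13CO cube; returns the data and velocity axis.
data, velax = bm.load_cube('ABAur_13CO_robust0.5.fits')   # hypothetical file name

# Noise estimated from line-free channels, then 5-sigma clipping as described in the text.
rms = bm.estimate_RMS(data=data, N=20)
clipped = np.where(np.abs(data) >= 5.0 * rms, data, 0.0)

# Moment 0 and moment 1 (intensity-weighted velocity) maps with their uncertainties.
M0, dM0 = bm.collapse_zeroth(velax=velax, data=clipped, rms=rms)
M1, dM1 = bm.collapse_first(velax=velax, data=clipped, rms=rms)

# Percentile maps, from which the red/blue line widths about the median centre are built.
percentile_maps = bm.collapse_percentiles(velax=velax, data=clipped, rms=rms)
```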
Geometric properties.
We used the Python package <cit.> to infer geometric properties of the disk, namely to constrain the disk center x_0, y_0, the disk inclination i, the position angle PA, the systemic velocity v_ sys, and the dynamical stellar mass M_⋆.
We performed an MCMC to fit the C^18O moment 1 map (Extended Data Figure <ref>) with a geometrically thin Keplerian disk rotation profile:
v_0 = √(G M_⋆/r) · sin i · cos ϕ + v_sys ,
where
r is the disk radius, ϕ is the azimuthal angle around the disk, and G is the gravitational constant.
Following convention, we fix the inclination to the value found from fitting the continuum, i=23.2^∘ (ref.<cit.>), and the distance to 155.9 pc (Gaia DR3<cit.>).
We assumed flat priors for all values and spatially downsampled the rotation map to the beam FWHM prior to the likelihood calculation so that only spatially independent pixels were considered.
The calculation of the posterior distributions was run with 128 walkers and an initial burn-in period of 10,000 steps before the posterior distributions were sampled for additional 10,000 steps.
The resulting posterior distributions were
x_0= -5 ± 7 mas,
y_0= -17 ± 7 mas,
PA = 236.7 ± 0.3 ^∘,
M_⋆= 2.23 ± 0.02 M_⊙, and
v_ sys = 5858 ± 5 m/s,
where we report the uncertainties represented by the 16th and 84th percentiles about the median value.
The latter three values are consistent with constraints from previous observations<cit.>.
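For reference, the sketch below evaluates the thin-disk Keplerian model of Eq. (<ref>) on a sky-plane grid, as required for such a fit; the rotation and deprojection convention is indicated in the comments and must be matched to eddy's conventions when reproducing the MCMC fit, which is omitted here.

```python
import numpy as np

GRAV, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11

def keplerian_vlos(dra, ddec, x0, y0, PA, inc, mstar, vsys, dist=155.9):
    """Project a geometrically thin Keplerian disk onto the sky.

    dra, ddec : arcsec offsets (east, north) of each pixel from the image centre.
    x0, y0 in arcsec; PA, inc in degrees; mstar in Msun; vsys in m/s; dist in pc.
    """
    PA, inc = np.radians(PA), np.radians(inc)
    dx, dy = dra - x0, ddec - y0
    # Rotate so x_maj runs along the (red-shifted) major axis, y_min along the minor axis.
    x_maj = dx * np.sin(PA) + dy * np.cos(PA)
    y_min = -dx * np.cos(PA) + dy * np.sin(PA)
    # Deproject the minor-axis coordinate and build disk-frame polar coordinates.
    y_disk = y_min / np.cos(inc)
    r = np.hypot(x_maj, y_disk) * dist * AU        # arcsec -> au -> m
    cos_phi = x_maj / np.hypot(x_maj, y_disk)
    v_kep = np.sqrt(GRAV * mstar * MSUN / r)
    return v_kep * np.sin(inc) * cos_phi + vsys    # Eq. (<ref>)
```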
Hydrodynamic simulations and synthetic ALMA observations.
We performed 3D global smoothed-particle hydrodynamic (SPH) simulations with the PHANTOM code<cit.>
using 1 million SPH particles.
We assumed a central star mass of 2.4 M_⊙ (ref.<cit.>),
represented by a sink particle<cit.> with accretion radius set to 60 au. The initial inner and outer disk radii were set to r_ in,SPH=80 au and r_ out,SPH=500 au, respectively.
We set the initial gas mass to 0.7 M_⊙, corresponding to M_ disk/M_⋆=0.29.
The surface density profile follows Σ∝ r^-p (where the power-law index p=1.0), and the sound speed profile follows c_ s∝ r^-q (where q=0.25). The initial disk aspect ratio was set to H/r=0.05 at 80 au.
We set α_ SPH such that α_ min≤α_ SPH≤α_ max, with α_ min = 0.001 and α_ max=1.0, with the value of α_ SPH set by the Cullen & Dehnen (2010)<cit.>
switch that increases viscosity only in the case of converging flows. This results in a Shakura-Sunyaev viscosity of α_ SS≈ 0.01 throughout the disk.
We assumed an adiabatic equation of state, with heating from compressional P dV work and shock heating.
The disk cools by Gammie cooling<cit.> (a.k.a. β-cooling) where the cooling timescale is proportional to the local dynamical time by the factor β, such that t_ cool(r) = β Ω^-1(r), where Ω(r)=(G M_⋆ / r^3)^1/2 is the Keplerian frequency. We set β=10, a typical value used or found in simulations<cit.>.
We let the simulation evolve for five orbital periods of the outermost particle, at which point the disk settles into a state in which the Toomre Q parameter is between 1 and 2 from r_ in,SPH to 1.1r_ out,SPH.
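As an illustration of the marginal-stability condition, the sketch below evaluates the Toomre parameter Q = c_s Ω / (π G Σ) for the initial power-law disk described above; note this is the initial condition, which the simulation subsequently relaxes toward Q ≈ 1–2 through the balance of shock heating and β-cooling.

```python
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11
mstar, mdisk = 2.4 * MSUN, 0.7 * MSUN
r_in, r_out = 80 * AU, 500 * AU

r = np.linspace(r_in, r_out, 200)
omega = np.sqrt(G * mstar / r**3)

# Sigma ~ r^-1 normalised so the disk holds mdisk between r_in and r_out.
sigma0 = mdisk / (2 * np.pi * r_in * (r_out - r_in))
sigma = sigma0 * (r / r_in)**-1.0

# Sound speed from H/r = 0.05 at 80 au and c_s ~ r^-0.25.
cs80 = 0.05 * r_in * np.sqrt(G * mstar / r_in**3)
cs = cs80 * (r / r_in)**-0.25

# Initial Toomre profile (before the disk thermally self-regulates in the simulation).
Q = cs * omega / (np.pi * G * sigma)
print(f"initial Q ranges from {Q.min():.2f} to {Q.max():.2f} over 80-500 au")
```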
We computed the disk thermal structure and ^13CO (J=2-1) model line cubes using the Monte Carlo radiative transfer code MCFOST<cit.>.
We assumed the ^13CO molecule is in local thermodynamic equilibrium (LTE) with its surroundings and that the dust is in thermal equilibrium with the gas (T_ gas=T_ dust).
We set the ^13CO/H_2 abundance to 7×10^-7 (ref.<cit.>) and we used
≈ 10^7 photon packets to calculate T_ dust.
Voronoi tessellation was performed on 990,972 SPH particles, corresponding to 99% of the mass in the simulation.
We set the total dust mass to 1% of the total SPH gas mass and used a dust grain population with 50 logarithmic bins ranging in size from 0.1 μm to 3.0 mm.
The dust optical properties are computed using Mie theory.
The central star was represented as a sphere of radius 2.5 R_⊙ radiating isotropically
at an effective temperature T_ eff=9770 K,
set to match AB Aur<cit.>.
The disk was given an inclination of 23.2^∘, a position angle of 236.7^∘ (where PA is measured east of north to the red-shifted major axis), and placed at a distance of 155.9 pc, all consistent with the AB Aur system.
We used the same PHANTOM simulation to create both the GI and Keplerian model line cubes shown in Figure <ref>c and <ref>. We created the Keplerian counterpart with MCFOST, using the flags and to force the radial and vertical velocities to be zero, and to force the azimuthal velocities to be Keplerian.
Both ^13CO model line cubes were generated with MCFOST, binned at the observed spectral resolution of 42 m/s,
and gridded in the image plane to have 2048×2048 pixels of size 0.02”.
We assumed a turbulent velocity of 0.05 km/s.
We generated synthetic ALMA image cubes from the ^13CO model line cubes using [<https://github.com/richteague/syndisk>] to match the properties of the observed AB Aur ^13CO image cubes (robust 0.5 and 1.5).
In the latter case the model line cube was convolved with a beam of size 0.390”× 0.274” and PA -1.4^∘. Correlated noise was added with an rms of 1.2 mJy/beam.
The model data were then smoothed with a Hanning spectral response function with a resolution of 42 m/s. Effects associated with interferometric or spatial filtering are not captured by this process, and our synthetic ALMA image cubes are effectively fully-sampled in the uv-plane. The synthetic cubes were collapsed into moment maps following the same procedure as the AB Aur data (Extended Data Figure <ref>).
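A package-agnostic sketch of this post-processing step (beam convolution, correlated noise, Hanning spectral response) is given below using numpy and astropy; the actual processing used the wrapper linked above, and the Jy/pixel to Jy/beam rescaling and exact beam-angle convention are left as caveats in the comments.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
pix = 0.02                                   # arcsec per pixel
bmaj, bmin, bpa = 0.390, 0.274, -1.4         # robust 1.5 beam (arcsec, arcsec, deg)
beam = Gaussian2DKernel(x_stddev=bmin * FWHM2SIG / pix,
                        y_stddev=bmaj * FWHM2SIG / pix,
                        theta=np.radians(90.0 - bpa))  # angle convention: verify against the FITS header

def postprocess(model_cube, rms=1.2e-3, seed=0):
    """model_cube: (nchan, ny, nx) MCFOST channel maps -> mock (fully uv-sampled) ALMA cube.
    The Jy/pixel -> Jy/beam rescaling by the beam area is omitted here for brevity."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(model_cube)
    for i, chan in enumerate(model_cube):
        smooth = convolve_fft(chan, beam)
        noise = convolve_fft(rng.normal(size=chan.shape), beam)   # spatially correlated noise
        noise *= rms / noise.std()
        out[i] = smooth + noise
    # Hanning spectral response applied to 42 m/s channels.
    kern = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(lambda s: np.convolve(s, kern, mode='same'), 0, out)
```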
Analytic modeling.
We analytically compute the velocity fields of gravitationally unstable disks using the [<http://doi.org/10.5281/zenodo.10205110>] package developed by Longarini et al. (2021)<cit.>.
Working in 2D polar coordinates (r, ϕ), the package considers a geometrically thin disk with surface density profile Σ_0 ∝ r^-p and inclination i, centered on a star of mass M_⋆. It computes the projected line-of-sight velocity field as
v_los = (v_r sin ϕ + v_ϕ cos ϕ) sin i + v_sys ,
where v_r and v_ϕ are the radial and azimuthal components of the disk velocity field.
The basic state of the disk
(i.e., considering only the gravitational potential contribution from the central star)
is assumed to be Keplerian: v_r=0 and v_ϕ=v_ Kep.
The scheme of the model is to determine the perturbations in
v_r and v_ϕ generated by gravitational instability
by taking into account the additional gravitational contribution from the disk,
which is initialized as marginally unstable
and
imprinted with global spiral density perturbations.
The model computes the velocity field under the assumption that the disc is self-regulated.
This state is imposed by assuming a balance between heating (by compression and shocks within the spiral arms) and cooling (by radiative processes).
As such,
the amplitude of the spiral density perturbations A_Σ_ spir/Σ_0
saturated to a finite value proportional to the
cooling timescale β:<cit.>
A_Σ_spir/Σ_0 = χ β^-1/2 ,
where the proportionality factor χ is of order unity<cit.>.
The imprinted spiral density perturbation is assumed to be small relative to the background surface density, so that all the relevant quantities (density Σ, gravitational potential Φ, velocities v_r and v_ϕ, and enthalpy h) can be written as a linear sum of the basic state and the perturbation:
X(r, ϕ) = X_0(r) + X_ spir(r, ϕ) .
The spiral perturbation in density is given the form
Σ_ spir(r, ϕ) = [ A_Σ_ spir e^j (m ϕ + ψ(r))] ,
where j=√(-1) (as we are using i to represent the disk inclination), and m is the azimuthal wavenumber. The “shape function” ψ(r) is described by m and the spiral pitch angle α_ pitch as:
ψ(r)= m/tanα_ pitchlog r ,
which is related to the radial wavenumber k by d ψ / dr = k. The spiral density perturbation necessarily
introduces a corresponding perturbation to the gravitational potential:
Φ_ spir(r, ϕ) = -2 π G/|k| Σ_ spir(r, ϕ) .
The negative proportionality Φ_ spir∝ - Σ_ spir is the definition of self-gravitating spiral arms.
As a result, corresponding perturbations in the azimuthal and radial velocities are driven:
v_r(r, ϕ) = [ A_v_r(r) · e^j (m ϕ + ψ(r))] ,
v_ϕ(r, ϕ) = [ A_v_ϕ(r) · e^j (m ϕ + ψ(r))] + r Ω ,
where we note r Ω≠ v_ Kep because the angular frequency Ω includes super-Keplerian rotation from the disk mass contribution:
Ω^2 = G M_⋆/r^3 + 1/r∂Φ_ disk/∂ r .
By assuming the disk is marginally unstable, and by maintaining the self-regulated state condition, the amplitude of the radial and azimuthal velocity perturbations A_v_r(r) and A_v_ϕ(r) are determined:<cit.>
A_v_r(r) = 2 j m χ β^-1/2 (M_disk(r)/M_⋆)^2 v_Kep(r) ,
A_v_ϕ(r) = -1/2 j χ β^-1/2 (M_disk(r)/M_⋆) v_Kep(r) ,
where M_ disk(r) is the disk mass enclosed within radius r.
With a surface density profile Σ_0(r) ∝ r^-p, then M_ disk(r) ∝ r^-p +2, and
the amplitude of the radial perturbation is described by A_v_r(r) ∝ r^-2p + 7/2. For p<7/4, A_v_r(r) is an increasing function of radius.
The factor of imaginary number j in Eqn. <ref>
has important physical consequences:
when the real component of A_v_r(r) is taken (Eqn. <ref>), the radial velocity perturbation is π/2 out of phase with the spiral density perturbation (Eqn. <ref>), and
convergent
at the locations where Σ_ spir takes a maximum.
Explicitly,
v_r(r, ϕ)|_ϕ=π/2∝ - sin(mπ/2 + ψ(r) ) ,
Σ_ spir(r, ϕ)|_ϕ=π/2∝cos(mπ/2 + ψ(r) ) .
For qualitative visual comparison with the AB Aur moment 1 map in Figure <ref>a, we
compute the projected line-of-sight velocity field of a gravitationally unstable disk with β=10 and M_ disk/M_⋆=0.3
in Figure <ref>b.
We set m=3 and α_ pitch=15^∘ to approximately match the ^13CO spirals in the AB Aur disk (Figure <ref>), and assume
p=1.0 and χ=1.0 (ref.<cit.>).
The dominant azimuthal wavenumber is expected to be inversely related to the disk-to-star mass ratio q, roughly obeying m ∼ 1/q (ref.<cit.>), so our choice of m=3 is consistent with M_ disk/M_⋆≈ 0.3.
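A compact implementation of these expressions is sketched below (mirroring what the package of Longarini et al. (2021) computes), with χ = 1, p = 1 and the enclosed disk mass normalised to the chosen total M_disk/M_⋆ at the outer model radius; the example evaluates the projected perturbation along the southern minor axis.

```python
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11

def gi_wiggle(r_au, phi, mstar=2.23, mdisk_ratio=0.3, beta=10.0,
              m=3, pitch_deg=15.0, p=1.0, chi=1.0, inc_deg=23.2):
    """GI-driven velocity perturbations evaluated at disk coordinates (r, phi).

    r_au : radii in au; phi : azimuth [rad] from the red-shifted major axis.
    Returns the projected line-of-sight perturbation (v_r sin(phi) + dv_phi cos(phi)) sin(i) in m/s,
    i.e. without the background rotation or v_sys terms.
    """
    r = r_au * AU
    v_kep = np.sqrt(G * mstar * MSUN / r)
    # Enclosed disk-to-star mass ratio for Sigma ~ r^-p, normalised at the outer model radius.
    m_enc = mdisk_ratio * (r / r.max())**(2.0 - p)
    # Logarithmic-spiral shape function and complex perturbation amplitudes (j = sqrt(-1)).
    psi = m / np.tan(np.radians(pitch_deg)) * np.log(r / r.min())
    A_vr = 2j * m * chi * beta**-0.5 * m_enc**2 * v_kep
    A_vp = -0.5j * chi * beta**-0.5 * m_enc * v_kep
    phase = np.exp(1j * (m * phi + psi))
    v_r = np.real(A_vr * phase)
    dv_phi = np.real(A_vp * phase)           # perturbation about the background rotation
    return (v_r * np.sin(phi) + dv_phi * np.cos(phi)) * np.sin(np.radians(inc_deg))

# Example: perturbation along the southern minor axis (phi = pi/2) between 1" and 5" (155-780 au).
r = np.linspace(155.9, 779.5, 300)
wiggle = gi_wiggle(r, np.full_like(r, np.pi / 2))
print(f"model minor-axis wiggle magnitude ~ {wiggle.std():.1f} m/s")
```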
Revealing global spiral structure.
We obtain the residual moment maps shown in Figure <ref> using a variation on the conventional high-pass filtering (a.k.a. unsharp masking) technique. The conventional method is to convolve the image with a Gaussian kernel and subtract the blurred image from the original. It is a common technique to increase the visual contrast of variations in an image and has been used successfully to reveal spiral structure disks (e.g.<cit.>). Here,
we perform the convolution with a radially expanding kernel[<https://github.com/jjspeedie/expanding_kernel>] – that is, with a Gaussian kernel whose FWHM, w, increases with radial distance from the image center (i.e., with disk radius) with a simple power-law dependence:
w(r) = w_0· (r/r_0)^γ ,
where w_0 is the kernel width at r_0=1”.
A radially expanding kernel provides a way to highlight variations more evenly throughout the disk, given the spatial scales of the variations –which are expected to track with the local scale height and increase with radius– and the dynamical range of the variations, which fall with radius. After experimentation we adopt w_0=0.3” and γ=0.25, though we emphasize this is a qualitative choice and
the key spiral features, such as their locations, are robust against a variety of choices in kernel parameters.
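The linked expanding_kernel package is the reference implementation; the following is a minimal re-implementation of the same idea, building the blurred background from fixed-width Gaussian convolutions evaluated in radial bins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def expanding_kernel_filter(image, pix_arcsec, w0=0.3, gamma=0.25, r0=1.0, nbins=20):
    """High-pass filter with a radially expanding Gaussian kernel, w(r) = w0 (r/r0)^gamma.

    image      : 2D moment map (NaNs treated as zero here for simplicity);
    pix_arcsec : pixel scale; w0, r0 in arcsec. Returns (background, residual).
    """
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2, y - ny / 2) * pix_arcsec           # sky-plane radius [arcsec]
    img = np.nan_to_num(image)

    background = np.zeros_like(img)
    edges = np.linspace(r.min(), r.max(), nbins + 1)
    for rin, rout in zip(edges[:-1], edges[1:]):
        w = w0 * (max(0.5 * (rin + rout), 1e-3) / r0)**gamma    # kernel FWHM at bin centre [arcsec]
        sigma_pix = w / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_arcsec
        blurred = gaussian_filter(img, sigma_pix)
        sel = (r >= rin) & (r < rout)
        background[sel] = blurred[sel]

    return background, image - background
```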
The high-pass filter technique is also flexible to the disk emission surface morphology, and can capture global scale deviations from Keplerian rotation in the background disk. Extended Data Figure <ref> compares the residual moment 1 maps in ^13CO and C^18O obtained after subtracting the axisymmetric geometrically thin Keplerian model (Eqn. <ref>) vs. after subtracting a blurred version of the moment 1 map made with the expanding kernel filter.
The Keplerian residuals (panels c and h) show signs of global scale deviation from Keplerian: the east (west) side is generally blue-shifted (red-shifted), hinting at
super-Keplerian rotation, signatures of disk mass contributing to the total mass of the system.
While spiral structure is indeed also visible in the Keplerian residuals,
the expanding kernel residuals (panels e and j) reveal the underlying spiral structure in a spatially even manner, indicating that the expanding kernel background model (panels d and i) more successfully captures the quasi-local background disk velocity.
We note that this background model is non-axisymmetric;
it displays excess blueshifted velocity in the southeast quadrant of the disk such that
the contour of v_ los=v_ sys diverges westward from the minor axis south of the star, possibly indicative of a global disk warp. This
is what necessitates a detrending of the line centers to isolate the sinusoidal component of the southern minor axis PV wiggle in Figure <ref>a (see section “Measuring the magnitude of AB Aur’s minor axis PV wiggle”).
Filtered moment maps for the synthetic ALMA observations of the simulated SPH GI disk are shown in Extended Data Figure <ref>.
Global kinematics of self-gravitating spiral arms.
Radially convergent motion (as in Figure <ref>bcd insets) serves as a kinematic signature for the location of self-gravitating spiral arms at disk azimuths where the radial velocity perturbation contributes sufficiently strongly to the observed velocity field, and thus cannot be a fully unambiguous locator at disk azimuths away from the minor axis.
Extended Data Figure <ref>c and g
provide maps of velocity residuals from Keplerian for the 2D analytic GI disk model and the SPH GI disk simulation.
The convergent motion toward the spiral spines is visible for a range of azimuths around the minor axis, but becomes progressively less clear moving toward the major axis as the azimuthal velocity –super-Keplerian rotation– contributes progressively more to the line-of-sight.
However, high-pass filtering (panel h) captures and removes the background super-Keplerian rotation, leaving a residual map that resembles the isolated radial component (panel d).
Extended Data Figure <ref>i-l overlays the locations of ^13CO spirals in the AB Aur disk (from filtered moment 0/2; Figure <ref>cd) onto the filtered moment 1 maps, in order to illustrate where convergent motion does or does not serve as a locator throughout the disk.
Ambiguity occurs around the major axis, which is a location of transition in the sign of v_rsinisinϕ (first term of Eqn. <ref>), and when two spirals are not well separated and their motions superimpose.
Three of the seven spiral structures in VLT/SPHERE scattered light appear to be spatially associable
with those in ^13CO (S1, S5, S7; panel l inset).
Offsets in the southeast quadrant of the disk (S2, S3, S4) may be further indication of a disk warp (Extended Data Figure <ref>di), or other non-trivial phenomena (e.g., vertical density and temperature gradients, projection effects<cit.>).
The kinematic signatures observed in the present ALMA dataset –probing disk scales ∼100 to 1,000 au– are recognizably different from what is expected for planet-driven perturbations.
Planetary wakes are dampened and become nearly circular as they propagate away from the planet<cit.>, whereas GI-driven spirals maintain their modest pitch angles with radius and the amplitude of the induced velocity perturbations depends on the enclosed disk mass (Eqns. <ref> & <ref>).
In the planetary case, the density and radial velocity perturbations are in phase (their peaks spatially coincide),
and the pattern of motion within an arm along a radial cross-section is divergent<cit.>.
Overall, the essential characteristic of GI-induced spirals is that they occur globally<cit.> (c.f. Figure <ref>, Extended Data Figures <ref>, <ref> & <ref>).
In previous datasets probing smaller spatial scales –within the AB Aur disk's central cavity–
planetary candidates P1/f1 (ref.<cit.>), P2/b (ref.<cit.>), and f2 (ref.<cit.>) are known to be associated with
–or driving–
spiral arms, as observed in VLT/SPHERE scattered light and/or ALMA ^12CO emission.
As shown in Extended Data Figure <ref>,
due to their small separations (≲ 0.7”),
kinematic signatures from these candidates are
inaccessible to our ALMA observations.
Clump-like signals `c' and `d'
seen by HST/STIS (ref.<cit.>)
at wide separations (∼ 2.75” and ∼ 3.72” respectively)
are in locations
tentatively suggestive of constituting spiral arm fragments
and may warrant further investigation.
Position-velocity analysis.
We use the robust 1.5 image cubes for our position velocity analysis to maximize the recovery of emission at large disk radii.
Owing to the clear association with a self-gravitating spiral arm (Figure <ref>bcd insets), we target the wiggle on the southern minor axis. A clear spiral arm in moment 0/2 crossing the northern minor axis is also observed, but at the outer edge of the recovered ^13CO and C^18O emission
(∼ 3”; c.f. Extended Data Figure <ref>kl).
We obtain the position-velocity diagrams
shown in Figure <ref> using
<cit.>
to extract spectra from pixels within
a 0.5^∘-wide
wedge-shaped mask
oriented 90^∘ clockwise of the red-shifted major axis (shown in Figure <ref> insets).
Our quantitative analysis of the minor axis PV wiggles is performed with maps of the line centers made
using the quadratic method of
<cit.>,
which fits a quadratic curve to the spectrum in each pixel of the cube:
I(v) = a_0 + a_1 (v-v_ peak) + a_2 (v - v_ peak)^2 ,
where v_ peak is the channel of peak intensity in the spectrum.
We select this approach over the traditional intensity-weighted mean velocity (moment 1) method specifically for its ability to provide well characterized, statistically meaningful uncertainties on the line center, σ_v los (ref.<cit.>).
The statistical uncertainty on each line center is computed as:
σ_v los = √(σ_I^2/8(3/a_2^2 + a_1^2/a_2^4)) ,
where σ_I is the rms noise of the intensities (see ref.<cit.> for a derivation).
The quadratic method also has the advantage of being unaffected by sigma-clipping and of automatically distinguishing the front side of the disk from the back side<cit.>.
Prior to the quadratic fitting we spectrally smooth the data with a Savitzky-Golay filter of polynomial order 1 and filter window length of 10 channels (420 m/s) in the case of ^13CO and 3 channels (252 m/s) in the case of C^18O. The former was also applied to the two synthetic ALMA ^13CO image cubes generated from the SPH simulations.
We
extract the values from the resulting line center and line uncertainty maps within the same wedge mask described above. The extracted line center values are shown as yellow points in Figure <ref> and the uncertainties are shown as yellow shaded regions in Figure <ref>a.
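For a single spectrum, the quadratic line-centre estimate and the statistical uncertainty given by the equations above reduce to the per-pixel operation sketched below; bettermoments implements the full map-level version, and the conversion of the uncertainty to velocity units here assumes the fit is performed in channel units.

```python
import numpy as np

def quadratic_centroid(velax, spectrum, rms):
    """Fit I(x) = a0 + a1 x + a2 x^2 around the peak channel (x in channel units).

    Returns the line centre v0 (units of velax) and its statistical uncertainty;
    assumes a well-resolved peak away from the spectral edges.
    """
    i = int(np.clip(np.argmax(spectrum), 1, len(spectrum) - 2))   # need both neighbours
    Im, I0, Ip = spectrum[i - 1], spectrum[i], spectrum[i + 1]
    dv = velax[1] - velax[0]
    a1 = 0.5 * (Ip - Im)
    a2 = 0.5 * (Ip - 2.0 * I0 + Im)
    v0 = velax[i] - 0.5 * a1 / a2 * dv            # vertex of the parabola
    # Propagated centroid uncertainty (Teague & Foreman-Mackey 2018), scaled to velocity units.
    var = rms**2 / 8.0 * (3.0 / a2**2 + a1**2 / a2**4)
    return v0, np.sqrt(var) * np.abs(dv)
```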
Measuring the magnitude of the minor axis PV wiggle.
Following Longarini et al. (2021)<cit.>, we measure the `magnitude' of a minor axis PV wiggle as the standard deviation of the line center values over a radial range.
Bounded by the inner central cavity and the outer edge of C^18O emission, we adopt a radial range of 1.0” to 5.0”.
We estimate the uncertainty on the magnitude measurement using a resampling procedure: we take 10,000 draws from Gaussian distributions centered on the observed line centers with standard deviation σ_v los (Eqn. <ref>) to create 10,000 instances of the minor axis PV wiggle; we compute their magnitudes; and then report the uncertainty as the standard deviation of those 10,000 magnitude estimates.
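In code, this resampling step amounts to the few numpy lines below, where v_los and sigma_v denote the extracted line centres and their statistical uncertainties along the southern minor axis.

```python
import numpy as np

def wiggle_magnitude(v_los, sigma_v, ndraws=10000, seed=0):
    """Magnitude of the PV wiggle (std of line centres) and its resampling uncertainty."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(loc=v_los, scale=sigma_v, size=(ndraws, len(v_los)))
    magnitudes = draws.std(axis=1)          # one magnitude per resampled wiggle realisation
    return np.std(v_los), magnitudes.std()
```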
In addition to the wiggle, the ^13CO and C^18O emission on the southern disk minor axis also exhibit an underlying monotonic blueward trend with disk radius, seen in Figure <ref>ab as a subtle downward bend with radius of the line centers, or equivalently in Figure <ref>a as a westward or clockwise shift in the contour of v_ los=v_ sys. We earmark this feature as a possible disk warp (Extended Data Figure <ref>di), and adopt a least-squares fitting approach to isolate the sinusoidal component of the PV wiggle.
This approach yields the background trendline that minimizes the standard deviation of the residuals, thus providing the most conservative estimate for the magnitude of the detrended PV wiggle.
We fit a quadratic trendline (Extended Data Figure <ref>a) as it more closely resembles the high-pass filter background curve than a linear one (Extended Data Figure <ref>bc).
We show the quadratically-detrended PV wiggles in Figure <ref>a and report their magnitudes in Figure <ref>b.
We find very similar magnitudes for both the ^13CO and C^18O wiggles, despite C^18O likely tracing lower optical depths in the AB Aur disk. This empirically
substantiates comparisons with the 2D analytic model (next section).
Performing the same procedure outlined above on the synthetic ^13CO minor axis PV wiggle of the GI disk in the SPH simulation,
we find a wiggle magnitude of 39.1± 1.9 m/s
(Extended Data Figure <ref>).
Constraining disk mass with quantitative comparisons to analytic models.
We perform quantitative comparisons between the observed ^13CO and C^18O minor axis PV wiggles
and the projected radial velocity component in our analytic model, v_r
sini (ref.<cit.>). From Eqns. <ref> and <ref>, the projected radial velocity on the minor axis (ϕ=π/2) is:
v_r(r, ϕ)|_ϕ=π/2 · sin i = -2 m χ β^-1/2 (M_disk(r)/M_⋆)^2 v_Kep(r) sin(mπ/2 + ψ(r)) · sin i .
This curve reflects the disk mass enclosed within the inner and outer radii of the model, which we set to span the same projected radial range as the observed PV wiggles (1” to 5”).
We compute 3600 of these curves for a 60×60 grid of models with (total enclosed) M_ disk/M_⋆ linearly spaced ∈ [0.0, 0.4] and β logarithmically spaced ∈ [10^-2, 10^2].
Again we set m = 3 and α_ pitch = 15^∘ to match the AB Aur disk,
and assume p = 1.0 and χ = 1.0 (ref.<cit.>).
For qualitative comparison, we plot an example analytic minor axis PV wiggle behind the data in Figure <ref>a; the model has β=10 and M_ disk/M_⋆=0.3.
We show in Extended Data Figure <ref> that m=3 reproduces the observed wiggles better than other choices, and that p=1.5 could also provide a satisfying match, while p=2.0 is too steep. Since the wiggle amplitude is independent of α_pitch (Eqn. <ref>), the magnitude is constant with α_pitch when sampled over the same range in phase (not shown).
We measure the minor axis PV wiggle magnitude of the 3600 models
and
present the resulting magnitude map in Figure <ref>c.
By drawing contours in the Figure <ref>c map at the magnitude values measured for AB Aur (37.4 ± 2.9 m/s in ^13CO and 44.2 ± 1.3 m/s in C^18O),
we find every combination of M_ disk/M_⋆ and β that satisfy the observations.
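Schematically, the grid comparison can be reproduced as in the sketch below, which evaluates the closed-form minor-axis expression above on the same 60×60 grid; obs_mag and obs_err stand for a measured magnitude and its uncertainty (e.g. 37.4 ± 2.9 m/s for ^13CO), and the adopted m, p, χ, geometry and radial range follow the text.

```python
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11
r = np.linspace(155.9, 779.5, 400) * AU                 # 1"-5" at 155.9 pc
v_kep = np.sqrt(G * 2.23 * MSUN / r)
psi = 3.0 / np.tan(np.radians(15.0)) * np.log(r / r[0])
sin_i = np.sin(np.radians(23.2))

def model_magnitude(mdisk_ratio, beta, m=3, p=1.0, chi=1.0):
    """Std of the projected minor-axis radial-velocity perturbation over 1"-5"."""
    m_enc = mdisk_ratio * (r / r[-1])**(2.0 - p)
    v_minor = (-2.0 * m * chi * beta**-0.5 * m_enc**2 * v_kep
               * np.sin(m * np.pi / 2.0 + psi) * sin_i)
    return v_minor.std()

ratios = np.linspace(0.0, 0.4, 60)
betas = np.logspace(-2, 2, 60)
mag = np.array([[model_magnitude(q, b) for b in betas] for q in ratios])

# All (M_disk/M_star, beta) pairs consistent with an observed magnitude, e.g. 13CO:
obs_mag, obs_err = 37.4, 2.9
consistent = np.abs(mag - obs_mag) < obs_err
```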
Repeating this procedure with our synthetic ALMA observations of the SPH GI disk simulation shown in Figure <ref>c, we find that
this technique
successfully recovers the disk mass set in the underlying SPH simulation (Extended Data Figure <ref>).
For independent physical estimates of
plausible β values between 1” to 5” (155 to 780 au), we rely on
radiative cooling prescriptions<cit.>.
From Equation 39 of Zhang & Zhu (2020)<cit.>, β is a function of r and depends on M_ disk through the surface density Σ.
We assume T = (ϕ L_⋆/8 π r^2 σ_SB)^1/4, where σ_SB is the Stefan-Boltzmann constant, L_⋆ = 59 L_⊙ is the stellar luminosity of AB Aur<cit.>, and ϕ=0.02 represents the flaring angle<cit.>.
We use the DSHARP Rosseland mean opacity<cit.>
κ_ R=κ_ R(T, a_ max) for a power-law grain size distribution truncated at a_ max.
We set a_ max to 0.1 mm and
the dust-to-gas mass ratio to f=0.1%,
based on radial drift arguments and lack of (sub-)mm emission at these large radii.
We compute a β(r) profile for each M_ disk/M_⋆∈ [0.0, 0.4] and extract the values at 1” and 5”. We overlay the resulting β(M_ disk/M_⋆) ranges as white shaded regions in Extended Data Figure <ref> (where the dependence on p arises from the dependence on Σ), and in Figure <ref> as white horizontal bars at a selection of M_ disk/M_⋆ values.
For example, for M_ disk/M_⋆=0.2 and p=1.0, we find β(1”)=5.3 and β(5”)=3.6× 10^-2.
While knowledge of cooling in disks is very limited, these estimates help to emphasize that not all values of β are equally likely.
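The passive-irradiation temperature entering this estimate can be evaluated directly, as in the snippet below for the stellar luminosity, flaring angle and distance quoted above; the subsequent β(r) values additionally require the surface density profile and the DSHARP Rosseland mean opacity (Eq. 39 of Zhang & Zhu 2020), which are not reproduced here.

```python
import numpy as np

SIGMA_SB, LSUN, AU = 5.670e-8, 3.828e26, 1.496e11
lstar, flare, dist = 59.0 * LSUN, 0.02, 155.9   # L_star [W], flaring angle, distance [pc]

for theta in (1.0, 5.0):                        # projected radius in arcsec
    r = theta * dist * AU                       # 1" corresponds to 155.9 au at 155.9 pc
    T = (flare * lstar / (8.0 * np.pi * r**2 * SIGMA_SB))**0.25
    print(f'r = {theta:.0f}" ({theta*dist:.0f} au): T = {T:.1f} K')
```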
§.§ Data availability
All observational data products presented in this work are available through the https://www.canfar.net/en/docs/digital_object_identifiers/CANFAR Data Publication Service at <https://doi.org/10.11570/24.0087>. This includes final reduced and calibrated ALMA measurement sets, image cubes and moment maps, and processed SPHERE data.
All simulated data products including hydrodynamic simulations and synthetic ALMA data are available at <https://doi.org/10.5281/zenodo.11668694>.
The raw ALMA data are publicly available via the ALMA archive <https://almascience.nrao.edu/aq/> under project ID
2021.1.00690.S.
The raw VLT/SPHERE data are publicly available via the ESO Science Archive Facility <https://archive.eso.org/eso/eso_archive_main.html> under programme 0104.C-0157(B).
§.§ Code availability
ALMA data reduction and imaging scripts are available at <https://jjspeedie.github.io/guide.2021.1.00690.S>.
The Python packages used in this work are available:
(<https://github.com/richteague/bettermoments>),
(<https://github.com/richteague/eddy>),
v0 (<http://doi.org/10.5281/zenodo.10205110>),
PHANTOM (<https://github.com/danieljprice/phantom>),
MCFOST (<https://github.com/cpinte/mcfost>).
Acknowledgements We thank our referees for their careful and insightful comments that improved the manuscript.
We thank Kaitlin Kratter for enlightening discussions and valuable suggestions.
J.S. thanks Ryan Loomis, Sarah Wood and Tristan Ashton at the North American ALMA Science Center (NAASC) for providing science support and technical guidance on the ALMA data
as part of a Data Reduction Visit to the NAASC, which was funded by the NAASC.
The reduction and imaging of the ALMA data was performed on NAASC computing facilities.
J.S. thanks Christophe Pinte, Daniel Price and Josh Calcino for support with MCFOST, Luke Keyte and Francesco Zagaria for discussions on self-calibrating ALMA data, and Chris White for sharing perceptually uniform colormaps.
J.S. acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Canada Graduate Scholarships Doctoral (CGS D) program.
R.D. acknowledges financial support provided by the Natural Sciences and Engineering Research Council of Canada through a Discovery Grant, as well as the Alfred P. Sloan Foundation through a Sloan Research Fellowship.
C.L. and G.L. acknowledge funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement # 823823 (RISE DUSTBUSTERS project). C.L. acknowledges funding from UK Science and Technology research Council (STFC) via the consolidated grant ST/W000997/1.
B.V. acknowledges funding from the ERC CoG project PODCAST No 864965.
Y.W.T. acknowledges support through NSTC grant 111-2112-M-001-064- and 112-2112-M-001-066-.
J.H. was supported by JSPS KAKENHI Grant Numbers 21H00059, 22H01274, 23K03463.
This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.00690.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Based on data products created from observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 0104.C-0157(B).
This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon.
This research used the Canadian Advanced Network For Astronomy Research (CANFAR) operated in partnership by the Canadian Astronomy Data Centre and The Digital Research Alliance of Canada with support from the National Research Council of Canada the Canadian Space Agency, CANARIE and the Canadian Foundation for Innovation.
Author Contributions
R.D. led the ALMA proposal.
J.S. processed the ALMA data.
J.H. processed the VLT/SPHERE data.
C.H. performed the SPH simulations.
J.S. performed the radiative transfer calculations.
C.L. and G.L. developed the analytic model.
J.S. performed all presented analyses.
J.S. and R.D. wrote the manuscript.
All co-authors provided input to the ALMA proposal and/or the manuscript.
Competing Interests The authors declare that they have no competing financial interests.
Correspondence Correspondence and requests for materials should be addressed to:
J.S. (email: [email protected]),
R.D. (email: [email protected]).
§ EXTENDED DATA
§ REFERENCES
Salyk, C. et al. Measuring Protoplanetary Disk Accretion with H I Pfund β. 769, 21 (2013).
Rice, W. K. M. & Armitage, P. J. Time-dependent models of the structure and stability of self-gravitating protoplanetary discs. 396, 2228–2236 (2009).
van den Ancker, M. E. et al. HIPPARCOS data on Herbig Ae/Be stars: an evolutionary scenario. 324, L33–L36 (1997).
DeWarf, L. E., Sepinsky, J. F., Guinan, E. F., Ribas, I. & Nadalin, I. Intrinsic Properties of the Young Stellar Object SU Aurigae. 590, 357–367 (2003).
Beck, T. L. & Bary, J. S. A Search for Spatially Resolved Infrared Rovibrational Molecular Hydrogen Emission from the Disks of Young Stars. 884, 159 (2019).
Garufi, A. et al. The SPHERE view of the Taurus star-forming region. arXiv e-prints, arXiv:2403.02158 (2024).
Hartmann, L., Calvet, N., Gullbring, E. & D'Alessio, P. Accretion and the Evolution of T Tauri Disks. 495, 385–400 (1998).
Dong, R., Najita, J. R. & Brittain, S. Spiral Arms in Disks: Planets or Gravitational Instability? 862, 103 (2018).
Sicilia-Aguilar, A., Henning, T. & Hartmann, L. W. Accretion in Evolved and Transitional Disks in CEP OB2: Looking for the Origin of the Inner Holes. 710, 597–612 (2010).
Tang, Y. W. et al. The circumstellar disk of AB Aurigae: evidence for envelope accretion at late stages of star formation? 547, A84 (2012).
Nakajima, T. & Golimowski, D. A. Coronagraphic Imaging of Pre-Main-Sequence Stars: Remnant Envelopes of Star Formation Seen in Reflection. 109, 1181 (1995).
Grady, C. A. et al. Hubble Space Telescope Space Telescope Imaging Spectrograph Coronagraphic Imaging of the Herbig AE Star AB Aurigae. 523, L151–L154 (1999).
Rivière-Marichalar, P. et al. AB Aur, a Rosetta stone for studies of planet formation. I. Chemical study of a planet-forming disk. 642, A32 (2020).
Fukagawa, M. et al. Spiral Structure in the Circumstellar Disk around AB Aurigae. 605, L53–L56 (2004).
Hall, C. et al. The Temporal Requirements of Directly Observing Self-gravitating Spiral Waves in Protoplanetary Disks with ALMA. 871, 228 (2019).
Ediss, G. A. et al. ALMA Band 6 Cartridge: Design and Performance. In Narayanan, G. (ed.) Fifteenth International Symposium on Space Terahertz Technology, 181–188 (2004).
Cornwell, T. J. Multiscale CLEAN Deconvolution of Radio Synthesis Images. IEEE Journal of Selected Topics in Signal Processing 2, 793–801 (2008).
Kepley, A. A. et al. Auto-multithresh: A General Purpose Automasking Algorithm. 132, 024505 (2020).
Leroy, A. K. et al. PHANGS-ALMA Data Processing and Pipeline. 255, 19 (2021).
Jorsater, S. & van Moorsel, G. A. High Resolution Neutral Hydrogen Observations of the Barred Spiral Galaxy NGC 1365. 110, 2037 (1995).
Czekala, I. et al. Molecules with ALMA at Planet-forming Scales (MAPS). II. CLEAN Strategies for Synthesizing Images of Molecular Line Emission in Protoplanetary Disks. 257, 2 (2021).
Teague, R. & Foreman-Mackey, D. bettermoments: A robust method to measure line centroids. Zenodo (2018).
Teague, R. & Foreman-Mackey, D. A Robust Method to Measure Centroids of Spectral Lines. Research Notes of the American Astronomical Society 2, 173 (2018).
Teague, R. eddy: Extracting Protoplanetary Disk Dynamics with Python. The Journal of Open Source Software 4, 1220 (2019).
Tang, Y.-W. et al. Planet Formation in AB Aurigae: Imaging of the Inner Gaseous Spirals Observed inside the Dust Cavity. 840, 32 (2017).
Gaia Collaboration et al. The Gaia mission. 595, A1 (2016).
Gaia Collaboration et al. Gaia Data Release 3. Summary of the content and survey properties. 674, A1 (2023).
Piétu, V., Guilloteau, S. & Dutrey, A. Sub-arcsec imaging of the AB Aur molecular disk and envelope at millimeter wavelengths: a non Keplerian disk. 443, 945–954 (2005).
Price, D. J. et al. Phantom: A Smoothed Particle Hydrodynamics and Magnetohydrodynamics Code for Astrophysics. 35, e031 (2018).
Bate, M. R., Bonnell, I. A. & Price, N. M. Modelling accretion in protobinary systems. 277, 362–376 (1995).
Cullen, L. & Dehnen, W. Inviscid smoothed particle hydrodynamics. 408, 669–683 (2010).
Gammie, C. F. Nonlinear Outcome of Gravitational Instability in Cooling, Gaseous Disks. 553, 174–183 (2001).
Hall, C. et al. Predicting the Kinematic Evidence of Gravitational Instability. 904, 148 (2020).
Terry, J. P. et al. Constraining protoplanetary disc mass using the GI wiggle. 510, 1671–1679 (2022).
Longarini, C. et al. Investigating Protoplanetary Disk Cooling through Kinematics: Analytical GI Wiggle. 920, L41 (2021).
Paneque-Carreño, T. et al. Spiral Arms and a Massive Dust Disk with Non-Keplerian Kinematics: Possible Evidence for Gravitational Instability in the Disk of Elias 2-27. 914, 88 (2021).
Pinte, C., Ménard, F., Duchêne, G. & Bastien, P. Monte Carlo radiative transfer in protoplanetary disks. 459, 797–804 (2006).
Pinte, C. et al. Benchmark problems for continuum radiative transfer. High optical depths, anisotropic scattering, and polarisation. 498, 967–980 (2009).
Pinte, C. et al. Kinematic Evidence for an Embedded Protoplanet in a Circumstellar Disk. 860, L13 (2018).
Li, D. et al. An Ordered Magnetic Field in the Protoplanetary Disk of AB Aur Revealed by Mid-infrared Polarimetry. 832, 18 (2016).
Hillenbrand, L. A., Strom, S. E., Vrba, F. J. & Keene, J. Herbig Ae/Be Stars: Intermediate-Mass Stars Surrounded by Massive Circumstellar Accretion Disks. 397, 613–643 (1992).
Natta, A. et al. A reconsideration of disk properties in Herbig Ae stars. 371, 186–197 (2001).
Currie, T. et al. Images of embedded Jovian planet formation at a wide separation around AB Aurigae. Nature Astronomy 6, 751–759 (2022).
Lodato, G. Classical disc physics. New Astronomy Reviews 52, 21–41 (2008).
Cossins, P., Lodato, G. & Clarke, C. J. Characterizing the gravitational instability in cooling accretion discs. 393, 1157–1173 (2009).
Lodato, G. & Rice, W. K. M. Testing the locality of transport in self-gravitating accretion discs. 351, 630–642 (2004).
Dong, R., Hall, C., Rice, K. & Chiang, E. Spiral Arms in Gravitationally Unstable Protoplanetary Disks as Imaged in Scattered Light. 812, L32 (2015).
Boccaletti, A. et al. Possible evidence of ongoing planet formation in AB Aurigae. A showcase of the SPHERE/ALMA synergy. 637, L5 (2020).
Rosotti, G. P. et al. Spiral arms in the protoplanetary disc HD100453 detected with ALMA: evidence for binary-disc interaction and a vertical temperature gradient. 491, 1335–1347 (2020).
Pérez, L. M. et al. Spiral density waves in a young protoplanetary disk. Science 353, 1519–1521 (2016).
Meru, F. et al. On the Origin of the Spiral Morphology in the Elias 2-27 Circumstellar Disk. 839, L24 (2017).
Zhang, Y. et al. Disk Evolution Study Through Imaging of Nearby Young Stars (DESTINYS): Diverse outcomes of binary-disk interactions. 672, A145 (2023).
Norfolk, B. J. et al. The Origin of the Doppler Flip in HD 100546: A Large-scale Spiral Arm Generated by an Inner Binary Companion. 936, L4 (2022).
Ginski, C. et al. Direct detection of scattered light gaps in the transitional disk around HD 97048 with VLT/SPHERE. 595, A112 (2016).
Goodman, J. & Rafikov, R. R. Planetary Torques as the Viscosity of Protoplanetary Disks. 552, 793–802 (2001).
Rafikov, R. R. Nonlinear Propagation of Planet-generated Tidal Waves. 569, 997–1008 (2002).
Ogilvie, G. I. & Lubow, S. H. On the wake generated by a planet in a disc. 330, 950–954 (2002).
Bollati, F., Lodato, G., Price, D. J. & Pinte, C. The theory of kinks - I. A semi-analytic model of velocity perturbations due to planet-disc interaction. 504, 5444–5454 (2021).
Hilder, T., Fasano, D., Bollati, F. & Vandenberg, J. Wakeflow: A Python package for semi-analytic models of planetary wakes. The Journal of Open Source Software 8, 4863 (2023).
Zhou, Y. et al. UV-Optical Emission of AB Aur b is Consistent with Scattered Stellar Light. arXiv e-prints, arXiv:2308.16223 (2023).
Biddle, L. I., Bowler, B. P., Zhou, Y., Franson, K. & Zhang, Z. Deep Paβ Imaging of the Candidate Accreting Protoplanet AB Aur b. 167, 172 (2024).
Currie, T. Direct Imaging Detection of the Protoplanet AB Aur b at Wavelengths Covering Paβ. Research Notes of the American Astronomical Society 8, 146 (2024).
Zhu, Z., Dong, R., Stone, J. M. & Rafikov, R. R. The Structure of Spiral Shocks Excited by Planetary-mass Companions. 813, 88 (2015).
Zhang, S. & Zhu, Z. The effects of disc self-gravity and radiative cooling on the formation of gaps and spirals by young planets. 493, 2287–2305 (2020).
Dullemond, C. P. et al. The Disk Substructures at High Angular Resolution Project (DSHARP). VI. Dust Trapping in Thin-ringed Protoplanetary Disks. 869, L46 (2018).
Birnstiel, T. et al. The Disk Substructures at High Angular Resolution Project (DSHARP). V. Interpreting ALMA Maps of Protoplanetary Disks in Terms of a Dust Model. 869, L45 (2018).
Hashimoto, J. et al. Direct Imaging of Fine Structures in Giant Planet-forming Regions of the Protoplanetary Disk Around AB Aurigae. 729, L17 (2011).
State Space Kriging model for emulating complex nonlinear dynamical systems under stochastic excitation
Kai Cheng^a, Iason Papaioannou^a, MengZe Lyu^b, Daniel Straub^a (arXiv:2409.02462v1 [math.DS])
^a Engineering Risk Analysis Group, Technical University of Munich, Theresienstr. 90, 80333 Munich, Germany
^b College of Civil Engineering, Tongji University, Siping Rd. 1239, Shanghai 200092, China
§ ABSTRACT
We present a new surrogate model for emulating the behavior of complex nonlinear dynamical systems with external stochastic excitation. The model represents the system dynamics in state space form through a sparse Kriging model. The resulting surrogate model is termed state space Kriging (S2K) model. Sparsity in the Kriging model is achieved by selecting an informative training subset from the observed time histories of the state vector and its derivative with respect to time. We propose a tailored technique for designing the training time histories of state vector and its derivative, aimed at enhancing the robustness of the S2K prediction. We validate the performance of the S2K model with various benchmarks. The results show that S2K yields accurate prediction of complex nonlinear dynamical systems under stochastic excitation with only a few training time histories of state vector.
Stochastic dynamical system; Surrogate model; Active learning; Gaussian process; Sparse learning.
§ INTRODUCTION
Dynamical systems are widely used in modern engineering and applied science for modeling complex underlying physical phenomena <cit.>. With the increase of computational power, numerical simulation offers a feasible way to study and predict the behavior of complex dynamical systems. However, the response of complex dynamical systems is governed by uncertainties, due to the stochastic external excitations, uncertain boundary conditions, and natural variability of system properties <cit.>. To obtain effective prediction, these uncertainties must be accounted for. To this end, uncertainty quantification of stochastic dynamical systems has gained particular interest in the last few decades <cit.>.
In the context of uncertainty quantification, sampling-based simulation methods, i.e., Monte Carlo simulation (MCS) and various variance reduction methods <cit.>, are generally used to propagate uncertainties from system inputs to system response quantities of interest. These methods are robust, but they are infeasible when only a small number of expensive simulations are affordable or available. To address this issue, surrogate models have been widely used to construct computationally efficient approximations of the expensive computational model. Various surrogate modelling techniques have developed, including: Gaussian process regression (aka, Kriging) <cit.>, support vector regression (SVR) <cit.>, polynomial chaos expansion (PCE) <cit.>, neural networks <cit.>. These surrogate modelling techniques are powerful for approximating the behavior of traditional static “black-box”models, but exhibit difficulties when applied to dynamical systems under stochastic excitation <cit.>. In these problems, the number of input parameters due to discretization of the stochastic external excitations, often represented in terms of a white noise process, can be extremely high. Most common surrogate models suffer from the “curse of dimensionality”, since effective learning typically requires at least twice as many samples as the number of input parameters.
To address this issue, a standard technique is to insert existing surrogate modelling techniques into the framework of nonlinear auto-regressive with exogenous input (NARX) modelling <cit.>. The NARX model is a powerful system identification technique, which is established based on the principle of causality, i.e., it assumes that the system response quantity of interest at the current time instant is only affected by its previous several response values and the current and past several values of external excitations. Based on this cause-consequence effect, the current response quantity of interest is assumed to be a function of the response values at past multiple time instants and the input excitation at the current and previous multiple instants. This function can be emulated through application of existing surrogate modelling techniques. Although the NARX model has proved its effectiveness in several structural dynamical problems <cit.>, it is only an empirical model, and there is no rigorous way to select the model hyper-parameters, e.g., the time lags of both input and output. In addition, the NARX model struggles to emulate the response of complex highly nonlinear dynamical problems. Recently, a manifold NARX (mNARX) model <cit.> has been proposed to address the above problems, in which a transformation function is adopted to map the input into a problem-aware manifold, which is expected to be more suitable for constructing the NARX model than in original input space. However, physical information of the dynamical system under investigation is required to construct such input manifold in mNARX. If no physical information is available, the mNARX degenerates to the traditional NARX.
In the present work, we propose to emulate complex nonlinear dynamical systems under stochastic excitation in their state space form with the Kriging model, termed the state space Kriging (S2K) model. The state space representation of a stochastic dynamical system can be considered as a multi-input multi-output (MIMO) function, in which the input is the state vector and the external excitation, and the output is the derivative of the state vector. The Kriging model is utilized to learn every component of this MIMO function separately. To overcome the inefficiency of the Kriging surrogate for large training data sets, we introduce an active learning algorithm to select a reduced training set from the training time histories, resulting in a sparse Kriging model. By learning the state space form of a dynamical system, the S2K model avoids the curse of dimensionality
resulting from the discretization of the stochastic external excitation. Numerical examples demonstrate that the S2K model is effective for emulating various complex nonlinear dynamical systems under stochastic excitation using only a few training time histories of the state quantities, and that it outperforms the NARX model.
The layout of this paper is as follows. In section 2, we review the fundamental definitions of nonlinear dynamical systems under stochastic excitation and the basic idea of the NARX model. The S2K model is presented in Section 3, together with the active learning algorithm and the technique for designing the training time history of state vector. In Section 4, several benchmarks are used to assess the performance of our method. The paper concludes with final remarks in Section 5.
§ BACKGROUND
A dynamical system is generally represented by a high-order differential equation. It can be equivalently transformed into multiple coupled first-order equation systems by introducing new state variables, a form known as the state space representation.
dx/dt = f_1(x),
d^2 x/dt^2 = f_2(x, dx/dt),
⋮
d^n x/dt^n = f_n(x, dx/dt, ..., d^(n-1)x/dt^(n-1)).
In the present work, we consider a general nonlinear dynamical system under stochastic excitation in its state space form <cit.>, which can be generally expressed as
Ẋ(t) = f(X(t),U(t)), with X(0) = x_0,
where X(t)=[X_1(t),...,X_n(t)]^T∈R^n is the state vector at time t; x_0 is the initial condition; Ẋ(t) is the derivative of X(t) with respect to t; U(t)=[U_1(t),...,U_m(t)]^T∈R^m is the external stochastic excitation vector acting on the structure; f(·) is the n-dimensional nonlinear vector function.
For every realization of the time history u(t) of U(t), the time history of the state vector X(t) of Eq. (<ref>) exists and uniquely depends on the initial condition x_0, and one can apply various numerical discretization methods, e.g., the Runge-Kutta method, to find the solutions X(t_i)(i=1,...,N_t) at N_t time steps within the time period of interest [0,T]. In general, f(·) is a computationally expensive "black-box" model, which makes UQ of complex dynamical systems under stochastic excitation a computationally demanding task.
The NARX model is a popular method for UQ of dynamical systems under stochastic excitation <cit.>. It is a system identification technique developed based on the discrete-time representation of the dynamical system. Given a discrete time history of the state vector X(t_1),...,X(t_N_t) corresponding to a discrete realization of the stochastic excitation U(t), namely, u(t_1),...,u(t_N_t), the NARX model represents the response quantity of interest x(t_i)∈R at the current time instant as a function of its past values and the input excitation values at the current and previous instants as
x(t_i)= g(u(t_i),u(t_i-1),...,u(t_i-n_u),x(t_i-1),...,x(t_i-n_x)) + ϵ_t,
where n_u and n_x represent the maximum excitation and response time lags; ϵ_t is the residual of the NARX model; g(·) is the underlying model to be learned. For dynamical systems, the function g(·) is usually learned with a polynomial or a Kriging model <cit.>.
The NARX model can be interpreted as an empirical model that tries to capture the behaviour of the stochastic dynamical system with a low-dimensional auto-regressive surrogate model. Its accuracy highly depends on the choice of the time lags n_u and n_x, and there is no rigorous way to determine them. For problems with long memory, both n_u and n_x become large, and the NARX model is high-dimensional. Moreover, NARX gives poor prediction for strongly nonlinear dynamical systems, which we also confirm in Section <ref>.
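For illustration, a minimal Python sketch of the free-run (simulation-mode) use of a fitted NARX model is given below; the mapping g, the lag orders and the zero initialization of the first n_x responses are placeholders for illustration, not details of the cited implementations.

```python
import numpy as np

def narx_simulate(g, u, n_u, n_x, x_init):
    """Free-run rollout of a trained NARX surrogate g.

    g      : callable mapping the regressor vector to x(t_i)
    u      : (N_t,) array, discretized excitation u(t_1),...,u(t_Nt)
    n_u    : maximum excitation lag
    n_x    : maximum response lag
    x_init : (n_x,) array with the first n_x response values (e.g. zeros)
    """
    N_t = len(u)
    x = np.zeros(N_t)
    x[:n_x] = x_init
    for i in range(max(n_x, n_u), N_t):
        # regressor: current/past excitations and past (predicted) responses
        z = np.concatenate([u[i - n_u:i + 1][::-1], x[i - n_x:i][::-1]])
        x[i] = g(z)   # the prediction is fed back as input at the next step
    return x
```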
§ METHODOLOGY
In this section, we introduce the proposed algorithm for learning dynamical systems under stochastic excitation in state space form. We also present the active learning algorithm for selecting the informative training set and the technique for designing the training time history of state vector and its derivative.
§.§ Kriging model for learning dynamical systems
By denoting Y(t)=Ẋ(t), the original dynamical system in Eq. (<ref>) can be expressed as
Y(t) = f(X(t),U(t)).
Note that f(·): R^m+n→R^n is a deterministic function, which maps the state vector X(t) ∈R^n and the external excitation U(t) ∈R^m to Y(t) ∈R^n at time instant t. The input dimension of f(·) depends on the dimension n of the state vector of the associated dynamical system and the cardinality m of the external excitation vector. For every realization of the time history u(t) of the excitation U(t), the time history of the corresponding state vector X(t) and its derivative Y(t) over the time period of interest [0,T] can be estimated numerically.
In this work, we employ the Kriging model <cit.> to approximate the state space representation (·). Note that other existing surrogate modelling techniques, such as PCE, SVR and neural network could also be implemented in this context.
As in the ordinary Kriging model, it is assumed that every component y_i(t) of Y(t) is a Gaussian process with constant mean β_i:
y_i(t) = β_i + σ_i Z(z(t)), for i =1,...,n,
where z(t)=[X(t),U(t)]^T∈R^n+m, σ_i^2 is the process variance, and Z(z(t)) is a stationary Gaussian process with zero mean and unit variance. The correlation function of Z(z(t)) is given by
R(z(t),z(t');θ_i), which describes the correlation between z(t) and z(t') at times t and t', with the correlation lengths in the various coordinate directions being controlled by the hyper-parameter vector θ_i∈R^m+n. Moreover, we assume that y_i(t) is independent of y_j(t) for all i ≠ j, which allows learning Y(t) component-wise.
Given the experimental design Z_t=[z(t_1),...,z(t_N)]∈R^(n+m)× N, the joint distribution of y_i(t) and Y_i=[y_i(t_1),...,y_i(t_N)]^T∈R^N corresponding to Z_t is multivariate Gaussian, given by
[[ y_i(t); Y_i ]] ∼𝒩([[ β_i; β_i F ]],σ_i^2[[ 1 r^T(z(t)); r(z(t)) R ]] ),
where F=[1,...,1]^T∈R^N and
r(z(t)) := [R(z(t_i_1),z(t);θ_i)]_i_1∈R^N,
R :=[R(z(t_i_1),z(t_i_2);θ_i)]_i_1,i_2∈R^N× N.
The predictive distribution of y_i(t) given the observations Y_i is still Gaussian, i.e.,
ŷ_i(t) ∼𝒩(μ_i(t), s_i^2(t)),
where the predictive mean and predictive variance are given by
μ_i(t) = β_i + r^T(z(t))R^-1(Y_i-β_i F),
and
s_i^2(t) = σ_i^2 (1-r^T(z(t))R^-1r(z(t))+(1-F^TR^-1r(z(t)))^2/(F^TR^-1F)).
The predictive mean is the Kriging surrogate model prediction, and the predictive variance is used for measuring the predictive uncertainty.
The prediction Ŷ(t) of the vector Y(t) is obtained by stacking ŷ_i(t)(i=1,...,n), namely, Ŷ(t)=[ŷ_1(t),...,ŷ_n(t)]^T. Ŷ(t) is a Gaussian vector, with mean μ(t) = [μ_1(t),..., μ_n(t)]^T and diagonal covariance matrix with diagonal vector s^2(t) = [s_1^2(t),...,s_n^2(t)]^T, namely
Ŷ(t) = {[ ŷ_1(t); ⋮; ŷ_n(t) ]}∼𝒩({[ μ_1(t); ⋮; μ_n(t) ]}, [[ s_1^2(t) ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ s_n^2(t) ]] ).
In the current work, the Matérn-5/2 correlation function is used, which is given by
R(z(t),z(t');θ_i) = (1+√(5)r +5/3r^2) exp(-√(5)r),
where r=√((z(t) -z(t'))^TΛ(z(t) -z(t'))), and Λ is a diagonal matrix whose diagonal entries are given by θ_i.
The hyper-parameters β_i, σ_i^2 and θ_i of the i-th surrogate model are determined by the maximum likelihood estimation method <cit.>. The likelihood function of the hyper-parameters given the data reads
f(β_i,σ_i^2,θ_i)=[ detR(θ_i)]^-0.5/√((2πσ_i^2)^N) exp(-1/2σ_i^2(Y_i-β_i F)^TR^-1(θ_i)(Y_i-β_i F)).
After taking the logarithm of Eq. (<ref>), the optimal values of β_i and σ_i^2, depending on the hyper-parameter θ_i, can be derived analytically. This yields
β_i(θ_i) = (F^TR^-1(θ_i)F)^-1F^TR^-1(θ_i)Y_i,
σ_i^2(θ_i) = 1/N(Y_i-β_iF)^TR^-1(θ_i)(Y_i-β_iF).
There is no closed-form solution for the optimal hyper-parameter θ_i, and one has to use numerical optimization algorithms to determine its value. Substituting σ_i^2(θ_i) from Eq. (<ref>) into the likelihood function in Eq. (<ref>), we are left with minimizing the following θ_i-dependent objective function
ℓ(θ_i)=Nlnσ_i^2(θ_i) + ln[ detR(θ_i)].
The objective function in Eq. (<ref>) is generally highly nonlinear and multimodal <cit.>, and thus a global optimization algorithm is required to find the hyper-parameter. In the present work, we use the multi-start, gradient-free “Hooke & Jeeves” pattern search method <cit.> for determining the optimal hyper-parameter vector. Note that the single-start “Hooke & Jeeves” pattern search method has been implemented in the popular DACE toolbox <cit.>, and here we adapt it to the multi-start case to improve the robustness of the Kriging model.
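A minimal NumPy/SciPy sketch of this component-wise fit is given below. The Matérn-5/2 correlation and the concentrated objective ℓ(θ_i) follow the preceding equations, whereas the multi-start Nelder-Mead optimizer is only a stand-in for the multi-start Hooke & Jeeves pattern search; the nugget term and the bounds of the random restarts are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def matern52(Z1, Z2, theta):
    # anisotropic distance r = sqrt((z - z')^T diag(theta) (z - z'))
    r = cdist(Z1 * np.sqrt(theta), Z2 * np.sqrt(theta))
    return (1.0 + np.sqrt(5) * r + 5.0 / 3.0 * r ** 2) * np.exp(-np.sqrt(5) * r)

def concentrated_objective(log_theta, Z, y, nugget=1e-10):
    """l(theta) = N ln sigma^2(theta) + ln det R(theta)."""
    theta = np.exp(log_theta)
    N = len(y)
    R = matern52(Z, Z, theta) + nugget * np.eye(N)
    L = np.linalg.cholesky(R)
    F = np.ones(N)
    Ri_y = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ri_F = np.linalg.solve(L.T, np.linalg.solve(L, F))
    beta = (F @ Ri_y) / (F @ Ri_F)                          # beta(theta)
    res = y - beta * F
    sigma2 = (res @ np.linalg.solve(L.T, np.linalg.solve(L, res))) / N
    logdetR = 2.0 * np.sum(np.log(np.diag(L)))
    return N * np.log(sigma2) + logdetR

def fit_kriging_lengthscales(Z, y, n_starts=5, seed=0):
    """Multi-start MLE of the length-scale vector theta (one output component)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-3.0, 3.0, size=Z.shape[1])        # random restart in log-space
        out = minimize(concentrated_objective, x0, args=(Z, y), method="Nelder-Mead")
        if best is None or out.fun < best.fun:
            best = out
    return np.exp(best.x)
```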
§.§ Prediction with S2K model
Once the S2K model is trained, it can be used to predict the time history of the system response X(t) for an arbitrary realization of the time history u(t) of U(t) by solving the following approximate dynamical system in state space form
Ŷ(t) = f̂(X̂(t),u(t)),
where f̂(·) is the Kriging approximation of f(·).
In general, the Runge-Kutta method can be used to estimate the time history of the state vector X̂(t_i)(i=1,...,N_t) at N_t discretized time steps within the time period of interest [0,T].
At each time step, the Kriging model provides a Gaussian prediction Ŷ(t) given a realization of the current predicted state vector X̂(t) and external excitation u(t). However, the whole time history of Ŷ(t) and X̂(t) over the time period of interest is non-Gaussian. This is due to the recursive nature of the dynamical system, i.e., the output quantities at the current time instant are used as input to predict the output quantities at the next time step, and carry their predictive uncertainty with them. Since the surrogate model f̂(·) is nonlinear, the predictive distribution of the entire time history is generally intractable. In practice, one can adopt MCS to approximate the predictive distribution of the state vector over the time period of interest given a realization of the excitation. This can be achieved by emulating the time history of the state vector N_MC times based on the trained S2K model, and using these time history trajectories to estimate the empirical predictive distribution.
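The prediction stage can be sketched as follows, assuming a fixed-step fourth-order Runge-Kutta integrator and treating the trained Kriging predictors as black-box callables; the optional perturbation by the predictive standard deviation is a crude way of drawing one MC trajectory, and repeating the rollout N_MC times yields the empirical predictive distribution.

```python
import numpy as np

def s2k_rollout(surrogate_mean, x0, u_hist, dt, surrogate_std=None, rng=None):
    """Roll out the S2K model over a discretized excitation history.

    surrogate_mean : callable (x, u) -> predictive mean of dX/dt, shape (n,)
    surrogate_std  : optional callable (x, u) -> per-component predictive std
    x0             : initial state, shape (n,)
    u_hist         : (N_t, ...) excitation samples on the time grid
    """
    rng = rng or np.random.default_rng()
    X = np.zeros((len(u_hist), len(x0)))
    X[0] = x0
    for i in range(len(u_hist) - 1):
        x, u = X[i], u_hist[i]
        # classical RK4 step with the surrogate right-hand side
        # (the excitation at intermediate stages is approximated by u(t_i))
        k1 = surrogate_mean(x, u)
        k2 = surrogate_mean(x + 0.5 * dt * k1, u)
        k3 = surrogate_mean(x + 0.5 * dt * k2, u)
        k4 = surrogate_mean(x + dt * k3, u_hist[i + 1])
        X[i + 1] = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if surrogate_std is not None:
            # crude per-step sampling of the predictive uncertainty
            X[i + 1] += dt * surrogate_std(x, u) * rng.standard_normal(len(x0))
    return X
```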
In Fig. <ref>, we show an example of the probabilistic prediction results of the time history of the displacement of the nonlinear Bouc-Wen hysteretic oscillator (see Section <ref>), where MCS with N_MC=50 randomly predicted trajectories of the displacement time histories are used. It is found that the 95% confidence interval
§.§ Sparse Kriging with active learning
We now analyze the computational cost of the S2K model. When training a Kriging model with N training samples, one needs to determine the hyper-parameter θ_i by minimizing the objective function in Eq. (<ref>). To numerically evaluate this function, one needs to invert the correlation matrix R∈R^N× N. When a Cholesky factorization is used to decompose R, the corresponding computational costs are 𝒪(N^3). The computational costs for training the S2K model are therefore 𝒪(nN^3), which tends to be very time-consuming for a large number of samples N and large n.
In the Kriging model, the computational cost for making a single prediction is 𝒪(N) for the mean in Eq. (<ref>), and 𝒪(N^2) for the variance in Eq. (<ref>).
To estimate the predictive distribution of the entire time history of the state vector, one needs to predict every component of the n-dimensional state vector X(t) at N_t time steps, N_MC times for every realization of the time history u(t) of U(t). The associated operations are 𝒪(nNN_tN_MC) for the mean and 𝒪(nN^2N_tN_MC) for the variance. In general, the sample size N, the time history length N_t and the MCS sample size N_MC are all quite large, which makes it time-consuming to train the S2K model and to estimate the predictive distribution of the entire time history of the state vector corresponding to a single realization of the time history u(t).
To improve the training and prediction efficiency, we suggest constructing a sparse Kriging <cit.> model by selecting a training subset of size N_s(N_s ≪ N) from the available training set for every component of Y(t). This selection is performed adaptively, by first training a Kriging model with a few initial training samples uniformly selected from the whole sample set, and then updating the Kriging model by enriching it with informative samples selected sequentially through maximization of the following mean square error criterion
L_i(t) = (μ_i(t) - y_i(t))^2 + s_i^2(t),
where the first term represents the square of the predictive bias, and the second term denotes the predictive variance. The bias term prefers points with large local predictive error, and the predictive variance term tends to select points with large global predictive uncertainty. The acquisition function in Eq. (<ref>) therefore accounts for both the bias for local exploitation and the variance for global exploration.
The proposed method selects one sample in each active learning step until the maximum of L_i(t)/σ^2_y is less than a threshold δ_i (e.g., 10^-5), where σ^2_y is the variance of observed model response in the training set.
The convergence threshold δ_i is a user-specified parameter, depending on the complexity of the dynamical system to be learned. With a small δ_i, many samples will be selected to construct the S2K model, which improves the prediction accuracy, but decreases the prediction efficiency; with a large δ_i, only a small portion of the training set will be selected to construct the S2K model, which leads to a very sparse model, but the accuracy may be low.
Based on our numerical experiments, we suggest setting δ_i∈ [10^-7, 10^-3]. A higher threshold value should be chosen when the nonlinearity of f_i(·) is expected to be high, and vice versa.
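A sketch of the enrichment loop, with the Kriging fit and prediction passed in as callables, is given below; the uniform initial subset of five samples and the stopping rule follow the description above.

```python
import numpy as np

def active_select(fit, predict, Z_all, y_all, n_init=5, delta=1e-5):
    """Greedy enrichment driven by L(t) = (mu - y)^2 + s^2.

    fit     : callable(Z, y) -> trained Kriging model
    predict : callable(model, Z) -> (mu, s2), predictive mean and variance
    Z_all   : (N, d) candidate inputs taken from the training time history
    y_all   : (N,)   corresponding observed outputs
    """
    idx = list(np.linspace(0, len(y_all) - 1, n_init, dtype=int))  # uniform initial subset
    var_y = np.var(y_all)
    model = fit(Z_all[idx], y_all[idx])
    for _ in range(len(y_all) - len(idx)):          # at most all candidates
        mu, s2 = predict(model, Z_all)
        L = (mu - y_all) ** 2 + s2                  # bias^2 (exploitation) + variance (exploration)
        L[idx] = 0.0                                # exclude already-selected samples
        j = int(np.argmax(L))
        if L[j] / var_y < delta:                    # stopping rule: max L / sigma_y^2 < delta
            break
        idx.append(j)
        model = fit(Z_all[idx], y_all[idx])         # re-train with the enriched subset
    return model, idx
```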
§.§ Design of training time history
The Kriging model is powerful for interpolation, but its performance degenerates when extrapolating away from the training data. To ensure the accuracy of the S2K model, the training data should cover the whole parameter space of both the state vector and the external excitation. However, for a black-box model, the parameter space of state vector is unknown beforehand, since one cannot know the minimum and maximum values of the time histories of the state vector associated with future realizations of external excitation.
To address this issue, one can collect many observed time histories of the state vector corresponding to different realizations of the external excitation, and select a subset of the time histories (e.g., requiring that the maximum value exceeds a pre-defined threshold) to train the surrogate model, as has been proposed for training the NARX model <cit.>. However, this is infeasible when only a few time histories of the state vector are available. In the current work, we suggest collecting the time histories of a pseudo state vector resulting from a pseudo external excitation with magnified variability. By magnifying the variability of the excitation, the state vector will exhibit stronger variability; hence, it is expected that a few pseudo time histories of the state vector will sufficiently populate its parameter space under the true excitation. To this end, one can magnify the standard deviation of the stochastic excitation at every time instant by a magnification factor σ.
In Fig. <ref>, a time history of state variables (displacement and velocity) of the Bouc-Wen hysteretic oscillator (see Section <ref>) with white noise ground acceleration is presented, where the state variables are obtained with both the true excitation and a pseudo excitation with magnified variability (σ=2). It is found that the peaks and the valleys of the pseudo responses (both displacement and velocity) largely exceed the ones of the original response. Consequently, the pseudo time history is more likely to cover the whole parameter space of the state vector under the original excitation. By training the Kriging model with the pseudo training time history, extrapolation in the prediction stage can be potentially avoided, thereby improving the accuracy and robustness of the S2K model.
Note that the magnification factor σ is a problem-dependent parameter. A small σ cannot guarantee full coverage of the parameter space of the state vector, while a large σ will lead to a less accurate Kriging model, since many samples will be enriched during the active learning procedure to explore unnecessary regions. Numerical investigations reported in Section <ref> show that σ∈ [1.5, 2] is a good choice when only one training time history of the state vector is available. However, when one can afford multiple training time histories, it is beneficial to set different σ values for each training time history. In this work, we suggest setting
σ_k = 1 + (k-1)/(n_t-1), k=1,...,n_t,
where n_t(n_t≥ 2) is the number of training time histories.
In doing so, the mixture of different training time histories of state vector should better cover the entire parameter space, leading to more accurate prediction, as demonstrated in Section <ref>.
For problems with white noise excitation of intensity S, discretized from 0 to T with step length Δt, namely, f(t_i)=√(2π S/Δt)ζ(t_i), ζ(t_i)∼𝒩(0,1), one can magnify the variability of the excitation by sampling ζ(t_i)∼𝒩(0,σ^2), where σ is the magnification factor.
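The two design choices of this subsection, the magnification factors σ_k and the discretized white-noise excitation with inflated variability, can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

def white_noise_excitation(S, dt, n_steps, sigma=1.0, rng=None):
    """Discretized white noise f(t_i) = sqrt(2*pi*S/dt) * zeta(t_i).

    With sigma > 1 the variability is magnified (zeta ~ N(0, sigma^2)),
    which produces the pseudo excitation used for the training histories."""
    rng = rng or np.random.default_rng()
    zeta = rng.normal(0.0, sigma, size=n_steps)
    return np.sqrt(2.0 * np.pi * S / dt) * zeta

def magnification_factors(n_t):
    """sigma_k = 1 + (k-1)/(n_t-1), k = 1,...,n_t (requires n_t >= 2)."""
    return 1.0 + np.arange(n_t) / (n_t - 1)
```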
§.§ Summary of the S2K algorithm
The S2K algorithm for emulating the response of dynamical systems under stochastic excitation is summarized in Algorithm <ref>.
§.§ Multi-degrees of freedom dynamical system
For problems with many degrees of freedom, the state vector is high-dimensional, which hinders the application of the Kriging model for learning the state space representation of the dynamical system. To address this issue, we first project the original dynamical system into a latent low-dimensional space identified with proper orthogonal decomposition (POD), and then learn the dynamics in the latent space.
Denoting the POD basis as Φ, the reduced state vector can be expressed as X_r(t)=Φ^T X(t), and the original dynamical system can be reformulated as
Ẋ_r(t) = Φ^T f(Φ X_r(t),U(t)), X_r(0) = Φ^T x_0.
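A possible realization of this projection is sketched below, assuming the POD basis is obtained from the singular value decomposition of a snapshot matrix of state histories and the latent right-hand side is evaluated as Φ^T f(Φ X_r, U); the paper does not fix these details, so they are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis from a snapshot matrix.

    snapshots : (n, N_t) matrix whose columns are state vectors X(t_i)
    r         : number of retained modes
    Returns Phi with orthonormal columns, shape (n, r)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduced_rhs(f, Phi):
    """Right-hand side of the reduced system dX_r/dt = Phi^T f(Phi X_r, U)."""
    return lambda x_r, u: Phi.T @ f(Phi @ x_r, u)
```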
§ NUMERICAL INVESTIGATIONS
We investigate the effectiveness of S2K model on four example applications. To assess the accuracy of the S2K model, the relative error of a whole time history of the state vector corresponding to a realization of the stochastic excitation is used to define the accuracy metric as
ϵ_i =∑_j=1^N_t(x_i(t_j) - x̂_i(t_j) )^2/∑_j=1^N_t(x_i(t_j) - x̅_i )^2,
where x_i(t_j) and x̂_i(t_j) are true and predicted i-th state quantity of interest at time instant t_j, and x̅_i is the mean over the whole time history. In addition, we use the mean value ϵ̅_i of ϵ_i
corresponding to N_MC different realizations of the excitation to measure the global accuracy of i-th state, namely
ϵ̅_i = 1/N_MC∑_j=1^N_MCϵ_i^(j),
where ϵ_i^(j) is the relative error of i-th state quantity of interest corresponding to j-th realization of the stochastic excitation. We set N_MC=1000.
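For completeness, the two accuracy metrics can be computed as:

```python
import numpy as np

def relative_error(x_true, x_pred):
    """eps_i = sum (x - x_hat)^2 / sum (x - mean(x))^2 over one time history."""
    x_true, x_pred = np.asarray(x_true), np.asarray(x_pred)
    return np.sum((x_true - x_pred) ** 2) / np.sum((x_true - x_true.mean()) ** 2)

def mean_relative_error(histories_true, histories_pred):
    """Average of eps_i over the N_MC realizations of the excitation."""
    return np.mean([relative_error(a, b) for a, b in zip(histories_true, histories_pred)])
```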
For the first three example applications, we compare the performance of the S2K model with the NARX model in Eq. (<ref>). In the NARX model, the polynomials are used as bases, and the least angle regression technique <cit.> is used to select the most relevant basis terms. In addition, the maximal time lags of both excitation and response n_u and n_x are chosen equal to twice
the number of degrees of freedom of the considered dynamical system <cit.>. In these examples, both the S2K and NARX models are trained 10 times independently, and the relative errors are depicted by boxplots, whereby the central mark indicates the median, and the bottom and
top edges of the box indicate the 25th and 75th percentiles, respectively.
In both the NARX and S2K models, the training time history of the state vector X(t) and its derivative are evaluated at equidistant time instants t_i=iΔt (i=1,...,T/Δt) over the time period of interest [0,T] by the Matlab solver ode89, where Δt is the step length. Note that while equidistant discretization of the training time history is required in the NARX model, it is not essential for the S2K model. However, we apply it here for consistency.
§.§ Quarter car model
We first consider a quarter car model represented by a
nonlinear two degree-of-freedom system <cit.> depicted in Fig. <ref>. The displacements of
the masses are governed by the following system of ordinary differential equations (ODEs), namely,
m_s ẍ_1(t) = -k_s(x_1(t)-x_2(t))^3 - c(ẋ_1(t)-ẋ_2(t)),
m_u ẍ_2(t) = k_s(x_1(t)-x_2(t))^3 - c(ẋ_1(t)-ẋ_2(t)) + k_u(u(t)-x_2(t)),
where the sprung mass m_s=22.7 kg and the unsprung mass m_u=42 kg are connected by a nonlinear spring of stiffness k_s = 1897.02 N/m^3 and a linear damper with damping coefficient c=601.8 N· s/m. An external excitation is applied to m_u through a linear spring of stiffness k_u = 1771.4 N/m. x_1(t) and x_2(t) are the displacements of m_s and m_u, respectively. In this work, the excitation is modeled by a stochastic process as u(t) = A sin(b t), where the parameters A, b follow uniform distributions, A ∼ U(0.09,0.11) (m) and b ∼ U(1.8π,2.2π) (rad/s).
Denoting X(t)=[x_1(t),ẋ_1(t),x_2(t),ẋ_2(t)]^T and Y(t)=Ẋ(t), one can express the ODEs in Eq. (<ref>) in state space form as
y_1(t) =ẋ_1(t),
y_2(t) = -k_s/m_s(x_1(t)-x_2(t))^3 - c/m_s(ẋ_1(t)-ẋ_2(t)),
y_3(t) =ẋ_2(t),
y_4(t) = k_s/m_u(x_1(t)-x_2(t))^3 - c/m_u(ẋ_1(t)-ẋ_2(t)) + k_u/m_u(u(t)-x_2(t)).
We use the S2K model to construct a surrogate model f̂(·): R^5→R^4 for the state space representation in Eq. (<ref>), in which the input is z(t) = [X(t)^T, u(t)]^T∈R^5, and the output is Y(t) ∈R^4.
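For reference, the ground-truth right-hand side of Eq. (<ref>), which the four Kriging models approximate component-wise, can be written as the following Python function; the parameter values and the quoted training realization of (A, b) are taken from the text above, while the paper itself integrates the system with the Matlab solver ode89.

```python
import numpy as np

# quarter car parameters as listed above
m_s, m_u = 22.7, 42.0           # kg
k_s, k_u = 1897.02, 1771.4      # N/m^3, N/m
c = 601.8                       # N*s/m

def quarter_car_rhs(X, u):
    """State-space right-hand side Y = f(X, u), with X = [x1, dx1, x2, dx2]."""
    x1, v1, x2, v2 = X
    y2 = (-k_s * (x1 - x2) ** 3 - c * (v1 - v2)) / m_s
    y4 = (k_s * (x1 - x2) ** 3 - c * (v1 - v2) + k_u * (u - x2)) / m_u
    return np.array([v1, y2, v2, y4])

def excitation(t, A=0.0964, b=6.5248):
    """Harmonic road input u(t) = A sin(b t); (A, b) is the training realization quoted below."""
    return A * np.sin(b * t)
```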
In this example, we draw a time history of length 10 s of the excitation randomly (A=0.0964, b=6.5248), and the time history of the state vector and its derivative are estimated with time step Δt = 0.002 s. We use this time history of the state vector and the corresponding derivative to construct the four Kriging models, in which 5 equidistant samples selected from the whole time history are used to train the initial Kriging models, and the active learning algorithm introduced in Section <ref> is utilized to select the informative samples sequentially with the convergence threshold δ=10^-6. The corresponding results are depicted in Fig. <ref>, where a test time history is also presented. The results demonstrate that the S2K model is highly accurate. Indeed, the relative errors of the four Kriging models corresponding to the four state quantities of the test time histories depicted in Fig. <ref> are ϵ_1=6.09× 10^-4, ϵ_2=4.54× 10^-4, ϵ_3=6.02× 10^-4, and ϵ_4=4.50× 10^-4, respectively. In addition, it is observed that only a small portion of the samples are selected from the whole time history (a time history consists of 5001 discrete time instants in total) after the active learning procedure. Specifically, the final sample sizes of the four Kriging models are 11, 24, 7, and 23, respectively. That is, the corresponding degrees of sparsity are 0.22%, 0.48%, 0.14%, and 0.46%.
Fig. <ref> presents the average relative error ϵ̅_1 of x_1(t) predicted by the S2K model for a varying number of training time histories and various convergence thresholds. It shows that the S2K model with only one training time history of the state vector already yields very accurate results. As expected, the accuracy can be further improved by increasing the number of training time histories as well as by decreasing the convergence threshold. In addition, the training sample sizes of the four Kriging models corresponding to the four components of the state space representation in Eq. (<ref>) are shown in Fig. <ref>. It is observed that only a limited number of samples are retained in the four Kriging models for various settings, which confirms the sparsity of the S2K model.
The comparison of the relative errors between the S2K (δ=10^-5) and the NARX model (maximum polynomial order set to 3 <cit.>) is shown
in Fig. <ref>. It shows that both methods provide very accurate prediction, but the S2K outperforms the NARX model, especially when more training time histories are available.
§.§ Duffing oscillator
The second example is a Duffing oscillator <cit.> subjected to random loading for a duration of T=10s. The oscillator is governed by the following ODE
ẍ(t) + 2ζω_nẋ(t) + ω^2_n x(t) +β x^3(t) = u(t),
where ζ=0.05, ω_n=10 rad/s, and β=2000 m^-2s^-2.
The random loading u(t) is modeled by a white noise ground acceleration discretized
in the frequency domain as <cit.>
u(t) = √(2SΔω)∑_i=1^d/2 [ϑ_i cos(ω_i t) + ϑ_(d/2+i)sin(ω_i t) ],
where ω_i=iΔω with Δω=30π/d, and ϑ_i∼𝒩(0,1), for i=1,...,d. In the current work, the spectral intensity of the Gaussian white noise is set to S=0.1 m^2/s^3, and we choose d=150 <cit.>.
Denoting X(t)=[x(t),ẋ(t)]^T and Y(t)=Ẋ(t), the original differential equation in Eq. (<ref>) can be expressed by its state space representation as
y_1(t) =ẋ(t),
y_2(t) = u(t) - 2ζω_nẋ(t) - ω^2_nx(t) -β x^3(t).
The S2K model is used to learn the dynamics of the Duffing oscillator in Eq. (<ref>). In this example, we use only one time history of the state vector estimated for the total duration T = 10 s with time step Δt = 0.002 s to train the two Kriging models separately.
The results are depicted in Fig. <ref>, where a test time history is also presented. One can see that the S2K model is highly accurate in emulating the Duffing model, with relative errors ϵ_1=9.68× 10^-6 for x(t) and ϵ_2=9.09× 10^-6 for ẋ(t). Again, only a few informative samples are enriched during the active learning procedure, which leads to highly sparse Kriging models.
The average relative error ϵ̅_1 of x(t) predicted by the S2K model by varying the training time histories with different variability magnification factors of excitation are presented in Fig. <ref>, in which σ∈ [1,2] signifies that different magnification factors are assigned to different training time histories according to Eq. (<ref>). The variability of the excitation is magnified by amplifying the standard deviation of ϑ_i(i=1,...,150) in Eq. (<ref>).
The S2K model with only one training time history of the state vector already provides predictions with less than 1% relative error, and its accuracy can be improved by about two orders of magnitude by magnifying the variability of the excitation with σ=1.5 or σ=2. The mixture of 3 training time histories of the state vector with different σ values yields the optimal results for n_t=3.
The training sample sizes of the two Kriging models corresponding to the two components of the state space representation in Eq. (<ref>) after the active learning procedure are depicted in Fig. <ref>. As expected, more samples are selected to train the Kriging model when using a larger magnification factor σ. However, both Kriging models remain sparse for all settings.
In Fig. <ref>, we compare the relative errors of x(t) obtained with the S2K model (δ=10^-6,σ=1.5) and the NARX model (maximum polynomial order set as 3 <cit.>) by varying the number of training histories. Due to the strong variability of the state quantity x(t) over the time domain, the NARX model gives poor prediction.
§.§ Nonlinear Bouc-Wen hysteretic oscillator
In the third example, we consider a Bouc-Wen hysteretic oscillator under random external excitation <cit.>, described by the following differential equation
m ẍ(t) + cẋ(t) + k [α x(t) +(1-α)x_y z(t)] = m u(t),
ż(t) = 1/x_y [Aẋ(t) -β |ẋ(t)||z(t)|^(d-1)z(t) -γẋ(t) |z(t)|^d ],
where the mass, stiffness and damping of the oscillator are
m=6×10^4kg, k=5×10^6N and c = 2mζ√(k/m), respectively, with ζ=0.05. The degree of
hysteresis is defined by α, which is chosen as 0.5. In addition, we set x_y=0.04m, β =γ = 0.5 and A=1,d=3.
Denoting X(t)=[x(t),ẋ(t),z(t)]^T and Y(t)=Ẋ(t), the above differential equation of the Bouc-Wen oscillator can be expressed by its state space form as
y_1(t) =ẋ(t),
y_2(t) = -c/mẋ(t) -k/m [α x(t) +(1-α)x_y z(t)] + u(t),
y_3(t) = 1/x_y [Aẋ(t) -β |ẋ(t)||z(t)|^(d-1)z(t) -γẋ(t) |z(t)|^d ].
The excitation is modeled by a white noise ground acceleration discretized in frequency domain as in Eq. (<ref>) with spectral intensity being S=0.05 m^2/s^3. The time period is set to T=8 s <cit.>.
We first draw one time history of excitation, and the state vector is estimated with time step Δt=0.002 s. The corresponding excitation is depicted in Fig. <ref>, together with the nonlinear response of the auxiliary variable z of the Bouc-Wen model. We use this time history to train the S2K model with convergence threshold δ=10^-4. The results are depicted in Fig. <ref>, in which a test time history of the state vector is also presented. Again, it is shown that the S2K model yields accurate predictions, with relative errors corresponding to the test time history of the state vector depicted in Fig. <ref> of ϵ_1=1.44× 10^-4 for x(t), ϵ_2=1.65× 10^-5 for ẋ(t) and ϵ_3=2.47× 10^-5 for z(t). Since the third state equation in Eq. (<ref>) is highly nonlinear, 173 samples (5 initial samples plus 168 enriched samples) are selected from the whole time history of z(t) to construct the third Kriging model.
The average relative error ϵ̅_1 of x(t) predicted by the S2K model by varying the training time histories with different
magnification factors is presented in Fig. <ref>, in which σ∈ [1, 2] signifies that different magnification factors are assigned to different training time histories
according to Eq. (<ref>). One can see that the S2K model with only one training time history of the state vector already yields accurate predictions. Its accuracy can be improved by about one order of magnitude by magnifying the variability of the excitation with σ=1.5, but there is no additional improvement if a larger magnification factor σ=2 is used. In addition, the accuracy can be further improved by collecting more time histories of the state quantities. Again, one can see that the mixture of the 3 training time histories of the state vector with different σ values yields the most accurate S2K model when n_t=3.
The training sample size of the three Kriging models corresponding to the three components of the state space representation in Eq. (<ref>) after the active learning process is depicted in Fig. <ref>. It shows the first two Kriging models remain sparse for various parameter settings while more samples are selected to train the third Kriging model.
The comparison of the relative errors of x(t) obtained with both the S2K model (δ=10^-4,σ=1.5) and the NARX model (maximum polynomial order 5) is depicted in Fig. <ref>.
Note that different from <cit.>, we do not use any physical information to construct the NARX model here. For this challenging nonlinear problem, the vanilla NARX model with polynomial basis fails to provide satisfactory prediction. By contrast, the S2K model yields accurate prediction with relative errors that are 4 orders of magnitude lower than that of the NARX model.
§.§ Two-story nonlinear hysteretic structure
We apply the S2K model to emulate the behavior of a two-story two-span nonlinear hysteretic frame structure subjected to ground motion excitation <cit.>. The corresponding motion equation reads
M Ẍ(t) + C Ẋ(t) + G[X(t),Ẋ(t)] = -M I u(t),
where X(t)∈R^2, Ẋ(t)∈R^2, and Ẍ(t)∈R^2 represent the displacement, velocity and acceleration vectors; M∈R^2×2, C∈R^2×2 and I∈R^2×2 are the lumped mass, Rayleigh damping and identity matrices; u(t) is modeled by a white noise ground acceleration process as in Eq. (<ref>); G(·) is the restoring force vector characterized by the Bouc-Wen model as
G̃_j(X(t),Ẋ(t)) = α K_j x̃_j(t) + (1-α)K_j z_j(t),
in which α = 0.04; G̃_j(·), x̃_j(t), and K_j represent the j-th inter-story restoring force, drift, and story initial stiffness, respectively; z_j(t)(j = 1,2) is the j-th auxiliary hysteretic displacement, which is described by
ż_j(t) = ϖ/1+d_ηϵ_j(t)(1-ξ exp(-{z_j(t) sgn[ẋ̃̇_j(t)]-q/(β+γ)κ/[ψ + d_ψϵ_j(t) ](λ +ζ_sξ )}^2))
,
where κ =1+d_νϵ_j(t), ϖ = ẋ̃̇_j(t) - κ[β |ẋ̃̇_j(t)|z_j(t) + γẋ̃̇_j(t) |z_j(t)| ], ξ= ζ_s[1-e^-pϵ_j(t)]; the parameters β and γ control the basic hysteresis shape; d_ν and d_η are the strength and stiffness degradation; ζ_s measures the total slip; q,p,ψ,d_ψ, and λ are the initiation, slope, magnitude, rate, and severity interaction of pinching <cit.>;
ϵ_j(t) is the j-th story hysteretic-dissipated energy, which is described by
ϵ̇_j(t) = ẋ̃̇_j(t)z_j(t), for j=1,2.
In the current work, the system parameters are set as follows: lumped mass M_j=2.6×10^5 kg, initial stiffness K_j = 10^8 N/m, β=15 m^-1, γ=150 m^-1, d_ν=d_η=p=1000 m^-2, q=0.25, d_ψ= 5 m^-2, λ=0.5, ζ_s=0.99, ψ=0.05 m. In addition, the damping ratios of the first two modes are taken as ζ_1=ζ_2=0.05.
The state vector X(t)=[x_1(t),x_2(t),ẋ_1(t),ẋ_2(t),z_1(t), z_2(t), ϵ_1(t), ϵ_2(t)]^T is 8-dimensional, and we construct 8 Kriging models in the S2K model to learn the state space representation of this dynamical system, where the input is [X(t)^T, u(t)]^T, and the output is Y(t)=Ẋ(t). The time history of the state vector over the whole time period of interest T∈ [0,10] s is evaluated with time step Δt = 0.01 s. The average relative errors ϵ̅_1 of x_1(t) predicted by the S2K model with a varying number of training time histories are presented in Fig. <ref>, in which different magnification factors are assigned to different training time histories
according to Eq. (<ref>). Due to the strong non-linearity and non-smoothness of this model, at least 5 time histories of the state vector are required to train an accurate S2K model. To illustrate its performance, a specific test time history of the state vector is depicted in Fig. <ref>. It shows that the eight state quantities are well predicted by the S2K model over the whole time period of interest (with 5 training time histories). In this example, only the last four components of the state space representation are strongly nonlinear, especially y_6(t)=ż_1(t) and y_7(t)=ż_2(t) in Eq. (<ref>). We therefore only present the training sample sizes of the last four Kriging models in Fig. <ref>. As expected, as the number of training time histories increases, an increasing number of samples is selected during the active learning procedure to train the Kriging models, and more than 20% of the total samples from the 5 training time histories are selected to train the two challenging functions y_6(t)=ż_1(t) and y_7(t)=ż_2(t) in Eq. (<ref>). In this example, application of the NARX model led to meaningless predictions after some time period; it is not suitable for emulating the response of this system due to the high nonlinearity in Eq. (<ref>). Hence, we do not report the relative error of the NARX model for comparison.
§ CONCLUDING REMARKS
In this work, we have presented a novel surrogate modeling framework for emulating dynamical systems under stochastic excitation by learning its state space representation with Kriging (S2K) model. Several conclusions can be drawn from the work:
(1) Learning the state space representation of a dynamical system under stochastic excitation can avoid the curse of dimensionality of surrogate model training due to discretization of the stochastic excitation.
(2) The proposed active learning algorithm can select an informative sample subset from the whole training sample set efficiently, resulting in a sparse Kriging model.
(3) The proposed technique for designing the training time history of the state vector by magnifying the variability of the excitation can improve the accuracy of S2K model.
(4) Numerical examples demonstrate that the S2K model is powerful for emulating various complex nonlinear dynamical systems under stochastic excitation. It provides highly accurate predictions (relative error below 10^-3) with only a few time histories of the state vector, and it outperforms the NARX model in terms of accuracy and efficiency.
In the future, we plan to adapt this framework to emulating complex nonlinear dynamical systems with both random system parameters and stochastic excitation. Moreover, we will also consider combining the S2K model with model reduction techniques, e.g., proper orthogonal decomposition or auto-encoder, to emulate stochastic dynamical systems with large number of degrees of freedom.
§ ACKNOWLEDGEMENTS
This work was supported by the Alexander von Humboldt Foundation.
Optimizing 3D Gaussian Splatting for Sparse Viewpoint Scene Reconstruction
Shen Chen, Jiale Zhou, Lei Li
September 5, 2024
===========================================================================
§ ABSTRACT
3D Gaussian Splatting (3DGS) has emerged as a promising approach for 3D scene representation, offering a reduction in computational overhead compared to Neural Radiance Fields (NeRF). However, 3DGS is susceptible to high-frequency artifacts and demonstrates suboptimal performance under sparse viewpoint conditions, thereby limiting its applicability in robotics and computer vision.
To address these limitations, we introduce SVS-GS, a novel framework for Sparse Viewpoint Scene reconstruction that integrates a 3D Gaussian smoothing filter to suppress artifacts.
Furthermore, our approach incorporates a Depth Gradient Profile Prior (DGPP) loss with a dynamic depth mask to sharpen edges and 2D diffusion with Score Distillation Sampling (SDS) loss to enhance geometric consistency in novel view synthesis. Experimental evaluations on the MipNeRF-360 and SeaThru-NeRF datasets demonstrate that SVS-GS markedly improves 3D reconstruction from sparse viewpoints, offering a robust and efficient solution for scene understanding in robotics and computer vision applications.
§ INTRODUCTION
The use of RGB cameras in robotic vision systems for 3D scene reconstruction is essential for acquiring multiple viewpoints, a fundamental requirement for high-quality novel view synthesis (NVS). However, in practical scenarios, obtaining dense multi-view data is often impractical, especially in resource-constrained or complex environments. This limitation necessitates developing methods that can achieve effective scene reconstruction from sparse viewpoints. Traditional Neural Radiance Fields (NeRF) <cit.> have shown strong performance in NVS, but their pixel-level ray rendering is computationally intensive and not well-suited for scenarios with sparse input data, requiring substantial resources and processing time.
In contrast, 3D Gaussian Splatting (3DGS) <cit.> employs an explicit representation that significantly reduces both training and rendering times while maintaining high-quality outputs. This method initializes a set of 3D Gaussians from point clouds generated by Structure from Motion (SfM) <cit.> or via random initialization. It uses adaptive density control to clone and prune these Gaussians, enhancing scene detail representation. Leveraging the smooth, differentiable properties of Gaussian distributions, 3DGS enables rapid rasterization by projecting 3D Gaussians onto 2D image planes, supporting efficient rendering and interpolation <cit.>.
3D Gaussian distributions effectively capture details across multiple scales, and their projection onto a 2D plane simplifies the rasterization process. While this method is capable of efficiently representing complex, large-scale scenes or objects, the absence of size constraints for each 3D Gaussian primitive leads to a loss of detail when reconstructing fine objects, especially upon zooming in. This limitation is particularly evident when dealing with extremely thin lines, where it can result in inaccuracies that hinder the precise capture and reproduction of slender structures and small features, thereby compromising the overall visual realism and detail fidelity of the scene <cit.>. Moreover, in practical applications, 3D Gaussian Splatting (3DGS) requires densely sampled multi-view scenes to achieve optimal results <cit.>. However, obtaining such extensive viewpoint data is often impractical in resource-constrained or complex environments. The unconstrained size of primitives in 3DGS and the reliance on dense multi-view image data present significant challenges for practical applications, such as autonomous vehicle navigation.
3DGS methods are heavily dependent on the density and quality of initial point clouds derived from dense multi-view inputs, which limits their effectiveness in sparse-viewpoint scenarios. To address the inherent limitations of 3DGS, we propose a sparse-view 3DGS framework, termed SVS-GS. To impose size constraints on the 3D Gaussian primitives, we introduce a 3D smoothing filter <cit.>. This filter regulates the diffusion range of Gaussian primitives in both 3D space and their 2D projections, ensuring the preservation of more details during reconstruction, particularly for small and thin structures. In standard 3DGS, the initial 3D Gaussian primitives are derived from point cloud data generated by COLMAP <cit.>. However, sparse views yield a limited number of initial points, resulting in low point cloud density, which adversely affects the distribution and quality of Gaussian primitives. To enhance the density of these initial 3D Gaussian primitives, we introduce a local adaptive density scaling module. This module dynamically increases the density of Gaussian primitives based on the sparse point clouds, producing a denser set of 3D Gaussian primitives.
For the optimization of the 3D Gaussian primitives, we employ score distillation sampling (SDS) loss <cit.> to integrate 3DGS with 2D diffusion, incorporating depth prior information to constrain the positions and sizes of the 3D Gaussian primitives. Additionally, we introduce a dynamic depth mask and Gradient Profile Prior (GPP) loss <cit.> to enhance the sharpness of edges in the depth maps. SVS-GS effectively addresses gaps in the sparse point cloud data while simultaneously improving the uniformity and spatial coverage of the initial Gaussian primitives, thereby enhancing precision and detail fidelity in 3D scene reconstruction.
Our main contributions are as follows:
* Novel Sparse-View Framework: SVS-GS reduces dependency on dense multi-view data by optimizing Gaussian primitive distributions, improving practicality and efficiency.
* Adaptive Density Scaling: A local adaptive density scaling module generates denser initial 3D Gaussian primitives, addressing the problem of sparse point clouds.
* Enhanced Optimization Techniques: Integration of SDS loss with 2D diffusion, dynamic depth masks, and depth priors ensures precise control over Gaussian primitives, improving detail reconstruction.
§.§ Novel View Synthesis
Implicit representations for novel view synthesis (NVS), particularly Neural Radiance Field (NeRF)-based methods, have gained substantial attention in recent years <cit.>. NeRF <cit.> utilizes a multi-layer perceptron (MLP) <cit.> to predict radiance and density at 3D locations and viewing directions, leveraging classical volume rendering techniques <cit.> to generate high-quality novel views. Despite their strengths, these methods can produce artifacts when handling high-frequency details. To address this, Mip-NeRF <cit.> introduces multi-scale features and anti-aliased conical frustums to minimize blurring. While NeRF-based approaches are effective for objects and small-scale scenes, inaccuracies in camera parameters can accumulate errors in large-scale, unbounded environments, affecting reconstruction quality. Mip-NeRF 360 <cit.> alleviates these issues with non-linear scene parameterization and online distillation techniques to reduce artifacts in large-scale scenes.
In scenarios with sparse input views, NeRF models are prone to overfitting, which limits their ability to generalize to novel perspectives <cit.>. Several methods have been proposed to enhance reconstruction accuracy in such settings. Depth-Supervised NeRF (DSNeRF) <cit.> combines color and depth supervision to produce more detailed scenes, while SPARF <cit.> uses pixel matching and depth consistency loss to achieve high-precision 3D scene generation from sparse inputs.
§.§ Primitive-Based Rendering
Primitive-based rendering techniques, which rasterize geometric primitives onto a 2D plane, have gained widespread adoption due to their high efficiency <cit.>. Differentiable point-based rendering methods <cit.> are particularly effective for novel view synthesis (NVS) because they offer optimization-friendly representations of complex scene structures. Recently, the introduction of 3D Gaussian Splatting (3DGS) <cit.> has renewed interest in explicit representation methods. Unlike implicit representations, explicit representations directly encode the geometry and lighting information of a scene, reducing computational complexity. However, 3DGS adapts Gaussian primitives to each training image independently, often neglecting the global structural coherence of the scene <cit.>. Additionally, the lack of size constraints during training can lead to artifacts in rendered novel views. To address these issues, Structured 3D Gaussians (Scaffold-GS) <cit.> introduces anchor points to guide the distribution of 3D Gaussian primitives, enhancing the structural integrity of the scene. Mip-Splatting <cit.> further improves 3DGS by incorporating a 3D smoothing filter and a 2D mipmap filter to constrain the size of Gaussian primitives, thereby capturing finer scene details.
Most 3DGS-based methods initialize using point clouds generated from Structure-from-Motion (SfM) techniques, such as COLMAP. These methods rely on dense input images to maintain sufficient point cloud density, which is crucial for high-quality scene reconstruction. When the input images are sparse, the resulting point clouds also become sparse, limiting the capacity of 3D Gaussian primitives to capture intricate geometric details during generation and optimization <cit.>. This sparsity can cause the models to overfit to the limited training views, thereby hindering generalization to novel viewpoints and reducing the effectiveness of scene reconstruction. SparseGS <cit.> attempts to mitigate the dependency on dense input by incorporating 2D diffusion and depth information.
§ PRELIMINARIES
3DGS employs anisotropic Gaussians to effectively capture the varying scales and orientations present within a scene. Each 3D Gaussian primitive, denoted as {𝒢_n | n = 1, …, N }, is characterized by several parameters: a center position μ_n ∈ℝ^3 × 1, a covariance Σ_n ∈ℝ^7, a color c_n ∈ℝ^3, and an opacity α_n ∈ℝ^1. The Gaussian function is defined as:
𝒢_n(x) = e^-1/2(x-μ_n)^T Σ^-1_n (x-μ_n),
where x denotes points queried around the center position μ_n. The size and orientation of each 3D Gaussian primitive are determined by the semi-definite parameters Σ_n = R_n S_n (R_n S_n)^T, where R_n is a rotation matrix parameterized by a quaternion in ℝ^4, and S_n is a scaling matrix defined by a scale vector in ℝ^3.
To render images from different viewpoints, differential splatting is applied to project the 3D Gaussians onto camera planes. This process involves the viewing transformation W_n and the Jacobian matrix J_n, resulting in a transformed covariance:
Σ'_n = J_n W_n Σ_n (J_n W_n)^T.
For color construction, 3DGS utilizes spherical harmonics to model the color c_n of each Gaussian, incorporating its opacity α_n. When rendering from a novel viewpoint, the 3D Gaussians are projected onto 2D planes, and the resulting color C_r(x) for a given ray r is computed as:
C_r(x) = ∑_i ∈ M c_i σ_i ∏_j=1^i-1 (1 - σ_j), σ_i = α_i 𝒢^2D_i(x),
where c_i and α_i represent the color and opacity of the i-th Gaussian, respectively. Here, the ray r originates from the camera center corresponding to the observation viewpoint. Finally, an adaptive density control mechanism is implemented to dynamically clone and prune the 3D Gaussians, maintaining a balance between computational efficiency and scene detail.
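A per-pixel NumPy sketch of the compositing rule in Eq. (<ref>) is given below; the actual 3DGS implementation performs this accumulation in a tile-based CUDA rasterizer, and the dictionary-based Gaussian records are purely illustrative.

```python
import numpy as np

def composite_pixel(x, gaussians):
    """Front-to-back alpha blending for one pixel location x (shape (2,)).

    gaussians : depth-sorted list of dicts with keys
                'mu2d' (2,), 'cov2d_inv' (2,2), 'alpha' (scalar), 'color' (3,)
    """
    C = np.zeros(3)
    T = 1.0                                          # accumulated transmittance prod(1 - sigma_j)
    for g in gaussians:
        d = x - g["mu2d"]
        G2d = np.exp(-0.5 * d @ g["cov2d_inv"] @ d)  # value of the projected 2D Gaussian
        sigma = g["alpha"] * G2d
        C += g["color"] * sigma * T
        T *= (1.0 - sigma)
        if T < 1e-4:                                 # early termination once nearly opaque
            break
    return C
```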
§ METHODS
§.§ Problem Formulation
In the context of scene reconstruction, optimizing the initialized 3D Gaussian primitives necessitates a set of multi-view images I = {I_1, I_2, …, I_k} and the corresponding point clouds P = { p_1, p_2, …, p_M }. The multi-view images I are first utilized to generate an initial point cloud P through Structure-from-Motion (SfM) techniques. Subsequently, these images guide the optimization of 3D Gaussian Splatting (3DGS) by comparing them with the rendered images, thereby refining the 3D Gaussian primitives to improve scene representation.
The quality of novel view synthesis (NVS) in 3DGS is heavily influenced by the density and distribution of point clouds P and the quality of the input multi-view images I. When robotic vision systems rely exclusively on RGB cameras with limited data, the resulting sparse point clouds and input images can significantly impair the completeness and level of detail in the geometric representation, limiting the capacity of 3DGS to accurately capture scene complexity. This limitation becomes particularly critical in complex or unbounded environments, where inadequate data hampers the ability to represent intricate geometric structures and variations in lighting, thereby reducing the effectiveness of scene reconstruction.
§.§ Initialize Adaptive Dense
In 3D scene reconstruction, a combined strategy of global and local processing is employed to balance the accuracy of the overall structure with the refinement of local details.
Global processing is responsible for capturing the broad geometric structure of the entire scene, while local processing focuses on enhancing the detail representation within specific regions.
§.§.§ Global Processing
The primary objective of global processing is to ensure the geometric consistency of the entire scene. Using the point clouds P_init = {p_i | i = 1, …, k} generated by SfM, we first address the overall structure to obtain a comprehensive spatial framework and point cloud density distribution.
The global processing optimizes P_init to derive a global density function ρ(p):
ρ_global(p) = ∫_P exp( -‖p - q‖^2/(2σ_p^2)) f(q) dq,
where each point p_i ∈ P has coordinates (x_i, y_i, z_i),
q represents the potential nearest neighbors of the point p.
f(q) is the density function, representing the weight or density at point q.
This density function is utilized to assess the distribution of points across the point clouds, ensuring that the essential geometric structures are retained at the global level.
§.§.§ Local Processing
Following global processing, the point clouds are partitioned into several local regions N, where each region undergoes more detailed optimization. The main goal of local processing is to enhance the representation of fine details.
For a local region R_i, the bounding box is defined as:
p_min_i = min(p_R_i), p_max_i = max(p_R_i),
where p_R_i denotes the points within the region R_i. The position of the newly generated points p_r∈[ p_min_i, p_max_i] is determined by uniform sampling within this bounding box.
The local point cloud density function ρ_local(p) is further refined to capture intricate geometric details:
ρ_local(p_r) = ∫_R_i exp( -‖p_r - q_r‖^2/(2σ_p_r^2)) f(q_r) dq_r,
where R_i represents the integration domain, which encompasses the entire range of possible values for the local region around p_r;
q_r represents the potential nearest neighbors of the point p_r.
§.§.§ Density-weighted selection
Upon completing the local and global density estimations, the point selection process strategically integrates these results, optimizing the balance between local precision and global coherence to enhance the overall quality of the reconstruction.
Initially, within each local region, a KD-tree <cit.> is constructed to identify the k nearest neighbors p_i for each point p.
The distances between p and these neighbors are calculated and then converted into local density values ρ_local(p) using a Gaussian function. Based on these density values, the probability of retaining each point ℙ_local is determined:
ℙ_local(p_r_j∈ p_r) ∝ρ_local(p_r).
Simultaneously, a similar process is conducted at the global level. The global density ρ_global(p) is estimated by calculating the distances to the global nearest neighbors p_i, and the corresponding global retention probability ℙ_global is computed:
ℙ_global(p_i∈ p) ∝ρ_global(p).
The selected points from both the local P_local and global P_global density estimations are combined with the initial point cloud P_init using a union operation, resulting in the final point cloud P_final:
P_final = P_init⊕ P_local⊕ P_global.
This approach ensures that both the global structural integrity and local detail accuracy are maintained, thereby improving the overall quality and precision.
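A simplified sketch of the density-weighted retention step, using a SciPy KD-tree for the nearest-neighbour queries, is given below; the neighbourhood size k, the kernel bandwidth σ and the retention ratio are illustrative assumptions rather than values prescribed by the method, and the retained subsets are subsequently merged with the initial point cloud as in the equation above.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_weights(points, k=8, sigma=0.05):
    """Gaussian-kernel density estimate from k-nearest-neighbour distances."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)            # first neighbour is the point itself
    return np.exp(-dist[:, 1:] ** 2 / (2.0 * sigma ** 2)).sum(axis=1)

def density_weighted_select(candidates, keep_ratio=0.5, k=8, sigma=0.05, rng=None):
    """Retain candidate points with probability proportional to their density."""
    rng = rng or np.random.default_rng()
    rho = density_weights(candidates, k, sigma)
    p = rho / rho.sum()
    n_keep = int(keep_ratio * len(candidates))
    idx = rng.choice(len(candidates), size=n_keep, replace=False, p=p)
    return candidates[idx]
```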
§.§ 3D Smoothing
The intrinsic and extrinsic parameters of the camera are not fixed, leading to varying degrees of artifacts when rendering novel views, especially upon magnification.
In the optimization process, the coordinates o_i = (x_o_i,y_o_i,z_o_i) of any arbitrary 3D Gaussian need to be transformed from the world coordinate system to each coordinate system of camera:
e_i = o_i R_i + T_i = (x_e_i,y_e_i,z_e_i),
where R_i and T_i represent the rotation matrix and translation matrix for the i-th camera. The transformed point is then projected onto the image plane using the intrinsic matrix of the camera:
x^s_i = (x_e_i/z_e_i)· f_i,x + W_i/2, y^s_i = (y_e_i/z_e_i)· f_i,y + H_i/2,
where f_i represents the focal length of the i-th camera; H_i and W_i represent the height and width of the image, respectively.
The maximum Gaussian point frequency ζ_k is obtained using the observed positions of the 3D Gaussians on the screen:
ζ_k = sup( f_i/z_e_i),
where x_i^s ∈ [-α W_i, (1+α) W_i] and y_i^s ∈ [-α H_i, (1+α) H_i]. The hyperparameter α is used to extend the boundary of the image plane, ensuring that points near the image edges are considered.
After 3D smoothing filtering, the 3D Gaussian is represented as follows:
𝒢_k(x) = √(|Σ_k|/|Σ_k_s|)·
e^-1/2(x - μ_k)^T Σ_k_s^-1 (x - μ_k),
where Σ_k_s = Σ_k + (s/ζ_k^2)·𝐈 represents the covariance matrix after filtering.
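The filter can be sketched as follows; the dictionary-based camera records, the single focal length per camera and the value of the scale hyperparameter s are illustrative assumptions.

```python
import numpy as np

def max_sampling_rate(mu_world, cameras, alpha=0.1):
    """zeta_k = max over cameras of f / z_e for one Gaussian centre, restricted to
    cameras in whose (alpha-padded) image plane the centre projects."""
    zeta = 0.0
    for cam in cameras:                        # cam: dict with 'R' (3,3), 'T' (3,), 'f', 'W', 'H'
        e = mu_world @ cam["R"] + cam["T"]     # world -> camera, e_i = o_i R_i + T_i
        if e[2] <= 0:
            continue
        xs = e[0] / e[2] * cam["f"] + cam["W"] / 2
        ys = e[1] / e[2] * cam["f"] + cam["H"] / 2
        if -alpha * cam["W"] <= xs <= (1 + alpha) * cam["W"] and \
           -alpha * cam["H"] <= ys <= (1 + alpha) * cam["H"]:
            zeta = max(zeta, cam["f"] / e[2])
    return zeta

def smoothed_covariance(Sigma, zeta, s=0.2):
    """Sigma_s = Sigma + (s / zeta^2) I, with the sqrt(|Sigma|/|Sigma_s|) scale factor."""
    Sigma_s = Sigma + (s / zeta ** 2) * np.eye(3)
    scale = np.sqrt(np.linalg.det(Sigma) / np.linalg.det(Sigma_s))
    return Sigma_s, scale
```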
§.§ Depth SDS as Optimization Guidance
Using the diffusion model to generate spatially aligned RGB images and depth maps, we can guide the 3DGS optimization process in both structure and texture.
The depth map for each view is computed by accumulating the depth values of 𝒩 ordered Gaussian primitives along the ray, using point-based α blending:
D_r(x) = ∑_i ∈𝒩 d_μ_iσ_i ∏_j=1^i-1 (1 - σ_j),
where d_μ_i is the depth of the i-th Gaussian primitive center μ_i in the camera view. All depth maps from the training views are normalized for subsequent depth-based loss calculation.
We employ SDS <cit.> to guide the optimization of 3DGS through 2D diffusion <cit.>.
The rendered image Ĩ and depth map D̃ from unseen viewpoints v are jointly used to optimize 3DGS through SDS:
∇_θℒ_SDS = λ_1 ·𝔼_ϵ_I, t[ w_t ( ϵ_ϕ (I_t; Ĩ^v,t) - ϵ_I ) ∂ I_t/∂θ]
+ λ_2 ·𝔼_ϵ_D, t[ w_t ( ϵ_ϕ (D_t; D̃^v, t) - ϵ_D ) ∂ D_t/∂θ],
where λ_1 and λ_2 are coefficients that balance the influence of image and depth;
ϵ_ϕ(.) is the denoising function of 2D diffusion;
ϵ_I, ϵ_D ∼ N(0, I) are independent Gaussian noises.
By integrating the 2D diffusion model, 3DGS can be optimized more effectively, enabling the generated images and depth maps from new viewpoints to more accurately reflect the geometric structure and textural details of the actual scene.
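A schematic PyTorch sketch of the per-map SDS term is given below, using the standard detached-residual trick so that automatic differentiation reproduces a gradient of the form w_t(ϵ_ϕ - ϵ)∂(·)/∂θ; the denoiser callable is a placeholder for the 2D diffusion model, and latent encoding, guidance and the exact weighting are omitted. The same function would be applied to the rendered image and the rendered depth map, with the two terms combined through λ_1 and λ_2.

```python
import torch

def sds_loss(render, denoiser, alphas_cumprod, t, cond, weight=1.0):
    """Score-distillation term for one rendered map (image or depth).

    render         : (1, C, H, W) tensor produced by the differentiable renderer
    denoiser       : placeholder callable (x_t, t, cond) -> predicted noise
    alphas_cumprod : (T,) tensor with the diffusion schedule
    """
    a_t = alphas_cumprod[t]
    eps = torch.randn_like(render)
    x_t = a_t.sqrt() * render + (1 - a_t).sqrt() * eps      # forward diffusion of the rendering
    with torch.no_grad():
        eps_pred = denoiser(x_t, t, cond)                   # frozen diffusion prior
    w_t = 1 - a_t
    grad = weight * w_t * (eps_pred - eps)
    # detached-residual trick: d/dtheta of this scalar equals grad * d(render)/dtheta
    return (grad.detach() * render).sum()
```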
§.§ Depth mask and Gradient Profile Prior
Since noise and irrelevant details in the distant background can negatively impact the gradient calculation process, leading to blurred edges and loss of detail in the reconstruction, we introduce a dynamic depth mask to effectively suppress high-frequency noise and artifacts from distant objects, thereby improving the geometric accuracy and visual quality of the reconstruction.
To accommodate scenes with varying depth distributions, q_f for the far-distance threshold is calculated as follows:
q_f = q_b + (β_D/β_D + α_D) ×Δ q,
where α_D and β_D represent the mean and standard deviation of the depth map D, respectively. p_b is the base quantile, and Δ p is the dynamic adjustment range. The generated mask M is defined as:
M = 1_D ≤ T_f = 1_D ≤Quantile(D, q_f),
where 1_(·) is an indicator function that assesses the visibility of depth map D. The mask is determined by calculating the value T_f at the quantile q_f of the depth map D.
The final masked depth map (D_m = D ⊙ M) is used for gradient operations.
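A possible PyTorch sketch of the dynamic depth mask (the helper name is hypothetical, and the clamp of q_f to [0, 1] is a safeguard we add, not stated in the text):

```python
import torch

def dynamic_depth_mask(D, q_base=0.7, delta_q=0.25):
    """Far-distance quantile q_f = q_b + (std / (std + mean)) * delta_q, then mask
    out pixels beyond Quantile(D, q_f). Returns the mask M and the masked depth D * M."""
    mean, std = D.mean(), D.std()
    q_f = float(torch.clamp(q_base + std / (std + mean) * delta_q, 0.0, 1.0))
    T_f = torch.quantile(D, q_f)
    M = (D <= T_f).float()
    return M, D * M

D = 10.0 * torch.rand(1, 64, 64)   # toy depth map
M, D_m = dynamic_depth_mask(D)
```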
The Depth Gradient Profile Prior (DGPP) is introduced to enhance the sharpness and accuracy of edges in the depth map, particularly focusing on refining the texture and geometric details.
The GPP loss is formulated to enforce the alignment of gradient profiles between the rendered depth map D̂_m and the target depth map D_m.
When the pixel positions b of D̂_m and D_m correspond one-to-one, the DGPP loss function is defined as:
ℒ_DGPP = 1/b_1 - b_0∫_b_0^b_1∇D̂_m(b) - ∇D_m(b)_1 db,
where ∇D̂_m and ∇ D_m represent the gradient fields of the rendered and target depth maps, respectively.
The depth alignment ensures that the sharpness of edges is preserved and that the 3D reconstruction accurately reflects the underlying geometry.
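In practice the integral form of the DGPP loss can be approximated with finite-difference gradients; a hedged PyTorch sketch of one such discretization (our own, not the authors' code) is:

```python
import torch
import torch.nn.functional as F

def depth_gradients(x):
    """Finite-difference gradients of a (B, 1, H, W) depth map."""
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def dgpp_loss(depth_render, depth_target, mask=None):
    """L1 distance between the gradient fields of the rendered and target (masked)
    depth maps -- a discrete stand-in for the integral form of the DGPP loss."""
    if mask is not None:
        depth_render, depth_target = depth_render * mask, depth_target * mask
    rx, ry = depth_gradients(depth_render)
    tx, ty = depth_gradients(depth_target)
    return F.l1_loss(rx, tx) + F.l1_loss(ry, ty)
```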
§.§ Loss Function
To optimize the 3D Gaussian representation ({θ_k=(μ_k, Σ_k,α_k, c_k) }^K_k), we designed a final optimization function that integrates various loss terms.
Our final loss function for optimizing 3D Gaussians is defined as:
ℒ_final =
ℒ_RGB(Î(θ),I) + λ_depthℒ_depth(D̂(θ),D,M)_loss of known views
+ λ_SDSℒ_SDS(Ĩ^v(θ), D̃^v(θ))_loss of novel views,
where Î, Ĩ^v represent the RGB images rendered by the 3D Gaussian primitives; I represents the reference RGB image; D̂, D̃^v represent the depth maps rendered by the 3D Gaussian primitives; D represents the reference depth map.
§ EXPERIMENTS
§.§ Datasets and Evaluation Metrics
For unbounded scenes, we select six 360° coverage scenes from Mip-NeRF 360 <cit.> to evaluate our model. For underwater scenes, we used the SeaThru-NeRF dataset <cit.> to evaluate the applicability of our framework to other complex scenes.
We employ three metrics (PSNR, SSIM <cit.>, and LPIPS <cit.>) to evaluate and compare our method against existing approaches.
§.§ Implementation Details
Our method is implemented using the PyTorch <cit.> framework and the open-source 3DGS <cit.> codebase. AdamW <cit.> is employed as the optimizer. For all scenes, the models were trained for 30K iterations using the same loss function, Gaussian density control strategy, and hyperparameters to optimize the 3D Gaussian primitives. Both Gaussian training and rendering tests were performed on an NVIDIA^TM RTX 4090 GPU.
§.§ Qualitative and Quantitative Evaluation
In the qualitative analysis on MipNeRF360, as shown in the Fig.<ref>, we compared the performance of different methods in reconstructing complex scenes.
When reconstructing unbounded scenes, SVS-GS, with its integration of 3D Gaussian smoothing and depth priors, clearly outperforms traditional 3DGS and SparseGS methods by successfully capturing more intricate structures and lighting variations. These results further confirm the advantages and practical effectiveness of SVS-GS in sparse view scene reconstruction. Similarly, on the SeaThru-NeRF underwater dataset, SVS-GS again outperformed other methods, as shown in the Fig.<ref>. Particularly in handling the challenges of complex underwater lighting conditions and sparse viewpoints, SVS-GS demonstrated greater robustness and accuracy, successfully reducing visual distortions and preserving more scene details. These quantitative results underscore the broad applicability of SVS-GS across different scenarios and viewpoint conditions.
In the quantitative analysis, we systematically evaluated the performance of SVS-GS against other methods on the MipNeRF360 and SeaThru-NeRF datasets, as shown in the Table.<ref>. Comparison of the PSNR, SSIM, and LPIPS metrics clearly demonstrates the significant advantages of SVS-GS in terms of reconstruction accuracy and image quality. On the MipNeRF360 dataset, SVS-GS achieved the highest scores in both PSNR and SSIM, indicating its superior ability to reconstruct geometric and textural details in sparse views, while also exhibiting the lowest perceptual error in the LPIPS, further validating its visual fidelity.
§.§ Ablations and Analysis
As shown in Table.<ref>, we conducted ablation studies to evaluate the impact of key components in our method.
The dynamic depth mask plays a crucial role in effectively reducing noise and artifacts in distant areas, confirming its importance in filtering out irrelevant depth information.
DGPP sharpens edge contours, highlighting its importance in preserving details.
Additionally, omitting the 3D Gaussian smoothing filter results in a noticeable increase in surface noise and artifacts, demonstrating its essential role in maintaining the smoothness and consistency of the reconstructed surfaces. The lack of SDS leads to geometric inconsistencies in the synthesized novel views, emphasizing the necessity of this component in ensuring geometric coherence and minimizing visual discrepancies.
Each component contributes to the effectiveness of achieving high-quality 3D scene reconstruction.
§ CONCLUSION
In this paper, we introduce SVS-GS, a novel framework for 3D scene reconstruction from sparse viewpoints, optimized for both robotic vision systems and broader computer vision tasks using only RGB cameras. Our method utilizes a dynamic depth mask to enhance geometric accuracy by selectively retaining critical depth information. Additionally, by incorporating depth priors, a 3D Gaussian smoothing filter, and Depth Gradient Profile Prior (DGPP) loss, our approach sharpens edges and preserves fine details in complex scenes. To ensure high-quality and consistent novel view synthesis, we integrate Score Distillation Sampling (SDS) loss, which reduces noise and maintains geometric coherence across different viewpoints. Experimental results demonstrate that SVS-GS outperforms existing methods in sparse viewpoint scenarios, achieving superior visual fidelity and geometric consistency. Furthermore, our framework shows robust performance across various challenging environments, making it an efficient and effective solution for 3D scene reconstruction in both robotics and computer vision applications.
http://arxiv.org/abs/2409.02342v1 | 20240904000623 | Optimal sampling for least-squares approximation | ["Ben Adcock"] | stat.ML | ["stat.ML", "cs.LG", "cs.NA", "math.NA"] |
Optimal sampling for least-squares approximation
================================================
§ ABSTRACT
Least-squares approximation is one of the most important methods for recovering an unknown function from data. While in many applications the data is fixed, in many others there is substantial freedom to choose where to sample. In this paper, we review recent progress on optimal sampling for (weighted) least-squares approximation in arbitrary linear spaces. We introduce the Christoffel function as a key quantity in the analysis of (weighted) least-squares approximation from random samples, then show how it can be used to construct sampling strategies that possess near-optimal sample complexity: namely, the number of samples scales log-linearly in n, the dimension of the approximation space. We discuss a series of variations, extensions and further topics, and throughout highlight connections to approximation theory, machine learning, information-based complexity and numerical linear algebra. Finally, motivated by various contemporary applications, we consider a generalization of the classical setting where the samples need not be pointwise samples of a scalar-valued function, and the approximation space need not be linear. We show that even in this significantly more general setting suitable generalizations of the Christoffel function still determine the sample complexity. This provides a unified procedure for designing improved sampling strategies for general recovery problems. This article is largely self-contained, and intended to be accessible to nonspecialists.
§ INTRODUCTION
Least-squares approximation is the process of fitting an unknown function from samples by computing a best ℓ^2-norm fit in a given subspace, which is often termed the approximation space. Least squares is a classical topic, yet it is one of the most widely used tools in applied mathematics, computer science, engineering and numerous other disciplines. For the data scientist, it is almost always one's first `go-to' method when trying to fit a function to data.
In many data-fitting problems, the samples are fixed. However, many other problems offer substantial flexibility to choose where to sample. When data is also expensive to acquire – which, despite claims about `big data' is often the case in applications in science and engineering – we are naturally led to the following questions. How many samples do we need – or, in other words, what is the sample complexity – and how should we best choose them? This is by no means a new question. It arises in many different guises in different fields, including optimal design of experiments in statistics, active learning in machine learning, optimal sensor placement in sampling theory and signal processing, and optimal (standard) information in information-based complexity.
The purpose of this article is to survey recent advances made in the last 5-10 years in optimal sampling, as we shall term it from now on, which has been motivated by certain function approximation problems in high dimensions. Such methods are in essence importance sampling techniques, where samples are drawn randomly from a probability measure chosen specifically for the given approximation space. Throughout, our aim is to ensure quasi-optimal recovery (in an appropriate sense) with near-optimal sample complexity.
§.§ Overview
After a short literature review (<ref>), this article commences with a formulation and review of (weighted) least-squares approximation (<ref>). We then discuss multivariate polynomial approximation (<ref>), this being one of the main motivating examples for this work. The next two sections contain the core developments of this article. We describe the theory of least-squares approximation with random sampling and introduce the so-called Christoffel function, which plays a key role in its analysis (<ref>). We then show that sampling from a measure proportional to this function leads to provably near-optimal sampling (<ref>). The power of such a result lies in its generality: this strategy is near optimal for any linear approximation space. Next, we consider the matter of how much can be gained through this approach in comparison to Monte Carlo sampling, i.e., i.i.d. random sampling from some underlying probability measure (<ref>). Monte Carlo sampling is ubiquitous in applications, especially high-dimensional approximation tasks. Yet, as we discuss, sample complexity bounds for this naïve sampling strategy can be arbitrarily bad. Once more, we see the Christoffel function plays a key role in analyzing the sample complexity. Having done this, we then conclude this part of the article by discussing a series of further topics (<ref>). In particular, we describe very recent advances in optimal (as opposed to near-optimal) sampling and its connections to sampling numbers in information-based complexity and the study of sampling discretizations in approximation theory. We also discuss connections to matrix sketching via leverage score sampling, as well as various practical considerations.
The majority of this article considers linear approximation spaces, i.e., finite-dimensional subspaces of functions. However, modern applications increasingly make use of nonlinear spaces. Moreover, in many applications the object to recover may not be a scalar-valued function, and samples may not be simple pointwise evaluations. We conclude this article by describing a recent framework for optimal sampling with general linear samples and nonlinear approximation spaces (<ref>). We discuss how many of the key ideas seen in linear spaces, such as Christoffel functions, naturally extend to this general setting. Finally, we end with some concluding thoughts (<ref>).
§.§ Scope and target audience
In this article, we focus on the foundational techniques and theory. After a brief review in the next section, we largely omit applications, although this is currently an active area of interest.
This article is intended to be accessible to nonspecialists. We build most concepts up from first principles, relying on basic knowledge only. In order to make it as self-contained as possible, proofs of most of the results shown in this work are given in an appendix.
§ LITERATURE REVIEW
We commence with a short discussion of relevant literature. Additional literature on variations, extensions and further topics can be found in <ref>.
Least squares is a classical topic, with origins tracing back to the work of Gauss and Legendre <cit.>. Starting in the early 2010s, and motivated by problems in parametric and stochastic Differential Equations (DEs), there was a resurgence of research on this topic, focusing on high- and infinite-dimensional function approximation, and typically involving polynomial spaces. Key works in this direction include <cit.>. This resurgence was based on least squares with random sampling, inspired by Monte Carlo quadrature and its ability to integrate functions without succumbing to the curse of dimensionality. However, it is worth noting that the goal of least-squares approximation is to achieve quasi-optimal rates of convergence with respect to the approximation space. Typically, the resulting rate will exceed the error rate for Monte Carlo quadrature.
As noted above, Monte Carlo sampling generically leads to suboptimal sample complexity bounds for least-squares approximation. This observation led to a concerted effort to develop practical sampling strategies with better performance (see <cit.> for an overview), culminating in the near-optimal strategies which are the basis of this work. These were developed in <cit.>, but also appeared slightly earlier in <cit.> in the case of (total degree) polynomial spaces.
At a similar time, related techniques under the name leverage score sampling – which are based on the classical topic of statistical leverage – have become increasingly popular in machine learning and data science. In particular, leverage score sampling is an effective tool for matrix sketching <cit.>. As we comment in <ref>, it can also be viewed as a special case of the techniques described in this article, corresponding to functions defined over a discrete domain.
Finally, we remark on some applications. As observed, this work is closely related to optimal design of experiments and optimal sensor placement in sampling theory and signal processing – both large areas with countless applications that we shall not attempt to review. However, this specific line of research emerged out of computing polynomial approximations to high-dimensional functions arising in parametric and stochastic DEs <cit.>, and this remains a key area of application. See <cit.> and references therein.
For other surveys focused on multivariate polynomial approximation and parametric and stochastic DEs, see <cit.> and <cit.>.
Recently, these techniques have also been applied to the closely related problem of numerical integration (cubature) <cit.>. There are also emerging applications in Trefftz methods for solving Helmholtz equations <cit.> and methods for option pricing in finance <cit.>. On the theoretical side, this line of work has also spurred recent advances in approximation theory (so-called sampling discretizations) and information-based complexity (so-called sampling numbers). We discuss these topics further in <ref>. Related ideas have also been used in sampling theory <cit.>. We also note that Christoffel functions are useful tools for empirical inference in data analysis <cit.>.
Finally, through the close connection to leverage score sampling, there are manifold applications in machine learning and data science. These include randomized numerical linear algebra <cit.>, kernel methods <cit.> and active learning <cit.>.
Moreover, the generalization we describe in <ref> opens the door to applications in many seemingly unrelated areas, such as inverse problems in imaging <cit.>.
§ PRELIMINARIES
Let (D,,ϱ) be a measure space and L^2_ϱ(D) be the Lebesgue space of square-integrable functions f : D → with respect to ϱ. Typically, in this work, D ⊆^d. For convenience, we assume that ϱ is a finite measure (ϱ(D) < ∞) and, therefore, without loss of generality, that ϱ is a probability measure (ϱ(D) = 1). It is possible to consider infinite measures, but for ease of exposition we shall not do this.
Given m ∈, we consider sampling measures μ_1,…,μ_m. These are assumed to be such that (D,,μ_i) is a probability space for every i. We also make the following assumption.
[Absolute continuity and positivity]
The additive mixture
μ= 1/m ∑^m_i=1 μ_i
is absolutely continuous with respect to ϱ and its Radon–Nikodym derivative ν is strictly positive almost everywhere on supp(ϱ).
This assumption allows us to write
1/m ∑^m_i=1 dμ_i(x) = ν(x) dϱ(x),
where the density ν : D → [0,∞) (the Radon–Nikodym derivative) is measurable, positive almost everywhere and satisfies
∫_D ν(x) dϱ(x) = 1.
In what follows it will often be more convenient to work with the reciprocal of this function. We define the weight function w : D → (0,∞) as w(x) = 1/ν(x), x ∈ D.
Given sampling measures μ_1,…,μ_m, we now draw samples x_i ∼μ_i, i = 1,…,m, independently from these measures and consider noisy measurements of an unknown scalar-valued function f on D of the form
y_i = f(x_i) + e_i , i = 1,…,m.
Typically, we will assume that f ∈ L^2_ϱ(D) so that the samples f_meas are almost surely well defined.
We consider a bounded, adversarial noise model, where the e_i's are not assumed to be random, but are assumed to be small in magnitude. Our aim is to derive error bounds in which the noise term scales like the ℓ^2-norm
e_2 = √(∑^m_i=1 |e_i|^2)
of the noise vector e = (e_i)^m_i=1. Random noise models (including unbounded noise) can also be considered (see <cit.> and <cit.>).
§.§ Weighted least-squares approximation
Let 𝒫 ⊂ L^2_ϱ(D) be an arbitrary n-dimensional subspace, where n ≤ m, in which we seek to approximate the unknown f from the measurements f_meas. We term 𝒫 the approximation space. In this work, we consider general approximation spaces. In particular, this means that interpolation – which generally requires carefully-constructed point sets – may not be possible <cit.>. Instead, we consider the weighted least-squares approximation
f̂ ∈ argmin_p ∈𝒫 1/m ∑^m_i=1 w(x_i) | y_i - p(x_i) |^2.
Note that the loss function is well defined almost surely (for fixed f ∈ L^2_ϱ(D) and weight function w as above), since pointwise evaluations of f and w are well-defined almost surely.
[The scaling factor]
The scaling factors in wls-prob are motivated by noticing that
𝔼 [ 1/m ∑^m_i=1 w(x_i) | g(x_i) |^2 ] = 1/m ∑^m_i=1 ∫_D w(x) |g(x)|^2 dμ_i(x)
= ∫_D |g(x)|^2 dϱ(x) = ‖g‖^2_L^2_ϱ(D),
where the second equality is due to mu_weight_fn. Thus, in the noiseless case, wls-prob can be considered as an empirical approximation to the continuous least-squares approximation
f̂ = argmin_p ∈𝒫 ‖ f - p ‖^2_L^2_ϱ(D),
i.e., the best approximation to f from 𝒫 in the L^2_ϱ(D)-norm. In particular, if μ_1 = ⋯ = μ_m = μ, then the minimizers of wls-prob converge almost surely to the minimizer of cts-min as m →∞ <cit.>.
The objective of this article is to describe how to choose the measures μ_1,…,μ_m to achieve the most sample-efficient approximation. We shall compare such approaches against the standard approach of Monte Carlo sampling, i.e., i.i.d. random sampling from ϱ. This is equivalent to setting
μ_1 = ⋯= μ_m = ϱ,
which leads, via mu_weight_fn, to ν≡ 1. Thus, wls-prob becomes an unweighted least-squares approximation.
[Hierarchical approximation]
Often, rather than a fixed subspace 𝒫, one may wish to construct a sequence of approximations in a nested collection of subspaces
𝒫^(1) ⊆𝒫^(2) ⊆⋯,
of dimension dim(𝒫^(k)) = n_k. In this case, given numbers 1 ≤ m_1 ≤ m_2 ≤⋯ satisfying m_k ≥ n_k, ∀ k, one aims to design nested collections of sample points
{ x^(1)_i }^m_1_i=1 ⊆{ x^(2)_i }^m_2_i=1 ⊆⋯.
We write f̂^(1),f̂^(2),… for the ensuing (weighted) least-squares approximations, where f̂^(k) is constructed from the sample points { x^(k)_i }^m_k_i=1. Nested implies that samples are recycled at each iteration – a highly desirable property in the setting of limited data.
We term such a procedure a hierarchical (also known as a progressive <cit.> or sequential <cit.>) approximation scheme.
§.§ Reformulations of wls-prob
Given a basis {ϕ_i }^n_i=1 for 𝒫, the problem wls-prob is readily reformulated as an algebraic least-squares problem for the coefficients ĉ = (ĉ_i)^n_i=1∈^n of f̂ = ∑^n_i=1ĉ_i ϕ_i. This takes the form
ĉ ∈ argmin_c ∈^n ‖ A c - b ‖^2_2,
where
A = ( √(w(x_i)/m) ϕ_j(x_i) )^m,n_i,j=1 ∈^m ×n, b = ( √(w(x_i)/m) (f(x_i) + e_i ) )^m_i=1 ∈^m.
To be precise, every minimizer f̂ satisfying wls-prob has coefficients ĉ that satisfy algls-prob and vice versa. Classical least-squares analysis asserts that any vector ĉ satisfying algls-prob is also a solution of the normal equations
A^* A c = A^* b,
and vice versa. Rewriting the normal equations in terms of functions also leads to the following variational form of wls-prob.
Find f̂∈𝒫 such that ⟨f̂, p⟩_𝖽𝗂𝗌𝖼,w = ⟨f, p⟩_𝖽𝗂𝗌𝖼,w + 1/m ∑^m_i=1 w(x_i) e_i p(x_i), ∀p ∈𝒫.
This is equivalent to normal-eqns in the same sense as before. Here we wrote
⟨g, h⟩_𝖽𝗂𝗌𝖼,w = 1/m ∑^m_i=1 w(x_i) g(x_i) h(x_i),
for the discrete semi-inner product induced by the sample points and the weight function (whenever defined). For convenience, we shall denote the corresponding seminorm as
‖g‖^2_𝖽𝗂𝗌𝖼,w = 1/m ∑^m_i=1 w(x_i) | g(x_i) |^2.
In the noiseless case e =0, the formulation variational-form asserts that f̂ is precisely the orthogonal projection of f onto 𝒫 with respect to the discrete semi-inner product semi-inner-product. Since semi-inner-product is an empirical approximation to the continuous inner product ⟨·,·⟩_L^2_ϱ(D) (recall exp-sum-scaling), this sheds further light on why minimizers of wls-prob converge to cts-min (the orthogonal projection in the L^2_ϱ-inner product).
[Numerical considerations]
Fast numerical computations are not the primary concern of this article. However, we note that algls-prob can be solved using standard linear algebra techniques. Since the matrix A is generally dense and unstructured, each matrix-vector multiplication involves m n floating-point operations (flops). Hence, when using an iterative method such as conjugate gradients, the number of flops that suffice to compute ĉ to an error of η > 0 (in the norm ‖A ·‖_2) is roughly cond(A) · m · n ·log(1/η), where cond(A) is the condition number of A. In <ref> we see that the sufficient conditions that ensure accuracy and stability of the approximation f̂ also guarantee that A is well conditioned.
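As an illustration of the remark above, the following NumPy sketch assembles A and b as in ls-Ab and solves algls-prob. The helper name is ours, `basis` is assumed to be a list of vectorized callables, and a dense solver is used in place of conjugate gradients for simplicity.

```python
import numpy as np

def weighted_least_squares(basis, x, y, w):
    """Assemble A_{ij} = sqrt(w(x_i)/m) phi_j(x_i), b_i = sqrt(w(x_i)/m) y_i and solve
    min_c ||A c - b||_2 with a dense solver. `basis` is a list of vectorized callables
    spanning the approximation space (ideally an L^2_rho-orthonormal basis, in which
    case the returned condition number equals beta_w / alpha_w)."""
    m = len(x)
    scale = np.sqrt(np.asarray(w) / m)
    A = np.stack([scale * phi(x) for phi in basis], axis=1)
    b = scale * np.asarray(y)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c, np.linalg.cond(A)

# Example: recover f(x) = sin(pi x) from 50 noisy Monte Carlo samples (w = 1).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)
y = np.sin(np.pi * x) + 1e-3 * rng.standard_normal(50)
basis = [lambda t, k=k: t**k for k in range(5)]   # monomials, for illustration only
c, cond = weighted_least_squares(basis, x, y, np.ones_like(x))
```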
§.§ Key terminology
We now introduce some key terminology that will be used from now on. First, we say that the approximation f̂ is L^2_ϱ-quasi-optimal or (L^2_ϱ,L^∞_ϱ)-quasi-optimal if, in the absence of noise,
‖f - f̂‖_L^2_ϱ(D) ≲inf_p ∈𝒫 ‖f - p‖_L^2_ϱ(D), or ‖f - f̂‖_L^2_ϱ(D) ≲inf_p ∈𝒫 ‖f - p‖_L^∞_ϱ(D),
respectively (note that the term instance optimality is also sometimes used <cit.>). Obviously the former is stronger – therefore, achieving it will be our main objective. We say that the recovery is stable if, in the presence of noise, the recovery error depends on ‖e‖_2, i.e.,
‖f - f̂‖_L^2_ϱ(D) ≲e_𝒫(f) + ‖e‖_2,
where e_𝒫(f) is some best approximation error term. Finally, we say that a sampling strategy (i.e., a collection of measures μ_1,…,μ_m) has near-optimal sample complexity or optimal sample complexity if, respectively, m ≥ c n log(n) or m ≥ c n samples suffice for obtaining a quasi-optimal and stable approximation, for some constant c > 0. This is sometimes referred to as rate optimality <cit.>.
§ APPLICATION TO MULTIVARIATE POLYNOMIAL APPROXIMATION
We now introduce an important example considered in this paper, namely, multivariate polynomial approximation in d ≥ 1 dimensions.
§.§ Spaces of multivariate polynomials
Let D ⊆ℝ^d be a domain and ϱ a measure. Let S ⊂ℕ^d_0 be a finite set of multi-indices with |S| = n. We consider the polynomial space
𝒫 = 𝒫_S : = span{ x ↦x^ν : ν∈S } ⊂L^2_ϱ(D).
Here, we use the notation x = (x_i)^d_i=1 for the d-dimensional variable, ν = (ν_1,…,ν_d) for a multi-index and x^ν = x^ν_1_1⋯ x^ν_d_d.
There are several standard choices for the index set S. In low dimensions, one may consider the (isotropic) tensor-product or total degree
S = S^𝖳𝖯_p = { ν∈^d_0 : max_k=1,…,d ν_k ≤p }, S = S^𝖳𝖣_p = { ν∈^d_0 : ∑^d_k=1 ν_k ≤p }
index sets of order p ∈ℕ_0. Unfortunately, the cardinality of these index sets scales poorly with respect to d (for fixed p). A better choice in moderate dimensions is the hyperbolic cross index set
S = S^𝖧𝖢_p = { ν∈ℕ^d_0 : ∏^d_k=1 (ν_k+1) ≤p +1 }.
However, as the dimension increases, this too may become too large to use. Since high-dimensional functions often have a very anisotropic dependence with respect to the coordinate variables, in higher dimensions one may look to consider anisotropic versions of these index sets. Given an anisotropy parameter a = (a_k)^d_k=1 with a > 0 (understood componentwise) and p ≥ 0 (not necessarily an integer), the corresponding anisotropic index sets are defined as
S^𝖳𝖯_p,a = { ν∈^d_0 : max_k=1,…,d a_k ν_k ≤p },
S^𝖳𝖣_p,a = { ν∈^d_0 : ∑^d_k=1 a_k ν_k ≤p }
and
S^𝖧𝖢_p,a = { ν∈ℕ^d_0 : ∏^d_k=1 (ν_k+1)^a_k ≤p +1 }.
Notice that the isotropic index sets are recovered by setting a = 1 (the vector of ones).
The choice of index set is not the focus of this paper. For more discussion, see, e.g., <cit.> and <cit.>. We remark, however, that all index sets defined above are examples of lower (also known as monotone or downward closed) sets. A set S ⊆^d_0 is lower if whenever ν∈ S and μ≤ν (understood componentwise, once more), one also has μ∈ S.
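For concreteness, here is a small Python sketch (brute force, with our own helper names) that builds the isotropic hyperbolic cross set and verifies the lower-set property just described:

```python
import itertools
import numpy as np

def hyperbolic_cross(d, p):
    """Isotropic hyperbolic cross: multi-indices nu in N_0^d with
    prod_k (nu_k + 1) <= p + 1. Brute force over {0,...,p}^d (fine for small d)."""
    return [nu for nu in itertools.product(range(p + 1), repeat=d)
            if np.prod([v + 1 for v in nu]) <= p + 1]

def is_lower(S):
    """Check the lower-set property: if nu is in S, then so is every nu - e_k >= 0."""
    S_set = set(S)
    for nu in S:
        for k, v in enumerate(nu):
            if v > 0 and tuple(v2 - 1 if j == k else v2 for j, v2 in enumerate(nu)) not in S_set:
                return False
    return True

S = hyperbolic_cross(d=3, p=4)
print(len(S), is_lower(S))   # the hyperbolic cross is a lower set
```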
§.§ Multivariate orthogonal polynomials on tensor-product domains
As we shall see in the next section, orthonormal bases play a key role in least-squares approximation from random samples and, in particular, optimal sampling. It is therefore convenient to be able to readily compute orthonormal polynomial bases for the space _S.
Fortunately, when S is lower and D and ϱ are of tensor-product type, such polynomials are easily generated via tensor-products of univariate orthogonal polynomials. For concreteness, let
D = (a_1,b_1) ×⋯×(a_d , b_d), ϱ= ρ_1 ×⋯×ρ_d,
where, for each k = 1,…,d, -∞≤ a_k < b_k ≤∞ and ρ_k is a probability measure on (a_k,b_k). Then, under mild conditions on ρ_k (see, e.g., <cit.>), there exists a unique sequence of orthonormal polynomials
{ ψ^(k)_i }^∞_i=0 ⊂L^2_ρ_k(a_k,b_k),
where, for each i, ψ^(k)_i is a polynomial of degree i.
Using this, one immediately obtains an orthonormal basis of L^2_ϱ(D) via tensor products. Specifically,
{ Ψ_ν }_ν∈^d_0 ⊂L^2_ϱ(D), where Ψ_ν = ψ^(1)_ν_1 ⊗⋯⊗ψ^(d)_ν_d, ∀ν= (ν_k)^d_k=1 ∈^d_0.
What about the subspace 𝒫_S introduced in PS-def? Fortunately, whenever S is a lower set, the functions Ψ_ν with indices ν∈ S also form an orthonormal basis for this space. In other words,
S lower ⟹ span{ Ψ_ν : ν∈S } = 𝒫_S.
See, e.g., <cit.>.
This property, combined with the tensor-product structure of the basis functions, makes optimal sampling and least-squares approximation in the subspaces _S computationally feasible and, in many cases, straightforward, for tensor-product domains and measures. See <ref> for some further discussion.
To conclude this section, we list several standard families of univariate measures and their corresponding orthogonal polynomials. Consider a compact interval, which without loss of generality we take to be (-1,1). Given parameters α,β > -1, the Jacobi (probability) measure is given by
dρ(y) = c_α,β (1-y)^α (1+y)^β dy, y ∈(-1,1), where c_α,β = ( ∫^1_-1 (1-y)^α (1+y)^β dy )^-1.
This measure generates the Jacobi polynomials for general α,β and the ultraspherical polynomials when α = β. Of particular interest are the following cases.
* α = β = -1/2, which corresponds to the arcsine measure dρ(y) = ( π√(1-y^2))^-1 dy. This yields the Chebyshev polynomials of the first kind.
* α = β = 0, which corresponds to the uniform measure dρ(y) = 1/2 dy. This yields the Legendre polynomials.
* α = β = 1/2, which corresponds to the measure dρ(y) = (2 / π) √(1-y^2) dy. This yields the Chebyshev polynomials of the second kind.
We will consider these polynomials later in this paper.
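For reference, the first two of these families are easy to evaluate numerically. The sketch below (our own routines) uses the classical three-term recurrence and normalizes with respect to the corresponding probability measures, so the outputs are orthonormal in the sense used throughout this article.

```python
import numpy as np

def legendre_orthonormal(x, n):
    """First n Legendre polynomials on a 1-D array x, orthonormalized w.r.t. the
    uniform probability measure dy/2 on (-1,1): psi_i = sqrt(2i+1) P_i, computed
    via the three-term recurrence (i+1) P_{i+1} = (2i+1) x P_i - i P_{i-1}."""
    x = np.asarray(x, dtype=float)
    P = np.zeros((n, x.size))
    P[0] = 1.0
    if n > 1:
        P[1] = x
    for i in range(1, n - 1):
        P[i + 1] = ((2 * i + 1) * x * P[i] - i * P[i - 1]) / (i + 1)
    return P * np.sqrt(2 * np.arange(n) + 1)[:, None]

def chebyshev_orthonormal(x, n):
    """First n Chebyshev polynomials of the first kind on a 1-D array x,
    orthonormal w.r.t. the arcsine measure: psi_0 = 1, psi_i = sqrt(2) cos(i arccos x)."""
    x = np.asarray(x, dtype=float)
    psi = np.sqrt(2.0) * np.cos(np.arange(n)[:, None] * np.arccos(x)[None, :])
    psi[0] = 1.0
    return psi

# Sanity check: the orthonormal Legendre values at x = 1 are sqrt(2i+1).
print(legendre_orthonormal(np.array([1.0]), 4)[:, 0])
```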
We will also briefly discuss certain unbounded domains. Here two common examples are as follows.
* dρ(x) = (2 π)^-1/2 e^-x^2/2 dx over ℝ, which yields the Hermite polynomials.
* dρ(x) = e^-x dx over [0,∞), which yields the Laguerre polynomials.
§ THEORY OF WEIGHTED LEAST-SQUARES APPROXIMATION FROM RANDOM SAMPLES
§.§ Basic accuracy and stability guarantee
Accuracy and stability of the weighted least-squares approximation are controlled by the following discrete stability constants:
α_w = inf{ ‖p‖_𝖽𝗂𝗌𝖼,w : p ∈𝒫, ‖p‖_L^2_ϱ(D) = 1 },
β_w = sup{ ‖p‖_𝖽𝗂𝗌𝖼,w : p ∈𝒫, ‖p‖_L^2_ϱ(D) = 1 }.
In other words, α_w and β_w are the optimal constants in the norm equivalence
α_w ‖p‖_L^2_ϱ(D) ≤‖p‖_𝖽𝗂𝗌𝖼,w ≤β_w ‖p‖_L^2_ϱ(D), ∀p ∈𝒫.
In approximation theory, this is known as a sampling discretization <cit.> or a (weighted) Marcinkiewicz–Zygmund inequality <cit.>. Squaring and writing out the discrete semi-norm, MZ-inequality is equivalent to
α^2_w ‖p‖^2_L^2_ϱ(D) ≤1/m ∑^m_i=1 w(x_i) | p(x_i) |^2 ≤β^2_w ‖p‖^2_L^2_ϱ(D), ∀p ∈𝒫.
Hence, the existence of finite, positive constants 0 < α_w ≤β_w < ∞ implies that ‖·‖_𝖽𝗂𝗌𝖼,w is a norm over the n-dimensional space 𝒫, with α_w,β_w being the constants of the norm equivalence.
Note that if {ϕ_i }^n_i=1 is an orthonormal basis for 𝒫, then it is straightforward to show that
α_w = σ_min(A) = √(λ_min(A^*A)), β_w = σ_max(A) = √(λ_max(A^*A)),
where A is the least-squares matrix (<ref>).
[Accuracy and stability of weighted least squares]
Let 𝒫 ⊂ L^2_ϱ(D), f ∈ L^2_ϱ(D), x_1,…,x_m ∈ D be sample points at which both f and any p ∈𝒫 are finite, e ∈^m and w : D → [0,∞) be such that w(x_i) > 0, ∀ i ∈{1,…,m}. Suppose that α_w > 0. Then the weighted least-squares problem wls-prob has a unique solution f̂. Moreover, this solution satisfies
‖f - f̂‖_L^2_ϱ(D) ≤inf_p ∈𝒫 { ‖f - p‖_L^2_ϱ(D) + 1/α_w ‖f - p‖_𝖽𝗂𝗌𝖼,w } + 1/α_w ‖e‖_2,w,
where ‖e‖_2,w = √(1/m∑^m_i=1 w(x_i) | e_i |^2 ).
Also, if {ϕ_i }^n_i=1 is an orthonormal basis of then the condition number of the least-squares matrix ls-Ab satisfies cond(A) = β_w / α_w.
This result is a standard exercise. In particular, the condition number statement follows immediately from alpha-beta-sigma. We include a short proof of the other parts in the appendix for completeness. We also observe that this result holds for arbitrary weight functions w and sample points x_1,…,x_m satisfying the stipulated assumptions. At this stage, we do not require the sample points to be random. This will be used in the next subsection to derive concrete sample complexity estimates.
[The noise bound]
On the face of it, the noise term e_2,w is undesirable since coefficients e_i corresponding to large values of w(x_i) are more heavily weighted than others. We will take this into account later when we construct near-optimal sampling measures. Specifically, in <ref> we construct sampling measures that lead to log-linear sample complexity and for which w(x) ≤ 2. Hence, the noise term e_2,w≤√(2)e_2 in this case.
§.§ The (reciprocal) Christoffel function
We now return to the main setting of this paper, where the samples points are drawn randomly and independently with x_i ∼μ_i, i=1,…,m, for measures μ_i satisfying Assumption <ref>. Our aim is to analyze the sample complexity of weighted least-squares approximation. In view of Lemma <ref>, this involves first bounding the discrete stability constants α_w and β_w.
A key tool in this analysis is the Christoffel function of 𝒫. Christoffel functions are well-known objects in approximation theory <cit.>, where they are typically considered in the context of spaces spanned by algebraic polynomials. It transpires that Christoffel functions – or, more precisely, their reciprocals – are also fundamentally associated with random sampling for least-squares approximation.
[Christoffel function]
Let 𝒫 ⊆ L^2_ϱ(D). The (reciprocal) Christoffel function of 𝒫 is the function 𝒦 = 𝒦(𝒫) : D → [0,∞) defined by
𝒦(x) = 𝒦(𝒫)(x) : = sup{ | p(x) |^2/‖p‖^2_L^2_ϱ(D) : p ∈𝒫, p ≠0 }, ∀x ∈D.
In other words, 𝒦(x) measures how large in magnitude an element of 𝒫 can be at x ∈ D in relation to its L^2_ϱ-norm.
This function also admits an explicit expression. Let {ϕ_i }^n_i=1 be an arbitrary orthonormal basis of 𝒫. Then it is a short exercise to show that
𝒦(x) = ∑^n_i=1 | ϕ_i(x) |^2.
Often taken as the definition of 𝒦, this formulation will be useful in our subsequent analysis. It also emphasizes the fact that 𝒦 is precisely the diagonal of the reproducing kernel of 𝒫 in L^2_ϱ(D) <cit.>.
For reasons that will become clear soon, we are particularly interested in the maximal behaviour of the function w(x) 𝒦(x), where w = 1/ν is the weight function defined by mu_weight_fn. We therefore let
κ_w = κ_w(𝒫) : = ess sup_x ∼ϱ w(x) 𝒦(𝒫)(x).
To continue the connection with approximation theory, it is worth noting that κ_w is the optimal constant in the (weighted) Nikolskii-type inequality (see, e.g., <cit.> and references therein),
‖√(w(·)) p(·) ‖_L^∞_ϱ(D) ≤√(κ_w) ‖p‖_L^2_ϱ(D), ∀p ∈𝒫.
Thus, κ_w measures how large the scaled element √(w(·)) p(·) can be uniformly in relation to the L^2_ϱ-norm of p.
It is important to observe that
κ_w(𝒫) ≥n,
for any weight function w and n-dimensional subspace 𝒫. This bound follows by integrating both sides with respect to the measure ϱ and noticing that ∫_D 𝒦(x) dϱ(x) = n, the latter being an immediate consequence of Kappa-def-alt. This gives
n = ∫_D 𝒦(x) dϱ(x) = ∫_D w(x) 𝒦(x) 1/w(x) dϱ(x) ≤κ_w ∫_D 1/w(x) dϱ(x) = κ_w,
where in the last equality we used w_normalization and the fact that w = 1/ν.
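Numerically, 𝒦(𝒫) and κ are straightforward to approximate once an orthonormal basis can be evaluated. The sketch below reuses the Legendre routine from the earlier example (so it assumes the uniform probability measure on (-1,1)) and approximates the essential supremum by a maximum over a fine grid.

```python
import numpy as np

def christoffel_function(basis_vals):
    """K(P)(x) = sum_i |phi_i(x)|^2, given orthonormal basis values of shape
    (n, num_points)."""
    return np.sum(np.abs(basis_vals) ** 2, axis=0)

# Example: K for the span of the first n orthonormal Legendre polynomials on a
# fine grid; kappa is approximated by the grid maximum (attained at x = +-1).
n = 8
x = np.linspace(-1.0, 1.0, 2001)
K = christoffel_function(legendre_orthonormal(x, n))   # routine from the earlier sketch
print(K.max(), n ** 2)                                 # both are 64 (up to rounding)
```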
§.§ Bounding the discrete stability constants
The following result establishes a key relationship between the Christoffel function and the sample complexity of weighted least-squares approximation.
[Estimates for α_w and β_w in probability]
Let 0 < δ,ϵ < 1, ⊂ L^2_ϱ(D) be a finite-dimensional subspace with () = n and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m. Then
√(1-δ) < α_w ≤β_w < √(1+δ),
with probability at least 1-ϵ, provided
m ≥c_δ ·κ_w() ·log(2 n/ϵ), where c_δ = ((1+δ) log(1+δ) - δ))^-1
and w = 1/ν and κ_w are as in mu_weight_fn and kappa-w-def, respectively.
This result is well known. In view of alpha-beta-sigma, its proof relies on bounding the maximum and minimum eigenvalues of A^*A. This is achieved by using what have now become quite standard matrix concentration inequalities, such as the matrix Chernoff bound <cit.> (see also Theorem <ref>). See the appendix for a proof.
[One-sided estimates]
The conclusions of Lemma <ref> only rely on bounding the lower discrete stability constant α_w from below. This can be done with a slightly smaller sampling condition than m-bound-alpha-beta. It follows readily from the proof of Theorem <ref> that α_w > √(1-δ)
with probability at least 1-ϵ, whenever
m ≥c'_δ ·κ_w() ·log(n/ϵ), where c'_δ = ((1-δ) log(1-δ) + δ)^-1.
However, bounding β_w from above yields a bound on the condition number of A (see Lemma <ref>), which, as discussed in Remark <ref>, is important for numerical purposes.
§.§ Error bounds in probability
We next combine Lemma <ref> and Theorem <ref> to obtain error bounds for weighted least-squares approximation. We split these bounds into two types: error bounds in probability (this subsection) and error bounds in expectation (the next subsection). In these two subsections, we will strive for generality by tracking the dependence in these bounds on the parameter 0 < δ < 1 appearing in Theorem <ref>. However, it is generally informative to think of this as a fixed scalar, e.g., δ = 1/2.
[First uniform error bound error bound in probability]
Let 0 < δ,ϵ < 1, ⊂ L^2_ϱ(D) be a finite-dimensional subspace with () = n and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m, where
m ≥c_δ ·κ_w() ·log(4 n /ϵ), c_δ = ((1+δ) log(1+δ) - δ))^-1
and w= 1/ν and κ_w are as in mu_weight_fn and kappa-w-def, respectively. Then the following hold with probability at least 1-ϵ. For any e ∈^m and f ∈ L^2_ϱ(D) that is defined everywhere in D, the weighted least-squares approximation f̂ is unique and satisfies
f - f̂_L^2_ϱ(D) ≤( 1 + 1/√(1-δ) c_w ) inf_p ∈ f - p_L^∞_ϱ(D) + 1/√(1-δ) e_2,w,
where c_w = √(1/m∑^m_i=1 w(x_i)). Moreover, if {ϕ_i }^n_i=1 is an orthonormal basis of , then the condition number of the least-squares matrix ls-Ab satisfies cond(A) ≤√(1+δ/1-δ).
This result follows immediately from Lemma <ref> and Theorem <ref> via the estimate
f-p_𝖽𝗂𝗌𝖼,w ≤c_w f - p_L^∞_ϱ(D).
Now suppose that δ = 1/2 (for concreteness) and assume further that w(x) ≲ 1, a.e. x ∼ϱ. This will be the case in Section <ref> when we construct near-optimal sampling measures. Then ls-err-bd-prob-unif-1 reads
f - f̂_L^2_ϱ(D) ≲inf_p ∈ f - p_L^∞_ϱ(D) + e_2.
Hence the approximation is stable and (L^2_ϱ,L^∞_ϱ)-quasi-optimal.
In some problems, the difference between the L^2_ϱ- and L^∞_ϱ-norms may be of little consequence. For example, in the case of polynomial approximation of holomorphic functions in low dimensions, the best approximation error decays exponentially fast with respect to n in either norm (see, e.g., <cit.>). On the other hand, for high-dimensional holomorphic functions or functions of finite regularity in any dimension, the best approximation errors decay algebraically fast, with, typically, the L^∞_ϱ-norm error decaying at least √(n) slower than the L^2_ϱ-norm error (see, e.g., <cit.>). Thus, the crude bound ls-err-bd-prob-unif-1 may underestimate the convergence rate of the least-squares approximation. Motivated by these considerations, we next discuss how to establish L^2_ϱ-quasi-optimality results.
[Uniform versus nonuniform]
Corollary <ref> is a uniform result, in the sense that a single random draw of the sample points suffices for all functions. We next discuss a nonuniform results, in which the error bound holds with high probability for each fixed function. Uniform bounds are desirable in many applications, as it means that the same sample points (which may correspond to, e.g., sensor locations) can be re-used for approximating multiple functions. Theoretically, it also means that one can achieve worst-case error bounds. Indeed, let ⊂ L^2_ϱ(D) be a set of functions that are defined everywhere and for which
E_() = sup_f ∈ inf_p ∈ f - p_L^∞_ϱ(D) < ∞.
Typically, may be a unit ball of some Banach space – for example, the space of Sobolev functions H^k_ϱ(D). Then, ignoring noise for simplicity and assuming as before that δ = 1/2 and w(x) ≲ 1, a.e. x ∼ϱ, Corollary <ref> implies the following uniform bound with high probability:
sup_f ∈ f - f̂_L^2_ϱ(D) ≲E_().
As we discuss in <ref>, this has implications in the study of sampling numbers in information-based complexity and the efficacy of pointwise samples (standard information).
[The term c_w]
As an alternative to assuming that w(x) ≲ 1, a.e. x ∼ϱ, one may also bound the term c_w by assuming that contains a function h with h_L^2_ϱ(D) = 1 and h(x) ≳ 1, a.e. x ∼ϱ. This holds, for example, whenever the constant function 1 ∈. In this case,
c_w ≲h_𝖽𝗂𝗌𝖼,w ≤β_w h_L^2_ϱ(D) ≤√(1+δ).
However, as noted in Remark <ref>, we can always construct w so that the former assumption holds.
We now present a nonuniform `in probability' bound that provides L^2_ϱ-quasi-optimality, at the expense of a poor scaling with respect to the failure probability ϵ. The proof is based on Markov inequality, which, roughly speaking, is used to bound the discrete error term arising in LSerrbd.
[First nonuniform error bound in probability]
Let 0 < δ,ϵ < 1, f ∈ L^2_ϱ(D), ⊂ L^2_ϱ(D) be a finite-dimensional subspace with () = n and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m, where
m ≥c_δ ·κ_w() ·log(4 n /ϵ), c_δ = ((1+δ) log(1+δ) - δ))^-1,
and w = 1/ν and κ_w are as in mu_weight_fn and kappa-w-def, respectively. Then the following hold with probability at least 1-ϵ. For any e ∈^m, the weighted least-squares approximation f̂ is unique and satisfies
f - f̂_L^2_ϱ(D) ≤( 1 + √(2 κ_w()/m ϵ) 1/1-δ )inf_p ∈ f - p_L^2_ϱ(D) + 1/√(1-δ) e_2,w.
Moreover, if {ϕ_i }^n_i=1 is an orthonormal basis of , then cond(A) ≤√(1+δ/1-δ).
Suppose again that δ = 1/2 and w(x) ≲ 1, ∀ x. Then this bound implies that
f - f̂_L^2_ϱ(D) ≲( 1 + 1/√(ϵlog(4 n / ϵ) ) ) inf_p ∈ f - p_L^2_ϱ(D) + e_2.
While stable and L^2_ϱ-quasi-optimal, the scaling with respect to ϵ is undesirable. To obtain an ϵ-independent bound, this suggests we either need n to be exponentially large in 1/ϵ, or impose an additional constraint on m that m ≳κ_w(𝒫) / ϵ. Neither is a desirable outcome.
One possible way to circumvent this issue involves using Bernstein's inequality instead of Markov's inequality. This exploits the fact that the discrete term in LSerrbd is a sum of independent random variables with bounded variance. It therefore concentrates exponentially fast (in m) around its mean f - p^2_𝖽𝗂𝗌𝖼,w = f-p^2_L^2_ϱ(D) (recall exp-sum-scaling). This leads to the following result, which is also nonuniform.
[Second nonuniform error bound in probability]
Consider the setup of Corollary <ref> with m-cond-one replaced by
m ≥c_δ ·κ_w() ·log(4n /ϵ) and m ≥2 ·k ·log(4/ϵ)
for some k > 0. Then the following hold with probability at least 1-ϵ. For any e ∈^m, the weighted least-squares approximation f̂ is unique and satisfies
f - f̂_L^2_ϱ(D) ≤( 1 + √(2/1-δ) ) inf_p ∈ { f - p_L^2_ϱ(D) + √(w)(f-p)_L^∞_ϱ(D)/√(k) } + 1/√(1-δ) e_2,w.
Moreover, if {ϕ_i }^n_i=1 is an orthonormal basis of , then cond(A) ≤√(1+δ/1-δ).
This result asserts a mixed type of quasi-optimality, involving the L^2_ϱ-norm and a (weighted) L^∞-norm divided by the factor √(k). Notice that the factor √(w) can be removed whenever w(x) ≲ 1, a.e. x ∼ϱ, as will be the case later. Therefore, consider, as in Remark <ref>, a case where the L^2_ϱ-norm best approximation error decays algebraically fast in n = ().
As we noted therein, the L^∞_ϱ-norm best approximation error often decays √(n) slower than the former. Hence, one may choose k = n in m-conds-in-prob-2 to show that f̂ achieves the same algebraic convergence rate in L^2_ϱ-norm as the L^2_ϱ-norm best approximation in . This approach has also been used in the related context of function approximation via compressed sensing in <cit.> and <cit.>.
§.§ Error bounds in expectation
To obtain error bounds in expectation, we need to modify the least-squares estimator to avoid the `bad' regime where the discrete stability constants can be poorly behaved. In this section, we proceed as in <cit.>, which is based on <cit.>.
Let {ϕ_i }^n_i=1 be an orthonormal basis of . First, we notice that alpha-beta-delta holds whenever
G - I_2 ≤δ,
where G is the discrete Gram matrix
G = ( ϕ_jϕ_k_𝖽𝗂𝗌𝖼,w )^n_j,k=1 ∈^n ×n.
It is then possible to show the following bound.
Let 0 < δ < 1, f ∈ L^2_ϱ(D), ⊂ L^2_ϱ(D) be a finite-dimensional subspace with dim() = n and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m, where
m ≥c_δ ·κ_w() ·log(2 n /ϵ), c_δ = ((1+δ) log(1+δ) - δ))^-1,
and w = 1/ν and κ_w are as in mu_weight_fn and kappa-w-def, respectively.
Then
𝔼 ( ‖f - f̂‖^2_L^2_ϱ(D) χ_‖G-I‖_2 ≤δ ) ≤( 1 + 2/(1-δ)^2 κ_w(𝒫)/m ) inf_p ∈𝒫 ‖f - p‖^2_L^2_ϱ(D) + 2/1-δ ‖e‖^2_2,w,
where χ_E denotes the indicator function of an event E.
This lemma can be used to construct two estimators with error bounds in expectation. The first is the conditioned estimator <cit.>, which is defined as
f̂^𝖼𝖾 = f̂ χ_‖G - I‖_2 ≤δ.
Computing this estimator requires one to evaluate
‖G-I‖_2 = max{ | 1 - σ^2_max(A) | , | 1 - σ^2_min(A) | } = max{ | 1 - α^2_w | , | 1- β^2_w | }.
Having done this, one simply sets f̂^𝖼𝖾 = f̂ when ‖G - I‖_2 ≤δ and f̂^𝖼𝖾 = 0 otherwise.
The conditioned estimator has the disadvantage that it requires an orthonormal basis for 𝒫 to be known – a property that may not hold in practice (see <ref>). This can be avoided by using a truncated estimator. This approach assumes an a priori bound for f of the form
‖f‖_L^2_ϱ(D) ≤σ,
for some known σ≥ 0. Now define the truncation operator 𝒯_σ : L^2_ϱ(D) → L^2_ϱ(D) by
𝒯_σ(g)
:= min{1, σ/‖g‖_L^2_ϱ(D)} g
=
g, ‖g‖_L^2_ϱ(D)≤σ,
σ g / ‖g‖_L^2_ϱ(D), ‖g‖_L^2_ϱ(D) > σ,
∀ g ∈ L^2_ϱ(D).
Then this estimator is defined as
f̂^𝗍𝖾 = 𝒯_σ(f̂).
Note that one can also construct a truncated estimator with respect to the other norms. The L^∞_ϱ-norm has also been commonly used for this purpose <cit.>.
[Nonuniform error bound in expectation]
Let 0 < δ,ϵ < 1, f ∈ L^2_ϱ(D), ⊂ L^2_ϱ(D) be a finite-dimensional subspace with dim() = n and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m, where
m ≥c_δ ·κ_w() ·log(2n/ϵ), c_δ = ((1+δ) log(1+δ) - δ))^-1,
and w = 1/ν and κ_w are as in mu_weight_fn and kappa-w-def, respectively.
Then
𝔼 ‖f - f̂^𝖼𝖾‖^2_L^2_ϱ(D) ≤( 1 + 2/(1-δ)^2 κ_w(𝒫)/m ) inf_p ∈𝒫 ‖f - p‖^2_L^2_ϱ(D) + 2/1-δ ‖e‖^2_2,w + ‖f‖^2_L^2_ϱ(D) ϵ.
The same bound holds for f̂^𝗍𝖾, except with the final term replaced by 4 σ^2 ϵ.
Observe that the factor
2/(1-δ)^2 κ_w()/m ≤2/(1-δ)^2 c_δ 1/log(2n) ≤2/(1-δ)^2 c_δ 1/log(2n/ϵ) →0
as n →∞. Hence, this bound asserts L^2_ϱ-quasi-optimality of the two estimators (with constant approaching 1) up to the ϵ term.
[Removing the ϵ term]
As discussed in <cit.>, the additional ϵ term in the error bound may cause problems when striving to achieve rate-optimal bounds in the absence of noise. In particular, if the best approximation error inf_p ∈f-p_L^2_ϱ(D)≲ρ^-n decays exponentially fast (or faster) in n = () for some ρ > 1 then achieving the same rate for the least-squares approximation would require setting ϵ = ρ^-n. This adds an additional multiplicative factor of n in the sample complexity bound sampl-com-exp-err, thus prohibiting rate optimality (recall that κ_w() ≥ n). One way to remove this term, which was introduced in <cit.>, is to repeatedly redraw the sample points {x_1,…,x_m} until the condition G - I_2 ≤δ is met, and then use the resulting points to construct the weighted-least squares estimator f̂^⋆. By doing this, one can achieve a similar error bound in expectation, except without the term ϵ (the term ϵ now only influences the expected number of redraws needed to achieve G - I_2 ≤δ). See <cit.> for further information.
[The noise bound]
Due to Assumption <ref>, the noise term in Theorem <ref> satisfies, in general, the bound
𝔼 ‖e‖^2_2,w = 𝔼 [ 1/m ∑^m_i=1 w(x_i) | e_i|^2 ] = 1/m ∑^m_i=1 ∫_D w(x) dμ_i(x) | e_i|^2 ≤‖e‖^2_∞ ∫_D w(x) dμ(x) = ‖e‖^2_∞.
If μ_1 = ⋯ = μ_m = μ, then one also has 𝔼 ‖e‖^2_2,w = 1/m ‖e‖^2_2.
Moreover, as in Remark <ref>, this can also be achieved (up to a constant) in the general case if w(x) ≲ 1, a.e. x ∼ϱ.
§ CHRISTOFFEL SAMPLING
We now come to the crux of this article, which is devise random sampling schemes that achieve near-optimal sample complexity bounds.
§.§ Optimal choice of weight function via the Christoffel function
The results shown in the previous section relate the number of measurements m to the constant κ_w(𝒫). Hence, our goal is to choose a weight function w that minimizes this constant. Recall that
κ_w(𝒫) = ess sup_x ∼ϱ w(x) 𝒦(𝒫)(x).
A natural first choice involves selecting
w(x) ∝1/𝒦(𝒫)(x).
Applying the normalization condition w_normalization (recall that ν = 1/w) and the fact that ∫_D 𝒦(𝒫)(x) dϱ(x) = n (recall Kappa-def-alt), we obtain
w(x) = n/𝒦(𝒫)(x).
This choice is quite popular in the literature.
However, it requires the additional assumption that 𝒦(𝒫)(x) > 0 almost everywhere. This is a rather mild assumption, which is equivalent to requiring that for almost every x ∼ϱ there exists a p = p_x ∈𝒫 for which p_x(x) ≠ 0 (in particular, it holds under the assumption made in Remark <ref>). If this holds, then w-opt-orig is the optimal choice of w, since κ_w(𝒫) = n achieves the optimal lower bound (recall kappa-w-lb).
However, this choice may not be desirable for the reasons considered in Remark <ref>. If 𝒦(𝒫)(x) is small at x, then w(x) becomes large and potentially blows up the noise. Fortunately, this issue can be resolved and the above assumption removed, by slightly changing w(x) to
w(x) = ( 1/2 + 1/2 𝒦(𝒫)(x)/n )^-1.
This avoids the assumption on 𝒦(𝒫) and leads to a bounded weight function satisfying w(x) ≤ 2. The only cost is suboptimality by a factor of 2, i.e., κ_w(𝒫) ≤ 2n.
The reader will likely notice that one could replace the 1/2 in w-opt with a weighted combination θ + (1-θ) ()(x)/n for any 0 < θ < 1, giving κ_w() ≤ (1-θ)^-1 n and w(x) ≤θ^-1. For simplicity, we consider the factor 1/2 throughout.
This aside, having chosen w as in w-opt-orig (one could also consider w-opt), to achieve near-optimal sampling we wish to select measures μ_1,…,μ_m such that mu_weight_fn holds, i.e.,
1/m ∑^m_i=1 dμ_i(x) = 𝒦(𝒫)(x)/n dϱ(x)=∑^n_i=1 | ϕ_i(x) |^2 /n dϱ(x).
If the measures are chosen so that this holds, then the various sample complexity estimates of the previous section are near-optimal in n. Indeed, letting δ = 1/2, we see that the condition
m ≳n ·log(2n/ϵ)
suffices to ensure the various `in probability' or `in expectation' bounds given in the previous section.
§.§ Christoffel sampling
There are several ways to achieve mu_weight_fn_opts. Arguably the simplest involves setting
μ_1 = ⋯= μ_m = μ, where dμ(x) = 𝒦(𝒫)(x)/n dϱ(x) = ∑^n_i=1 | ϕ_i(x) |^2 /n dϱ(x).
However, this strategy is not well suited in the case of hierarchical approximation (Remark <ref>). Indeed, let μ^(k) be the optimal sampling measure for ^(k), as defined in mu-opt-1 with = ^(k). Now suppose that the first m_1 points x^(1)_i ∼_i.i.d.μ^(1), i = 1,…,m_1, where μ^(1) is as in mu-opt-1 for = ^(1). Then we would like to recycle these m_1 points { x^(1)_i }^m_1_i=1 when constructing the second set of sample points { x^(2)_i }^m_2_i=1. However, since μ^(1)≠μ^(2) in general, these m_1 points are drawn with respect to the wrong measure for near-optimal sampling in the subspace ^(2). Thus, it is not possible to achieve near-optimal sampling simply by augmenting the set { x^(1)_i }^m_1_i=1 with m_2 - m_1 new points.
One strategy to overcome this limitation involves interpreting μ^(k+1) as an additive mixture involving μ^(k) and a certain update measure σ^(k). One can then use this construct a sampling procedure that recycles `most' of the first m_1 points, while ensuring that the overall sample is drawn i.i.d. from μ^(k+1) <cit.>.
An alternative approach, introduced in <cit.>, involves choosing measures μ_i according to the individual basis functions. Let {ϕ_i }^n_i=1 be an orthonormal basis of and suppose that m = k n for some k ∈. Then we define
dμ_i(x) = | ϕ_j(x) |^2 dϱ(x), (j-1) k < i ≤j k, j = 1,…,n.
Observe that
1/m ∑^m_i=1 dμ_i(x) = k/m ∑^n_j=1 | ϕ_j(x) |^2 dϱ(x) = 𝒦(𝒫)(x)/n dϱ(x),
due to Kappa-def-alt. Hence mu_weight_fn_opts also holds for this choice, guaranteeing quasi-optimal sampling. Moreover, as each sampling measure corresponds to a single basis function, rather than an additive mixture of basis functions as in mu-opt-1, this approach readily lends itself to hierarchical approximation. See <cit.> for further details. Note that the distributions corresponding to these measures are known as induced distributions <cit.>, as they are induced by the orthonormal basis {ϕ_i }^n_i=1.
We will, henceforth, refer to either procedure – or, indeed, any selection of measures μ_i for which mu_weight_fn holds for w = 1/ν as in w-opt or w-opt-orig – as Christoffel sampling.
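To illustrate Christoffel sampling in the simplest setting, the following sketch draws from the additive mixture defined above by first selecting a basis index uniformly at random and then sampling the corresponding induced distribution by rejection against ϱ. It assumes ϱ is the uniform probability measure on (-1,1), that the basis functions are supplied as vectorized callables, and that a crude numerical bound on |ϕ_j|^2 is acceptable; all helper names are ours. The returned weights w(x_i) = n/𝒦(x_i) are exactly those used in the weighted least-squares sketch given earlier.

```python
import numpy as np

def sample_induced(phi, rng, bound):
    """Draw one sample from the induced distribution |phi(x)|^2 d rho(x), with rho
    the uniform probability measure on (-1,1), by rejection sampling against rho.
    `bound` must satisfy |phi(x)|^2 <= bound on (-1,1)."""
    while True:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, bound) < abs(phi(x)) ** 2:
            return x

def christoffel_sampling(basis, rng, m):
    """Draw x_1,...,x_m from the mixture (1/n) sum_j |phi_j|^2 d rho by picking a
    basis index uniformly at random and sampling its induced distribution.
    Returns the points and the matching weights w(x_i) = n / K(x_i)."""
    n = len(basis)
    grid = np.linspace(-1.0, 1.0, 4001)                  # crude numerical sup bound
    bounds = [np.max(np.abs(phi(grid)) ** 2) * 1.05 for phi in basis]
    idx = rng.integers(0, n, size=m)
    x = np.array([sample_induced(basis[j], rng, bounds[j]) for j in idx])
    K = np.sum([np.abs(phi(x)) ** 2 for phi in basis], axis=0)
    return x, n / K

# Example with the first 10 orthonormal Legendre polynomials.
rng = np.random.default_rng(1)
basis = [lambda t, i=i: np.sqrt(2 * i + 1) * np.polynomial.legendre.Legendre.basis(i)(t)
         for i in range(10)]
x, w = christoffel_sampling(basis, rng, m=200)
```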
§.§ Uniform error bounds in probability
To conclude this section, we now describe how a further modification of the near-optimal sampling measure can lead to uniform bounds in probability that improve on the somewhat crude bounds shown in Corollary <ref> and achieve something close to L^2_ϱ-quasi-optimality. This section is based on techniques developed in <cit.> to estimate sampling numbers. See also <ref> for additional discussion.
For this, we assume that there is an orthonormal basis {ϕ_i }^∞_i=1⊂ L^2_ϱ(D) and that
= _n = { ϕ_1,…,ϕ_n}.
For convenience, given f ∈ L^2_ϱ(D) let
e_n(f) = inf_p ∈_n f - p_L^2_ϱ = √(∑^∞_i > n | c_i |^2),
where c_i = fϕ_i_L^2_ϱ(D) is the ith coefficient of f. The second equality is due to Parseval's identity.
We now construct the measure. Define sets
I_l = { n 2^l + 1,…, n 2^l+1 }, l = 0,1,2,…
and consider a sequence (v_l)^∞_l=0 with ∑^∞_l=0 v^2_l = 1. Then let
dμ(x) = ( 1/2 + 1/4 ∑^n_i=1 | ϕ_i (x) |^2/n + 1/4 ∑^∞_l=0 v^2_l/|I_l| ∑_i ∈I_l | ϕ_i(x) |^2 ) dϱ(x).
Let 0 < δ,ϵ < 1, n ∈, {ϕ_i }^∞_i=1⊂ L^2_ϱ(D) be an orthonormal basis and = _n be as in Pn-def-unif. Let 0 < p < 2 and v_l = c_θ 2^-θ l for 0 < θ < 1/p-1/2, where c_θ is such that ∑^∞_l=0 v^2_l = 1. Consider sample points x_i ∼_i.i.d.μ, i=1,…,m, where μ is as in opt-meas-krieg and
m ≥4 ·c_δ ·n ·log(4n/ϵ), c_δ = ((1+δ) log(1+δ)-δ))^-1.
Then the following holds with probability at least 1-ϵ. For any f that is defined everywhere in D and for which (e_n(f) )^∞_n=1∈ℓ^p() and any noise e ∈^m, the weighted least-squares approximation f̂ is unique and satisfies
f - f̂_L^2_ϱ(D) ≤c_p,θ/√(1-δ) ( 1/n ∑_k ≥n (e_k(f))^p )^1/p + √(2/1-δ) e_2,
where c_p,θ > 0 is a constant depending on p and θ only. Moreover, the condition number of the least-squares matrix ls-Ab satisfies cond(A) ≤√(1+δ/1-δ).
To put this result into context, consider a class of functions that are defined everywhere on D and for which
sup_f ∈ e_n(f) ≍n^-α log^β(n+1)
for some α > 1/2 and β∈. This holds, for instance, in the case of polynomial approximation when is a unit ball of functions of finite regularity. Then the right-hand side of f-hatf-bound satisfies
( 1/n ∑_k ≥n (e_k(f))^p )^1/p ≤c_α,β,p n^-α log^β(n+1).
Hence, with probability at least 1-ϵ, one obtains a matching error bound for the least-squares estimator (up to constants), uniformly for functions f ∈, with near-optimal sample complexity.
§ IMPROVEMENT OVER MONTE CARLO SAMPLING
Having derived near-optimal sampling strategies, in this section we discuss how this compares against standard Monte Carlo sampling, i.e., i.i.d. random sampling from the measure ϱ. Recall that in this case, the weight function w is precisely w ≡ 1, meaning that Monte Carlo sampling corresponds to an unweighted least-squares approximation. Theorem <ref> asserts that the sample complexity of Monte Carlo sampling therefore depends on the unweighted quantity
κ(𝒫) = κ_1(𝒫) = ‖𝒦(𝒫)‖_L^∞_ϱ(D).
In other words, the maximal behaviour of the Christoffel function 𝒦(𝒫), or equivalently, the optimal constant in the unweighted Nikolskii-type inequality nikolskii.
§.§ Bounded orthonormal systems
Recall that κ_w(𝒫) ≥ n for any w, and therefore, κ(𝒫) ≥ n. On the other hand, if κ(𝒫) ≤ c n for some c ≥ 1, then Monte Carlo sampling is already a near-optimal strategy, and there may be little need to optimize the sampling measure (besides reducing the constant c).
This situation occurs whenever 𝒫 has an orthonormal basis {ϕ_i }^n_i=1 that is uniformly bounded. Such a basis is sometimes referred to as a bounded orthonormal system (see, e.g., <cit.> or <cit.>). Specifically, if
‖ϕ_i‖^2_L^∞_ϱ(D) ≤c, ∀i = 1,…,n,
then it follows immediately from Kappa-def-alt that κ(𝒫) ≤ c n. Subspaces of trigonometric polynomials are a standard example for which this property holds (with c = 1). Closely related are the Chebyshev polynomials of the first kind (see <ref>). The orthonormal Chebyshev polynomials on (-1,1) are defined by
ψ_0(x) = 1, ψ_i(x) = √(2) cos(i arccos(x)), i ∈.
Therefore they satisfy BOS-c with c = 2. In d dimensions, the tensor-product Chebyshev polynomial basis on (-1,1)^d satisfies BOS-c with c = 2^d. Hence it is also a bounded orthonormal system, albeit with a constant that grows exponentially fast as d →∞.
§.§ Arbitrarily-bad sample complexity bounds
Unfortunately, bounded orthonormal bases are quite rare in practice. It is also straightforward to construct subspaces for which κ(𝒫) can be arbitrarily large in comparison to n. One way to do this involves the Legendre polynomials. Unlike the Chebyshev polynomials, the univariate Legendre polynomials grow with their degree, and satisfy (see, e.g., <cit.>)
‖ψ_i‖_L^∞(-1,1) = | ψ_i(±1) | = √(2i+1).
Therefore, for any set S ⊂ℕ_0, the space 𝒫_S = span{ψ_i : i ∈ S } has constant κ(𝒫_S) given by κ(𝒫_S) = ∑_i ∈ S (2i+1) (this follows from Kappa-def-alt and MC-kappa). The following result is now immediate.
There exists a probability space (D,,ϱ) such that the following holds. For every n ∈ and C > 0, there exists a subspace ⊂ L^2_ϱ(D) of dimension n such that κ() ≥ C.
The reason for this poor behaviour is that Legendre polynomials are highly peaked near the endpoints x = ± 1. Specifically, the ith such polynomial has absolute value √(i+1/2) at x = ± 1, but is O(1) uniformly within compact subsets of (-1,1). This points towards a general observation. We expect poor scaling of κ(), and therefore poor performance of Monte Carlo sampling, whenever has an orthonormal basis that is highly localized around a common shared point.
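This endpoint growth is easy to visualise numerically. A minimal sketch (ours; normalization as in the text, i.e. ψ_i = √(i+1/2) P_i):

import numpy as np
from numpy.polynomial import legendre as leg

def orthonormal_legendre(x, n):
    # psi_i = sqrt(i + 1/2) * P_i, normalized so that int_{-1}^{1} psi_i psi_j dx = delta_ij
    return np.array([np.sqrt(i + 0.5) * leg.legval(x, np.eye(i + 1)[i]) for i in range(n)])

x = np.linspace(-1, 1, 2001)
for n in (5, 20, 80):
    K = (orthonormal_legendre(x, n) ** 2).sum(axis=0)   # Christoffel function of span{psi_0,...,psi_{n-1}}
    # kappa = sup K = sum_{i < n} (i + 1/2) = n^2 / 2, attained at x = +-1
    print(n, K.max(), n**2 / 2)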
§.§ Sample complexity bounds for polynomial spaces
Following <ref>, we now describe a series of further bounds for κ() in the case of multivariate polynomial spaces = _S, focusing on the case where S is a lower set.
Chebyshev polynomials. As noted previously, any subspace
_S = { Ψ_ν : ν∈S }, S ⊂^d_0, |S| = n,
of multivariate Chebyshev polynomials satisfies the exponentially-large (in d) bound κ(_S) ≤ 2^d n. Fortunately, if S is a lower set, then one has the d-independent bound (see, e.g., <cit.>)
κ(_S) = κ(_S) ≤n^log(3)/log(2).
This bound is sharp up to a constant, i.e., there is a lower set with κ(_S) ≳ n^log(3)/log(2). For d ≥log_2(n/2), this bound is sharper than the previous bound κ(_S) ≤ 2^d n.
Improvements such as this are typical when lower set structure is imposed – we will see another instance of this in a moment. It is one of the features that makes lower sets desirable for high-dimensional approximation. The underlying reason for it is because a lower set cannot contain too many `large' (or even nonzero) indices.
Legendre polynomials. Because of Kappa-def-alt, MC-kappa and leg-poly-unif-bd, for any subspace _S = {Ψ_ν : ν∈ S } of Legendre polynomials, one has
κ(_S) = ∑_ν∈S ∏^d_k=1 (ν_k+1),
which can be arbitrarily large in comparison to n = |S|. However, lower set structure leads to a dramatic improvement over the general case. If S is lower, then the following sharp upper bound holds (see, e.g., <cit.>)
κ(_S) ≤n^2.
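The quadratic bound is easy to check numerically from the explicit expression for κ(_S) above. A small sketch (ours), using total-degree index sets as representative lower sets:

import itertools
import numpy as np

def kappa_legendre(S):
    # kappa(P_S) = sum_{nu in S} prod_k (nu_k + 1) for the tensor Legendre basis
    return sum(int(np.prod([v + 1 for v in nu])) for nu in S)

def total_degree_set(d, deg):
    # lower set S = {nu in N_0^d : nu_1 + ... + nu_d <= deg}
    return [nu for nu in itertools.product(range(deg + 1), repeat=d) if sum(nu) <= deg]

for d, deg in [(2, 5), (3, 4), (4, 3)]:
    S = total_degree_set(d, deg)
    n = len(S)
    print(f"d={d}, deg={deg}: n={n}, kappa={kappa_legendre(S)}, n^2={n**2}")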
Ultraspherical and Jacobi polynomials. The situation for Jacobi polynomials with max{α,β} > -1/2 is similar to that of Legendre polynomials. The quantity κ(_S) can be made arbitrarily large for general S. However, if S is lower then one has the finite bound
κ(_S) ≤n^2max{ α, β} + 2 , when α,β∈_0
for Jacobi polynomials (see <cit.>) and
κ(_S) ≤n^2 α+ 2 , when 2α+1 ∈,
for ultraspherical polynomials (see <cit.>).
[Sharpness of the rates]
The bounds kappa-legendre–kappa-ultraspherical imply that a superlinear sample complexity suffices for stable and accurate polynomial approximation with random samples drawn from Jacobi measures. These rates are also necessary. This was recently shown in <cit.> in the d = 1 case, based on earlier work <cit.> on deterministic points that are equidistributed with respect to such measures. Specifically, <cit.> shows that choosing m ≍ n^τ, where τ < 2(max{α,β}+1), necessarily implies exponential instability of the least-squares approximation (or, indeed, any other approximation method that achieves similar accuracy).
Bounded, non-tensor product domains. Several of these bounds generalize to bounded non-tensor product domains <cit.>. If ϱ is the uniform measure and D ⊂^d is bounded and satisfies the inner cone condition (see, e.g., <cit.>), then
κ(_S) ≤c_D ·n^2 if S = S^𝖳𝖣_p and n = | S^𝖳𝖣_p|.
See <cit.>.
Thus, the same quadratic growth occurs for the total degree index set. Further, if D has C^2 boundary, then <cit.>
κ(_S) ≤c_D ·n^(d+1)/d if S = S^𝖳𝖣_p and n = | S^𝖳𝖣_p|.
See <cit.> and references therein for further results in this direction. These bounds apply only to the total degree index set TP-TD. For arbitrary lower sets, one has (see <cit.>)
κ(_S) ≤n^2 /λ if S is lower and n = |S|,
whenever D has the λ-rectangle property: there is a 0 < λ < 1 such that for any x ∈ D there exists an axis-aligned rectangle R ⊆ D containing x for which |R| ≥λ |D|.
Hermite and Laguerre polynomials. Unfortunately, this analysis of Monte Carlo sampling says nothing about Hermite and Laguerre polynomial approximations, for the simple reason that such polynomials are not uniformly bounded, and therefore the constant MC-kappa satisfies κ() = +∞. The sample complexity of Hermite and Laguerre polynomial approximation is poorly understood in comparison to that of Jacobi polynomials. A number of empirical studies have suggested a super-polynomial or exponential sample complexity (in n, for fixed d). But relatively few theoretical estimates exist. See <cit.> and <cit.>. Suffice it to say, though, Hermite and Laguerre polynomial approximations are examples where one stands to gain substantially from Christoffel sampling, and as such, they have often been used as examples to illustrate its efficacy <cit.>.
§.§ Summary
We summarize this section as follows. In general, the sample complexity of Monte Carlo sampling depends on the L^∞-norm of the Christoffel function, and it is easy to construct examples where this is arbitrarily large in comparison to n = dim(). Moreover, these scenarios arise in various polynomial approximation problems, especially when considering unbounded domains.
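The practical consequence of this summary can be seen in a few lines of code. The sketch below (ours; a fine grid is used as a stand-in for exact sampling from the optimal measure) compares the conditioning of the unweighted Legendre least-squares matrix under Monte Carlo sampling with that of the weighted matrix under Christoffel sampling, for m only mildly larger than n:

import numpy as np
from numpy.polynomial import legendre as leg
rng = np.random.default_rng(0)

def legendre_basis(x, n):
    # Orthonormal w.r.t. the uniform probability measure on (-1,1): psi_i = sqrt(2i+1) P_i
    return np.stack([np.sqrt(2 * i + 1) * leg.legval(x, np.eye(i + 1)[i]) for i in range(n)], axis=1)

n, m = 30, 90
grid = np.linspace(-1, 1, 20001)                        # fine grid standing in for rho
K_grid = (legendre_basis(grid, n) ** 2).sum(axis=1)     # Christoffel function on the grid

def cond_of_ls_matrix(x, w):
    A = np.sqrt(w[:, None] / len(x)) * legendre_basis(x, n)
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

x_mc = rng.uniform(-1, 1, m)                            # Monte Carlo: x ~ rho, w = 1
idx = rng.choice(len(grid), size=m, p=K_grid / K_grid.sum())
x_cs, w_cs = grid[idx], n / K_grid[idx]                 # Christoffel sampling: x ~ K drho / n, w = n / K

print("cond, Monte Carlo :", cond_of_ls_matrix(x_mc, np.ones(m)))
print("cond, Christoffel :", cond_of_ls_matrix(x_cs, w_cs))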
§ VARIATIONS, EXTENSIONS AND FURTHER TOPICS
To conclude this first part of the article, we now discuss a number of issues not considered so far, along with some variations and extensions.
§.§.§ Sampling from the near-optimal measures
A key practical issue is drawing samples from the measure mu-opt-1 or mu-opt-2 in a computationally efficient manner. As observed in <cit.>, mu-opt-1 is an additive mixture of measures of the form mu-opt-2. Hence it is enough to be able to draw samples from the induced distributions | ϕ_i(x) |^2 ϱ(x), where {ϕ_i }^n_i=1 is any orthonormal basis for .
A prerequisite for this is, of course, the existence of an explicit orthonormal basis. This is often the case – for example, the problem considered in <ref>, which involves tensor products of orthogonal polynomials, for which fast algorithms for sampling from the induced distributions exist <cit.> – but not always. One such case is polynomial approximation on nontensorial domains. Orthogonal polynomials can be defined explicitly for certain non-tensorial domains, e.g., simplices, balls and spheres <cit.>. Yet, this is impossible in general.
A simple way to address this problem is to replace D by a fine grid Z = { z_i }^K_i=1 of points drawn i.i.d. from ϱ and replace the continuous measure ϱ by the discrete uniform measure ϱ̅ = K^-1∑^K_i=1δ_z_i supported on Z <cit.>. Given a nonorthogonal basis {ψ_i }^n_i=1 for , now considered a subspace of L^2_ϱ̅(D), one may construct an orthonormal basis via numerical linear algebra tools such as classical QR factorizations or SVDs, or through more recent approaches such as V+A (Vandermonde with Arnoldi) <cit.>. Sampling from the Christoffel sampling measure is then straightforward, since it is also a discrete measure supported on Z.
This approach, which is a form of leverage score sampling (see next), ensures accurate and stable recovery in the L^2_ϱ̅(D)-norm. To guarantee the same in the original L^2_ϱ(D)-norm one needs to ensure that K is large enough. Since Z is obtained by Monte Carlo sampling, this is governed, as in <ref> and specifically MC-kappa, by the constant MC-kappa. This is another reason why analyzing the maximal behaviour of the Christoffel function is important, since it can inform the size of the grid needed in such a construction. See <cit.> for further details.
Note that the computation of an orthonormal basis over Z is a purely offline cost. It does not involve additional evaluations of the target function f, which are often the main computational bottleneck in practice. Of course, theoretical bounds for the Christoffel function may not be available or, if available, may result in a value of K that is too large for practical computations. This has spurred several recent works <cit.> which develop more refined strategies for constructing the grid Z than Monte Carlo sampling.
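A minimal sketch of this discrete construction (ours; the domain, dictionary and sizes are placeholders) might look as follows: draw the grid Z from ϱ, orthonormalize a nonorthogonal dictionary over Z via a QR factorization, and then sample from the resulting discrete Christoffel measure.

import numpy as np
rng = np.random.default_rng(1)

# Grid Z of Kgrid points drawn i.i.d. from rho (here: the uniform measure on the unit disc,
# a simple stand-in for a nontensorial domain), and a nonorthogonal basis (monomials).
Kgrid, m = 50_000, 60
t, r = 2 * np.pi * rng.random(Kgrid), np.sqrt(rng.random(Kgrid))
Z = np.column_stack([r * np.cos(t), r * np.sin(t)])

def monomials(X, total_degree=4):
    return np.column_stack([X[:, 0]**a * X[:, 1]**b
                            for a in range(total_degree + 1)
                            for b in range(total_degree + 1 - a)])

Psi = monomials(Z)                              # Kgrid x n Vandermonde-type matrix
n = Psi.shape[1]
Q, _ = np.linalg.qr(Psi / np.sqrt(Kgrid))       # columns of sqrt(Kgrid)*Q are orthonormal in L^2(rho_bar)
Phi = np.sqrt(Kgrid) * Q

K_disc = (Phi**2).sum(axis=1)                   # discrete Christoffel function on Z
idx = rng.choice(Kgrid, size=m, p=K_disc / K_disc.sum())
x_pts, weights = Z[idx], n / K_disc[idx]        # sample points and least-squares weights w = n / K
# x_pts and weights now feed a weighted least-squares fit as in wls-prob.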
An alternative to this approach involves using a structured (e.g. Quasi-Monte Carlo) grid for Z <cit.>. This has the additional benefit of ensuring that the algebraic least-squares problem can be solved efficiently via Fast Fourier Transforms (FFT).
§.§.§ Connections to matrix sketching and leverage score sampling
Let X ∈^N × n, where N ≥ n, and x ∈^N. In applications in data science, it may be infeasible to solve the `full' least squares problem w ∈ argmin_z ∈^n X z - x_2. Matrix sketching involves using a sketching matrix S ∈^m × N (a matrix with one nonzero per row) and solving the sketched problem ŵ∈ argmin_z ∈^n S X z - S x_2. The objective is to find S with a minimal number of rows m such that
X ŵ - x_2 ≲X w - x_2.
In random sketching, one considers a discrete probability distribution π = {π_1,…,π_N } on {1,…,N} with π_i > 0, ∀ i, draws j_1,…,j_m ∼_i.i.d.π and then sets S_i,j_i = 1/√(π_j_i) and S_ij = 0 otherwise. A near-optimal choice of distribution π is provided by the leverage scores ℓ_i(X), i = 1,…,N, of the matrix X. These are given by
ℓ_i(X) = max_z ∈^n, X z ≠0 |(X z)_i|^2/X z^2_2 ≡Q(i,:)^2_2,
where Q ∈^N × n is any matrix whose columns form an orthonormal basis for Ran(X).
The resulting procedure is the well-known technique of leverage score sampling <cit.>.
Leverage score sampling can be considered as a special case of Christoffel sampling involving the discrete set D = {1,…,N}. Note that any vector x = (x_i)^N_i=1∈^N can be viewed as a function f ∈ L^2_ϱ(D) via the relation f(i) = x_i and vice versa. One now defines the subspace = { X z : z ∈^n } to cast the above problem into the form introduced in <ref>. In particular, the values of the Christoffel function () over the set D are precisely the leverage scores leverage-scores of the matrix X. We refer to <cit.> and <cit.> for details.
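In matrix terms, the procedure is only a few lines long. The following sketch (ours; the test matrix is synthetic) computes leverage scores from a reduced QR factorization and solves the sketched problem:

import numpy as np
rng = np.random.default_rng(2)

N, n, m = 20000, 20, 200
X = rng.standard_normal((N, n)) * np.logspace(0, -3, N)[:, None]   # tall matrix with nonuniform row norms
x = X @ rng.standard_normal(n) + 0.01 * rng.standard_normal(N)     # noisy right-hand side

# Leverage scores l_i(X) = ||Q(i,:)||_2^2 via a reduced QR factorization
Q, _ = np.linalg.qr(X)
lev = (Q**2).sum(axis=1)                  # sums to n
pi = lev / lev.sum()

# Random sketching: sample m rows i.i.d. from pi, rescale rows by 1/sqrt(m * pi_j)
rows = rng.choice(N, size=m, p=pi)
scale = 1.0 / np.sqrt(m * pi[rows])
w_hat, *_ = np.linalg.lstsq(scale[:, None] * X[rows], scale * x[rows], rcond=None)
w_full, *_ = np.linalg.lstsq(X, x, rcond=None)

print("relative residual (sketched):", np.linalg.norm(X @ w_hat - x) / np.linalg.norm(x))
print("relative residual (full)    :", np.linalg.norm(X @ w_full - x) / np.linalg.norm(x))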
§.§.§ O(n) sampling strategies, frame subsampling and Kadison–Singer
The Christoffel sampling schemes described in this paper are near-optimal, in the sense that the sample complexity is log-linear in n. Due to the coupon collector's problem, this is the best achievable when using i.i.d. samples (see, e.g., <cit.>).
This limitation has spurred a recent line of work on methods that have linear sample complexity in n, i.e., optimal up to possible constants, and even schemes that can achieve interpolation, i.e., m = n. This is based on techniques introduced in <cit.> on frame subsampling. Here, given a frame of m ≥ n vectors in ^n, one asks whether it is possible to extract a subset of size n that, after potential reweighting, still constitutes a frame. This problem is closely related to Weaver's KS_2 conjecture (see <cit.> or <cit.>). Weaver's conjecture (now theorem) is equivalent to the Kadison–Singer problem, and forms the basis of the proof in <cit.> of the latter.
The connection to sampling for least-squares approximation comes from the Marcinkiewicz–Zygmund inequality MZ-inequality, which can be recast as a frame condition for the vectors x^(1),…,x^(m) defined by x^(i) = ( √(w(x_i))ϕ_j(x_i) )^n_j=1, i = 1,…,m.
The work of <cit.> (as well as extensions due to <cit.>) has been used to show the existence of sampling points with O(n) sample complexity for an arbitrary subspace . See <cit.> and references therein. Unfortunately, these works are impractical, as the computational cost in constructing the subsample is (at least) exponential in n.
Fortunately, recent progress has been made by using the approach of <cit.>, as well as techniques from <cit.>, leading to practical algorithms that run in polynomial time. See <cit.> and <cit.> for two such approaches, as well as <cit.> for related work in the discrete setting. A significant result of <cit.> is that it provides polynomial-time algorithms that also work down to the interpolation regime m = n = dim(), albeit with constants in the error bounds that grow algebraically with n.
§.§.§ Sampling numbers and information-based complexity
Another major area of focus in the last several years has been the use of (weighted) least-squares approximation, Christoffel sampling and subsampling techniques to provide new results in information-based complexity <cit.>. In this line of work, one considers a compact subset of a Banach space, then studies objects such as the (linear) sampling numbers for . These measure how well (in a worst-case sense over ) one can approximate functions in from m samples using arbitrary (linear) reconstruction maps. New results have emerged that bound the sampling numbers (for arbitrary ) in terms of its Kolmogorov n-width, i.e., the best approximation error (uniformly over ) that can be achieved by any n-dimensional subspace. These results show that pointwise samples (known as standard information) can constitute near-optimal information for recovery. Some of the core ideas of this work can be found in Theorem <ref>, including the construction of the measure opt-meas-krieg which is due to <cit.>.
Note that sampling numbers are often formulated with respect to the L^2-norm (as in this article), but recent works also consider other L^p-norms – in particular, the uniform norm. For a selection of the many recent results in this direction, see <cit.> and references therein.
§.§.§ Sampling discretizations
In tandem with these efforts, there has also been a focus on the development and systematic study of sampling discretizations using these ideas, both in the L^2-norm such as in MZ-inequality and in other L^p-norms. We refer to <cit.> for reviews, as well as <cit.>, and references therein. Note that L^∞-norm sampling discretizations are related to the construction of weakly admissible meshes. See <cit.> for recent work on weakly admissible meshes that employs similar techniques.
As we have seen, sampling discretizations are sufficient conditions for accurate and stable recovery via (weighted) least squares. However, they are also necessary conditions for stable recovery by any method. Modifying <cit.>, let : ^m → L^2_ϱ(D) be an arbitrary reconstruction map and suppose that is δ-accurate over , i.e.,
p - ({ p(x_i) }^m_i=1)_L^2_ϱ(D) ≤δp_L^2_ϱ(D), ∀p ∈.
Note that this holds with δ = 0 for least squares whenever α_w > 0, due to Lemma <ref>. Now let be any set of functions that are defined at the { x_i }^m_i=1 and suppose that ⊆. Then it is a short argument to show that the ϵ-Lipschitz constant
L_ϵ(; ) = sup_f ∈ sup_0 < e_2,w ≤ϵ ({f(x_i)}^m_i=1 + e) - ({ f(x_i) }^m_i=1) _L^2_ϱ(D)
of the map restricted to the domain { (f(x_i))^m_i=1 : f ∈}⊆^m satisfies
L_ϵ(; ) ≥(1-δ)/√(α_w),
where α_w is the lower constant in the sampling discretization MZ-inequality. It follows that a reconstruction map cannot be both accurate (even over , as in R-accurate) and stable without a sampling discretization.
§.§.§ Alternative sampling methods
Many other sampling methods have been proposed over the last decade, especially in the context of high-dimensional polynomial approximation. However, these are generally lacking near-optimal sample complexity guarantees. See <cit.> and <cit.> and references therein for an overview of the many different approaches considered.
A limitation of Christoffel sampling is that i.i.d. points may cluster, thereby reducing the practical efficiency of the scheme. Most recently, a number of works have explored ideas such as volume sampling <cit.> using determinantal point processes to overcome this limitation. These are well-known concepts in machine learning <cit.>, in which non-independent samples are drawn from a measure that promotes repulsion between the points. This transpires to be closely related to Christoffel sampling, since the marginals of the sample points follow the same distribution. The application of volume sampling to least-squares approximation in arbitrary subspaces has been considered in <cit.> for reproducing kernel Hilbert spaces and <cit.> for general spaces, along with its theoretical analysis and comparison with Christoffel sampling. Despite practical benefits, however, it is as yet unclear whether these methods offer theoretical advantages over Christoffel sampling <cit.>.
§.§.§ Adaptive methods
Finally, we briefly mention the prospect of adaptive methods. While these methods typically lack full theoretical guarantees, they can prove extremely effective in practice. In a variation of Remark <ref>, in an adaptive scheme one also chooses each subspace ^(k+1) adaptively based on the previous approximation f̂^(k). In this case, we term this procedure an adaptive approximation scheme. This can be done using greedy methods <cit.>, as in <cit.> (which are themselves based on adaptive quadrature routines <cit.>), when given a dictionary of candidate basis functions to use to build the spaces ^(k). This aside, adaptive methods can also be used when constructing approximations in complex, nonlinear approximation spaces. See <ref> for further discussion.
§ BEYOND LINEAR SPACES AND POINTWISE SAMPLES
Up to now, we have considered approximating an unknown function f : D → from a collection of m pointwise samples in an n-dimensional approximation space ⊂ L^2_ϱ(D). In this final section, we introduce a general framework that significantly extends this setup. This section is primarily based on the framework introduced in <cit.>, which was then further extended in <cit.>. Unlike in previous sections, our presentation will now be less thorough: we aim to convey the main ideas without the full details or variations. See <cit.> for in-depth treatments, and <cit.> for related work.
There are four main extensions we now address. First, the target object f need not be a scalar-valued function, but simply an element of a Hilbert space . Second, the measurements arise as evaluations of arbitrary linear operators, which may be scalar-, vector- or function space-valued. Third, there may be C ≥ 1 different distinct processes generating the measurements. And finally, the approximation space need not be finite-dimensional subspace of . Examples that motivate these generalizations are discussed in <ref>.
§.§ The general framework
Let (Ω,,) be a probability space, be a separable Hilbert space and consider a normed vector subspace _0 ⊆, termed the object space. Let C ≥ 1 be the number of measurement processes. For each c = 1,…,C, let (D_c,_c,ϱ_c) be a probability space, which we term the measurement domain, _c be a Hilbert space, which we term the measurement space, and
L_c : D_c →(_0,_c),
be a mapping from D_c to the space (_0,_c) of bounded linear operators _0 →_c, which we term the sampling operator.
For each c, let μ_c be such that (D_c,_c,μ_c) is a probability space. We assume that μ_c is absolutely continuous with respect to ϱ_c and its Radon–Nikodym derivative ν_c : D_c → is strictly positive almost everywhere on supp(ϱ_c). By definition
∫_D_c ν_c(θ) ϱ_c(θ) = 1.
Note that, in what follows, we use θ or θ_c to denote the variable in D_c, rather than x. For convenience, we also define the weight function w_c : D_c → by w_c(θ) = 1/ν_c(θ), θ∈ D_c.
Let m_1,…,m_C ∈, where m_c is the number of measurements in the cth measurement process.
With this setup in hand, we now draw samples independently with θ_ic∼μ_c, i = 1,…,m_c, c = 1,…,C and consider the measurements
y_ic = L_c(θ_ic)(f) + e_ic ∈_c, i =1,…,m_c, c = 1,…,C,
where e_ic∈_c is a noise term.
Finally, we let ⊂ be the approximation space (which now need not be a linear space) and we consider the empirical least-squares fit
f̂ ∈ argmin_p ∈ ∑^C_c=1 1/m_c ∑^m_c_i=1 w_c(θ_ic) y_ic - L_c(θ_ic)(p) ^2__c.
Note that computing solutions to general-least-squares may be very challenging numerically when is a nonlinear set. However, this is highly dependent on the choice of , and therefore not the focus of this article.
§.§ Examples
As shown in <cit.>, this framework includes many problems of practical relevance. We now summarize several such examples. We start by showing that it generalizes the setup of <ref>.
(i) Scalar-valued function approximation from pointwise samples.
The setup of <ref> can be considered in the general framework as follows. Let C = 1, = L^2_ϱ(D), _0 = C(D), D_c = D, ϱ_1 = ϱ and _1 = (with the Euclidean inner product). We then define the sampling operator L_1 = L as the pointwise sampling operator L(x)(f) = f(x) for x ∈ D and f ∈_0. Note that the measurements general-meas and least-squares fit general-least-squares reduce to f_meas and wls-prob, respectively.
(ii) Function approximation from gradient-augmented samples.
A simple modification of (i) involves sampling both the function and its gradient. This arises in various applications, including parametric PDEs and UQ <cit.>, seismology, Physics-Informed Neural Networks (PINNs) for PDEs <cit.> and deep learning <cit.>. This problem can be cast into the general framework. Suppose that D ⊆^d. We then let C = 1, = H^1_ϱ(D) be the Sobolev space of order one, _0 = C(D), _1 = ^d+1 and L_1 = L be defined by L(θ)(f) = (f(θ) , ∇ f(θ)^⊤ )^⊤.
The main difference with (i) is that the samples are vector-valued. Further generalizations are also possible. For example, in some cases it may be too expensive to evaluate the gradient at every point. Let m_1 be the number of function samples and m_2 be the number of function-plus-gradient samples. As shown in <cit.>, we can consider this as a multimodal sampling problem with C = 2, _1 =, _2 = ^d+1 and sampling operators L_1(θ)(f) = f(θ) and L_2(θ)(f) = (f(θ) , ∇ f(θ)^⊤ )^⊤.
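To make the stacked structure of these measurements concrete, here is a small univariate sketch (ours; Chebyshev dictionary and unweighted Monte Carlo samples, purely for simplicity) in which each sample point contributes the vector-valued measurement (f(x_i), f'(x_i)):

import numpy as np
from numpy.polynomial import chebyshev as cheb
rng = np.random.default_rng(3)

n, m = 12, 60
f  = lambda x: np.exp(x) * np.sin(3 * x)
df = lambda x: np.exp(x) * (np.sin(3 * x) + 3 * np.cos(3 * x))

x = rng.uniform(-1, 1, m)                                   # Monte Carlo samples; w = 1 for simplicity

V  = np.stack([cheb.chebval(x, np.eye(n)[j]) for j in range(n)], axis=1)                 # T_j(x_i)
dV = np.stack([cheb.chebval(x, cheb.chebder(np.eye(n)[j])) for j in range(n)], axis=1)   # T_j'(x_i)

# Each point contributes the vector-valued measurement L(x_i)(f) = (f(x_i), f'(x_i)),
# so the least-squares system simply stacks the two blocks of rows.
A = np.vstack([V, dV])
y = np.concatenate([f(x), df(x)])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

xt = np.linspace(-1, 1, 1000)
print("max error of gradient-augmented fit:", np.max(np.abs(cheb.chebval(xt, c) - f(xt))))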
(iii) Hilbert-valued function approximation from pointwise samples. In some applications, the unknown is a function taking values in a Hilbert space . This arises in, for instance, parametric PDEs and UQ, where f is the parametric solution map of a PDE whose (weak) solutions take values in . Approximating such a function from pointwise samples is easily considered in this framework. We use the setup of the scalar-valued case described above, except with _1 = replaced by _1 =. One can also consider gradient-augmented samples, much as in the previous example. A further extension to this problem is that of operator learning <cit.>, in which the unknown is an operator between two Hilbert spaces. This also fits within this framework.
(iv) Image reconstruction. We now briefly describe a seemingly quite different problem, based on <cit.>. This example highlights that the general framework can handle both discrete and continuous settings, and measurements that do not arise as pointwise samples.
Consider a discrete d-dimensional image of size n ×⋯× n, which we may vectorize and express as a vector f ∈^N, where N = n^d. Let F ∈^N × N be the matrix of the d-dimensional discrete Fourier transform. In Fourier imaging <cit.>, the goal is to recover f from a subset of its frequencies. If Ω⊆{1,…,N}, | Ω | = m, is the set of frequencies sampled, then the measurements of f are
P_Ω F f + e ∈^m,
where e ∈^m is noise and P_Ω∈^m × N is a matrix that selects the rows of F corresponding to the indices in Ω.
Fourier imaging arises in various applications, including Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR) and radio interferometry <cit.>. A key question is how to best choose Ω within any relevant physical constraints.
As described in <cit.>, this problem can be cast into the general framework. The framework can also handle various practical constraints – for instance, the fact that MR imaging devices cannot sample individual frequencies, but may only sample along piecewise smooth curves in frequency space, which can be handled by considering vector-valued measurements. Moreover, the framework can also handle the more advanced scenario of parallel MRI, where C ≥ 1 coils simultaneously acquire measurements.
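A toy one-dimensional version of this measurement model is easy to write down (a sketch of ours; the variable-density mask below is a common heuristic, not the optimized sampling schemes of the cited works), together with a least-squares reconstruction over a simple subspace:

import numpy as np
rng = np.random.default_rng(4)

N, m = 256, 64
f = np.zeros(N); f[60:90] = 1.0; f[150:170] = -0.5          # simple piecewise-constant signal

F = np.fft.fft(np.eye(N), norm="ortho")                     # unitary DFT matrix (dense, for clarity)
freqs = np.fft.fftfreq(N)
p = 1.0 / (1.0 + (np.abs(freqs) * N / 4) ** 2)              # variable density: favour low frequencies
p /= p.sum()
Omega = rng.choice(N, size=m, replace=False, p=p)

y = F[Omega] @ f + 0.01 * rng.standard_normal(m)            # y = P_Omega F f + e

# Least-squares recovery over a simple n-dimensional subspace P:
# piecewise-constant signals on blocks of N//n consecutive pixels.
n = 32
B = np.kron(np.eye(n), np.ones((N // n, 1)))                # N x n basis of block indicators
c, *_ = np.linalg.lstsq(F[Omega] @ B, y, rcond=None)
f_hat = (B @ c).real
print("relative error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))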
(v) Other examples. Various other families of problems can be considered within this framework. For instance, as discussed in <cit.>, many standard measurement constructions in compressed sensing <cit.> become special cases of this approach. One can also readily consider related problems such as matrix completion and, more generally, matrix recovery for linear measurements <cit.>. Various other problems in sampling theory and signal processing, such as so-called mobile sampling <cit.>, also fit into this framework. It can also incorporate recovery problems involving function averages <cit.>, as well as techniques such as stratification and antithetics, which are commonly variance reduction techniques in Monte Carlo integration <cit.>.
(vi) Nonlinear approximation spaces. Many recovery problems call for nonlinear approximation spaces. A standard example is the sparse regression problem. Here, one may consider the setup of <ref> (or a more general setup, as in (ii)–(v)), but, rather than a linear subspace, one defines
= { ∑_i ∈S c_i ϕ_i : c_i ∈, ∀i, S ⊆{1,…,N}, | S | = n },
where N ≥ n ≥ 1 and {ϕ_i }^N_i=1⊂ L^2_ϱ(D) is some known dictionary of functions. The sparse regression problem has been studied extensively (see <cit.> and references therein), especially in the context of dictionaries of polynomials, where it is termed sparse polynomial approximation <cit.>. However, many other nonlinear approximations are widely used in practice. A partial list includes various `structured' sparse models, such as joint, group or block sparsity or sparsity in levels <cit.>, low-rank matrix or tensor models <cit.>, single- <cit.> or multi-layer neural networks, tensor networks <cit.>, Fourier sparse functions <cit.> and spaces defined by generative models <cit.>.
§.§ The generalized Christoffel function
The key tool in our analysis is a generalization of the Christoffel function (Definition <ref>).
[Generalized Christoffel function]
Let _0 be a normed vector space, be a Hilbert space, (D,,ϱ) be a measure space, L : D →(_0,) be such that the function θ∈ D ↦ L(θ)(f) ∈ is measurable for every f ∈_0 and suppose that ⊆_0, ≠{ 0 }. The generalized Christoffel function of with respect to L is the function = (P,L) : D → defined by
(θ) = (,L)(θ) = sup{ L(θ)(f)^2_/f^2_ : f ∈, f ≠0 }, ∀θ∈D.
Notice that reduces to the standard Christoffel function Kappa-def in the case of (i) above. In general, measures how large the measurement L(θ)(f) of an arbitrary f ∈ can be (in norm) at an index θ∈ D in relation to the norm of f. For instance, in the Fourier imaging problem (iv) it measures how large the Fourier transform can be at a given frequency for an element of the approximation space in relation to its norm. We remark in passing that inherits some of the properties of the standard Christoffel function. See, e.g., <cit.>.
Much like in kappa-w-def, given a nonnegative weight function w : D →, we also define
κ_w = κ_w(, L) = ess sup_θ∼ϱ w(θ) (,L)(θ).
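When the approximation space is an n-dimensional subspace with an orthonormal basis {ϕ_j}^n_j=1, the supremum in the definition can be evaluated as a largest eigenvalue: writing G(θ) for the n × n matrix with entries G(θ)_jk = ⟨L(θ)(ϕ_j), L(θ)(ϕ_k)⟩, one has (,L)(θ) = λ_max(G(θ)). A small sketch (ours) for the gradient-augmented operator of example (ii), taking the L^2 norm on (-1,1) in place of the H^1 norm purely for illustration:

import numpy as np
from numpy.polynomial import legendre as leg

def gen_christoffel(theta, n):
    # K(P, L)(theta) = lambda_max(G(theta)) for L(theta)(f) = (f(theta), f'(theta)),
    # with P spanned by the first n Legendre polynomials, orthonormalized w.r.t. dx on (-1,1).
    vals = np.array([np.sqrt(i + 0.5) * leg.legval(theta, np.eye(n)[i]) for i in range(n)])
    ders = np.array([np.sqrt(i + 0.5) * leg.legval(theta, leg.legder(np.eye(n)[i])) for i in range(n)])
    M = np.stack([vals, ders])        # rows: the two components of L(theta)(phi_j), j = 1,...,n
    return np.linalg.eigvalsh(M.T @ M)[-1]

print([round(gen_christoffel(t, 8), 2) for t in np.linspace(-0.99, 0.99, 9)])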
§.§ Theoretical guarantee
We now present a theoretical guarantee for this framework. We first require several assumptions.
[Nondegeneracy of the sampling operators]
For each c = 1,…,C and f ∈_0, the map θ∈ D_c ↦ L_c(θ)(f) ∈_c is measurable and θ∈ D_c ↦L_c(θ)(f)^2__c∈ is integrable. Moreover, there are constants 0 < a ≤ b <∞ such that
a f^2_ ≤∑^C_c=1 ∫_D_c L_c(θ_c)(f)^2__c ϱ_c(θ_c) ≤b f^2_, ∀f ∈_0.
We remark in passing that the lower bound nondegeneracy is, in fact, only required to hold for f ∈', where ' = - = { p_1 - p_2 : p_1,p_2 ∈} is the difference set (see the proof of Theorem <ref> below, as well as that of <cit.>).
This assumption says that the action of the sampling operators preserves the norm of any f ∈_0, up to constants. Note that it holds trivially with a = b = 1 in the standard problem (i), since in that case we have C = 1 and
∫_D_1 L_1(θ_1)(f)^2__1 ϱ_1(θ_1) = ∫_D |f(x)|^2 ϱ(x) = f^2_L^2_ϱ(D) = f^2_.
All other examples discussed in <ref> can also be formulated so that this assumption also holds.
Recall that in the standard case studied earlier, stability and accuracy is ensured by the sampling discretization samp-disc. The middle term in this inequality is an empirical approximation to the L^2_ϱ-norm. An analogous construct is crucial to the analysis of this general setting, and arises by making an empirical approximation to the integrals in nondegeneracy. Given {θ_ic : i = 1,…,m_c, c = 1,…,C}, we say that empirical nondegeneracy holds for with constants 0 < α_w ≤β_w < ∞ if
α_w q_ ≤√(∑^C_c=1 1/m_c ∑^m_c_i=1 w_c(θ_ic) L_c(θ_ic)(q)^2__c ) ≤β_w q_, ∀q ∈' = - .
This can be seen as a generalization of the well-known Restricted Isometry Property (RIP) in compressed sensing <cit.>. In the case of sparse regression (see (vi) above), it is sometimes termed a universal sampling discretization <cit.>.
[Union of subspaces model]
The set ' = - satisfies the following.
(i) ' is a cone, i.e., t p ∈' for any t ≥ 0 and p ∈'.
(ii) ' ⊆_1 ∪⋯∪_d = :, where each _i ⊆_0 is a subspace of dimension n.
This trivially holds with d = 1 and _1 = ' = when is an n-dimensional subspace. In general, Assumption <ref> is an extension of the union-of-subspaces model, which is well-known in the context of compressed sensing <cit.>. It includes many of the examples of nonlinear model classes used in practice, such as the sparse regression problem and many of its generalizations (see (vi) above).
Finally, for the result below we also need a truncation operator. Similar to <ref>, this is the operator _σ : → defined by _σ(g) = min{ 1 , σ / g_} g, where σ≥ 0. Given a minimizer f̂ of general-least-squares, we define f̂^𝗍𝗋 = _σ(f̂). Note that we do not consider the other estimator f̂^𝖼𝖾 introduced in <ref>, although it could be readily formulated in this setting. The reason is that when is a nonlinear space, it is generally not possible to certify (in polynomial time) that emp-nondegen holds, which, like samp-disc, is the key ingredient for accuracy and stable recovery. In sparse recovery, for instance, this is equivalent to certifying that a given matrix has the RIP – a well-known NP-hard problem.
Consider the setup of <ref> and suppose that Assumptions <ref> and <ref> hold. Let 0 < ϵ < 1, κ_w_c be as in kappa-w-def-general and suppose that
m_c ≳a^-1 ·κ_w_c('-' , L_c) ·( log(2d/ϵ) + n ), c = 1,…,C,
or, if is as in Assumption <ref>(ii),
m_c ≳a^-1 ·κ_w_c(, L_c) ·log(2nd/ϵ), c = 1,…,C.
Then, for any f ∈_0, σ≥f_ and noise { e_ic}, the estimator f̂^𝗍𝗋 satisfies
f - f̂^𝗍𝗋^2_ ≲(b/a) ·inf_p ∈ f - p^2_ + e^2_2 / a + σ^2 ϵ,
where e^2_2 = ∑^C_c=11/m_c∑^m_c_i=1e_ic^2__c.
Note that mc-cond-1 involves the term κ_c evaluated over ' - '. One can replace this with just ', at the cost of a more complicated log term (see the proof of Theorem <ref> and <cit.>).
§.§ Christoffel sampling
Much as in <ref>, we can use Theorem <ref> to optimize the sampling measures μ_c. Let = ' - ' in the case of mc-cond-1 or = in the case of mc-cond-2. Then, using kappa-w-def-general, we now choose
w_c(θ) = ( 1/2 + 1/2 (, L_c)(θ) /∫_D_c (, L_c)(θ) ϱ_c(θ) )^-1,
which gives the sampling measures
μ_c(θ) = ( 1/2 + 1/2 (, L_c)(θ) /∫_D_c (, L_c)(θ) ϱ_c(θ) ) ϱ_c(θ), c = 1,…,C.
We term this Christoffel sampling.[Note we may assume without loss of generality that ∫_D_c( , L_c)(θ) ϱ_c(θ) > 0. If not, the sampling operator L_c simply yields zero measurements over the space almost everywhere, and can therefore be excluded. Nondegeneracy nondegeneracy implies that there is at least one sampling operator yielding nonzero measurements over .]
Substituting this into mc-cond-1 yields the measurement condition
m_c ≳( ∫_D_c (' - ' , L_c)(θ) ϱ_c(θ) ) ·(log(2d/ϵ) + n)
or, in the case of mc-cond-2,
m_c ≳( ∫_D_c (, L_c)(θ) ϱ_c(θ) ) ·log(2nd/ϵ) .
This approach is `optimal' in the sense that it minimizes (up to a factor of 2) the bound mc-cond-1 over all possible sampling measures μ_c. When is an n-dimensional subspace – in which case ' - ' = ' = and in Assumption <ref> can be chosen as = – it is a short argument using the nondegeneracy condition nondegeneracy to see that
∑^C_c=1 ∫_D_c (, L_c)(θ) ϱ_c(θ) ≤b n.
(see <cit.>). Hence, if each m_c is chosen proportional to the right-hand side in mc-cond-2-opt, then the total number of measurements satisfies the near-optimal log-linear scaling
m = m_1 + ⋯+ m_C ≲(b/a) ·n ·log(2n/ϵ).
Unfortunately, in the general case of a nonlinear set , there is no clear way to relate the integral in mc-cond-1-opt to explicit quantities such as n and d. However, it is possible to show the bound
m = m_1 + ⋯+ m_C ≲(b/a) ·n ·d ·log(2n/ϵ).
(see <cit.> once more),
where we recall that d is the number of subspaces in Assumption <ref>(ii). For fixed and small d, this is near-optimal. However, in cases such as sparse regression, d ≫ 1. Fortunately, a more refined analysis is possible in these cases. See <cit.> for discussion.
While it is difficult to provide explicit measurement conditions in the general case, it is possible to gain some insight into why Christoffel sampling, in general, improves over Monte Carlo sampling, i.e., the case where μ_c = ϱ_c, ∀ c. Since w_c ≡ 1 in this case, kappa-w-def-general and Theorem <ref> provide the measurement conditions for Monte Carlo sampling of the form
m_c ≳( ess sup_θ∼ϱ_c (' - ' , L_c)(θ) )·(log(2d/ϵ) + n ), c = 1,…,C,
and likewise in the case of mc-cond-2.
Therefore, comparing with mc-cond-1-opt, the improvement of Christoffel sampling, in general terms, can be equated to the difference between the supremum of the (generalized) Christoffel function and its integral (mean). In particular, if this function is sharply peaked, then we expect significant improvements, while if it is approximately flat, then we expect less improvement. Such observations are witnessed in numerical experiments <cit.> .
§ CONCLUSIONS AND OUTLOOK
In this article, we have surveyed recent advances in optimal sampling for least-squares approximation, focusing on the key role that the Christoffel function plays in both understanding the sample complexity in general, and in constructing near-optimal sampling strategies. We have also seen in <ref> how these ideas naturally extend to more general types of measurements and play a key role in the sample complexity even for nonlinear spaces. We now offer some concluding thoughts.
First, although the picture for pointwise sampling in linear spaces is increasingly mature, there remain various open questions. While optimal (i.e., O(n)) sampling strategies that are practical (i.e., implementable in polynomial time) are now known (recall <ref>), future investigations are needed on their practical efficacy, especially in the interpolation regime. There are also open questions about uniform recovery, which are especially relevant to sampling numbers and questions in information-based complexity. Finally, the question of optimal sampling with hierarchical or adaptive schemes has not yet been addressed.
By contrast, nonlinear approximation spaces pose many more open problems. First, even in relatively well-studied settings such as sparse regression, it is unknown whether Christoffel sampling generally leads to near-optimal sample complexity bounds. This issue is discussed in detail in <cit.>. Less is known about more complicated nonlinear spaces. See <cit.> for discussion. Second, there is also the practical matter of drawing samples from the resulting Christoffel sampling measure. This is very dependent on the particular nonlinear space and samples under consideration, and may well be highly nontrivial. Finding practical surrogate sampling measures is an interesting open problem.
In the nonlinear setting, it is worth noting that Christoffel sampling is well-suited only when the approximation space has low intrinsic complexity that is comparable to the number of samples (e.g., m ≍ n log(n) in the case where is a linear space with dim() = n). It is not well suited for `complex' approximation spaces, such as spaces of deep neural networks <cit.> or low-rank tensor networks <cit.>. Christoffel sampling can be implemented in an adaptive manner in such cases. Here, one alternates between adding samples and learning an approximation, and at each stage uses a linearization to obtain an intermediate linear approximation space over which Christoffel sampling can be performed. This idea was developed for approximating functions and solving PDEs via deep learning in <cit.> and <cit.>, and later extended to more general approximation spaces in <cit.>.
Finally, it is worth noting that Christoffel sampling, in any of its guises, is not a panacea. Depending on the problem or target function, there may be little or no benefit over standard Monte Carlo sampling. This is relevant in various applications, as Monte Carlo samples are often commonly encountered in practice <cit.>. It leads to another interesting line of research, which is understanding function classes where Monte Carlo sampling is near-optimal. One such case is
holomorphic function approximation in high dimensions. In a series of works <cit.>, it has been shown that Monte Carlo sampling is near-optimal information for classes of infinite-dimensional holomorphic functions arising in parametric DE problems. In particular, Monte Carlo sampling yields error bounds that are within a polylogarithmic factor of the adaptive m-width for such classes. For other work in this direction involving functions in Sobolev spaces, see <cit.>.
§ ACKNOWLEDGEMENTS
The idea for this article came from a talk given at the 2023 Foundations of Computational Mathematics conference. The author would like to thank the conference organizers, as well as the various participants for insightful comments and questions. Parts of this article were written during a visit to the Isaac Newton Institute for Mathematical Sciences, Cambridge for the programme Discretization and recovery in high-dimensional spaces. The author would like to thank the institute for support and hospitality, and the programme participants for providing a stimulating environment. Finally, he would like to thank Simone Brugiapaglia, Matthew J. Colbrook, Maksym Neyra–Nesterenko and Daniel Fassler for helpful feedback.
This work was supported by EPSRC grant number EP/R014604/1 and NSERC grant number RGPIN-2021-611675.
§ PROOFS
In this appendix we give proofs for various results in the paper. We commence with Lemma <ref>. This is a standard result. We include a short proof for completeness.
[Proof of Lemma <ref>]
Recall that α_w = σ_min(A), where A is the matrix defined in (<ref>). This matrix is full rank since m ≥ n and α_w > 0, and therefore the LS problem has a unique solution.
Now let p ∈ be arbitrary and consider the variational form variational-form applied to the element f̂ - p ∈. Subtracting pf̂ - p_𝖽𝗂𝗌𝖼,w from both sides gives
f̂ - p^2_𝖽𝗂𝗌𝖼,w = f - pf̂ - p_𝖽𝗂𝗌𝖼,w + 1/m ∑^m_i=1 w(x_i) e_i f̂(x_i) - p(x_i).
Several applications of the Cauchy-Schwarz inequality now yield
f̂ - p_𝖽𝗂𝗌𝖼,w ≤f - p_𝖽𝗂𝗌𝖼,w + e_2,w.
We now use the triangle inequality and MZ-inequality to get
f - f̂_L^2_ϱ(D) ≤f - p_L^2_ϱ(D) + f̂ - p_L^2_ϱ(D) ≤f - p_L^2_ϱ(D) + 1/α_w f̂-p_𝖽𝗂𝗌𝖼,w.
The result now follows from disc-err-bd.
We next prove Theorem <ref>. This has now become a standard exercise involving the matrix Chernoff bound (see, e.g., <cit.>), which we repeat here for convenience. This bound was first used in the context of least-squares approximation from i.i.d. samples in <cit.>.
[Matrix Chernoff bound]
Let X_1,…,X_m be independent, self-adjoint random matrices of dimension n. Assume that
X_i ≽0
and λ_max(X_i) ≤R
almost surely for each i=1,…,m, and define
μ_min = λ_min ( ∑^m_i=1 X_i ) and μ_max = λ_max ( ∑^m_i=1 X_i ).
Then, for 0 ≤δ≤ 1,
( λ_min ( ∑^m_i=1 X_i ) ≤( 1 - δ) μ_min ) ≤n exp( -μ_min ( (1-δ) log(1-δ) + δ)/R ),
and, for δ≥ 0,
( λ_max ( ∑^m_i=1 X_i ) ≥( 1 + δ) μ_max ) ≤n exp( -μ_max ( (1+δ) log(1+δ)-δ)/R ).
[Proof of Theorem <ref>]
Let {ϕ_i }^n_i=1 be an orthonormal basis for and A be as in ls-Ab. Observe that
A^* A = ∑^m_i=1 X_i, X_i : = 1/m ( w(x_i) ϕ_j(x_i) ϕ_k(x_i) )^n_j,k=1 ,
is a sum of independent random matrices. Also, orthonormality of the basis functions implies that alpha-beta-sigma holds, i.e.,
α^2_w = λ_min(A^*A), β^2_w = λ_max(A^*A).
We now wish to apply Theorem <ref>. Due to mu_weight_fn, the fact that w = 1/ν and orthonormality, we have
((A^*A))_jk = ∑^m_i=1 ((X_i))_jk = 1/m ∑^m_i=1 ∫_D w(x) ϕ_j(x) ϕ_k(x) μ_i(x) = ∫_D ϕ_j(x) ϕ_k(x) w(x) ν(x) ϱ(x) = δ_jk.
Hence (A^*A) = I and therefore μ_min = μ_max = 1. Next, let c ∈^n be arbitrary. Then
c^* X_i c = w(x_i)/m| ∑^n_j=1 c_j ϕ_j(x_i) |^2 = w(x_i)/m | p(x_i) |^2, where p = ∑^n_j=1 c_j ϕ_j.
We immediately deduce that X_i≽ 0. Moreover, Parseval's identity implies that c^2_2 = p^2_L^2_ϱ(D). Hence, using this and Kappa-def and kappa-w-def, we obtain
c^* X_i c ≤w(x_i)/m K()(x_i) p^2_L^2_ϱ(D) ≤κ_w()/m c^2_2.
Since c was arbitrary and X_i ≽ 0, we conclude that
λ_max(X_i) ≤R : = κ_w()/m.
We are now ready to apply Theorem <ref>. Using the union bound, we have
( α_w ≤√(1-δ) or β_w ≥√(1+δ) ) ≤(λ_min(A^*A) ≤1-δ) + ( λ_max(A^*A) ≥1+δ)
≤n exp( - m a_δ/ κ_w() ) + n exp( - m b_δ/ κ_w() ),
where a_δ = ( (1-δ) log(1-δ) + δ ) and b_δ = ( (1+δ) log(1+δ)-δ). Notice that a_δ≥ b_δ and b_δ = 1/c_δ, where c_δ is as in m-bound-alpha-beta. Therefore
( α_w ≤√(1-δ) or β_w ≥√(1+δ) ) ≤2 n exp( -m/c_δ κ_w() ) ≤ϵ,
where in the last step we used m-bound-alpha-beta. This completes the proof.
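As an aside, the statement just proved is straightforward to check empirically. The sketch below (ours; Chebyshev basis with samples drawn from the arcsine measure itself, so that w ≡ 1 and κ_w ≤ 2n) draws m proportional to n log n samples and records the extreme eigenvalues of A^*A over repeated trials:

import numpy as np
rng = np.random.default_rng(5)

def cheb_basis(x, n):
    # Chebyshev basis, orthonormal w.r.t. the arcsine measure on (-1,1)
    th = np.arccos(x)
    return np.column_stack([np.ones_like(x)] + [np.sqrt(2) * np.cos(i * th) for i in range(1, n)])

n = 25
m = int(10 * n * np.log(n))              # m proportional to n log n, in the spirit of the theorem
lam_min, lam_max = [], []
for _ in range(50):
    x = np.cos(np.pi * rng.random(m))    # i.i.d. samples from the arcsine (Chebyshev) measure
    A = cheb_basis(x, n) / np.sqrt(m)    # here w == 1, so A_ij = phi_j(x_i) / sqrt(m)
    s = np.linalg.svd(A, compute_uv=False)
    lam_min.append(s[-1] ** 2); lam_max.append(s[0] ** 2)
print("worst lambda_min(A*A):", min(lam_min), "  worst lambda_max(A*A):", max(lam_max))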
We next prove Corollary <ref>. For this, we require the following lemma, which is based on <cit.> (which is, in turn, based on <cit.>).
Let ⊂ L^2_ϱ(D) be an n-dimensional subspace with orthonormal basis {ϕ_i }^n_i=1 and μ_1,…,μ_m be probability measures satisfying Assumption <ref>. Consider sample points drawn randomly and independently with x_i ∼μ_i, i = 1,…,m. Then
∑^n_i=1 | gϕ_i_𝖽𝗂𝗌𝖼,w |^2 ≤κ_w()/m g^2_L^2_ϱ(D), ∀g ∈^⊥,
where w = 1/ν, ··_𝖽𝗂𝗌𝖼,w and κ_w are as in mu_weight_fn, semi-inner-product and kappa-w-def, respectively.
Let g ∈^⊥ and l ∈{1,… , n } be arbitrary. Then
|gϕ_l_𝖽𝗂𝗌𝖼,w|^2 = 1/m^2 ∑^m_i,j = 1 [ w(x_i) w(x_j) g(x_i) g(x_j) ϕ_l(x_i) ϕ_l(x_j) ]
= 1/m^2 ∑^m_i,j = 1, i ≠j ( w(x_i) g(x_i) ϕ_l(x_i) ) ( w(x_j) g(x_j) ϕ_l(x_j) )
+ 1/m^2 ∑^m_i=1 ( | w(x_i) g(x_i) ϕ_l(x_i) |^2 ) .
Now mu_weight_fn implies that
1/m ∑^m_i=1 (w(x_i) g(x_i) ϕ_l(x_i) ) = 1/m ∑^m_i=1 ∫_D w(x) g(x) ϕ_l(x) μ_i(x) = gϕ_l_L^2_ϱ(D) = 0,
since g ∈^⊥. Therefore
|gϕ_l_𝖽𝗂𝗌𝖼,w|^2 = 1/m^2 ∑^m_i=1 [ ( | w(x_i) g(x_i) ϕ_l(x_i) |^2 ) - ( (w(x_i) g(x_i) ϕ_l(x_i) ) )^2 ]
≤1/m^2 ∑^m_i=1 ∫_D (w(x))^2 | g(x) |^2 | ϕ_l(x) |^2 μ_i(x)
= 1/m ∫_D w(x) |g(x)|^2 | ϕ_l(x) |^2 ϱ(x),
where in the final step we used mu_weight_fn once more.
We now apply Kappa-def-alt and kappa-w-def to obtain
∑^n_l=1 |gϕ_l_𝖽𝗂𝗌𝖼,w|^2 ≤1/m ∫_D w(x) |g(x)|^2 ∑^n_l=1 | ϕ_l(x) |^2 ϱ(x) ≤κ_w()/m g^2_L^2_ϱ(D),
as required.
[Proof of Corollary <ref>]
The condition on m and Theorem <ref> imply that alpha-beta-delta holds with probability at least 1-ϵ/2. Therefore Lemma <ref> asserts that the weighted least-squares problem has a unique solution for any function that is defined at the sample points x_i and any noise vector. Let f ∈ L^2_ϱ(D). Then f is defined at the x_i with probability one. Now consider arbitrary noise e = (e_i)^m_i=1∈^m and write f̂_e ∈ for the corresponding weighted least-squares approximation from noisy samples f(x_i) + e_i. Let p^* ∈ be the (unique) element such that f - p^*_L^2_ϱ(D) = inf_p ∈f - p_L^2_ϱ(D) and write g = f - p^*. Let ĝ = ĝ_0 be the least-squares approximation to g from noiseless samples g(x_i) and 0̂_e ∈ be the least-squares approximation to the zero function from the noisy samples e_i. Notice that p̂^* = p̂^*_0 = p^*, since the weighted least-squares approximation is a projection in the discrete inner product. Since the least-squares approximation is also linear, we have f - f̂_e = f - f̂ - 0̂_e = g - ĝ - 0̂_e.
This gives
f-f̂_e_L^2_ϱ(D) ≤g_L^2_ϱ(D) + ĝ_L^2_ϱ(D) + 0̂_e_L^2_ϱ(D).
Lemma <ref> and the fact that alpha-beta-delta holds imply that
0̂_e_L^2_ϱ(D) ≤1/√(1-δ) e_2,w.
Now consider the term ĝ_L^2_ϱ(D). Writing ĝ = ∑^n_i=1ĉ_i ϕ_i and ĉ = (ĉ_i)^n_i=1 and using the variational form variational-form gives
ĝ^2_𝖽𝗂𝗌𝖼,w = gĝ_𝖽𝗂𝗌𝖼,w= ∑^n_j=1 c_j gϕ_j_𝖽𝗂𝗌𝖼,𝗐 ≤c_2 √(∑^n_j=1 |gϕ_j_𝖽𝗂𝗌𝖼,𝗐|^2 ).
Now Parseval's identity and the fact that α_w ≥√(1-δ) gives
ĝ^2_L^2_ϱ(D) ≤1/1-δ ĝ^2_𝖽𝗂𝗌𝖼,w ≤ĝ_L^2_ϱ(D) √(∑^n_j=1 |gϕ_j_𝖽𝗂𝗌𝖼,𝗐|^2 ),
i.e.,
ĝ_L^2_ϱ(D) ≤1/1-δ √(∑^n_j=1 |gϕ_j_𝖽𝗂𝗌𝖼,𝗐|^2 ).
We now bound this term in probability. Consider the random variable X = ∑^n_j=1 |gϕ_j_𝖽𝗂𝗌𝖼,𝗐|^2. Since p^* is the orthogonal projection of f onto , g = f - p^* ∈^⊥ and Lemma <ref> implies that
(X) ≤κ_w()/m g^2_L^2_ϱ(D).
Hence, by Markov's inequality
( ĝ_L^2_ϱ(D) ≥1/1-δ √(2 κ_w()/m ϵ) g_L^2_ϱ(D) ) ≤( X ≥2 (X)/ϵ ) ≤ϵ/2.
We deduce that
ĝ_L^2_ϱ(D) ≤1/1-δ √(2 κ_w()/m ϵ) g_L^2_ϱ(D),
with probability at least 1-ϵ/2. Substituting this and q-bd into main-err-split we deduce, after an application of the union bound, that
f-f̂_e_L^2_ϱ(D) ≤( 1 + 1/1-δ √(2 κ_w()/m ϵ) g_L^2_ϱ(D) ) g_L^2_ϱ(D) + 1/√(1-δ) e_2,w,
with probability at least 1-ϵ. This completes the proof.
We next prove Corollary <ref>. This follows <cit.> and employs Bernstein's inequality.
[Proof of Corollary <ref>]
Let E be the event that alpha-beta-delta holds. Let p = p^* be a polynomial attaining the infimum in ls-err-bd-prob-2. Now let F be the event that
f - p_𝖽𝗂𝗌𝖼,w ≤√(2) ( f - p_L^2_ϱ(D) + √(w)(f-p)_L^∞_ϱ(D)/√(k) ).
Suppose that E and F occur. Then the definitions of E, F and Lemma <ref> imply that
f - f̂_L^2_ϱ(D) ≤f - p_L^2_ϱ(D) + 1/√(1-δ) f - p_𝖽𝗂𝗌𝖼,w + 1/√(1-δ) e_2,w
≤( 1 + √(2/1-δ) ) f - p_L^2_ϱ(D) + √(2/1-δ) f-p_L^∞_ϱ(D)/√(k) + 1/√(1-δ) e_2,w.
This yields the desired bound ls-err-bd-prob-2. Hence, by the union bound, it suffices to show that (E^c),(F^c) ≤ϵ /2.
The fact that (E^c) ≤ϵ / 2 follows immediately from the first condition in on m in m-conds-in-prob-2 and Theorem <ref>. We now consider (F^c). Define the random variables
Z_i = w(x_i)| f(x_i) - p(x_i) |^2 and X_i = Z_i - (Z_i).
Notice that
1/m ∑^m_i=1 (Z_i) = f-p^2_L^2_ϱ(D) : = a
due to exp-sum-scaling, and therefore
f -p^2_𝖽𝗂𝗌𝖼,w = 1/m ∑^m_i=1 Z_i = 1/m ∑^m_i=1 X_i + a.
The idea now is to use Bernstein's inequality to estimate the random variable ∑^m_i=1 X_i. Let
b = _x ∼ϱ w(x) | f(x) - p(x) |^2 ≡√(w)(f - p)^2_L^∞_ϱ(D)
and notice that X_i ≤ Z_i ≤ b and -X_i ≤(Z_i) ≤ b
almost surely. Hence |X_i| ≤ b almost surely. We also have 0 ≤ Z_i ≤ b almost surely, and therefore
∑^m_i=1 ((X_i)^2) ≤∑^m_i=1 ((Z_i)^2) ≤b ∑^m_i=1 (Z_i) = a b m.
Since (X_i) = 0, we may apply Bernstein's inequality (see, e.g., <cit.>) to get
( | 1/m ∑^m_i=1 X_i | ≥t ) ≤2 exp( -(t^2 m/2)/(a b + b t /3) ), ∀t > 0.
Set t = a + b/k. Then it is a short argument involving the second condition in m-conds-in-prob-2 to show that (t^2 m/2)/(a b + b t /3) ≥log(4/ϵ). Therefore,
f - p^2_𝖽𝗂𝗌𝖼,w ≤| 1/m ∑^m_i=1 X_i | + a < 2 a + b/k,
with probability at least 1-ϵ/2. Substituting the values for a, b and using the inequality √(s+t)≤√(s) + √(t), we see that
f - p_𝖽𝗂𝗌𝖼,w ≤√(2) ( f - p_L^2_ϱ(D) + √(w)(f-p)_L^∞_ϱ(D)/√(k) ),
with probability at least 1-ϵ/2. Therefore, (F^c) ≤ϵ/2, as required.
We now prove Lemma <ref> and Theorem <ref>. These ideas go back to <cit.>, but the specific arguments are based on <cit.>.
[Proof of Lemma <ref>]
The setup is the same as the proof of Corollary <ref>. However, since we now square the error terms, we use the fact that g ∈^⊥ to replace main-err-split by
f-f̂_e^2_L^2_ϱ(D) = g^2_L^2_ϱ(D) + ĝ + 0̂_e^2_L^2_ϱ(D) ≤g^2_L^2_ϱ(D) + 2 ĝ^2_L^2_ϱ(D) + 20̂_e^2_L^2_ϱ(D).
Whenever G - I_2≤δ holds we have that alpha-beta-delta holds, and therefore q-bd and g-bd-1 also hold. This implies that
( f-f̂^2_L^2_ϱ(D) χ_G-I_2 ≤δ ) ≤g^2_L^2_ϱ(D) + 2/1-δ e^2_2,w + 2/(1-δ)^2 ∑^n_l=1 |gϕ_l_𝖽𝗂𝗌𝖼,w|^2.
The result now follows from Lemma <ref>.
[Proof of Theorem <ref>]
Let E be the event that G - I_2 ≤δ. Suppose that E occurs. Then
f - f̂^𝖼𝖾_L^2_ϱ(D) = f - f̂_L^2_ϱ(D), f - f̂^𝗍𝖾_L^2_ϱ(D) ≤f - f̂_L^2_ϱ(D).
Here, the second bound follows from the facts that f = _σ(f) and _σ is a contraction in the L^2_ϱ(D) norm. On the other hand, if E does not occur then we have
f - f̂^𝖼𝖾_L^2_ϱ(D) = f_L^2_ϱ(D), f - f̂^𝗍𝖾_L^2_ϱ(D) ≤f + f̂^𝗍𝖾_L^2_ϱ(D) ≤2 σ.
Now observe that (E^c) ≤ϵ, due to the assumption on m and Theorem <ref>. The result now follows by the law of total expectation and Lemma <ref>.
[Proof of Theorem <ref>]
The weight function w corresponding to opt-meas-krieg is given by
w(x) = ( 1/2 + 1/4 ∑^n_i=1 | ϕ_i (x) |^2/n + 1/4 ∑^∞_l=0 v^2_l/|I_l| ∑_i ∈I_l | ϕ_i(x) |^2 )^-1.
Therefore, by Kappa-def-alt, we have w(x) ≤ 4 n / ()(x), which gives
κ_w = ess sup_x ∼ϱ w(x) ()(x) ≤4 n.
We deduce from Theorem <ref> and m-est-krieg that √(1-δ) < α_w ≤β_w ≤√(1+δ) with probability at least 1-ϵ/2. Using this and the fact that w(x) ≤ 2 by construction, we deduce from Lemma <ref> that
f - f̂_L^2_ϱ(D) ≤inf_p ∈ { f - p_L^2_ϱ(D) + 1/√(1-δ) f - p_𝖽𝗂𝗌𝖼,w } + √(2/1-δ) e_2
for any f and e. Let c_i = fϕ_i_L^2_ϱ(D) be the coefficients of f and p = ∑^n_i=1 c_i ϕ_i be the best approximation to f from . Then this gives
f - f̂_L^2_ϱ(D) ≤e_n(f) + 1/√(1-δ) f - p_𝖽𝗂𝗌𝖼,w + √(2/1-δ) e_2.
For l = 0,1,2,…, define the matrices
A^(l) = ( √(w(x_i)/m) ϕ_j(x_i) )^m,2^l+1 n_i = 1,j=2^l n + 1 ∈^m ×2^l n,
c^(l) = ( c_i )^2^l+1 n_i=2^l n + 1 ∈^2^l n.
Then
f - p_𝖽𝗂𝗌𝖼,𝗐 ≤∑^∞_l=0 A^(l) c^(l)_2 ≤∑^∞_l=0 A^(l) _2 c^(l)_2 ≤∑^∞_l=0 A^(l)_2 e_2^l n(f),
where e_2^l n(f) is as in e-def.
We now wish to use the matrix Chernoff bound to estimate A^(l)_2. Observe that
( A^(l)^2_2 ≥1+t ) = ( λ_max( (A^(l))^* A^(l) ) ≥1+t ).
Now, as in the proof of Theorem <ref>, note that ( (A^(l))^* A^(l) ) = I
and
(A^(l))^* A^(l) = ∑^m_i=1 X_i, where X_i = ( w(x_i)/m ϕ_j(x_i) ϕ_k(x_i) ) _j,k ∈I_l.
The matrices X_i are nonnegative definite and satisfy, for any c = (c_j)_j ∈ I_l,
c^* X_i c = w(x_i)/m | ∑_j ∈I_l c_j ϕ_j(x_i) |^2 ≤w(x_i)/m ∑_j ∈I_l | ϕ_j(x_i) |^2 c^2_2 .
Using w-krieg-def and taking the supremum over all such c with c_2 = 1, we deduce that
λ_max(X_i) ≤4 |I_l|/m v^2_l.
Hence, the matrix Chernoff bound (Theorem <ref>) gives that
( A^(l)^2_2 ≥1+t ) ≤|I_l| exp( - m v^2_l h(t)/4 |I_l| ),
where h(t) = (1+t) log(1+t) - t.
Now let ϵ_l = (3/π^2) ϵ / (l+1)^2, so that ∑^∞_l=0ϵ_l = ϵ / 2. We want to choose t = t_l so that ( A^(l)^2_2 ≥ 1+t) ≤ϵ_l. Using the bound m-est-krieg, we see that
( A^(l)^2_2 ≥1+t_l ) ≤| I_l| exp( - c_δ n log(4n/ϵ) h(t_l) v^2_l/|I_l| ) ≤ϵ_l,
provided
h(t_l) ≥| I_l| log(|I_l| / ϵ_l) /c_δ n log(4n/ϵ) v^2_l .
Now h is increasing and h(t) ≥ h(1) t for t ≥ 1. Therefore it suffices to take
t_l = | I_l| log(|I_l| / ϵ_l) /h(1) c_δ n log(4n/ϵ) v^2_l .
Now we recall that |I_l| = n 2^l, the definition of ϵ_l and c_δ≥ c_1 ≳1, to get, after some algebra,
1+t_l ≤c 2^l log(2^l (l+1)^2)/v^2_l
for some numerical constant c> 0. To summarize, we have shown that
( A^(l)^2_2 ≥c 2^l log(2^l (l+1)^2)/v^2_l ) ≤ϵ_l.
Taking the union bound and recalling that ∑^∞_l=0ϵ_l = ϵ/2, we deduce that
f - p_𝖽𝗂𝗌𝖼,𝗐 ≤∑^∞_l=0 A^(l)_2 c^(l)_2 ≤c ∑^∞_l=0 2^l/2 √(log(2^l(l+1)^2))/v_l e_2^l n(f) ≤c ∑^∞_l=0 2^l/2 (l+1)^3/2/v_l e_2^l n(f),
with probability at least 1-ϵ/2 and a potentially different numerical constant c. Now consider e_2^l n(f). The terms e_k(f) are monotonically nonincreasing in k. Therefore, for l = 1,2,… we have
n (2^l-1) (e_2^l n(f))^p ≤(e_n+1(f))^p + ⋯+ (e_2^l n(f))^p ≤∑_k > n (e_k(f))^p .
Hence
e_2^l n(f) ≤c_p 2^-l/p ( 1/n ∑_k > n (e_k(f))^p )^1/p, l = 1,2,….
We deduce that
f - p_𝖽𝗂𝗌𝖼,𝗐 ≤c_p [ e_n(f)/v_0 + ( ∑_k ≥n (e_k(f))^p )^1/p ( ∑^∞_l=1 2^l(1/2-1/p) l^3/2/v_l ) ]
≤c_p,θ ( ∑_k > n (e_k(f))^p )^1/p
where in the final step we used the fact that v_l = 2^-θ l for 0 < θ < 1/p-1/2 to deduce that the final sum converges. Substituting this into f-krieg-bound-1 now gives the result.
[Proof of Theorem <ref>]
Much like in Lemma <ref>, if emp-nondegen holds with α_w > 0 then the estimator f̂ given by general-least-squares satisfies (see <cit.>)
f - f̂_ ≤inf_p ∈ { f-p_ + 2/α_w f-p_𝖽𝗂𝗌𝖼,w } + 2/α_w e_2,w,
where, for convenience, we define e^2_2,w = ∑^C_c=11/m_c∑^m_c_i=1 w_c(θ_ic) e_ic^2__c and
g^2_𝖽𝗂𝗌𝖼,w = ∑^C_c=1 1/m_c ∑^m_c_i=1 w_c(θ_ic) L_c(θ_ic)(g)^2__c , g ∈_0.
Now let E be the event that α_w ≥ a/2, where a is as in nondegeneracy (the choice of 1/2 here is arbitrary), and consider the estimator f̂^𝗍𝗋. We argue similarly to the proofs of Lemma <ref> and Theorem <ref>. Since _σ is a contraction, we have
f - f̂^𝗍𝗋_ ≤f - f̂_ and f - f̂^𝗍𝗋_ ≤2 σ.
Now fix p ∈. Then
f - f̂^𝗍𝗋^2_ = ( f - f̂^𝗍𝗋^2_ | E ) (E) + ( f - f̂^𝗍𝗋^2_ | E^c ) (E^c)
≤2 f - p^2_ + 16/a^2 f - p^2_𝖽𝗂𝗌𝖼,w + 16/a^2 e^2_2,w + 4 σ^2 (E^c).
Now observe that
f - p^2_𝖽𝗂𝗌𝖼,w = ( ∑^C_c=1 1/m_c ∑^m_c_i=1 w_c(θ_ic) L_c(f-p)(θ_ic)^2__c )
= ∑^C_c=1 ∫_D_c w_c(θ_c) L_c(f-p)(θ_c)^2__c μ_c(θ_c)
= ∑^C_c=1 ∫_D_cL_c(f-p)(θ_c)^2__c ϱ_c(θ_c) ≤b^2 f-p^2_,
where in the last step we used nondegeneracy (Assumption <ref>). We also have
e^2_2,w = ∑^C_c=1 1/m_c ∑^m_c_i=1 e_ic^2__c ∫_D_c w_c(θ_c) μ(θ_c) = ∑^C_c=1 1/m_c ∑^m_c_i=1 e_ic^2__c,
Here we used the definition of the weight functions w_c and the fact that each ϱ_c is a probability measure. We deduce that
f - f̂^𝗍𝗋^2_ ≤2 f - p^2_ + 16 b^2/a^2 f - p^2_ + 16/a^2 e^2_2 + 4 σ^2 (E^c),
Hence, the result follows, provided (E^c) ≤ϵ.
To show that (E^c) ≤ϵ, we appeal to <cit.>, using conditions (b) and (c) defined therein. The remainder of the proof involves showing how to recast the setup considered in <ref> as a special case of that considered in <cit.>. Let m = m_1+⋯ + m_C. Now, for c = 1,…,C, let ^(c) be the distribution of operators in (_0,_c) defined by A^(c)∼^(c) if
A^(c)(f) = √(m/m_c) √(w_c(θ_c)) L_c(θ_c)(f), where θ_c ∼μ_c.
Now, following <cit.>, define _1,…,_m by _i = ^(c) if m_1+⋯ + m_c-1 < i ≤ m_1 + ⋯ + m_c for c = 1,…,C. Doing this, the setup of <ref> is a special case of <cit.> with the family = {_i}^m_i=1. In particular, nondegeneracy in the sense of <cit.> is implied by nondegeneracy.
We now apply <cit.>, and specifically, parts (b) and (c), with = ' and δ = 1/2. This implies that emp-nondegen holds, provided
m ≳a^-1/2 ·Φ(S('-') ; ) ·(log(2 d / ϵ) + n ),
or, with as in Assumption <ref>(ii),
m ≳a^-1/2 ·Φ(S() ; ) ·log(2 d n/ ϵ) .
where Φ is the so-called variation, as defined in <cit.> and, for ⊆_0, S() = { u / u_ : u ∈, u ≠ 0 }. Consider any such set . Using this and the definition of , we see that
Φ(S() ; ) = max_c=1,…,C Φ(S() ; ^(c) ) = max_c=1,…,C { m/m_c ess sup_θ_c ∼ϱ_c sup_u ∈\{ 0 } w_c(θ_c) L_c(θ_c)(u / u_)^2__c }.
Since L_c(θ_c) is linear, we deduce that
Φ(S() ; ) = max_c=1,…,C { m/m_c κ_w_c(; L_c) },
where κ_w_c is as in kappa-w-def-general. Using this and setting, respectively, = ' - ' or = we see that m-cond-gen-1 is equivalent to mc-cond-1 and m-cond-gen-2 is equivalent to mc-cond-2. The result now follows.
New gravity field of comet 67P/C-G based on Rosetta's Doppler and optical data

Julien Laurent-Varin^1 (corresponding author, [email protected]), Théo James^1,2, Jean-Charles Marty^1, Laurent Jorda^3, Sebastien Le Maistre^4,5, Robert Gaskell^6

^1 CNES, 18 avenue Edouard Belin, F-31401 Toulouse, France
^2 ESA, Keplerlaan 1, 2201 AZ Noordwijk, Netherlands
^3 Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France
^4 Royal Observatory of Belgium, Avenue Circulaire 3, BE-1180 Uccle, Belgium
^5 Université Catholique de Louvain, Place Louis Pasteur 3, BE-1348 Louvain-La-Neuve, Belgium
^6 Planetary Science Institute, 1700 East Fort Lowell, Suite 106, Tucson, AZ 85719-2395, USA

Author contributions: J. Laurent-Varin develops and maintains the GINS software, introduced the "landmark" measurement function into GINS, and carried out the simulations and produced the results of the study; T. James carried out the simulations and produced the results of the study.
§ ABSTRACT
The gravity field of a celestial body gives valuable insights into its fundamental properties such as its density and internal structure. The Doppler data collected by the Radio-Science Investigation (RSI) experiment of the Rosetta mission were previously used to determine the gravity field of comet 67P/Churyumov–Gerasimenko up to degree 2 <cit.>. In the present study we re-estimate the gravity field of 67P/C-G using not only RSI data as before, but also image data from Rosetta's OSIRIS camera. These data, converted into "landmark" observations, are complementary to RSI data.
Therefore, the analysis of combined Doppler and optical data results in a significant improvement in the restitution of Rosetta's orbit and the determination of the comet gravity field with respect to previous work. Some coefficients of the comet's gravity field are now resolved up to degree 4. The mass and low-degree estimates are in fairly good agreement with those previously published, but the improvement in their accuracy (i.e. lower sigmas) as well as the better resolution (i.e. maximum degree) of the new gravity field suggests that the distribution of mass in the nucleus may not be uniform, contrary to what was previously thought.
Moreover, we estimate a change in the mass of the comet attributed to ice sublimation at its orbital perihelion that is almost three times greater than that previously published. The new estimated mass loss is Δ M=28.0 ± 0.29 × 10^9 kg, corresponding to 0.28% of the total mass of the comet. Thanks to a precise determination of the degree-1 gravity coefficients, we observe for the first time a motion of the center of mass of the comet by ∼35 m northward that could be explained by a more pronounced outgassing activity in the south of the comet due to the orientation of its spin axis relative to the Sun.
The temporal evolution (before versus after perihelion) of the other estimated gravity coefficients and in particular degree-2 is more modest (0.8% for C_20 and 2% for C_22, S_22).
Comet 67P/C-G, Gravitational fields, Radio-Science, Landmark observations
September 9, 2024
=====================
§ INTRODUCTION
After a 10-year cruise, the Rosetta spacecraft <cit.> arrived at comet 67P/C-G (i.e., 67P/Churyumov–Gerasimenko) on August 6th, 2014, and orbited around its oddly-shaped nucleus for a little more than two years, until September 30th, 2016. The measurements acquired during that time are invaluable for exploring the fundamental characteristics of the nucleus (mass, density, porosity, composition). Among others, the Radio Science Investigation Experiment (RSI, <cit.>) onboard Rosetta was dedicated to these topics. These data are available at the European Space Agency's Planetary Science Archive (PSA)[https://www.cosmos.esa.int/web/psa/rosetta].
The exploitation of Doppler measurements led to many important results (see <cit.>). First of all, the mass of the nucleus was accurately determined (GM = 666.2 ± 0.2 m^3.s^-2). Combined with an accurate shape model of the comet, this allowed us to compute the average density of the body, providing insights into the porosity and overall dust-to-ice mass ratio <cit.>. Secondly, the mass loss between 2014 and 2016, resulting from the sublimation of ice during the perihelion passage (in Aug. 2015), was first estimated by <cit.> to Δ M=10.5 ± 3.4 × 10^9 kg, corresponding to 0.1% of the total mass. This measurement was important to constrain the dust-to-gas and refractory-to-ice mass ratios, as discussed in <cit.>, and revealed that most of the evaporated mass eventually fell back on 67P/C-G, leading to a global mass redistribution throughout the surface of the comet.
Finally, the degree-2 spherical harmonic coefficients of the gravity field have been estimated by <cit.> with statistical significance for C̅_20 and C̅_22. These values can be compared to those derived from shape models in order to deduce the level of heterogeneity inside the nucleus.
Several high-resolution shape models have been reconstructed from NAVCams and/or OSIRIS images <cit.>. We determined that all of them perfectly agree for the first few degrees of the gravity field, the difference remaining well below the formal errors of the spherical harmonics gravity coefficients.
We therefore choose to use the shape model of <cit.>, from which one can compute the gravity field of a homogeneous comet, where the mass is uniformly distributed across the nucleus. The position of the center of mass calculated with this hypothesis is shifted by (18 ± 7 m, -32 ± 4 m, 16 ± 10 m) with respect to the actual center of mass of the comet <cit.>, suggesting an inhomogeneous density distribution <cit.>.
The analysis of <cit.>, based on Rosetta Doppler measurements only, suggests that the mass is uniformly distributed (coefficients of a homogeneous comet are within the confidence interval of their estimated counterparts). However, several other studies, based on different kinds of data, suggest a non-uniform density distribution. For instance, the analysis of CONSERT data shows that the sub-surface (up to about 25 m) around the final landing site of the Philae lander is significantly denser than the deeper part of the nucleus <cit.>. Also, the analysis of the excited rotational state detected during the shape reconstruction <cit.> indicates that a uniform density is not compatible with the measured rotation and precession periods of the comet <cit.>.
Finally, a detailed three-dimensional model of the layers identified in the two lobes suggests that the small lobe was compressed during the impact which led to the formation of the bilobate nucleus of 67P <cit.>. This would imply that the more compact neck region would be denser than the two lobes.
Most of these small inhomogeneities have a signature in the gravity field which is too small to be observed in the measured field given its accuracy and resolution. Reducing uncertainties and increasing the degree of the spherical harmonic expansion of the field is the only way to detect mass anomalies from the orbit.
In fact, to some extent, such an improvement should still be possible using Rosetta data. Indeed, because, on occasion, the probe was as close as 7 km from the center of the nucleus, it is reasonable to assume that Rosetta's orbit is sensitive to degrees of the gravity field higher than 2. Furthermore, Doppler measurements are not the only kind of data that can be used for the Precise Orbit Determination (POD). Indeed, one can also use images of the comet to constrain the spacecraft position at the time of their acquisition, since such images can be reverted into positions of Rosetta in the nucleus frame.
Several tens of thousands of images of the nucleus acquired by the OSIRIS camera <cit.> have been used by the SPC software developed by <cit.> to reconstruct the global shape of the nucleus of 67P <cit.>. During the process, a huge set of stereo landmarks are defined at the surface of the comet, and their coordinates in the images as well as in the body-fixed frame are calculated.
These measurements are complementary to Doppler data. While Doppler measurements constrain the velocity along the line-of-sight (Earth-spacecraft direction), OSIRIS images constrain the position of the spacecraft relative to the comet. In other words, the Doppler measurements contain information about the speed of the spacecraft (in the line-of-sight) while the optical observations are anchors for the relative position of the comet on the trajectory. Landmarks have been increasingly used in the field of planetary geodesy for the last two decades. For small bodies especially, they have been found to efficiently decrease uncertainties that are typically rather high due to the low gravity: for instance for asteroids Eros (<cit.>, mission NEAR), Vesta (<cit.>, mission DAWN), or Bennu (<cit.>, mission OSIRIS-REx). In the case of the Rosetta mission, optical data have been used for navigation purposes <cit.> and also in the scientific study published by <cit.>. The latter estimated the gravity field coefficients up to degree 3 based on pre-perihelion data only. The data they used consist of both radiometric observations and NAVCAM images (less accurate than the OSIRIS scientific camera). <cit.> did not model the comet's outgassing, nor the degree 1 impact, and they (obviously) did not estimate different coefficients before and after perihelion.
Based on the above, we re-estimate the gravity field of comet 67P/C-G up to the highest degree achievable, using both Doppler and optical measurements. The GINS (see <cit.>) and DYNAMO software of the French Centre National d'Etudes Spatiales (CNES) are used for these calculations. In section <ref> we present our modeling work and the observations we use to fit the model. In section <ref> we detail the estimation methodology, while in section <ref> we show our results and in section <ref> we discuss the implications for the mass distribution within the nucleus as well as the composition of the coma.
§ DATA AND MODELS
This section summarizes our model of the spacecraft's dynamics, and presents the observations used in this study.
§.§ Periods of interest
We focus our analysis on two periods of the mission: one in late 2014/early 2015 before perihelion and one in mid-2016 after perihelion. During these two periods the sensitivity to 67P gravity is maximum since the distance from Rosetta to the center of the comet is minimum, most of the time smaller than 30 km, and sometimes as low as 7 km (see Fig. <ref>). In addition, the out-gassing activity of the comet is inversely proportional to the distance to the sun, therefore it was maximum between these periods. From a gravity estimation perspective, high out-gassing means that significant aerodynamic forces perturb the spacecraft trajectory and make the POD procedure too complex and uncertain. That's also why, for mission safety, the probe was moved away from the comet during the passage at perihelion, de facto reducing the sensitivity of the orbit to the gravity field. The combination of high out-gassing and large distance to the comet led us to exclude the perihelion period from February 2015 to April 2016 from our analysis. This leaves us with 132 days of data with a good sensitivity to the comet's gravity field.
Before any editing and selection, we start with 36 arcs for the pre-perihelion period and 60 arcs for the post-perihelion period. At the end of the study we kept 15 arcs pre-perihelion and 43 post-perihelion. Details of the arcs kept are given in the appendix (see Appendix <ref>).
§.§ Modeling
This sub-section brings together the physical modelling details and presents the forces and accelerations taken into account in the trajectory calculation. They include the gravity field of the central body (67P/C-G), Center of Mass offset, out-gassing accelerations, Solar-Radiation-Pressure (SRP) and third body accelerations (Sun, Jupiter, Earth, Moon, Venus, ...).
§.§.§ Shape
For the global shape of the nucleus, we use the latest SPC model <cit.>
[The model labeled “SPC shap8 v2.1”.] at a resolution of about 4 m (i.e., with 3.1 millions of facets).
The model is based on about 49,000 OSIRIS images (both NAC and WAC) which are analyzed to retrieve the topography of the nucleus.
The reconstruction is split into a total of about 25,000 small topographic units called “maplets”
[SPC maplets are squared elevation models of 99x99 elements, at a typical resolution of 1 m.] covering the entire surface of the comet.
During the analysis, a stereo landmark is defined at the center of each maplet.
These landmarks are used in SPC to retrieve the geometric information associated to each image.
The SPC analysis provides the body frame and image coordinates of the landmarks as well as the camera position and pointing direction for each image analyzed by the software.
All SPC data (shape and landmarks coordinates) are calculated in the “Cheops” reference frame <cit.>.
Other more accurate shape models exist (e.g. <cit.>), but we chose not to use them, firstly to keep a shape model consistent with the landmark definition, and secondly because the shape is used in our method only to initialise the gravity field under the assumption of uniform mass distribution. The resolution of the shape is significantly higher than that of the gravity field (even at degree 20), so the gravity field calculated from a more precise shape model gives essentially identical Stokes coefficients.
For the sake of clarity, we define three specific center points here: the Centre of Mass (CoM), the Centre of Reference (CoR) and the Centre of Figure (CoF). The CoM is the body's true physical centre of gravity, i.e. the barycentre of all the masses. The CoR is the centre of the frame of reference in which the landmarks and spherical harmonics of gravity are described, i.e. the Cheops frame of reference. Finally, the CoF is the centre of mass the body would have if the mass distribution were uniform within the shape.
§.§.§ 67P/C-G ephemeris, rotation and orientation
The ephemeris, orientation and rotation of the comet are extracted from the latest SPICE kernels <cit.> reported in Tab. <ref>.
As stated in the SPICE documentation, the orientation of 67P/C-G cannot be represented over a long period of time using the standard IAU formulation. Instead, it is recommended to use attitude kernels (CK) provided by the mission and archived in the SPICE repository to orient the comet over time. Thus, a discretisation of the comet's orientation has been constructed in the form of a quaternion table, and supplied to GINS.
The model of orientation of the spin axis of 67P in right ascension and declination (Fig. <ref>), based on the SPICE kernels (Tab. <ref>), is compared to that resulting from OSIRIS image analysis <cit.>. We observe piece-wise constant values in the SPICE model for the first part of the mission, while a more precise model is given after July 2016. The differences between the SPICE model and the OSIRIS reconstructions suggest that the comet's orientation is accurate to between a tenth of a degree and half a degree. Comet orientation errors can induce errors in the estimate of the gravity coefficients, which we avoid here by adding landmark data (helping to position the spacecraft in the body-fixed frame and therefore relative to its gravity potential) and adjusting the pointing of the OSIRIS camera.
As for the ephemeris of the comet, we used the one recommended by ESA because it showed better performance (i.e. smaller post-fit residuals, more arcs that converge) than the one provided by JPL <cit.>. We think that the lower performance obtained with the JPL ephemeris is inherent to the way it is constructed, fitting not only pseudo-distance measurements derived at given epochs of the Rosetta mission, but also terrestrial astrometric measurements. The so-obtained continuous orbit over a long period is therefore the best compromise between all these measurements but does not specifically optimise the trajectory of the comet at the time of Rosetta's 2-way Doppler measurements acquisition, as does the ESA ephemeris. It should also be pointed out that <cit.> used only the NASA DSN measurements in their processing (not the more numerous ESA ESTRACK measurements), together with optical measurements from NAVCAM, which are less accurate than those of the OSIRIS instrument.
§.§.§ Manoeuvres and wheel off-loadings
The trajectory of the spacecraft was controlled with regular manoeuvres, generating substantial velocity increments (Δ V). The magnitude of those Δ V is much larger than the gravity force and their values are not accurately known. Estimating them would be a hazardous task, therefore we design our arcs to exclude these maneuvers, which occur every few days. The exclusion of these maneuvers has been the driver of our arc-splitting procedure, leading to arc durations between 16 hours and 150 hours (see Tabs. <ref> and <ref>).
The spacecraft controls its attitude with reaction wheels which need to be periodically desaturated, leading to engine activation called wheel off-loading (WoL) maneuvers. Given the relatively old technologies onboard Rosetta, WoL maneuvers generate significant residual Δ Vs, which have to be taken into account in the POD. Since these WoL occur twice a day, it is not possible to avoid them in our computation arcs (the resulting arcs would then be too short to be sensitive enough to the comet's gravity field). Therefore, we estimate their values in the orbit determination process.
§.§.§ Comet gravity field
Despite its non-spherical shape, we model the gravitational potential of the central body using classical spherical harmonic expansion according to:
U(r,ϕ,λ) = GM/r [ C̅_00 + ∑^∞_l=1 (R/r)^l ∑^l_m=0 P̅_l,m(sinϕ) ( C̅_l,m cos mλ + S̅_l,m sin mλ ) ],
where r is the distance to the comet's CoR (in which the spherical harmonic expansion is defined) and ϕ,λ the latitude and longitude of the field point, respectively. G is the gravitational constant, M is the total mass of 67P/C-G and R is the equatorial radius of 2650.0 m, consistent with that of <cit.>. P̅_l,m is the fully normalized associated Legendre function of degree l and order m and C̅_l,m,S̅_l,m are the normalized Stokes coefficients. In this paper, all the reported values for the Stokes coefficients will be normalized. The normalisation factor is classically <cit.> computed as follows
N_lm=√((2-δ_0m)(2l+1)(l-m)!/(l+m)!),
where the Kronecker delta δ_0,m is equal to 1 for m=0, and 0 otherwise.
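As an illustration, the potential above can be evaluated with a short Python sketch; the GM, R and coefficient values below are placeholders, scipy's unnormalized associated Legendre functions are combined with the normalization factor N_lm, and the Condon-Shortley phase used by scipy is removed to match the geodetic convention.

```python
import numpy as np
from math import factorial
from scipy.special import lpmn

def nlm(l, m):
    """Normalization factor N_lm of the previous equation."""
    delta = 1.0 if m == 0 else 0.0
    return np.sqrt((2.0 - delta) * (2 * l + 1) * factorial(l - m) / factorial(l + m))

def potential(r, phi, lam, gm, r_eq, cbar, sbar, lmax):
    """Evaluate U(r, phi, lam) from fully normalized Stokes coefficients.

    cbar, sbar : (lmax+1, lmax+1) arrays of normalized coefficients;
    phi, lam   : body-fixed latitude and longitude in radians.
    """
    # Unnormalized associated Legendre functions P_l^m(sin(phi)), indexed [m, l]
    p, _ = lpmn(lmax, lmax, np.sin(phi))
    u = cbar[0, 0]                                   # degree-0 term (usually 1)
    for l in range(1, lmax + 1):
        for m in range(l + 1):
            # (-1)**m removes the Condon-Shortley phase used by scipy
            pbar = (-1.0) ** m * p[m, l] * nlm(l, m)
            u += (r_eq / r) ** l * pbar * (cbar[l, m] * np.cos(m * lam)
                                           + sbar[l, m] * np.sin(m * lam))
    return gm / r * u

# Placeholder usage with GM and R values of the order of those quoted in the text
gm, r_eq, lmax = 666.2, 2650.0, 4
c = np.zeros((lmax + 1, lmax + 1)); s = np.zeros_like(c)
c[0, 0] = 1.0
print(potential(10e3, 0.3, 1.0, gm, r_eq, c, s, lmax))
```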
In practice, the spherical harmonic expansion is done up to a given degree (l_max) which we arbitrarily set to 20. Fig. <ref> shows the Root Mean Square (RMS) power spectrum of the gravity field of 67P computed according to <cit.>:
P_l=√(∑_m=0^l (C̅_lm^2+S̅_lm^2)/2l+1),
where the C̅_lm,S̅_lm are deduced from the shape of the comet assuming uniform internal mass distribution (see supp.mat.).
As can be deduced from Fig. <ref>, the decay of the gravity spectrum follows a power law in K/l^4. This differs from most other bodies of the solar system, which follow Kaula's law in K/l^2, simply because 67P/C-G has a highly irregular shape, far from the spherical shape assumed in Kaula's equation.
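For reference, the RMS spectrum defined above and a K/l^n power-law fit of its decay can be computed with a few lines of Python; the synthetic coefficients used in the demonstration are placeholders, not the UMD values of the supplementary material.

```python
import numpy as np

def rms_spectrum(cbar, sbar, lmax):
    """RMS power per degree: P_l = sqrt( sum_m (C_lm^2 + S_lm^2) / (2l+1) )."""
    return np.array([np.sqrt((cbar[l, :l + 1] ** 2 + sbar[l, :l + 1] ** 2).sum() / (2 * l + 1))
                     for l in range(1, lmax + 1)])

def fit_power_law(p, lmax):
    """Least-squares fit of log P_l = log K - n log l over degrees 2..lmax, returns (K, n)."""
    l = np.arange(2, lmax + 1)
    a = np.vstack([np.ones(l.size), -np.log(l)]).T
    coef, *_ = np.linalg.lstsq(a, np.log(p[1:]), rcond=None)
    return np.exp(coef[0]), coef[1]

# Synthetic check: zonal coefficients chosen so that P_l = 0.1 / l^4
lmax = 20
c = np.zeros((lmax + 1, lmax + 1)); s = np.zeros_like(c)
for l in range(1, lmax + 1):
    c[l, 0] = 0.1 / l ** 4 * np.sqrt(2 * l + 1)
print(fit_power_law(rms_spectrum(c, s, lmax), lmax))   # approximately (0.1, 4.0)
```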
§.§.§ Solar Radiation Pressure
The acceleration experienced by the spacecraft due to solar radiation pressure depends on the thermo-optical properties of the surfaces that compose it. It is modeled as follows:
a_SRP = F_S Φ/c ·d_SE^2( d_SE/d_S)^2 ∑_k S_k R̅_k,
where F_S is the scale factor of the solar pressure force, c the speed of light, Φ the solar flux at 1 AU (Astronomical Unit: mean distance between the Sun and the Earth), d_SE one AU, d_S the distance between the spacecraft and the Sun, S_k the area of face k of the satellite and R̅_k the vectorised reflectivity coefficient of face k of the satellite (it depends on the reflectivity coefficients of the face and on the angle of incidence of the lighting; dimensionless). The set of facets defined by their specific shapes, surfaces and thermo-optical coefficients used in this formula composes the so-called macro-model of the spacecraft (a 2.8 m × 2.1 m × 2.0 m box and two 32.31 m^2 wings). The SRP is small, but cannot be neglected because the total area of the macro-model of Rosetta is quite large, 64 m^2, which induces an SRP acceleration of the order of a few 10^-9 m/s^2 (see Fig. <ref>).
To account for uncertainties in the spacecraft macro-model, a single SRP scaling factor is estimated for the entire mission. It can vary slightly around one, thus allowing to partially correct for small imperfections of the model.
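A simplified sketch of a facet-based SRP acceleration of this kind is given below; the macro-model is reduced to flat plates with a single lumped reflectivity coefficient per facet, and the areas, reflectivities and spacecraft mass are illustrative placeholders rather than the actual Rosetta macro-model and its thermo-optical coefficients.

```python
import numpy as np

PHI_1AU = 1361.0        # solar flux at 1 AU [W/m^2]
C_LIGHT = 299792458.0   # speed of light [m/s]
AU = 1.495978707e11     # astronomical unit [m]

def srp_acceleration(facets, sun_dir, d_sun, mass, f_scale=1.0):
    """Sum the SRP force over the illuminated facets and return the acceleration.

    facets  : list of (area [m^2], outward unit normal, lumped reflectivity)
    sun_dir : unit vector from spacecraft to Sun (same frame as the normals)
    d_sun   : spacecraft-Sun distance [m]
    """
    pressure = f_scale * PHI_1AU / C_LIGHT * (AU / d_sun) ** 2
    force = np.zeros(3)
    for area, normal, refl in facets:
        cos_i = float(np.dot(normal, sun_dir))
        if cos_i <= 0.0:              # facet not illuminated
            continue
        # absorbed part pushes away from the Sun, specular part along -normal
        force += -pressure * area * cos_i * ((1.0 - refl) * sun_dir
                                             + 2.0 * refl * cos_i * normal)
    return force / mass

# Illustrative configuration: one box face and two solar wings (placeholder values)
facets = [(2.8 * 2.1, np.array([1.0, 0.0, 0.0]), 0.3),
          (32.31, np.array([0.0, 0.0, 1.0]), 0.2),
          (32.31, np.array([0.0, 0.0, 1.0]), 0.2)]
print(srp_acceleration(facets, np.array([1.0, 0.0, 0.0]), 3.0 * AU, 2900.0))
```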
§.§.§ Outgassing-induced aerodynamic forces
Even though the selected arcs avoid periods of intense activity, some residual out-gassing still occurs during our periods of interest and has to be accounted for in the orbit computation. The interaction of the gas with the spacecraft generates aerodynamic forces in the direction of their relative motion. Since the velocity of the spacecraft in the body-fixed frame (few tens of cm/s) is very low compared to the out-gassing velocities (few hundred m/s), the resulting aerodynamic force is mostly centrifugal. Neglecting it would therefore result in a direct error in the gravity force, which is centripetal. We model the out-gassing velocity as <cit.>:
v = (-55.5 · r_h + 771) (1 + 0.171 · e^-(r_h-1.24)/0.13),
where v is the gas velocity (assumed to be radial only, oriented outwards) and r_h is the heliocentric distance of the nucleus. The numerical values are empirical constants resulting from the estimation of the gas density carried out using the ROSINA instrument. Indeed, the COPS experiment of ROSINA yields an estimate of the density of molecules around the nucleus (<cit.>).
Assuming that the mean molecular mass of these particles is the mass of water vapour H_2O (18 g/mol), we can then build a gas model as a radial wind emanating from the comet with a measured mass density. While this force can be compared to a drag force, its orientation is fundamentally different.
In <cit.>, it is stated that the velocity model has an error of less than 0.1%, and that the ROSINA water abundances have an uncertainty of 10%, meaning that the major part of the error on this aerodynamic force will be of the order of 10%. It should be emphasised that the contribution of this force is very small.
Nevertheless, the gas flow interacts with each face of the spacecraft described by its macro-model in accordance with the description in the document <cit.>. This acceleration due to aerodynamic forces has an expression similar to the solar radiation pressure acceleration, involving a contribution from each surface of the spacecraft model.
Fig. <ref> shows the respective magnitude of each force applied on the spacecraft. The outgassing-induced aerodynamic forces are indeed low compared to the gravity or the SRP, but not negligible, especially at low altitude.
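A direct transcription of the gas-velocity model above, together with the momentum flux it implies for a given local number density, might look as follows; the number density and heliocentric distance in the example are placeholders, the mean molecular mass follows the water-vapour assumption stated in the text, and the projection onto the individual facets of the macro-model is omitted.

```python
import numpy as np

M_H2O = 18.0e-3 / 6.02214076e23   # mass of one water molecule [kg]

def gas_velocity(r_h):
    """Radial outgassing velocity [m/s] as a function of heliocentric distance r_h [AU]."""
    return (-55.5 * r_h + 771.0) * (1.0 + 0.171 * np.exp(-(r_h - 1.24) / 0.13))

def momentum_flux(number_density, r_h):
    """Momentum flux rho * v^2 [Pa] of the radial gas flow, assuming a static spacecraft."""
    rho = number_density * M_H2O      # mass density from a COPS-like number density
    return rho * gas_velocity(r_h) ** 2

# Placeholder values: n = 1e13 molecules/m^3 at r_h = 1.5 AU
print(gas_velocity(1.5), momentum_flux(1e13, 1.5))
```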
§.§.§ Non inertial reference frame
The dynamic system representing the centre of the spacecraft is integrated in a reference frame assumed to be Galilean (or inertial). By construction, this frame of reference is centered on the CoR of the comet, which may differ from its CoM. The non-zero distance between CoR and CoM undermines the assumption that the integration frame of reference is inertial. An additional acceleration representing the movement of the CoR around the comet's CoM must therefore be taken into account.
Thus, as soon as the coefficients (C̅_11,S̅_11) are different from zero, the integration frame of reference loses its inertial character. In such a case, an additional acceleration γ⃗_1 of the following form must be taken into account:
γ⃗_1 = -Ω⃗× (Ω⃗×CO) - d Ω⃗/dt×CO,
where C is the comet's CoM and O is the CoR. Assuming that the comet's spin angular vector Ω⃗ remains constant during the time periods considered and oriented in the z direction, then the frame acceleration can be expressed as a function of the degree-1 coefficients according to:
γ⃗_1 = -Ω^2 √(3) R [ C̅_11; S̅_11; 0 ].
R=2650.0 m is the equatorial radius of the nucleus and Ω is its mean rotation rate, equal to 0.14072 × 10^-3 rad/s for the pre-perihelion period and to 0.14151 × 10^-3 rad/s for the post-perihelion period (values from <cit.>).
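A minimal sketch of this frame acceleration, assuming a constant spin axis along z as in the equation above, is given below; the degree-1 values in the example are hypothetical.

```python
import numpy as np

R_EQ = 2650.0   # equatorial radius [m]

def frame_acceleration(c11, s11, omega, r_eq=R_EQ):
    """Acceleration of the CoR frame induced by non-zero degree-1 coefficients:
    gamma_1 = -omega^2 * sqrt(3) * R * (C11, S11, 0)."""
    return -omega ** 2 * np.sqrt(3.0) * r_eq * np.array([c11, s11, 0.0])

# Hypothetical degree-1 values and the pre-perihelion spin rate quoted above
print(frame_acceleration(4.0e-3, -3.0e-3, 0.14072e-3))
```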
Although these coefficients are traditionally set to zero, we estimate the degree-1 coefficients here, and therefore their impact on the dynamics must be taken into account. For 67P/C-G, we expect these coefficients to have non-zero values varying with time because of ice sublimation processes which should induce movements of the CoM in the body-fixed frame during perihelion. Moreover, because the landmark data are sensitive to the position of the CoR, while the Doppler data provide information about the position of the CoM, the combination of these two data sets gives us sensitivity to degree-1 that can be used to determine the CoR-CoM offset.
Note that the acceleration γ⃗_⃗1⃗ is actually quite significant, of the order of 10^-9 m.s^-2 given the order of magnitude of our estimates of C̅_11 and S̅_11 (see Sec. <ref>), which is comparable to the comet higher degree gravity acceleration (see Fig. <ref>).
§.§.§ Accelerations magnitude
The complete dynamical model and the magnitudes of each type of accelerations is presented in Fig. <ref>. The ranges plotted in this figure encompass the accelerations amplitudes (or norm) of each of the arcs computed in this study.
The gravity field is broken down into two parts: the central contribution plus C̅_20, and the contribution of the other coefficients. As expected, the former are dominant. Nevertheless, there are also other important forces such as the solar radiation pressure, which thus needs to be precisely modeled.
As far as gravity is concerned, it should be recalled that the terms of degree d contribute to the acceleration of gravity proportionally to 1/r^(2+d). Therefore, reducing the distance to the comet by a factor of 2 increases the central term (degree 0) acceleration by a factor of 4, that of the terms of degree 1 by a factor of 8 and, more generally, the accelerations induced by the terms of degree d by a factor of 2^(2+d). This explains the large variability of the 67P gravitational accelerations undergone by Rosetta and shown in Fig. <ref>, resulting from the spacecraft altitude variation (Fig. <ref>).
The magnitude of the outgassing-induced aerodynamic forces and the non-inertial frame acceleration are small but comparable to high-degree gravity effects. They can be detected and thus estimated using Rosetta's RSI data. Indeed, an acceleration of 10^-10 m/s^2 acting over 6 days (which is the maximum duration of our arcs) can cause the velocity to vary by around 0.05 mm/s, corresponding to a signature in the Doppler signal of about 2.5 mHz, i.e. at the level of the measurement noise. Therefore, all the forces inducing accelerations above this threshold of 10^-10 m/s^2 are estimated while those below that threshold are kept fixed.
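This observability threshold can be verified with a quick back-of-the-envelope computation; the X-band downlink frequency used below (about 8.4 GHz) is a typical value assumed here, not a number quoted from the mission documentation.

```python
C_LIGHT = 299792458.0     # speed of light [m/s]
F_XBAND = 8.4e9           # assumed X-band downlink frequency [Hz]

accel = 1e-10             # perturbing acceleration [m/s^2]
arc = 6 * 86400.0         # longest arc duration [s]

dv = accel * arc                          # accumulated velocity change
doppler = 2.0 * dv / C_LIGHT * F_XBAND    # two-way Doppler signature [Hz]
print(dv, doppler)                        # ~5e-5 m/s and ~3 mHz, i.e. near the noise level
```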
§.§ Observations
§.§.§ Doppler measurements
Doppler tracking measurements are collected for the Rosetta mission as part of the RSI experiment. The data used in this study are two-way X-band Doppler observations averaged over 60 seconds (see <cit.>) and collected by the ESTRACK antennas located in New Norcia (Australia), Cebreros (Spain) and Malargüe (Argentina). After data calibration and editing (see Sec. <ref>), the nominal level of noise is typically σ_DOP = 3 mHz.
This somewhat limited Doppler precision is partly due to the orientation of Rosetta's orbital plane with respect to the line of sight, which remains unfavourable over the periods of interest, especially pre-perihelion where the orbit is close to a face-on configuration (beta angle around 20^∘, as shown in Fig. <ref>).
An edge-on configuration would have been more favorable to the orbit reconstruction process since the information contained in the Doppler measurements would have been stronger, likely leading to slightly flatter/smaller residuals.
For Doppler data, an important step is to correct the propagation delay for tropospheric perturbations. This correction is performed using the VMF1 model <cit.>.
§.§.§ Landmarks
Rosetta was equipped with two scientific cameras, the WAC (Wide Angle Camera) and the NAC (Narrow Angle Camera). They are both part of the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) instrument <cit.>, which acquired about 100,000 images during the whole mission.
A subset of about 49,000 OSIRIS images of the nucleus have been analyzed with the SPC software <cit.>. Among other products, the method provides landmarks coordinates in the body-fixed frame (see Fig. <ref> left panel). As described in section <ref>, these landmarks correspond to the center of small squared maplets in the SPC method.
The pixel coordinates of these landmarks are calculated from a stereo analysis and saved for each image having an intersection with the corresponding maplet.
An example of such landmark coordinates in a given image is shown in Fig. <ref> right panel. Since we also know their coordinates in the body-fixed reference frame, we can obtain information about the position of the spacecraft with respect to the nucleus at the time of acquisition of the images.
Finally, the stereo analysis also provides the orientation of the camera in the body-fixed frame for each OSIRIS image registered in SPC.
Although the SPC approach implements accurate distortion corrections, the archived SPC landmark coordinates refer to “level 1” images, which are not corrected for the optical distortion of the two cameras (NAC and WAC). It is therefore necessary to reprocess them with the model based on the in-flight geometric calibration of the cameras. The distortion is rather high, especially on the WAC, with displacements of tens of pixels at the edges. This correction is achieved by fitting 4th-order polynomial models whose coefficients are estimated by astrometric calibration of starfields during the commissioning phase of the mission.
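As an illustration of this step, a generic 4th-order polynomial correction applied to landmark pixel coordinates could look like the sketch below; the polynomial form, the optical-centre coordinates and the coefficient arrays kx, ky are placeholders for the calibrated NAC/WAC models, which are not reproduced here.

```python
import numpy as np

def undistort(x, y, kx, ky, x0=1024.0, y0=1024.0):
    """Map 'level 1' pixel coordinates to distortion-corrected coordinates.

    kx, ky : (5, 5) arrays of polynomial coefficients such that the corrected
             offset is sum_{i+j<=4} k[i, j] * dx**i * dy**j (placeholder model).
    """
    dx, dy = x - x0, y - y0
    xc = sum(kx[i, j] * dx ** i * dy ** j
             for i in range(5) for j in range(5) if i + j <= 4)
    yc = sum(ky[i, j] * dx ** i * dy ** j
             for i in range(5) for j in range(5) if i + j <= 4)
    return x0 + xc, y0 + yc

# Trivial example: only the linear terms are non-zero, i.e. an identity mapping
kx = np.zeros((5, 5)); ky = np.zeros((5, 5))
kx[1, 0] = 1.0; ky[0, 1] = 1.0
print(undistort(1500.0, 300.0, kx, ky))
```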
Since the position of the spacecraft is estimated at each iteration, it is necessary to also adjust its attitude relative to the comet. To achieve this, the camera frame is rotated to fit each picture using three estimated angles. Such adjustment of the camera's pointing also permits the correction of ± 30 arcsec errors relative to the commanded attitude of the spacecraft, which are due to thermo-elastic deformations of the spacecraft structure. The amplitude of these errors is quantified beforehand by comparing the star tracker quaternion with the pointing direction deduced from an astrometric analysis of OSIRIS starfield images acquired during the commissioning phase.
The attitude is re-estimated independently for each image, from the image itself, through small rotations estimated per image. This corrects small errors in the estimated distortion model and thus reduces the level of noise.
§ METHODOLOGY
To make use of these different types of data, a dynamical model is configured in the GINS software. The parameters of the model, primarily the gravity field coefficients, are then adjusted to fit the observations. The process is described in more details hereafter.
§.§ Preliminary gravity sensitivity study
Before estimating the gravity field coefficients, we need to determine which ones actually have an observable signature in the data. Estimating non-observable (or weakly observable) parameters will result in inaccurate estimates, possibly leading to an overall unrealistic (i.e. non-physical) solution. We thus perform a sensitivity study to identify the highest spherical harmonic degree and order that can be determined by Rosetta. For that, we apply the following procedure:
1. The longest arc with the closest distance to the comet (i.e. Arc N=09 in Tab. <ref> and Arc N=08 in Tab. <ref>) is approximated as well as possible using a purely Keplerian orbit (GM only).
2. The theoretical Doppler observations for this case are simulated using GINS.
3. Then, the orbit is perturbed by switching a single gravity field coefficient at a time from 0 to its `homogeneous value'.
4. The resulting Doppler observations are again simulated and compared to the unperturbed ones.
If the deviation is higher than the measurement noise, the coefficient is considered observable. If it is comparable or below, the coefficient will be either weakly observable or not observable at all, and should not be estimated.
Results of the sensitivity analysis are shown in Fig. <ref> for the C̅_lm. Results for the S̅_lm are basically the same.
Attention must be paid to the different approximations made for this sensitivity study. First, only Doppler observations were considered, which explains why degree-1 is ignored here. Adding landmarks will improve the solution and allow determining degree-1 coefficients.
Second, the gravity field used in this sensitivity study is based on the assumption of homogeneous mass distribution. If some coefficients turn out to be weaker or stronger than their homogeneous counterpart, the results of the sensitivity study could change a little. These results must therefore be interpreted only in terms of orders of magnitude.
Third, we quantify the influence of each gravity coefficient separately. The correlations are thus ignored and a linear combination of some of these coefficients could actually leave the Doppler signal unchanged. These results should therefore be considered as an achievable maximum.
Based on this study, we have decided to define two calculation cases:
* “Case 0/2" : in this first case we estimate the coefficients C̅_10, C̅_11, S̅_11, C̅_20, C̅_22 and S̅_22 over the entire mission (global parameters), and we estimate C̅_00 (i.e. the GM) before and after perihelion (period-dependent parameter).
* “Case 2/4" : in the second case we estimate the coefficients C̅_21, S̅_21, C̅_30, C̅_31, S̅_31 and C̅_40 over the whole mission (global parameters), while C̅_00, C̅_10, C̅_11, S̅_11, C̅_20, C̅_22 and S̅_22 are estimated separately either from data before perihelion or from those acquired after perihelion.
§.§ Measurements editing and arcs selection
Doppler measurements are affected by white noise of the order of a few milli-hertz. For various reasons (technical or operational), some data points are doubtful. An appropriate editing method would allow us to reject these bad or questionable observations, but it is not straightforward to decide which data points are “good” or not. In GINS, data editing is done in two steps. After comparison with the pre-fit orbits of Rosetta, residuals with a root mean square (RMS) larger than 5 Hz are clear outliers and discarded (this represents an average of less than 2% of measurements over the selected arcs). Additionally, measurements acquired when the spacecraft elevation as seen from the ground station is below the empirical threshold of 12^∘ are also discarded, since the accuracy of the tropospheric correction decreases with this angle.
The orbit is determined in an iterative process, in which dynamical parameters are fitted to the observations. At each iteration, GINS filters out additional data points with residuals larger than 3 σ of the residuals RMS. After the orbit has converged (i.e. the post-fit residuals RMS is stable), the entire arc is kept or not depending on the percentage of rejected observations (too much data eliminated is not acceptable because the orbit may then converge to an inaccurate solution). Only a handful of arcs (5 arcs before perihelion and 3 arcs after perihelion in total) are manually eliminated based on this criterion.
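The editing logic described above can be summarised by a small filter of the following kind; the thresholds are those quoted in the text, while the residual and elevation arrays are placeholders standing in for the actual GINS outputs.

```python
import numpy as np

def edit_doppler(residuals, elevations, elev_min_deg=12.0):
    """Return a boolean mask of accepted Doppler points.

    Screening      : reject clear outliers (|residual| > 5 Hz) and low elevations.
    Iterative step : reject points beyond 3 sigma of the RMS of the kept residuals.
    """
    keep = (np.abs(residuals) < 5.0) & (elevations > elev_min_deg)
    while True:
        rms = np.sqrt(np.mean(residuals[keep] ** 2))
        new = keep & (np.abs(residuals) < 3.0 * rms)
        if new.sum() == keep.sum():
            return new
        keep = new

# Placeholder data: 3 mHz white noise, a few 6 Hz outliers, random elevations
rng = np.random.default_rng(0)
res = rng.normal(0.0, 3e-3, 1000); res[::200] = 6.0
elev = rng.uniform(5.0, 60.0, 1000)
mask = edit_doppler(res, elev)
print(mask.sum(), "of", mask.size, "points kept")
```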
A second set of arcs has been rejected due to unexplained jumps in the Doppler signal of the order of 5 mHz at exactly midnight recording time. Since there is no way to know which data are correct (before or after the discontinuity), the whole arc is manually discarded when such a phenomenon occurs. This leads to the rejection of 2 arcs pre-perihelion and 2 post-perihelion.
A third set of 12 arcs between June 1st and July 20th, 2016, have all been rejected because of the numerous bad measurements identified during that period, which is thought to be essentially linked to the issue described above.
Finally, 16 arcs (15 pre- and 1 post-perihelion) with a minimum distance between the spacecraft and the comet greater than 25 km have been rejected due to low sensitivity to the gravity field.
Tab. <ref> summarises all the measurements a priori available and those actually used in the calculation for the two settings considered in this study, namely 'Case 0/2' and the 'Case 2/4' as defined above.
§.§ Parameters estimation procedure
§.§.§ Estimated Parameters setup
We now estimate the parameters of our model by fitting them to the observations using a least-squares inverse approach. The model has been described in the previous sections. The 13.8k estimated parameters, including the target gravity coefficients, are grouped in Tab. <ref>.
Spacecraft position and velocity components at the start of each arc are estimated. To these six initial state parameters, three additional ones are added for each of the 230 desaturations of the inertial wheels (one velocity increment per axis), to compensate for the small residual Δ V that they induce. The camera attitude is adjusted using three angles (α_X, α_Y, α_Z) representing rotations around the X, Y and Z camera axes, respectively. We introduce a weak constraint to limit the amplitude of the corrections on these angles.
The gravitational attraction of the comet is classically modeled using a spherical harmonics expansion as described above. The strategy of estimation of the C̅_lm,S̅_lm of the comet follows the definition of the cases in section <ref>. Un-estimated coefficients up to degree and order 20 are still introduced into our gravity field model (see Supplement material), but they remain fixed to the values given by the shape model under the assumption of a uniform mass distribution (to stay as close as possible to real flight conditions).
Finally, one scaling factor is estimated over the entire mission to calibrate the solar radiation pressure force and account for the uncertainties of the macro-model of the spacecraft (i.e. the bus and solar panels models).
§.§.§ Gravity field estimation procedure
As mentioned in section <ref>, some of the forces taken into account depend on parameters that are either estimated (e.g. SRP and Stokes coefficients up to degree 2 or 4) or fixed to their a priori/model value (e.g. out-gassing and third-body effects). The problem is solved using the GINS software developed at CNES <cit.>. GINS can propagate orbits and adjust parameters such as the gravity field to best fit the measurements. Because GINS can only adjust parameters arc by arc, it is combined with DYNAMO for multi-arc processing. DYNAMO is a tool belonging to the GINS software package which allows us to stack and solve systems of linear equations. The sequence of operations performed with the GINS/DYNAMO chain is as follows:
* GINS is first used to get an initial estimate of the arc-dependent (AD) parameters while the global (GP) and period-dependent (PD) parameters (such as the gravity field coefficients) are kept fixed. Using an iterative least-squares procedure, the initial position and velocity of the spacecraft are typically estimated at this stage of the POD process, along with other local parameters like the WoL Δ V and camera pointing.
* Once convergence has been achieved, GINS produces the normal equations including all parameters, i.e. AD, PD and GP like the gravity field and the SRP scale factor.
* These equations are then combined using a specific weighting based on Helmert’s method <cit.> and solved with a truncated SVD method <cit.>.
* Then, the new PD and GP parameters (i.e. the new gravity field and the new SRP scale factor) are injected back into GINS and a new "macro-iteration" (i.e. iteration of the complete GINS/DYNAMO chain) begins. Several macro-iterations are necessary since the problem is not linear.
For the orbit initialization, we extract a priori value of the spacecraft states at the beginning of each arc from the SPICE kernel (see Tab. <ref>). The gravity field is initialized using the shape model of the comet, computed using GILA software developed by <cit.> assuming homogeneous mass distribution inside the comet. Therefore if a homogeneous-density solution exists, we start nearby.
The so-obtained C̅_lm,S̅_lm up to degree and order 20 are reported in the Supplement material. The other estimated parameters are initialized at zero for WoL Δ V and the SRP scale factor is set to F_S=1 (see Eq. <ref>).
The problem is non-linear due to the odd shape of the comet and the relative scarcity of the measurements, and some weak constraints are added to keep the solution within reasonable limits. In GINS and DYNAMO, constraints are quadratic penalties in the solution space that guide solutions whose parameter values are too far from their expected values (as opposed to hard constraints that would completely prohibit certain parameter values). The following constraints are imposed: the SRP scale factor is 1 ± 1, the WoL residual Δ V are 0 ± 10^-5 m/s and the camera attitude adjustments are 0 ± 0.1. No constraints are imposed on orbital and gravitational parameters.
As mentioned above, a regularisation of the inversion of the normal equation is applied using the singular value truncation method <cit.>. The threshold value selected is: 10^7.
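A minimal sketch of the regularised inversion of the stacked normal equations with a truncated SVD is given below; the Helmert re-weighting of the individual contributions is omitted, and whether the 10^7 threshold is applied as a condition-number bound (as assumed here) or as an absolute cutoff on the singular values is an assumption of this sketch.

```python
import numpy as np

def solve_truncated_svd(n_mat, b, cond_max=1e7):
    """Solve N x = b, discarding singular values smaller than max(s) / cond_max."""
    u, s, vt = np.linalg.svd(n_mat)
    keep = s > s[0] / cond_max                # truncate ill-determined directions
    s_inv = np.where(keep, 1.0 / s, 0.0)
    x = vt.T @ (s_inv * (u.T @ b))
    pseudo_inv = vt.T @ np.diag(s_inv) @ u.T  # used as the formal covariance
    return x, pseudo_inv

# Placeholder 3-parameter system with one nearly singular direction
n_mat = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 1e-9]])
b = np.array([1.0, 2.0, 3.0])
print(solve_truncated_svd(n_mat, b)[0])
```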
§ RESULTS
This section gathers the numerical results of the two cases defined above: "Case 0/2" and "Case 2/4".
In the former, the degree 1 and 2 coefficients are global parameters adjusted over all the arcs, while degree 0 (i.e. GM) is a period-dependent parameter estimated once using the pre-perihelion data only and a second time using the post-perihelion data only. In the Case 2/4, degree 0, 1 and (partly) 2 are estimated per period, while the others are adjusted over the full set of data (PD and GP are separated using the 10σ boundary criterion shown in Figs. <ref>).
§.§ Adjustment of non gravitationnal parameters
On each arc, the initial position and velocity of the spacecraft are adjusted, along with the orientation of the camera for each picture taken and the residual Δ V for each wheel off-loading manoeuvre. The adjustment of each of these parameters tells us about the relevance of our initial hypotheses/models.
We observe a significant correlation between the corrections of the initial states of Rosetta and the distance to the comet (see Fig. <ref>). The closer the trajectory of the arc is to the center of the comet, the smaller the corrections in the initial position. This can be explained by the fact that our a priori values for the spacecraft initial position vector are taken from the SPICE kernel, provided by the ESOC navigation team, which included, just like us, both radiometric data and images (but coming from the navigation camera instead of OSIRIS like us) as described by <cit.>. As a result, both the orbit of ESOC/SPICE and ours should be comparable at short distance to the comet where images provide more accurate position of Rosetta with respect to the comet.
The 230 Δ V residuals of the adjusted WoLs for all arcs have a mean of zero, with a standard deviation of 0.4 mm/s. This is one order of magnitude below the noise, which gives us confidence in our solution. High Δ V estimates would have indicated a limitation of our dynamical model where neglected phenomena could have been wrongly absorbed by these parameters.
The corrections in the orientation of the OSIRIS cameras are on average a few tenths of a degree, with a standard deviation of 0.4 degrees. This is of same order of magnitude as the accuracy of the comet orientation model (see Fig. <ref> and associated discussions in Sec. <ref>), which again gives us confidence in our fits.
At the end of all the iterations, the SRP scaling factor is estimated to be F_S=1.000052 with a standard deviation of 4.6 × 10^-6. The fact that F_S is very close to 1 and that its standard deviation is very low tells us that the force is well observed and determined and that one scaling factor as a global parameter (i.e. used to fit solar pressure over the full set of measurements) is well suited for Rosetta.
§.§ Convergence of gravity coefficients
The results for our degraded case (case 0/2) are obtained in a single step of 12 iterations where the coefficient C̅_00 is estimated before and after perihelion and C̅_10, C̅_11, S̅_11, C̅_20, C̅_22 and S̅_22 are estimated over the two periods combined. It should be noted that all other coefficients are left at the values deduced from the shape.
The results for our nominal case (case 2/4) are obtained in two steps: for the first 12 iterations, coefficients C̅_00, C̅_10, C̅_11, S̅_11, C̅_20, C̅_22 and S̅_22 are estimated before and after the perihelion separately and C̅_21 and S̅_21 are estimated on both period combined. For 12 additional iterations, coefficients C̅_21 and S̅_21, C̅_30, C̅_31, S̅_31 and C̅_40 are estimated for both periods combined, and the previously estimated coefficients are left free, but constrained to the value obtained at the twelfth iteration with a weight of 3σ.
Fig. <ref> shows the evolution of our `Case 2/4' estimates of C̅_lm,S̅_lm over the iterations. The convergence profile for the `Case 0/2' is quite similar to that of the `Case 2/4'. As shown in the figure, all the parameters converged well, with a clear distinction between the pre- and post-perihelion estimates of C̅_00 and C̅_10. The coefficients C̅_20 before and after perihelion both converge to the same value, unlike the C̅_22 and S̅_22 coefficients, which differ by ∼1·10^-3 (i.e. ∼20σ) and ∼3.5·10^-3 (i.e. ∼70σ) respectively between the pre- and post-perihelion values. The coefficients C̅_21 and S̅_21 are released from the first step, but are not distinguished between the pre-perihelion and post-perihelion phases of the mission. While C̅_21 converges very quickly to a value that it maintains in the second step, the S̅_21 estimate undergoes a jump between the two steps of the algorithm. This is the only parameter that seems to be significantly affected by our two-step adjustment strategy. However, this observation must be put into perspective, because the amplitude of S̅_21 is at least an order of magnitude lower than the others.
§.§ State of the art comparison
Our new solution of 67P/C-G gravity field (Case 2/4) is compared to previous values in Tabs. <ref>, <ref> and <ref>. From our estimate of C̅_00 using data before (Pre-P.) and after (Post-P.) perihelion, we get the following values for the mass of the comet:
M_Pre-P. = (9.980 ± 0.00025) × 10^12 kg
M_Post-P. = (9.952 ± 0.00014) × 10^12 kg
Δ M = (28.00 ± 0.29) × 10^9 kg
Δ M is the mass lost during the perihelion pass. It represents about 0.28% of the total mass of the comet. This is more than two times larger than previous estimates of 0.1% <cit.>.
Our (classical) measurement modeling assumes that all the data points are affected by a decorrelated error, but this approximation is almost certainly wrong. In fact, Doppler measurements are all derived from the same electronic process and may be affected by coloured noise due to auto-correlation of the signal. As for the landmark measurements, batches of several hundred of them are identified per image (see Fig. <ref>), making them subject to a common degree of error. This simplification of the measurement modelling therefore leads to an underestimation of the formal standard deviations obtained after solving the least-squares problem. There is no way, except through a complicated simulation, to estimate these realistic correlations. The sigmas presented here are therefore very optimistic, and should be interpreted with great caution. A possible heuristic would be to multiply all the standard deviations by ten as done for Fig. <ref>, but we prefer to provide the raw GINS/DYNAMO outputs in the tables of this section, although the central values are truncated to significant figures (i.e. ±10 times the sigmas).
As shown in Tab. <ref> and Tab. <ref>, our estimates are generally speaking in good agreement with the results of <cit.>. Some discrepancies can nevertheless be observed in the solutions of C̅_11, S̅_11, C̅_21 and S̅_21, which can be partly explained by a different frame definition since newer SPICE kernels are used here. The reference frame is also strongly linked to the landmark definition.
Our higher-degree solutions are shown in Tab. <ref>. Since these coefficients were not estimated by <cit.>, we only compare them to those provided by <cit.>. Because the latter provided non-normalised Stokes coefficient estimates, relative to a different reference radius of 1000 m, we report here coefficients that have been converted to the same standard as the others, i.e. normalised and with a radius of 2650 m. The coefficients C̅_11, S̅_11 and C̅_21, S̅_21, not provided by these authors, have been set to 0 for the “Godard solution”. Note that <cit.> did not distinguish between pre- and post-perihelion, so did not estimate any loss of mass at perihelion. For these degree 3 and 4 coefficients, significant discrepancies are observed. For low degrees, the uniform distribution was indeed a legitimate conclusion given the accuracy of previous results. However, the increased accuracy here due to the addition of landmarks shows some deviations from the previous conclusion.
Fig. <ref> shows the power spectrum of our gravity solutions (computed with Eq. <ref>) superimposed with that of the UMD model and of the solution of <cit.>. The power spectrum of the standard deviations, also reported on that figure, were computed as follows:
σ_P_l=√(∑_m=0^l (σ_C̅_lm^2+σ_S̅_lm^2)/2l+1).
The significant improvement of our solution with respect to <cit.> in terms of standard deviation of the gravity coefficient estimates is clearly visible in Fig. <ref>. Although such an improvement is expected following the addition of landmark data in our process, we think that these formal errors are overoptimistic: the large number n of data points included in our fit (one million landmarks versus half a million Doppler measurements, see Tab. <ref>) are assumed decorrelated (as commonly done) in the way the formal errors are computed (with a decrease following a n^-1/2 power law), which we know is not entirely true. Furthermore, we observe in this figure an inflection of the formal errors of degree 3 and 4 which is due to the fact that only a subset of coefficients at these degrees is estimated (see Tab. <ref>).
§.§ Motion of the center of mass
Thanks to the addition of the landmarks, an accurate estimation of degree 1 coefficients is now possible and the shift between the CoR and the CoM can be inferred according to:
Δ x = C̅_11 R √(3)
Δ y = S̅_11 R √(3)
Δ z = C̅_10 R √(3).
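These relations translate directly into a few lines of code; the degree-1 values used below are hypothetical and only illustrate the conversion, not the estimates reported in Tab. <ref>.

```python
import numpy as np

R_EQ = 2650.0   # equatorial radius [m]

def com_offset(c10, c11, s11, r_eq=R_EQ):
    """CoR-to-CoM offset (dx, dy, dz) in metres from normalized degree-1 coefficients."""
    return np.sqrt(3.0) * r_eq * np.array([c11, s11, c10])

# Hypothetical pre- and post-perihelion degree-1 values
pre = com_offset(c10=-2.0e-3, c11=1.0e-3, s11=-3.0e-3)
post = com_offset(c10=5.6e-3, c11=1.0e-3, s11=-3.0e-3)
print(post - pre)   # a change of the z component of a few tens of metres
```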
For each solution of degree 1 reported in Tab. <ref>, we compute with the above equations the components of the CoR-CoM vector. Results are presented in Tab. <ref>.
Since they are derived from different computations, one must be cautious when comparing them, as they do not represent the same information.
For instance, the `Shape (UMD)' shift corresponds to the vector between the CoR in which the body shape is described and the centre of gravity that the body would have if the distribution of masses were uniform, i.e. the CoR-CoF offset. The shift 'Pätzold (D)', inferred from RSI measurements only <cit.>, corresponds to the CoR-CoM offset. Since RSI Doppler data are not physically linked to the body (as they only provide information about the projection of the spacecraft's velocity on the line of sight), the uncertainty on the CoR-CoM vector is very large.
Whereas the 'Case 0/2' only estimates a single value for degree 1, which is a sort of average over all the fitted arcs, the 'Case 2/4' distinguishes between values before and after perihelion. This allows us to observe a large variation in the comet's CoM location during its trajectory around the Sun. In particular, the CoM moved along the Z axis by +35.2 m.
The loss of mass and the redeposition of dust may explain this observation.
Indeed, the orientation of the comet as it passed through perihelion was such that the southern solstice took place 34 days after perihelion, implying a strong sunlight imbalance between the comet's northern and southern hemispheres. This should have resulted in a much more pronounced out-gassing in the south
shifting the CoM northward.
§.§ Correlations
The correlations between the estimated gravity coefficients are shown in Fig. <ref> for the ‘step 1’ and for the ‘step 2’ of our inversion procedure. A large majority of them are less than 0.1, which is quite satisfactory. For ‘step 1’, the largest correlations are between C̅_00, C̅_10 and C̅_20 post-perihelion coefficients. However, they are in absolute value between 0.22 and 0.31, which is still quite small and acceptable. For ‘step 2’, as expected, the coefficients with the highest degrees are the most correlated (up to 0.41). In fact, the greater the number of parameters for the same number of measurements, the more correlated the solutions obtained will be, especially if the signature of the added parameters in the measurements decreases, which is the case with degree 3 and 4 coefficients, where we believe we have reached the limit of accessible precision. It should be noted that the correlations of the lowest degree coefficients are further reduced with respect to ‘step 1’, as a consequence of the resolution strategy that adds constraints on degrees lower than 2 at ‘step 2’.
§.§ Residuals
RMS postfit residuals are good indicators of the quality of the fit and of the validity of the solutions. Those of Case 2/4 are provided in Tab. <ref> per data type. They are satisfactory, with mean values just above the measurement noise.
Fig. <ref> shows the residual profile of the Doppler measurements of arc #8 (representative of a long arc) before perihelion and arc #23 (representative of a short arc) after perihelion. The profiles are not perfectly flat (which is classical in POD), suggesting that there is still information in the data that our estimated solution does not fully represent. The trends observed can be explained by the comet's ephemeris, which is not adjusted in our case. An optimisation of the orbit of 67P/C-G could remove this remaining trend. Nevertheless, Fig. <ref> clearly shows the decrease of the postfit residuals with respect to the prefit residuals, revealing the improvement of our adjusted dynamical model and trajectory with respect to our a priori model, based on the UMD gravity field.
§ DISCUSSION
§.§ Added value of optical measurements
We here discuss and quantify the added value of the landmark measurements since they represent the main difference with respect to previous studies. The first indicator of the beneficial addition of landmarks is the number of converging arcs, which is approximately twice that obtained without landmark data, resulting in better resolution and precision of the gravity solution. Secondly, one can see in the data residuals that the quality of our orbit and dynamical model as a whole has clearly benefited from the addition of landmarks. Indeed, besides the improvement of our estimated model and trajectory over our a priori ones (Fig. <ref>), the improvement of our orbit with respect to that of ESOC is more subtle, but nevertheless perceptible as shown in Fig. <ref>. In this figure we plot the ratio between the residuals obtained with the orbit of ESOC and with ours. A ratio above one means that our orbit matches the measurements better. As one can see, our landmark residuals are 2 to 10 times smaller than those calculated with the ESOC orbit. Although both solutions are obtained from a combination of Doppler and image data, this result was expected since ESOC did not use the OSIRIS landmarks as such. What is even more interesting is that the Doppler residual ratios, generally closer to one, also show a notable improvement in a large majority of low-altitude orbits (the most sensitive to the gravity field), revealing the interest of including landmark data in the inversion process.
At the point of convergence, it is possible to calculate the normal equations according to each contribution: one normal equation for the Doppler measurements only, and one normal equation for the Doppler plus landmark measurements. An inversion of these two equations allows us to assess the linear impact of the landmark addition on the solution and its accuracy. Both calculations lead to parameter values that are statistically equivalent, but the standard deviations and correlations are very different, mostly because the number of measurements is drastically different between the two cases (∼ 470k Doppler measurements for both and ∼ 1M extra landmark measurements for the combined case), but also because the two types of data are complementary, and thus have different sensitivities to the estimated parameters. This is illustrated in Fig. <ref>, which shows the ratios per Stokes coefficient between the standard deviations of the Doppler-plus-landmark case and those of the Doppler-only case, for ‘step 1’ and ‘step 2’ separately.
For ‘step 1’, a strong impact of landmark measurements on the standard deviation of C̅_00 and C̅_10 is observed, whereas C̅_11 and S̅_11 are less dependent on the addition of landmark observations. This is because the latter are already well constrained by the dynamics induced by degree 1 (related to the non-inertial frame acceleration, see Sec. <ref>). The uncertainties of the degree 2 coefficients are also improved by the landmark measurements, but to a lesser extent than those of the C̅_00 and C̅_10.
For ‘step 2’, the standard deviations of the coefficients of degree ≤2 are constrained by the resolution strategy, so the impact of landmark measurements is only visible on the fully freed coefficients of degree 3 and 4. We observe a very strong improvement in the uncertainty of C̅_30, which is linked to the north/south asymmetry and is clearly visible thanks to the landmarks. C̅_40 represents information that is averaged over one orbit revolution, and is therefore slightly less observable even with a POD enhanced by the landmarks.
§.§ Bulk density and porosity
<cit.> interpreted their GM estimate in terms of bulk density, deducing limits on the porosity of the material making up the comet. As our estimate of GM is close to theirs (666.1 m^3/s^2 compared with 666.2 m^3/s^2), the estimate of the mean density and resulting constraints on the porosity remain unchanged.
§.§ Implications of mass loss
One of the main results of this study is the new mass loss (Δ M) estimate that is rather different from the value published by <cit.>, most likely due to our introduction of the frame acceleration (γ⃗_⃗1⃗).
The Δ M of <cit.> was used by <cit.> to determine the dust-to-gas ratio (δ_DG^V) in the coma. However, the value they obtained (i.e. δ_DG^V<1 for all volatiles) is incompatible with measurements from GIADA (see <cit.>), which reported δ_DG^V = 4 ± 2. We have reproduced the calculations carried out in <cit.> and made similar figures representing the ranges of plausible values obtained for the dust-to-gas ratio. Fig. <ref> shows the values that we obtain based on our comet mass loss estimate for the dust-to-water and dust-to-all-volatiles ratios, superimposed to those previously published. The ratios are obtained by combining our Δ M value with the loss of volatile elements, which can be observed in two ways: either in-situ or through remote sensing (see <cit.> for more details).
These two types of observations lead to fundamentally incompatible values for the dust-to-water and dust-to-all-volatiles ratios. Although this intrinsic discrepancy makes it hard to reach a firm conclusion, one can see in Fig. <ref> that the values derived from our Δ M estimate are overall closer to the GIADA measurements than those derived from the Δ M previously given in <cit.>. The ratios based on in-situ data are in good agreement with the previous solutions of <cit.> and <cit.>, and, to a lesser extent, with those deduced by <cit.> using the Δ M from <cit.>. These new findings could help put a better constraint on the dust-to-gas ratios.
§.§ (Non)-homogeneity of the nucleus
As demonstrated in Sec. <ref>, our estimated gravity field differs from that computed under the uniform-density assumption. In particular, a clear offset (several orders of magnitude larger than 1σ) is observed between the center of reference and the center of mass, revealing an incontestable level of heterogeneity in the comet. Such heterogeneity may have multiple origins, but the fact that we observe a displacement of the CoM away from the direction of the Sun during the passage of the comet at perihelion leads us to believe that ice sublimation is probably the mechanism responsible.
Just like degree-1 coefficients, the degree-2 coefficients can also be used to constrain the interior of the comet as they depend on its moments of inertia according to the following expressions
C̅_20 M R^2 √(5) = (I_xx+I_yy)/2 - I_zz
C̅_21 M R^2 √(5/3) = I_xz
S̅_21 M R^2 √(5/3) = I_yz
C̅_22 M R^2 √(5/12) = (I_yy-I_xx)/4
S̅_22 M R^2 √(5/12) = I_xy/2.
The retrieval of the moments of inertia from degree-2 is thus possible in theory, although this is an ill-posed problem because one cannot univocally determine the six moments of inertia (I_xx,I_yy,I_zz,I_xy,I_xz,I_yz) from the five estimated degree-2 coefficients.
When no additional information is available, one can interpret the estimated coefficients in terms of interior properties simply by comparing them to the coefficients computed under the UMD assumption. For degree 2 specifically, we cross-validate our own computation based on the shape model with that inferred from the above equations using the complete inertia matrix provided by <cit.> (see their Eq. (18)) under the same UMD assumption. The result of this calculation is reported in Tab. <ref>.
The very good agreement between these two sets of coefficients provides a robust reference against which estimated degree-2 parameters can be compared to discuss the level of heterogeneity in the comet.
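For reference, the forward relations above are straightforward to evaluate numerically; the Python sketch below returns the normalized degree-2 coefficients from moments of inertia expressed in units of M R^2, here taking as input the uniform-density values quoted further down in the text (a consistency check of the UMD reference, not a new result).

import numpy as np

def degree2_from_inertia(Ixx, Iyy, Izz, Ixy=0.0, Ixz=0.0, Iyz=0.0):
    # Normalized degree-2 Stokes coefficients from moments of inertia
    # given in units of M R^2, following the relations listed above.
    C20 = ((Ixx + Iyy) / 2.0 - Izz) / np.sqrt(5.0)
    C21 = Ixz / np.sqrt(5.0 / 3.0)
    S21 = Iyz / np.sqrt(5.0 / 3.0)
    C22 = (Iyy - Ixx) / 4.0 / np.sqrt(5.0 / 12.0)
    S22 = Ixy / 2.0 / np.sqrt(5.0 / 12.0)
    return C20, C21, S21, C22, S22

# Uniform-density moments quoted later in the text (principal axes assumed)
print(degree2_from_inertia(0.13607, 0.25115, 0.27031))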
In the case of 67P, we are fortunate to have additional information on its internal mass distribution. Indeed, as mentioned in the introduction, <cit.> concluded from the analysis of the rotational motion of 67P that the interior of the comet cannot be homogeneous. An interesting outcome of their study is the pair of linear equations (Eq. 10 in their paper) relating the main moments of inertia to each other. Combined with the above relations between C̅_20, C̅_22 and I_xx, I_yy, I_zz, these allow us to univocally recover I_xx, I_yy and I_zz from our estimates of C̅_20 and C̅_22. Accounting for the uncertainties in the model parameters of <cit.> (i.e. the difference between the coefficients of their two linear equations) and in our gravity estimates (at 10σ), we obtain the following ranges of moments of inertia for 67P:
I_xx / (M R^2) ∈ [0.00344,0.03973],
I_yy / (M R^2) ∈ [0.12170,0.15798],
I_zz / (M R^2) ∈ [0.12355,0.19141].
As expected given the difference between our estimated degree-2 coefficients and those reported in Tab. <ref>, these 10σ ranges of moments of inertia do not include their homogeneous counterparts (I^h_xx / (M R^2) = 0.13607, I^h_yy / (M R^2) = 0.25115, I^h_zz / (M R^2) = 0.27031), demonstrating from another, parameter-wise point of view the existence of large-scale heterogeneities inside the comet. A companion paper based on the method of <cit.> will discuss in more detail the inference of the internal properties of 67P from our new gravity field and the rotation state of the comet.
§ CONCLUSION
In this study we have reevaluated the gravity field of comet 67P/C-G by using optical observations in addition to traditional Doppler measurements. Thanks to the complementarity of these two types of data, both the accuracy and the resolution of the gravity field of the comet are significantly improved. The new field resulting from this analysis is estimated with statistical significance up to degree 4. While consistent with the previous solutions, the order-of-magnitude more precise field obtained here allows us to detect heterogeneities in the comet's nucleus that could not be observed with the less precise field. Two major results emerge from our analysis. The first is a mass loss due to ice sublimation at perihelion that is 2.8 times larger than previously estimated <cit.>. This leads to dust-to-water and dust-to-gas ratios that are in better agreement with those measured with GIADA, which may have significant implications, especially for the composition of the coma. The second major result is that we observe, for the first time, a displacement of the center of mass of the comet during its perihelion passage. Inferred from a precise determination of the degree-1 gravity coefficients and their variations between pre- and post-perihelion, this northward shift of ∼35 m could be explained by a more pronounced outgassing activity in the south of the comet than in the north, due to the orientation of its spin axis relative to the Sun.
Finally, this study highlights the benefits of combining radiometric and landmark-based techniques to better estimate the geodetic parameters of small bodies. More generally, the use of positional anchors such as landmarks and/or altimetry data (e.g. LIDAR) could prove essential for the precise orbit determination of spacecraft around small bodies, with the ultimate goal of probing their interior (e.g. <cit.>).
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Julien Laurent-Varin: Conceptualization, Methodology, Software, Formal analysis, Writing - Review & Editing, Visualization, Supervision
Théo James: Investigation, Methodology, Software, Writing - Original Draft, Writing - Review & Editing, Visualization
Jean-Charles Marty: Software, Methodology, Writing - Review & Editing
Laurent Jorda: Resources, Writing - Review & Editing
Sebastien Le Maistre: Resources, Writing - Review & Editing, Visualization, Formal analysis
Robert Gaskell: Resources, Writing - Review & Editing
§ DECLARATION OF COMPETING INTEREST
The authors declare that there is no competing interest.
§ ACKNOWLEDGMENTS
The authors thank A. Caldiero for his help and constructive discussions regarding the shape gravity field.
§ ARCS DEFINITION
Tables <ref> and <ref> give the details of each arc used in the calculations. For each arc, the start and end dates and the duration are given. The root-mean-square residuals of the Doppler and landmark measurements are also provided for the last iteration of 'Case 2/4'. Finally, the minimum, mean and maximum distances are provided for each arc.
Measurement of Λ_b^0, Λ_c^+ and Λ decay parameters using Λ_b^0 → Λ_c^+ h^- decays
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-EP-2024-200
LHCb-PAPER-2024-017
September 4, 2024
[Authors are listed at the end of this paper.]
§ ABSTRACT
A comprehensive study of the angular distributions in the bottom-baryon decays Λ_b^0 → Λ_c^+ h^- (h=π, K), followed by Λ_c^+ → Λ h^+ with Λ → pπ^- or Λ_c^+ → p K_S^0 decays, is performed
using a data sample of proton-proton collisions corresponding to an integrated luminosity of 9 fb^-1 collected by the LHCb experiment at center-of-mass energies of 7, 8 and 13 TeV.
The decay parameters and the associated charge-parity (CP) asymmetries are measured,
with no significant CP violation observed.
For the first time, the Λ_b^0 → Λ_c^+ h^- decay parameters are measured.
The most precise measurements of the decay parameters α, β and γ are obtained for Λ_c^+ → Λ h^+ decays, and
an independent measurement of the decay parameters for the strange-baryon decay Λ → pπ^- is provided.
The results deepen our understanding of weak decay dynamics in baryon decays.
Submitted to
Phys. Rev. Lett.
Hadronic weak decays of baryons provide an excellent platform for
studying baryon decay dynamics and the origin of the asymmetry between matter and antimatter <cit.>.
Among them, the decay of a spin-half baryon to a spin-half baryon and a pseudoscalar meson is of special interest.
For this type of decay, three decay parameters, first proposed by Lee and Yang to search for parity violation <cit.>, can be defined as
α ≡ 2Re(s^*p)/(|s|^2+|p|^2), β ≡ 2Im(s^*p)/(|s|^2+|p|^2), γ ≡ (|s|^2-|p|^2)/(|s|^2+|p|^2) ,
satisfying α^2+β^2+γ^2=1, where s and p denote the parity-violating S-wave and parity-conserving P-wave amplitudes, respectively.
The interference between the two amplitudes may
generate differences between the differential decay rates of baryons and antibaryons, allowing CP-violation phenomena to be probed via angular analyses <cit.>.
The amount of CP violation can be quantified by the asymmetries
A_α=(α+α̅)/(α-α̅) and
R_β=(β+β̅)/(α-α̅),
where α̅ and β̅ denote the decay parameters of the antibaryons, and should have signs opposite to their baryonic counterparts.
At leading order, these asymmetries are related to the weak and strong phase differences between the S- and P-wave amplitudes, Δϕ and Δδ, via the relations
A_α=-tanΔδtanΔϕ and
R_β=tanΔϕ <cit.>.
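These relations can be checked numerically; the Python sketch below builds α, β and γ from illustrative S- and P-wave amplitudes, forms the antibaryon parameters by flipping the weak phases (and the sign of the S-wave amplitude), and verifies the leading-order expressions for A_α and R_β. All numerical inputs are placeholders, not measured values.

import cmath, math

def decay_parameters(s, p):
    # Lee-Yang decay parameters from complex S- and P-wave amplitudes
    norm = abs(s)**2 + abs(p)**2
    alpha = 2.0 * (s.conjugate() * p).real / norm
    beta  = 2.0 * (s.conjugate() * p).imag / norm
    gamma = (abs(s)**2 - abs(p)**2) / norm
    return alpha, beta, gamma

mag_s, mag_p = 1.0, 0.8          # illustrative amplitude magnitudes
d_delta, d_phi = -0.45, 0.01     # illustrative strong / weak phase differences [rad]

s  = mag_s + 0j
p  = mag_p * cmath.exp(1j * (d_delta + d_phi))
sb = -(mag_s + 0j)                               # antibaryon: S-wave flips sign,
pb = mag_p * cmath.exp(1j * (d_delta - d_phi))   # weak phases change sign

alpha, beta, _   = decay_parameters(s, p)
alphab, betab, _ = decay_parameters(sb, pb)

A_alpha = (alpha + alphab) / (alpha - alphab)
R_beta  = (beta + betab) / (alpha - alphab)

assert math.isclose(A_alpha, -math.tan(d_delta) * math.tan(d_phi))
assert math.isclose(R_beta, math.tan(d_phi))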
Many phenomenological models have been used to calculate baryon decay parameters.
For some two-body beauty-baryon decays, factorization is assumed to hold in model calculations <cit.>, which predict α_{Λ_b} ≈ -1, consistent with the V-A nature of the weak current and maximal parity violation.
For charm-baryon decays, model calculations are complicated by the presence of nonfactorizable contributions and often do not agree with each other <cit.>.
For strange-baryon decays, nonfactorizable contributions may dominate, making theoretical calculations even more challenging <cit.>.
Decay parameters have been measured for several hyperon and charm-baryon decays <cit.>,
while beauty decays are much less explored.
The α parameter of the Λ → pπ^- decay was recently updated by the BESIII <cit.> and CLAS <cit.> collaborations, resulting in a significantly larger value than the previous world average <cit.>.
The α parameters of several decays were precisely measured by the FOCUS <cit.>, <cit.> and <cit.> collaborations,
while the precision of the β and γ measurements is still very limited <cit.>.
To date, there is no
decay parameter measurement for any decay to a baryon and a pseudoscalar meson, despite the observation of many such decay modes.
The decay parameter of the → decay was measured in proton-proton (pp) collisions at the <cit.>,
together with the polarization, which is found to be consistent with zero.
Moreover, the photon polarization of the →γ decay was measured by LHCb <cit.>, suggesting the dominance of left-handed photons.
In this Letter, the decay parameters and asymmetries of and decays are measured through an angular analysis.
Three Λ_c^+ decay modes are analyzed:
Λ_c^+ → p K_S^0,
Λ_c^+ → Λπ^+
and Λ_c^+ → Λ K^+, with the subsequent decays Λ → pπ^- and K_S^0 → π^+π^-.
The decay parameters and associated CP asymmetries of the
Λ_b^0, Λ_c^+ and Λ decays are determined simultaneously.
The analysis is performed using data from pp collisions at center-of-mass energies of √s = 7, 8 and 13 TeV,
corresponding to an integrated luminosity of 9 fb^-1 collected with the LHCb detector. Inclusion of charge-conjugate processes is implied, unless otherwise stated.
The LHCb detector, designed for the study of particles containing b or c quarks, is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, described in detail in Refs. <cit.>.
The online event selection for decays is performed by a trigger <cit.>, which consists of a hardware stage followed by a software stage <cit.>.
The hardware trigger requires a muon with high transverse momentum or a hadron, photon or electron with high transverse energy in the calorimeters.
The software trigger requires a
two-, three- or four-track secondary vertex with a significant displacement from any primary vertex (PV).
Simulated samples of decays are produced to optimize event selection, study potential backgrounds and model the detector acceptance.
These samples are generated using the software described in Refs. <cit.>.
The products of each decay in the cascades are distributed uniformly in the allowed phase space.
In the offline selection, all tracks in the final state are required to have a large transverse momentum and be inconsistent with being directly produced from any PV.
The and candidates are reconstructed using → and → decays,
where the final-state tracks are required to form a vertex with a good fit quality that is significantly displaced from any PV,
and their invariant mass is consistent with the known value <cit.>.
The () candidate is combined with a kaon/pion (proton) track to form the candidate.
The invariant mass is required to be within ±26 (20) of the known value <cit.> for the → and → (→) decays. The smaller mass region for the → decay is used to suppress the →(→γ) background, where the photon is not reconstructed.
The candidate is formed by combining a candidate with a kaon or pion.
The invariant mass, m(Λ_c^+ h^-), is required to be larger than 5500 MeV to reject background from partially reconstructed decays.
Two types of background peaking in the signal mass region are identified.
For the first type, or mesons are observed in the invariant-mass distributions of the two charged companion tracks of and decays.
The second type involves a genuine () decay reconstructed as the () decay.
These background candidates are suppressed using information from particle identification (PID) detectors or rejected by specific vetoes in the corresponding mass spectra.
A boosted decision tree (BDT) classifier implemented in the TMVA toolkit <cit.> is then used to separate the signal from the background of random combinations of final-state particles.
The BDT analysis is performed independently for the Λ_c^+ → p K_S^0 and Λ_c^+ → Λ h^+ decays.
Each BDT classifier is trained on simulated signal decays and background from data in the high-mass region m( h^-)>5900,
using a combination of kinematic, topological and isolation variables of the , , or hadrons.
In the final stage of the event selection,
a simultaneous optimization of the final-state PID and BDT classifier requirements is performed to maximize the figure of merit, N_S^2/(N_S+N_B)^3/2,
chosen to favour a high signal purity with small decay-parameter uncertainties.
Here,
N_S and N_B represent the signal and background yields in the signal region, chosen to be ±32 MeV around the known Λ_b^0 mass <cit.>, and are estimated using simulated signal decays and data in the high-mass region.
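As an illustration of this optimization, the Python sketch below scans a single BDT threshold and evaluates the figure of merit from toy efficiency curves; the curves and pre-selection yields are placeholders standing in for the simulation- and data-driven estimates used in the analysis.

import numpy as np

def figure_of_merit(n_sig, n_bkg):
    # N_S^2 / (N_S + N_B)^(3/2): favours a high signal purity
    return n_sig**2 / (n_sig + n_bkg)**1.5

cuts = np.linspace(-1.0, 1.0, 201)                    # candidate BDT thresholds
eff_sig = 1.0 / (1.0 + np.exp(8.0 * (cuts - 0.3)))    # toy signal efficiency vs. cut
eff_bkg = 1.0 / (1.0 + np.exp(4.0 * (cuts + 0.2)))    # toy background retention vs. cut

n_sig0, n_bkg0 = 1.0e5, 5.0e5                         # illustrative pre-cut yields
fom = figure_of_merit(n_sig0 * eff_sig, n_bkg0 * eff_bkg)
print("best cut:", cuts[np.argmax(fom)], "FoM:", fom.max())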
The invariant mass is calculated with a kinematic fit <cit.> constraining the masses of all intermediate particles to their known values
and the momentum to point back to its best-matched PV.
The invariant-mass distributions of the five significant cascade decays to
(), (), (), () and () final states,
where decay products are shown in brackets,
are shown in Fig. <ref> for candidates passing all selection criteria.
The signal yields of the five decays are determined to be
(8.635± 0.032)×10^4, (4.16±0.07)×10^3, (2.475±0.017)×10^4, (1.19±0.04)×10^3 and (1.010±0.034)×10^3, respectively, from
unbinned maximum-likelihood fits performed to the mass distributions.
The signal component is described by a Hypatia function <cit.> and
the combinatorial background by an exponential function.
The → decay misidentified as → decay, or vice versa, is also modelled by a Hypatia function,
whose parameters are fixed to those obtained from the simulated samples.
The relative yields of these cross-feed contributions are constrained using relative experimental efficiencies.
For every decay mode, the fit result is used to determine the weight for each candidate <cit.>, applied to subtract the background for the subsequent angular analysis.
The decay parameters are determined by analyzing the angular distributions of the cascade decays.
The angular variables are calculated with the
invariant mass constrained to the known value <cit.>.
The kinematics of the three-step cascade decays are fully described by five angular variables
Ω≡(θ_0,θ_1,ϕ_1,θ_2,ϕ_2), depicted in Fig. <ref>.
The variable θ_0 is the polar angle between the normal P⃗_z of the production plane formed by the beam and momenta in the laboratory frame,
and the momentum p⃗_ in the rest frame.
The variable θ_1 (θ_2) is the polar angle between p⃗_ (p⃗_) and p⃗_, where particle momenta are defined in the rest frames of the () and baryons, respectively.
The variable ϕ_1 (ϕ_2) is the angle between the () decay plane and the decay plane, spanned by the momenta of their respective decay products.
Similarly, for the two-step cascade decays, , the kinematics are described by three angular variables
Ω≡(θ_0,θ_1,ϕ_1), which are the same as the first three variables of the three-step cascade.
The angular distributions can be expanded through the helicity formalism <cit.>.
Based on previous studies at the LHC <cit.>,
the baryon is considered to be unpolarized,
in which case the angular distributions become uniform in θ_0 and ϕ_1.
The impact of polarization is considered as a source of systematic uncertainty.
The reduced angular distributions are thus expressed as
d^3Γ/dcosθ_1dcosθ_2dϕ_2 ∝ (1 + α_{Λ_b}α_{Λ_c}cosθ_1
+ α_{Λ_c}α_{Λ}cosθ_2
+ α_{Λ_b}α_{Λ}cosθ_1cosθ_2
- γ_{Λ_c}α_{Λ_b}α_{Λ}sinθ_1sinθ_2cosϕ_2
+ β_{Λ_c}α_{Λ_b}α_{Λ}sinθ_1sinθ_2sinϕ_2),
for Λ_c^+ → Λ h^+ decays,
and
dΓ/dcosθ_1 ∝ 1 + α_{Λ_b}α_{Λ_c}cosθ_1,
for Λ_c^+ → p K_S^0 decays,
where the subscript of each decay parameter denotes the decaying particle.
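For toy studies, the reduced distributions above can be transcribed directly; the Python sketch below evaluates the unnormalized densities for a given set of decay parameters (the numerical values used here are placeholders, and the acceptance and normalization entering the actual fit are omitted).

import numpy as np

def density_three_angle(cth1, cth2, phi2, a_lb, a_lc, b_lc, g_lc, a_lam):
    # Unnormalized reduced density for Lb0 -> Lc+ h-, Lc+ -> Lambda h+, Lambda -> p pi-
    # (unpolarized Lb0, acceptance omitted)
    sth1 = np.sqrt(1.0 - cth1**2)
    sth2 = np.sqrt(1.0 - cth2**2)
    return (1.0
            + a_lb * a_lc * cth1
            + a_lc * a_lam * cth2
            + a_lb * a_lam * cth1 * cth2
            - g_lc * a_lb * a_lam * sth1 * sth2 * np.cos(phi2)
            + b_lc * a_lb * a_lam * sth1 * sth2 * np.sin(phi2))

def density_one_angle(cth1, a_lb, a_lc):
    # Unnormalized density for the two-step cascade (Lc+ -> p KS0)
    return 1.0 + a_lb * a_lc * cth1

pars = dict(a_lb=-0.98, a_lc=-0.75, b_lc=0.35, g_lc=-0.55, a_lam=0.75)  # placeholders
print(density_three_angle(0.2, -0.4, 1.0, **pars))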
The decay parameters in this analysis
are determined from simultaneous unbinned maximum-likelihood fits to the five cascade decays,
imposing the constraint (α_{Λ_c})^2 + (β_{Λ_c})^2 + (γ_{Λ_c})^2 = 1.
The β_{Λ_c} and γ_{Λ_c} parameters are related to the α_{Λ_c} and Δ parameters by
β_{Λ_c} = √(1 - α_{Λ_c}^2) sinΔ, γ_{Λ_c} = √(1 - α_{Λ_c}^2) cosΔ,
where Δ is the phase difference between the two helicity amplitudes of the Λ_c^+ → Λ h^+ decay.
This leads to two equivalent sets of fit parameters for a → h^+ decay.
The fit is performed for each set of parameters independently to directly determine their values and uncertainties.
To test CP violation, an additional joint fit of the baryon and antibaryon samples is applied, with CP-related fit parameters given by the asymmetries A_α and R_β and the corresponding averaged decay parameters.
At leading order, the weak and strong phase differences are determined using R_β=tanΔϕ and R'_β=tanΔδ <cit.>.
The logarithm of the likelihood function (logℒ) is constructed as
logℒ(ν) = ∑_k=1^5(
𝒞_k
∑_i=1^N_kw_k,i×log[ 𝒫_k(Ω_k^i|ν)]),
where ν is the set of decay parameters, Ω is the set of angular variables,
and 𝒫(Ω|ν) represents the signal probability density function (PDF).
The subscript k runs over the five cascade decays,
and the subscript i runs over all the N_k candidates of the k-th decay.
The weight w_k,i in the logℒ is used to remove the contribution of background candidates <cit.>, while
the constants 𝒞_k≡∑_i∈data_kw_k,i/∑_i∈data_kw_k,i^2 are scale factors needed to correct the obtained statistical uncertainties <cit.>.
The signal PDF 𝒫_k(Ω_k|ν) is formulated as
𝒫_k(Ω_k|ν) = ϵ_k(Ω_k) · f_k(Ω⃗_k|ν)/∫dΩ_k ϵ_k(Ω_k) · f_k(Ω⃗_k|ν),
where f_k(Ω⃗_k|ν) represents the angular distribution given in
Eq. <ref> or <ref>,
and ϵ_k(Ω_k) is the angular acceptance.
The denominator is calculated numerically with the Monte Carlo integration method <cit.>,
using the corresponding simulated signal decays after the full selection.
The distributions of the transverse momentum and pseudorapidity, and the number of
tracks per event in the simulation samples are corrected to match those in data.
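The essential ingredients of this background-subtracted fit can be sketched compactly: per-candidate weights multiply the log-density, the normalization integral is replaced by an average over accepted simulated decays, and the factor C_k rescales the likelihood so that the returned uncertainties are approximately correct. The Python toy below (with placeholder data, weights and a single fitted product of decay parameters) is only meant to illustrate the mechanics.

import numpy as np

rng = np.random.default_rng(1)

def density(cth, a_prod):
    # Unnormalized one-angle density, 1 + a_prod * cos(theta_1)
    return 1.0 + a_prod * cth

def weighted_nll(a_prod, cth_data, weights, cth_sim):
    # sWeighted negative log-likelihood with Monte-Carlo normalization
    scale = weights.sum() / np.sum(weights**2)        # C_k = sum(w) / sum(w^2)
    norm = density(cth_sim, a_prod).mean()            # MC estimate of the integral
    return -scale * np.sum(weights * np.log(density(cth_data, a_prod) / norm))

# Toy sample generated with a_true, flat "accepted simulation", unit-mean weights
a_true = 0.74
cand = rng.uniform(-1.0, 1.0, 40000)
keep = rng.uniform(0.0, 1.0, cand.size) < density(cand, a_true) / (1.0 + abs(a_true))
cth_data = cand[keep]
weights = rng.normal(1.0, 0.3, cth_data.size)
cth_sim = rng.uniform(-1.0, 1.0, 200000)

grid = np.linspace(-0.95, 0.95, 191)
nll = np.array([weighted_nll(a, cth_data, weights, cth_sim) for a in grid])
print("best-fit value:", grid[np.argmin(nll)])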
In Fig. <ref>, the angular distributions of and decays are shown, superimposed by the fit result.
Distributions for all decays are provided in Ref. <cit.>.
A binned test between the data and the fit gives a p-value of 28%.
Various sources of systematic uncertainty on the decay parameters are studied.
Possible biases introduced by the angular fit method are evaluated using pseudoexperiments. Mass and angular distributions of pseudosamples are generated according to the baseline fit results, and then the whole fit procedure is repeated to extract decay parameters.
The parameter's systematic uncertainty is taken to be the mean of its pull distribution times its nominal statistical uncertainty.
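Schematically, the bias extracted from such an ensemble of pseudoexperiments can be computed as below; the arrays of fitted values and uncertainties would come from refitting each generated pseudosample, and the numbers used here are placeholders.

import numpy as np

def bias_from_pulls(fitted, errors, true_value, nominal_stat_uncertainty):
    # Systematic uncertainty assigned as: mean pull x nominal statistical uncertainty
    pulls = (np.asarray(fitted) - true_value) / np.asarray(errors)
    return np.mean(pulls) * nominal_stat_uncertainty

# Placeholder ensemble of pseudoexperiment results for one decay parameter
rng = np.random.default_rng(7)
fitted = rng.normal(-0.98, 0.02, 500)   # fitted values in 500 pseudoexperiments
errors = np.full(500, 0.02)             # per-pseudoexperiment uncertainties
print(bias_from_pulls(fitted, errors, true_value=-0.98, nominal_stat_uncertainty=0.02))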
The method is used to subtract the background,
hence the choice of the invariant-mass fit model introduces systematic uncertainties.
These are estimated by repeating the invariant-mass fit with alternative fit models,
including alternative descriptions of mass-shape functions and removing the constraints on yields,
then using the corresponding updated weights to determine decay parameters.
As the PID variables in simulation samples are calibrated to match data <cit.>,
the uncertainty on the calibration procedure introduces systematic uncertainties which are estimated with alternative calibration configurations.
The limited size of simulation samples introduces an uncertainty on the efficiency propagated to the decay parameters,
which is estimated with bootstrapped pseudoexperiments <cit.>.
The influence of the production asymmetry of the baryons and of the detection asymmetries of the final-state particles <cit.> is taken into account.
Following the prescription of measurements <cit.>,
these asymmetries are introduced in the angular acceptance,
and the angular fit is repeated to verify their impact on the measurements.
The polarization of baryons is considered as a source of systematic uncertainty.
The angular fit is repeated with additional terms in the PDF incorporating the transverse polarization measured by <cit.>
(see appendix for details on this PDF).
The impact of the experimental angular resolution is considered as a systematic uncertainty and found to be negligible.
The spin of the baryon undergoes a precession in the magnetic field of the detector,
which modifies its angular distribution depending on the decay length <cit.>.
The systematic uncertainty arising from the precession is examined using pseudoexperiments,
and found to be negligible.
A summary of the contributions from the various sources is given in Ref. <cit.>.
The systematic uncertainties from different sources are added in quadrature,
resulting in totals that are smaller than the statistical uncertainties.
The results are listed in Table <ref> for the α parameters of the Λ_b^0, Λ_c^+ and Λ decays,
and in Table <ref> for the β and γ parameters of Λ_c^+ → Λ h^+ decays.
The CP-related parameters are also obtained,
and no CP violation is found.
This is the first measurement of the parity-violating parameters of two-body beauty-baryon decays into a spin-half baryon and a pseudoscalar meson.
The measured Λ_b^0 decay parameters are close to -1,
suggesting that the Λ_c^+ baryons in Λ_b^0 → Λ_c^+ h^- decays are almost fully longitudinally polarized, which reflects the V-A nature of weak decays and supports the factorization hypothesis in theoretical calculations <cit.>.
The decay parameters are consistent with, and more precise than, the <cit.> and <cit.> results.
The α_{Λ_c} parameters are found to deviate significantly from -1, which may suggest that nonfactorizable contributions are substantial in hadronic decays of charm baryons.
The β, γ and Δ parameters of Λ_c^+ → Λ h^+ decays are precisely measured for the first time,
and will serve as essential inputs to theoretical models <cit.>.
The weak and strong phase differences are determined to be
Δϕ=0.01± 0.02 rad and Δδ =-0.448± 0.017 rad for the Λ_c^+ → Λπ^+ decay,
and Δϕ=-0.03± 0.15 rad and Δδ =-0.57± 0.19 rad for the Λ_c^+ → Λ K^+ decay,
where a possible ambiguity of +π rad due to the inverse tangent function is not included.
The α parameter and the corresponding CP asymmetry of the Λ → pπ^- decay obtained in this analysis are consistent with the BESIII results <cit.>.
In conclusion, based on pp collision data collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb^-1,
a comprehensive study of the angular distributions in Λ_b^0 → Λ_c^+ h^- cascade decays
is performed.
The analysis provides the first measurements of the decay parameters for Λ_b^0 → Λ_c^+ h^- decays, and the most precise measurements of the Λ_c^+ decay parameters.
The weak and strong phase differences for Λ_c^+ → Λ h^+ decays are also determined.
The CP asymmetries between the decay parameters of baryon and antibaryon decays are studied, and no hint of CP violation is observed.
The results provide valuable insights into the weak decay dynamics of baryons.
§ ACKNOWLEDGEMENTS
We express our gratitude to our colleagues in the CERN
accelerator departments for the excellent performance of the LHC. We
thank the technical and administrative staff at the LHCb
institutes.
We acknowledge support from CERN and from the national agencies:
CAPES, CNPq, FAPERJ and FINEP (Brazil);
MOST and NSFC (China);
CNRS/IN2P3 (France);
BMBF, DFG and MPG (Germany);
INFN (Italy);
NWO (Netherlands);
MNiSW and NCN (Poland);
MCID/IFA (Romania);
MICIU and AEI (Spain);
SNSF and SER (Switzerland);
NASU (Ukraine);
STFC (United Kingdom);
DOE NP and NSF (USA).
We acknowledge the computing resources that are provided by CERN, IN2P3
(France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands),
PIC (Spain), GridPP (United Kingdom),
CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil),
and Polish WLCG (Poland).
We are indebted to the communities behind the multiple open-source
software packages on which we depend.
Individual groups or members have received support from
ARC and ARDC (Australia);
Key Research Program of Frontier Sciences of CAS, CAS PIFI, CAS CCEPP,
Fundamental Research Funds for the Central Universities,
and Sci. & Tech. Program of Guangzhou (China);
Minciencias (Colombia);
EPLANET, Marie Skłodowska-Curie Actions, ERC and NextGenerationEU (European Union);
A*MIDEX, ANR, IPhU and Labex P2IO, and Région Auvergne-Rhône-Alpes (France);
AvH Foundation (Germany);
ICSC (Italy);
Severo Ochoa and María de Maeztu Units of Excellence, GVA, XuntaGal, GENCAT, InTalent-Inditex and Prog. Atracción Talento CM (Spain);
SRC (Sweden);
the Leverhulme Trust, the Royal Society
and UKRI (United Kingdom).
§ END MATTER
§ ANGULAR DISTRIBUTIONS
The helicity formalism is employed to describe the angular distributions of the decays in this Letter.
For the decay of a spin-half baryon to a spin-half baryon and a pseudoscalar
meson, two helicity amplitudes are involved with the respective couplings H_±,
where the subscript represents the sign of the helicity of the final-state spin-half baryon.
The helicity couplings are related to the S-wave (s) and P-wave (p) couplings as s=(H_+ + H_-)/√(2) and p=(H_+-H_-)/√(2).
The decay parameters are defined using the helicity amplitudes as
α = (|H_+|^2 - |H_-|^2)/(|H_+|^2 + |H_-|^2), β = √(1 - α^2) sinΔ, γ = √(1 - α^2) cosΔ,
where Δ = (H_+ / H_-) is the phase angle difference between the two helicity amplitudes.
The angular distribution is determined by the sum of all possible helicity amplitudes as
dΓ/dΩ∝ |M|^2 = ∑_λ_0, λ'_0, λ_nρ_λ_0, λ'_0 M_λ_0, λ_n M^*_λ'_0, λ_n,
where λ_0^(') and λ_n run over the helicities of the initial and final baryons, ρ_λ_0, λ'_0 is the polarization density matrix of the decaying baryon,
and M_λ_0, λ_n, M^*_λ'_0, λ_n are the amplitude matrix elements.
For the baryon promptly produced in pp collisions,
the possible polarization is expected to be perpendicular to the production plane due to parity conservation in strong interactions.
Defining the polarization axis as the z-axis, and the magnitude of the polarization as P_z,
the polarization density matrix is expressed as
ρ = [ 1 + P_z 0; 0 1-P_z ].
§.§ Angular distribution for decays
For decays,
the helicity amplitude is determined as
M_λ_b, λ_ = ∑_λ_c H^b_λ_c d^1/2_λ_b, λ_c(θ_0) ·
H^c_λ_ e^iλ_cϕ_1 d^1/2_λ_c, λ_(θ_1),
where d^J_λ, λ'(θ) is the Wigner d-matrix,
λ_b, λ_c and λ_ refer to the helicities of , and baryons, and
H^b_λ_c and H^c_λ_ are the helicity couplings of and decays.
The total amplitude squared is calculated by
|M|^2 ∝∑_λ_[ (1 + P_z) · |M_1/2, λ_|^2 + (1 - P_z) · |M_-1/2, λ_|^2],
which leads to
d^3Γ/dcosθ_0dcosθ_1dϕ_1 ∝ 1 + α_{Λ_b}α_{Λ_c}cosθ_1
+P_z· (α_{Λ_b}cosθ_0 + α_{Λ_c}cosθ_0cosθ_1
- γ_{Λ_b}α_{Λ_c}sinθ_0sinθ_1cosϕ_1
+ β_{Λ_b}α_{Λ_c}sinθ_0sinθ_1sinϕ_1),
where α_{Λ_b}, β_{Λ_b} and γ_{Λ_b} are the decay parameters defined by H_±^b, and
α_{Λ_c} is the decay parameter related to H_±^c.
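A direct numerical transcription of this helicity sum is given below (Python); the Wigner d^{1/2} matrix is written out explicitly and the helicity couplings and angles are placeholders. The same structure extends to the three-step cascade by appending one more coupling and rotation.

import numpy as np

def wigner_d_half(theta):
    # d^{1/2}_{m',m}(theta), rows/columns ordered as (+1/2, -1/2)
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s],
                     [s,  c]])

def density_two_step(theta0, theta1, phi1, Hb, Hc, Pz):
    # |M|^2 summed over helicities with the diagonal polarization density matrix;
    # Hb = (H^b_+, H^b_-), Hc = (H^c_+, H^c_-) are placeholder helicity couplings
    d0, d1 = wigner_d_half(theta0), wigner_d_half(theta1)
    idx = {+1: 0, -1: 1}                    # 2*helicity -> matrix index
    rho = {+1: 1.0 + Pz, -1: 1.0 - Pz}
    total = 0.0
    for lam_f in (+1, -1):                  # final-state baryon helicity (x2)
        for lam_b in (+1, -1):              # initial-state helicity (x2)
            amp = 0.0 + 0.0j
            for lam_c in (+1, -1):          # intermediate-baryon helicity (x2)
                amp += (Hb[idx[lam_c]] * d0[idx[lam_b], idx[lam_c]]
                        * Hc[idx[lam_f]] * np.exp(0.5j * lam_c * phi1)
                        * d1[idx[lam_c], idx[lam_f]])
            total += rho[lam_b] * abs(amp)**2
    return total

Hb = (0.1, 1.0)                             # placeholder couplings (alpha close to -1)
Hc = (0.4, 1.0 * np.exp(0.3j))
print(density_two_step(0.7, 1.1, 0.5, Hb, Hc, Pz=0.0))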
§.§ Angular distribution for decays
For decays,
the relevant angles are (θ_0, θ_1, ϕ_1, θ_2, ϕ_2), which are defined in Fig. <ref>.
The helicity amplitude is expressed as
M_λ_b, λ_ = ∑_λ_c H^b_λ_c d^1/2_λ_b, λ_c(θ_0) ·
H^c_λ_s e^iλ_cϕ_1 d^1/2_λ_c, λ_s(θ_1) ·
H^s_λ_ e^iλ_sϕ_2 d^1/2_λ_s, λ_(θ_2),
where λ_s refers to the helicity of baryons, and
H^c_λ_s and H^s_λ_ are the helicity couplings of and decays.
The total amplitude is calculated by Eq. <ref>, which leads to
d^5Γ/dcosθ_0dcosθ_1dϕ_1dcosθ_2dϕ_2
∝ (1+cosθ_1
+cosθ_2
+cosθ_1cosθ_2
-sinθ_1sinθ_2cosϕ_2
+sinθ_1sinθ_2sinϕ_2)
+P_z · (cosθ_0
+cosθ_0cosθ_1
+cosθ_0cosθ_2
+cosθ_0cosθ_1cosθ_2
-sinθ_0sinθ_1cosϕ_1
+sinθ_0sinθ_1sinϕ_1
-cosθ_0sinθ_1sinθ_2cosϕ_2
+cosθ_0sinθ_1sinθ_2sinϕ_2
-sinθ_0sinθ_1cosθ_2cosϕ_1
+sinθ_0sinθ_1cosθ_2sinϕ_1
+sinθ_0sinθ_2cosϕ_1cosϕ_2
+sinθ_0sinθ_2cosϕ_1sinϕ_2
+sinθ_0sinθ_2sinϕ_1cosϕ_2
+sinθ_0sinθ_2sinϕ_1sinϕ_2
-sinθ_0cosθ_1sinθ_2cosϕ_1cosϕ_2
+sinθ_0cosθ_1sinθ_2cosϕ_1sinϕ_2
+sinθ_0cosθ_1sinθ_2sinϕ_1cosϕ_2
-sinθ_0cosθ_1sinθ_2sinϕ_1sinϕ_2),
where
is the decay parameter related to H_±^s.
§ PRL JUSTIFICATION
This letter presents a comprehensive study of parity and charge-parity (CP) violation in decays of bottom, charm and strange baryons using the angular analysis method initially proposed by T.D. Lee and C.N. Yang. The major results include the first measurement of the decay parameters for → h^+ (h=K,π) decays, the most precise determinations of the decay parameters α, β, γ
for →Λ^0 h^+ (h=K,π) decays, α for the → decay,
and the associated CP asymmetries, as well as an independent confirmation of the decay parameter
α (Λ^0→ pπ^-) measured by the BESIII experiment, which was found to be significantly higher than the previous world average.
These results support the factorization hypothesis in theoretical calculations of beauty-baryon hadronic decays, and indicate the importance of nonfactorizable contributions in hadronic decays of charm baryons.
This is the first study of its kind at LHCb and at a hadron collider, demonstrating LHCb's great potential to study CP violation in baryon decays via the angular analysis approach.
§.§ Word count
* Text (captions included): 3022
* Math display: 128
* Tables: 188.5
* Figures: 431
* All: 3770
Supplemental material
§ SUMMARY OF SYSTEMATIC UNCERTAINTIES
The various sources of systematic uncertainties on the decay parameter measurements, including fit procedure, mass fit model, PID calibration, limited size of simulation samples, production and detection asymmetries and polarization, are summarized in Table <ref>, Table <ref> and Table <ref>.
The total systematic uncertainties correspond to the sum in quadrature of all sources.
§ PROJECTIONS OF ANGULAR DISTRIBUTIONS
Projections of angular distributions and fit results for various decays studied in the Letter are shown in Figs. <ref>, <ref>, <ref>, <ref> and <ref>.
§ CORRELATION MATRICES FOR PARAMETERS
Correlation matrices between decay parameters without accounting for systematic uncertainties are provided in Tables <ref>-<ref>.
The correlation matrix between -related parameters is provided in Table <ref>.
Correlation matrix for the CP-related parameters.
1.000 -0.033 0.159 -0.013 -0.326 0.030 0.026 -0.153 -0.057 0.018 0.009 -0.025 -0.668 0.025 0.405 0.044
1.000 -0.007 0.158 -0.026 -0.326 -0.097 -0.031 -0.022 -0.054 -0.010 -0.014 -0.011 -0.666 -0.032 -0.404
1.000 -0.164 -0.077 0.036 -0.017 -0.029 -0.004 -0.009 -0.002 -0.004 -0.214 0.001 -0.101 -0.001
1.000 -0.039 -0.078 -0.015 -0.024 -0.004 -0.009 -0.002 -0.003 -0.005 -0.213 -0.000 -0.100
1.000 -0.023 -0.020 -0.531 0.019 -0.007 -0.003 -0.005 0.220 -0.022 -0.122 -0.012
1.000 0.367 0.005 0.018 0.008 0.003 0.005 0.019 0.219 0.009 0.122
1.000 0.043 -0.010 0.004 0.002 0.004 -0.102 -0.015 0.065 0.010
1.000 -0.010 0.001 0.002 0.003 -0.018 0.023 0.065 0.015
1.000 0.095 -0.077 -0.428 0.038 -0.015 -0.033 -0.017
1.000 0.147 0.025 0.011 0.036 0.013 0.032
1.000 0.188 0.006 0.036 0.010 0.010
1.000 -0.017 0.009 0.013 0.015
1.000 -0.005 -0.275 -0.023
1.000 0.021 0.273
1.000 0.046
1.000
LHCb collaboration
R. Aaij^360000-0003-0533-1952,
A.S.W. Abdelmotteleb^550000-0001-7905-0542,
C. Abellan Beteta^49,
F. Abudinén^550000-0002-6737-3528,
T. Ackernley^590000-0002-5951-3498,
A. A. Adefisoye^670000-0003-2448-1550,
B. Adeva^450000-0001-9756-3712,
M. Adinolfi^530000-0002-1326-1264,
P. Adlarson^800000-0001-6280-3851,
C. Agapopoulou^130000-0002-2368-0147,
C.A. Aidala^810000-0001-9540-4988,
Z. Ajaltouni^11,
S. Akar^640000-0003-0288-9694,
K. Akiba^360000-0002-6736-471X,
P. Albicocco^260000-0001-6430-1038,
J. Albrecht^180000-0001-8636-1621,
F. Alessio^470000-0001-5317-1098,
M. Alexander^580000-0002-8148-2392,
Z. Aliouche^610000-0003-0897-4160,
P. Alvarez Cartelle^540000-0003-1652-2834,
R. Amalric^150000-0003-4595-2729,
S. Amato^30000-0002-3277-0662,
J.L. Amey^530000-0002-2597-3808,
Y. Amhis^13,470000-0003-4282-1512,
L. An^60000-0002-3274-5627,
L. Anderlini^250000-0001-6808-2418,
M. Andersson^490000-0003-3594-9163,
A. Andreianov^420000-0002-6273-0506,
P. Andreola^490000-0002-3923-431X,
M. Andreotti^240000-0003-2918-1311,
D. Andreou^670000-0001-6288-0558,
A. Anelli^29,o0000-0002-6191-934X,
D. Ao^70000-0003-1647-4238,
F. Archilli^35,u0000-0002-1779-6813,
M. Argenton^240009-0006-3169-0077,
S. Arguedas Cuendis^9,470000-0003-4234-7005,
A. Artamonov^420000-0002-2785-2233,
M. Artuso^670000-0002-5991-7273,
E. Aslanides^120000-0003-3286-683X,
R. Ataíde Da Silva^480009-0005-1667-2666,
M. Atzeni^630000-0002-3208-3336,
B. Audurier^140000-0001-9090-4254,
D. Bacher^620000-0002-1249-367X,
I. Bachiller Perea^100000-0002-3721-4876,
S. Bachmann^200000-0002-1186-3894,
M. Bachmayer^480000-0001-5996-2747,
J.J. Back^550000-0001-7791-4490,
P. Baladron Rodriguez^450000-0003-4240-2094,
V. Balagura^140000-0002-1611-7188,
W. Baldini^240000-0001-7658-8777,
L. Balzani^180009-0006-5241-1452,
H. Bao^70009-0002-7027-021X,
J. Baptista de Souza Leite^590000-0002-4442-5372,
C. Barbero Pretel^45,820009-0001-1805-6219,
M. Barbetti^25,l0000-0002-6704-6914,
I. R. Barbosa^680000-0002-3226-8672,
R.J. Barlow^610000-0002-8295-8612,
M. Barnyakov^230009-0000-0102-0482,
S. Barsuk^130000-0002-0898-6551,
W. Barter^570000-0002-9264-4799,
M. Bartolini^540000-0002-8479-5802,
J. Bartz^670000-0002-2646-4124,
J.M. Basels^160000-0001-5860-8770,
S. Bashir^380000-0001-9861-8922,
G. Bassi^33,r0000-0002-2145-3805,
B. Batsukh^50000-0003-1020-2549,
P. B. Battista^13,
A. Bay^480000-0002-4862-9399,
A. Beck^550000-0003-4872-1213,
M. Becker^180000-0002-7972-8760,
F. Bedeschi^330000-0002-8315-2119,
I.B. Bediaga^20000-0001-7806-5283,
N. A. Behling^180000-0003-4750-7872,
S. Belin^450000-0001-7154-1304,
V. Bellee^490000-0001-5314-0953,
K. Belous^420000-0003-0014-2589,
I. Belov^270000-0003-1699-9202,
I. Belyaev^340000-0002-7458-7030,
G. Benane^120000-0002-8176-8315,
G. Bencivenni^260000-0002-5107-0610,
E. Ben-Haim^150000-0002-9510-8414,
A. Berezhnoy^420000-0002-4431-7582,
R. Bernet^490000-0002-4856-8063,
S. Bernet Andres^430000-0002-4515-7541,
A. Bertolin^310000-0003-1393-4315,
C. Betancourt^490000-0001-9886-7427,
F. Betti^570000-0002-2395-235X,
J. Bex^540000-0002-2856-8074,
Ia. Bezshyiko^490000-0002-4315-6414,
J. Bhom^390000-0002-9709-903X,
M.S. Bieker^180000-0001-7113-7862,
N.V. Biesuz^240000-0003-3004-0946,
P. Billoir^150000-0001-5433-9876,
A. Biolchini^360000-0001-6064-9993,
M. Birch^600000-0001-9157-4461,
F.C.R. Bishop^100000-0002-0023-3897,
A. Bitadze^610000-0001-7979-1092,
A. Bizzeti^0000-0001-5729-5530,
T. Blake^550000-0002-0259-5891,
F. Blanc^480000-0001-5775-3132,
J.E. Blank^180000-0002-6546-5605,
S. Blusk^670000-0001-9170-684X,
V. Bocharnikov^420000-0003-1048-7732,
J.A. Boelhauve^180000-0002-3543-9959,
O. Boente Garcia^140000-0003-0261-8085,
T. Boettcher^640000-0002-2439-9955,
A. Bohare^570000-0003-1077-8046,
A. Boldyrev^420000-0002-7872-6819,
C.S. Bolognani^770000-0003-3752-6789,
R. Bolzonella^24,k0000-0002-0055-0577,
N. Bondar^420000-0003-2714-9879,
A. Bordelius^470009-0002-3529-8524,
F. Borgato^31,p0000-0002-3149-6710,
S. Borghi^610000-0001-5135-1511,
M. Borsato^29,o0000-0001-5760-2924,
J.T. Borsuk^390000-0002-9065-9030,
S.A. Bouchiba^480000-0002-0044-6470,
M. Bovill^620009-0006-2494-8287,
T.J.V. Bowcock^590000-0002-3505-6915,
A. Boyer^470000-0002-9909-0186,
C. Bozzi^240000-0001-6782-3982,
A. Brea Rodriguez^480000-0001-5650-445X,
N. Breer^180000-0003-0307-3662,
J. Brodzicka^390000-0002-8556-0597,
A. Brossa Gonzalo^45,55,44,†0000-0002-4442-1048,
J. Brown^590000-0001-9846-9672,
D. Brundu^300000-0003-4457-5896,
E. Buchanan^57,
A. Buonaura^490000-0003-4907-6463,
L. Buonincontri^31,p0000-0002-1480-454X,
A.T. Burke^610000-0003-0243-0517,
C. Burr^470000-0002-5155-1094,
A. Butkevich^420000-0001-9542-1411,
J.S. Butter^540000-0002-1816-536X,
J. Buytaert^470000-0002-7958-6790,
W. Byczynski^470009-0008-0187-3395,
S. Cadeddu^300000-0002-7763-500X,
H. Cai^72,
A. C. Caillet^15,
R. Calabrese^24,k0000-0002-1354-5400,
S. Calderon Ramirez^90000-0001-9993-4388,
L. Calefice^440000-0001-6401-1583,
S. Cali^260000-0001-9056-0711,
M. Calvi^29,o0000-0002-8797-1357,
M. Calvo Gomez^430000-0001-5588-1448,
P. Camargo Magalhaes^2,y0000-0003-3641-8110,
J. I. Cambon Bouzas^450000-0002-2952-3118,
P. Campana^260000-0001-8233-1951,
D.H. Campora Perez^770000-0001-8998-9975,
A.F. Campoverde Quezada^70000-0003-1968-1216,
S. Capelli^290000-0002-8444-4498,
L. Capriotti^240000-0003-4899-0587,
R. Caravaca-Mora^90000-0001-8010-0447,
A. Carbone^23,i0000-0002-7045-2243,
L. Carcedo Salgado^450000-0003-3101-3528,
R. Cardinale^27,m0000-0002-7835-7638,
A. Cardini^300000-0002-6649-0298,
P. Carniti^29,o0000-0002-7820-2732,
L. Carus^20,
A. Casais Vidal^630000-0003-0469-2588,
R. Caspary^200000-0002-1449-1619,
G. Casse^590000-0002-8516-237X,
J. Castro Godinez^90000-0003-4808-4904,
M. Cattaneo^470000-0001-7707-169X,
G. Cavallero^24,470000-0002-8342-7047,
V. Cavallini^24,k0000-0001-7601-129X,
S. Celani^200000-0003-4715-7622,
D. Cervenkov^620000-0002-1865-741X,
S. Cesare^28,n0000-0003-0886-7111,
A.J. Chadwick^590000-0003-3537-9404,
I. Chahrour^810000-0002-1472-0987,
M. Charles^150000-0003-4795-498X,
Ph. Charpentier^470000-0001-9295-8635,
E. Chatzianagnostou^360009-0009-3781-1820,
C.A. Chavez Barajas^590000-0002-4602-8661,
M. Chefdeville^100000-0002-6553-6493,
C. Chen^120000-0002-3400-5489,
S. Chen^50000-0002-8647-1828,
Z. Chen^70000-0002-0215-7269,
A. Chernov^390000-0003-0232-6808,
S. Chernyshenko^510000-0002-2546-6080,
X. Chiotopoulos^770009-0006-5762-6559,
V. Chobanova^790000-0002-1353-6002,
S. Cholak^480000-0001-8091-4766,
M. Chrzaszcz^390000-0001-7901-8710,
A. Chubykin^420000-0003-1061-9643,
V. Chulikov^420000-0002-7767-9117,
P. Ciambrone^260000-0003-0253-9846,
X. Cid Vidal^450000-0002-0468-541X,
G. Ciezarek^470000-0003-1002-8368,
P. Cifra^470000-0003-3068-7029,
P.E.L. Clarke^570000-0003-3746-0732,
M. Clemencic^470000-0003-1710-6824,
H.V. Cliff^540000-0003-0531-0916,
J. Closier^470000-0002-0228-9130,
C. Cocha Toapaxi^200000-0001-5812-8611,
V. Coco^470000-0002-5310-6808,
J. Cogan^120000-0001-7194-7566,
E. Cogneras^110000-0002-8933-9427,
L. Cojocariu^410000-0002-1281-5923,
P. Collins^470000-0003-1437-4022,
T. Colombo^470000-0002-9617-9687,
M. C. Colonna^180009-0000-1704-4139,
A. Comerma-Montells^440000-0002-8980-6048,
L. Congedo^220000-0003-4536-4644,
A. Contu^300000-0002-3545-2969,
N. Cooke^580000-0002-4179-3700,
I. Corredoira ^450000-0002-6089-0899,
A. Correia^150000-0002-6483-8596,
G. Corti^470000-0003-2857-4471,
J.J. Cottee Meldrum^53,
B. Couturier^470000-0001-6749-1033,
D.C. Craik^490000-0002-3684-1560,
M. Cruz Torres^2,f0000-0003-2607-131X,
E. Curras Rivera^480000-0002-6555-0340,
R. Currie^570000-0002-0166-9529,
C.L. Da Silva^660000-0003-4106-8258,
S. Dadabaev^420000-0002-0093-3244,
L. Dai^690000-0002-4070-4729,
X. Dai^60000-0003-3395-7151,
E. Dall'Occo^180000-0001-9313-4021,
J. Dalseno^450000-0003-3288-4683,
C. D'Ambrosio^470000-0003-4344-9994,
J. Daniel^110000-0002-9022-4264,
A. Danilina^420000-0003-3121-2164,
P. d'Argent^220000-0003-2380-8355,
A. Davidson^550009-0002-0647-2028,
J.E. Davies^610000-0002-5382-8683,
A. Davis^610000-0001-9458-5115,
O. De Aguiar Francisco^610000-0003-2735-678X,
C. De Angelis^30,j0009-0005-5033-5866,
F. De Benedetti^470000-0002-7960-3116,
J. de Boer^360000-0002-6084-4294,
K. De Bruyn^760000-0002-0615-4399,
S. De Capua^610000-0002-6285-9596,
M. De Cian^20,470000-0002-1268-9621,
U. De Freitas Carneiro Da Graca^2,b0000-0003-0451-4028,
E. De Lucia^260000-0003-0793-0844,
J.M. De Miranda^20009-0003-2505-7337,
L. De Paula^30000-0002-4984-7734,
M. De Serio^22,g0000-0003-4915-7933,
P. De Simone^260000-0001-9392-2079,
F. De Vellis^180000-0001-7596-5091,
J.A. de Vries^770000-0003-4712-9816,
F. Debernardis^220009-0001-5383-4899,
D. Decamp^100000-0001-9643-6762,
V. Dedu^120000-0001-5672-8672,
S. Dekkers^10000-0001-9598-875X,
L. Del Buono^150000-0003-4774-2194,
B. Delaney^630009-0007-6371-8035,
H.-P. Dembinski^180000-0003-3337-3850,
J. Deng^80000-0002-4395-3616,
V. Denysenko^490000-0002-0455-5404,
O. Deschamps^110000-0002-7047-6042,
F. Dettori^30,j0000-0003-0256-8663,
B. Dey^750000-0002-4563-5806,
P. Di Nezza^260000-0003-4894-6762,
I. Diachkov^420000-0001-5222-5293,
S. Didenko^420000-0001-5671-5863,
S. Ding^670000-0002-5946-581X,
L. Dittmann^200009-0000-0510-0252,
V. Dobishuk^510000-0001-9004-3255,
A. D. Docheva^580000-0002-7680-4043,
C. Dong^40000-0003-3259-6323,
A.M. Donohoe^210000-0002-4438-3950,
F. Dordei^300000-0002-2571-5067,
A.C. dos Reis^20000-0001-7517-8418,
A. D. Dowling^670009-0007-1406-3343,
W. Duan^700000-0003-1765-9939,
P. Duda^780000-0003-4043-7963,
M.W. Dudek^390000-0003-3939-3262,
L. Dufour^470000-0002-3924-2774,
V. Duk^320000-0001-6440-0087,
P. Durante^470000-0002-1204-2270,
M. M. Duras^780000-0002-4153-5293,
J.M. Durham^660000-0002-5831-3398,
O. D. Durmus^750000-0002-8161-7832,
A. Dziurda^390000-0003-4338-7156,
A. Dzyuba^420000-0003-3612-3195,
S. Easo^560000-0002-4027-7333,
E. Eckstein^17,
U. Egede^10000-0001-5493-0762,
A. Egorychev^420000-0001-5555-8982,
V. Egorychev^420000-0002-2539-673X,
S. Eisenhardt^570000-0002-4860-6779,
E. Ejopu^610000-0003-3711-7547,
L. Eklund^800000-0002-2014-3864,
M. Elashri^640000-0001-9398-953X,
J. Ellbracht^180000-0003-1231-6347,
S. Ely^600000-0003-1618-3617,
A. Ene^410000-0001-5513-0927,
E. Epple^640000-0002-6312-3740,
J. Eschle^670000-0002-7312-3699,
S. Esen^200000-0003-2437-8078,
T. Evans^610000-0003-3016-1879,
F. Fabiano^30,j0000-0001-6915-9923,
L.N. Falcao^20000-0003-3441-583X,
Y. Fan^70000-0002-3153-430X,
B. Fang^720000-0003-0030-3813,
L. Fantini^32,q,470000-0002-2351-3998,
M. Faria^480000-0002-4675-4209,
K. Farmer^570000-0003-2364-2877,
D. Fazzini^29,o0000-0002-5938-4286,
L. Felkowski^780000-0002-0196-910X,
M. Feng^5,70000-0002-6308-5078,
M. Feo^18,470000-0001-5266-2442,
A. Fernandez Casani^460000-0003-1394-509X,
M. Fernandez Gomez^450000-0003-1984-4759,
A.D. Fernez^650000-0001-9900-6514,
F. Ferrari^230000-0002-3721-4585,
F. Ferreira Rodrigues^30000-0002-4274-5583,
M. Ferrillo^490000-0003-1052-2198,
M. Ferro-Luzzi^470009-0008-1868-2165,
S. Filippov^420000-0003-3900-3914,
R.A. Fini^220000-0002-3821-3998,
M. Fiorini^24,k0000-0001-6559-2084,
K.L. Fischer^620009-0000-8700-9910,
D.S. Fitzgerald^810000-0001-6862-6876,
C. Fitzpatrick^610000-0003-3674-0812,
F. Fleuret^140000-0002-2430-782X,
M. Fontana^230000-0003-4727-831X,
L. F. Foreman^610000-0002-2741-9966,
R. Forty^470000-0003-2103-7577,
D. Foulds-Holt^540000-0001-9921-687X,
M. Franco Sevilla^650000-0002-5250-2948,
M. Frank^470000-0002-4625-559X,
E. Franzoso^24,k0000-0003-2130-1593,
G. Frau^610000-0003-3160-482X,
C. Frei^470000-0001-5501-5611,
D.A. Friday^610000-0001-9400-3322,
J. Fu^70000-0003-3177-2700,
Q. Fuehring^18,540000-0003-3179-2525,
Y. Fujii^10000-0002-0813-3065,
T. Fulghesu^150000-0001-9391-8619,
E. Gabriel^360000-0001-8300-5939,
G. Galati^220000-0001-7348-3312,
M.D. Galati^360000-0002-8716-4440,
A. Gallas Torreira^450000-0002-2745-7954,
D. Galli^23,i0000-0003-2375-6030,
S. Gambetta^570000-0003-2420-0501,
M. Gandelman^30000-0001-8192-8377,
P. Gandini^280000-0001-7267-6008,
B. Ganie^610009-0008-7115-3940,
H. Gao^70000-0002-6025-6193,
R. Gao^620009-0004-1782-7642,
Y. Gao^80000-0002-6069-8995,
Y. Gao^60000-0003-1484-0943,
Y. Gao^8,
M. Garau^30,j0000-0002-0505-9584,
L.M. Garcia Martin^480000-0003-0714-8991,
P. Garcia Moreno^440000-0002-3612-1651,
J. García Pardiñas^470000-0003-2316-8829,
K. G. Garg^80000-0002-8512-8219,
L. Garrido^440000-0001-8883-6539,
C. Gaspar^470000-0002-8009-1509,
R.E. Geertsema^360000-0001-6829-7777,
L.L. Gerken^180000-0002-6769-3679,
E. Gersabeck^610000-0002-2860-6528,
M. Gersabeck^610000-0002-0075-8669,
T. Gershon^550000-0002-3183-5065,
S. G. Ghizzo^27,m,
Z. Ghorbanimoghaddam^53,
L. Giambastiani^31,p0000-0002-5170-0635,
F. I. Giasemis^15,e0000-0003-0622-1069,
V. Gibson^540000-0002-6661-1192,
H.K. Giemza^400000-0003-2597-8796,
A.L. Gilman^620000-0001-5934-7541,
M. Giovannetti^260000-0003-2135-9568,
A. Gioventù^440000-0001-5399-326X,
L. Girardey^610000-0002-8254-7274,
P. Gironella Gironell^440000-0001-5603-4750,
C. Giugliano^24,k0000-0002-6159-4557,
M.A. Giza^390000-0002-0805-1561,
E.L. Gkougkousis^600000-0002-2132-2071,
F.C. Glaser^13,200000-0001-8416-5416,
V.V. Gligorov^15,470000-0002-8189-8267,
C. Göbel^680000-0003-0523-495X,
E. Golobardes^430000-0001-8080-0769,
D. Golubkov^420000-0001-6216-1596,
A. Golutvin^60,42,470000-0003-2500-8247,
A. Gomes^2,a,†0009-0005-2892-2968,
S. Gomez Fernandez^440000-0002-3064-9834,
F. Goncalves Abrantes^620000-0002-7318-482X,
M. Goncerz^390000-0002-9224-914X,
G. Gong^40000-0002-7822-3947,
J. A. Gooding^180000-0003-3353-9750,
I.V. Gorelov^420000-0001-5570-0133,
C. Gotti^290000-0003-2501-9608,
J.P. Grabowski^170000-0001-8461-8382,
L.A. Granado Cardoso^470000-0003-2868-2173,
E. Graugés^440000-0001-6571-4096,
E. Graverini^48,s0000-0003-4647-6429,
L. Grazette^550000-0001-7907-4261,
G. Graziani^0000-0001-8212-846X,
A. T. Grecu^410000-0002-7770-1839,
L.M. Greeven^360000-0001-5813-7972,
N.A. Grieser^640000-0003-0386-4923,
L. Grillo^580000-0001-5360-0091,
S. Gromov^420000-0002-8967-3644,
C. Gu^140000-0001-5635-6063,
M. Guarise^240000-0001-8829-9681,
L. Guerry^110009-0004-8932-4024,
M. Guittiere^130000-0002-2916-7184,
V. Guliaeva^420000-0003-3676-5040,
P. A. Günther^200000-0002-4057-4274,
A.-K. Guseinov^480000-0002-5115-0581,
E. Gushchin^420000-0001-8857-1665,
Y. Guz^6,42,470000-0001-7552-400X,
T. Gys^470000-0002-6825-6497,
K. Habermann^170009-0002-6342-5965,
T. Hadavizadeh^10000-0001-5730-8434,
C. Hadjivasiliou^650000-0002-2234-0001,
G. Haefeli^480000-0002-9257-839X,
C. Haen^470000-0002-4947-2928,
J. Haimberger^470000-0002-3363-7783,
M. Hajheidari^47,
G. Hallett^550009-0005-1427-6520,
M.M. Halvorsen^470000-0003-0959-3853,
P.M. Hamilton^650000-0002-2231-1374,
J. Hammerich^590000-0002-5556-1775,
Q. Han^80000-0002-7958-2917,
X. Han^200000-0001-7641-7505,
S. Hansmann-Menzemer^200000-0002-3804-8734,
L. Hao^70000-0001-8162-4277,
N. Harnew^620000-0001-9616-6651,
M. Hartmann^130009-0005-8756-0960,
S. Hashmi^380000-0003-2714-2706,
J. He^7,c0000-0002-1465-0077,
F. Hemmer^470000-0001-8177-0856,
C. Henderson^640000-0002-6986-9404,
R.D.L. Henderson^1,550000-0001-6445-4907,
A.M. Hennequin^470009-0008-7974-3785,
K. Hennessy^590000-0002-1529-8087,
L. Henry^480000-0003-3605-832X,
J. Herd^600000-0001-7828-3694,
P. Herrero Gascon^200000-0001-6265-8412,
J. Heuel^160000-0001-9384-6926,
A. Hicheur^30000-0002-3712-7318,
G. Hijano Mendizabal^49,
D. Hill^480000-0003-2613-7315,
S.E. Hollitt^180000-0002-4962-3546,
J. Horswill^610000-0002-9199-8616,
R. Hou^80000-0002-3139-3332,
Y. Hou^110000-0001-6454-278X,
N. Howarth^59,
J. Hu^20,
J. Hu^700000-0002-8227-4544,
W. Hu^60000-0002-2855-0544,
X. Hu^40000-0002-5924-2683,
W. Huang^70000-0002-1407-1729,
W. Hulsbergen^360000-0003-3018-5707,
R.J. Hunter^550000-0001-7894-8799,
M. Hushchyn^420000-0002-8894-6292,
D. Hutchcroft^590000-0002-4174-6509,
D. Ilin^420000-0001-8771-3115,
P. Ilten^640000-0001-5534-1732,
A. Inglessi^420000-0002-2522-6722,
A. Iniukhin^420000-0002-1940-6276,
A. Ishteev^420000-0003-1409-1428,
K. Ivshin^420000-0001-8403-0706,
R. Jacobsson^470000-0003-4971-7160,
H. Jage^160000-0002-8096-3792,
S.J. Jaimes Elles^46,730000-0003-0182-8638,
S. Jakobsen^470000-0002-6564-040X,
E. Jans^360000-0002-5438-9176,
B.K. Jashal^460000-0002-0025-4663,
A. Jawahery^65,470000-0003-3719-119X,
V. Jevtic^180000-0001-6427-4746,
E. Jiang^650000-0003-1728-8525,
X. Jiang^5,70000-0001-8120-3296,
Y. Jiang^70000-0002-8964-5109,
Y. J. Jiang^60000-0002-0656-8647,
M. John^620000-0002-8579-844X,
A. John Rubesh Rajan^210000-0002-9850-4965,
D. Johnson^520000-0003-3272-6001,
C.R. Jones^540000-0003-1699-8816,
T.P. Jones^550000-0001-5706-7255,
S. Joshi^400000-0002-5821-1674,
B. Jost^470009-0005-4053-1222,
J. Juan Castella^540009-0009-5577-1308,
N. Jurik^470000-0002-6066-7232,
I. Juszczak^390000-0002-1285-3911,
D. Kaminaris^480000-0002-8912-4653,
S. Kandybei^500000-0003-3598-0427,
M. Kane^57 0009-0006-5064-966X,
Y. Kang^40000-0002-6528-8178,
C. Kar^110000-0002-6407-6974,
M. Karacson^470009-0006-1867-9674,
D. Karpenkov^420000-0001-8686-2303,
A. Kauniskangas^480000-0002-4285-8027,
J.W. Kautz^640000-0001-8482-5576,
F. Keizer^470000-0002-1290-6737,
M. Kenzie^540000-0001-7910-4109,
T. Ketel^360000-0002-9652-1964,
B. Khanji^670000-0003-3838-281X,
A. Kharisova^420000-0002-5291-9583,
S. Kholodenko^33,470000-0002-0260-6570,
G. Khreich^130000-0002-6520-8203,
T. Kirn^160000-0002-0253-8619,
V.S. Kirsebom^29,o0009-0005-4421-9025,
O. Kitouni^630000-0001-9695-8165,
S. Klaver^370000-0001-7909-1272,
N. Kleijne^33,r0000-0003-0828-0943,
K. Klimaszewski^400000-0003-0741-5922,
M.R. Kmiec^400000-0002-1821-1848,
S. Koliiev^510009-0002-3680-1224,
L. Kolk^180000-0003-2589-5130,
A. Konoplyannikov^420009-0005-2645-8364,
P. Kopciewicz^38,470000-0001-9092-3527,
P. Koppenburg^360000-0001-8614-7203,
M. Korolev^420000-0002-7473-2031,
I. Kostiuk^360000-0002-8767-7289,
O. Kot^51,
S. Kotriakhova^0000-0002-1495-0053,
A. Kozachuk^420000-0001-6805-0395,
P. Kravchenko^420000-0002-4036-2060,
L. Kravchuk^420000-0001-8631-4200,
M. Kreps^550000-0002-6133-486X,
P. Krokovny^420000-0002-1236-4667,
W. Krupa^670000-0002-7947-465X,
W. Krzemien^400000-0002-9546-358X,
O.K. Kshyvanskyi^51,
J. Kubat^20,
S. Kubis^780000-0001-8774-8270,
M. Kucharczyk^390000-0003-4688-0050,
V. Kudryavtsev^420009-0000-2192-995X,
E. Kulikova^420009-0002-8059-5325,
A. Kupsc^800000-0003-4937-2270,
B. K. Kutsenko^120000-0002-8366-1167,
D. Lacarrere^470009-0005-6974-140X,
P. Laguarta Gonzalez^440009-0005-3844-0778,
A. Lai^300000-0003-1633-0496,
A. Lampis^300000-0002-5443-4870,
D. Lancierini^540000-0003-1587-4555,
C. Landesa Gomez^450000-0001-5241-8642,
J.J. Lane^10000-0002-5816-9488,
R. Lane^530000-0002-2360-2392,
G. Lanfranchi^260000-0002-9467-8001,
C. Langenbruch^200000-0002-3454-7261,
J. Langer^180000-0002-0322-5550,
O. Lantwin^420000-0003-2384-5973,
T. Latham^550000-0002-7195-8537,
F. Lazzari^33,s0000-0002-3151-3453,
C. Lazzeroni^520000-0003-4074-4787,
R. Le Gac^120000-0002-7551-6971,
H. Lee^590009-0003-3006-2149,
R. Lefèvre^110000-0002-6917-6210,
A. Leflat^420000-0001-9619-6666,
S. Legotin^420000-0003-3192-6175,
M. Lehuraux^550000-0001-7600-7039,
E. Lemos Cid^470000-0003-3001-6268,
O. Leroy^120000-0002-2589-240X,
T. Lesiak^390000-0002-3966-2998,
B. Leverington^200000-0001-6640-7274,
A. Li^40000-0001-5012-6013,
C. Li^120000-0002-3554-5479,
H. Li^700000-0002-2366-9554,
K. Li^80000-0002-2243-8412,
L. Li^610000-0003-4625-6880,
P. Li^470000-0003-2740-9765,
P.-R. Li^710000-0002-1603-3646,
Q. Li^5,70009-0004-1932-8580,
S. Li^80000-0001-5455-3768,
T. Li^5,d0000-0002-5241-2555,
T. Li^700000-0002-5723-0961,
Y. Li^8,
Y. Li^50000-0003-2043-4669,
Z. Lian^40000-0003-4602-6946,
X. Liang^670000-0002-5277-9103,
S. Libralon^460009-0002-5841-9624,
C. Lin^70000-0001-7587-3365,
T. Lin^560000-0001-6052-8243,
R. Lindner^470000-0002-5541-6500,
V. Lisovskyi^480000-0003-4451-214X,
R. Litvinov^30,470000-0002-4234-435X,
F. L. Liu^10009-0002-2387-8150,
G. Liu^700000-0001-5961-6588,
K. Liu^710000-0003-4529-3356,
S. Liu^5,70000-0002-6919-227X,
W. Liu^8,
Y. Liu^570000-0003-3257-9240,
Y. Liu^71,
Y. L. Liu^600000-0001-9617-6067,
A. Lobo Salvia^440000-0002-2375-9509,
A. Loi^300000-0003-4176-1503,
J. Lomba Castro^450000-0003-1874-8407,
T. Long^540000-0001-7292-848X,
J.H. Lopes^30000-0003-1168-9547,
A. Lopez Huertas^440000-0002-6323-5582,
S. López Soliño^450000-0001-9892-5113,
Q. Lu^140000-0002-6598-1941,
C. Lucarelli^25,l0000-0002-8196-1828,
D. Lucchesi^31,p0000-0003-4937-7637,
M. Lucio Martinez^770000-0001-6823-2607,
V. Lukashenko^36,510000-0002-0630-5185,
Y. Luo^60009-0001-8755-2937,
A. Lupato^31,h0000-0003-0312-3914,
E. Luppi^24,k0000-0002-1072-5633,
K. Lynch^210000-0002-7053-4951,
X.-R. Lyu^70000-0001-5689-9578,
G. M. Ma^40000-0001-8838-5205,
R. Ma^70000-0002-0152-2412,
S. Maccolini^180000-0002-9571-7535,
F. Machefert^130000-0002-4644-5916,
F. Maciuc^410000-0001-6651-9436,
B. Mack^670000-0001-8323-6454,
I. Mackay^620000-0003-0171-7890,
L. M. Mackey^670000-0002-8285-3589,
L.R. Madhan Mohan^540000-0002-9390-8821,
M. J. Madurai^520000-0002-6503-0759,
A. Maevskiy^420000-0003-1652-8005,
D. Magdalinski^360000-0001-6267-7314,
D. Maisuzenko^420000-0001-5704-3499,
M.W. Majewski^38,
J.J. Malczewski^390000-0003-2744-3656,
S. Malde^620000-0002-8179-0707,
L. Malentacca^47,
A. Malinin^420000-0002-3731-9977,
T. Maltsev^420000-0002-2120-5633,
G. Manca^30,j0000-0003-1960-4413,
G. Mancinelli^120000-0003-1144-3678,
C. Mancuso^28,13,n0000-0002-2490-435X,
R. Manera Escalero^440000-0003-4981-6847,
D. Manuzzi^230000-0002-9915-6587,
D. Marangotto^28,n0000-0001-9099-4878,
J.F. Marchand^100000-0002-4111-0797,
R. Marchevski^480000-0003-3410-0918,
U. Marconi^230000-0002-5055-7224,
E. Mariani^15,
S. Mariani^470000-0002-7298-3101,
C. Marin Benito^440000-0003-0529-6982,
J. Marks^200000-0002-2867-722X,
A.M. Marshall^530000-0002-9863-4954,
L. Martel^620000-0001-8562-0038,
G. Martelli^32,q0000-0002-6150-3168,
G. Martellotti^340000-0002-8663-9037,
L. Martinazzoli^470000-0002-8996-795X,
M. Martinelli^29,o0000-0003-4792-9178,
D. Martinez Santos^450000-0002-6438-4483,
F. Martinez Vidal^460000-0001-6841-6035,
A. Massafferri^20000-0002-3264-3401,
R. Matev^470000-0001-8713-6119,
A. Mathad^470000-0002-9428-4715,
V. Matiunin^420000-0003-4665-5451,
C. Matteuzzi^670000-0002-4047-4521,
K.R. Mattioli^140000-0003-2222-7727,
A. Mauri^600000-0003-1664-8963,
E. Maurice^140000-0002-7366-4364,
J. Mauricio^440000-0002-9331-1363,
P. Mayencourt^480000-0002-8210-1256,
J. Mazorra de Cos^460000-0003-0525-2736,
M. Mazurek^400000-0002-3687-9630,
M. McCann^600000-0002-3038-7301,
L. Mcconnell^210009-0004-7045-2181,
T.H. McGrath^610000-0001-8993-3234,
N.T. McHugh^580000-0002-5477-3995,
A. McNab^610000-0001-5023-2086,
R. McNulty^210000-0001-7144-0175,
B. Meadows^640000-0002-1947-8034,
G. Meier^180000-0002-4266-1726,
D. Melnychuk^400000-0003-1667-7115,
F. M. Meng^40009-0004-1533-6014,
M. Merk^36,770000-0003-0818-4695,
A. Merli^480000-0002-0374-5310,
L. Meyer Garcia^650000-0002-2622-8551,
D. Miao^5,70000-0003-4232-5615,
H. Miao^70000-0002-1936-5400,
M. Mikhasenko^740000-0002-6969-2063,
D.A. Milanes^730000-0001-7450-1121,
A. Minotti^29,o0000-0002-0091-5177,
E. Minucci^670000-0002-3972-6824,
T. Miralles^110000-0002-4018-1454,
B. Mitreska^180000-0002-1697-4999,
D.S. Mitzel^180000-0003-3650-2689,
A. Modak^560000-0003-1198-1441,
R.A. Mohammed^620000-0002-3718-4144,
R.D. Moise^160000-0002-5662-8804,
S. Mokhnenko^420000-0002-1849-1472,
T. Mombächer^470000-0002-5612-979X,
M. Monk^55,10000-0003-0484-0157,
S. Monteil^110000-0001-5015-3353,
A. Morcillo Gomez^450000-0001-9165-7080,
G. Morello^260000-0002-6180-3697,
M.J. Morello^33,r0000-0003-4190-1078,
M.P. Morgenthaler^200000-0002-7699-5724,
A.B. Morris^470000-0002-0832-9199,
A.G. Morris^120000-0001-6644-9888,
R. Mountain^670000-0003-1908-4219,
H. Mu^40000-0001-9720-7507,
Z. M. Mu^60000-0001-9291-2231,
E. Muhammad^550000-0001-7413-5862,
F. Muheim^570000-0002-1131-8909,
M. Mulder^760000-0001-6867-8166,
K. Müller^490000-0002-5105-1305,
F. Muñoz-Rojas^90000-0002-4978-602X,
R. Murta^600000-0002-6915-8370,
P. Naik^590000-0001-6977-2971,
T. Nakada^480009-0000-6210-6861,
R. Nandakumar^560000-0002-6813-6794,
T. Nanut^470000-0002-5728-9867,
I. Nasteva^30000-0001-7115-7214,
M. Needham^570000-0002-8297-6714,
N. Neri^28,n0000-0002-6106-3756,
S. Neubert^170000-0002-0706-1944,
N. Neufeld^470000-0003-2298-0102,
P. Neustroev^42,
J. Nicolini^18,130000-0001-9034-3637,
D. Nicotra^770000-0001-7513-3033,
E.M. Niel^480000-0002-6587-4695,
N. Nikitin^420000-0003-0215-1091,
P. Nogarolli^30009-0001-4635-1055,
P. Nogga^17,
N.S. Nolte^630000-0003-2536-4209,
C. Normand^530000-0001-5055-7710,
J. Novoa Fernandez^450000-0002-1819-1381,
G. Nowak^640000-0003-4864-7164,
C. Nunez^810000-0002-2521-9346,
H. N. Nur^580000-0002-7822-523X,
A. Oblakowska-Mucha^380000-0003-1328-0534,
V. Obraztsov^420000-0002-0994-3641,
T. Oeser^160000-0001-7792-4082,
S. Okamura^24,k0000-0003-1229-3093,
A. Okhotnikov^42,
O. Okhrimenko^510000-0002-0657-6962,
R. Oldeman^30,j0000-0001-6902-0710,
F. Oliva^570000-0001-7025-3407,
M. Olocco^180000-0002-6968-1217,
C.J.G. Onderwater^770000-0002-2310-4166,
R.H. O'Neil^570000-0002-9797-8464,
D. Osthues^18,
J.M. Otalora Goicochea^30000-0002-9584-8500,
P. Owen^490000-0002-4161-9147,
A. Oyanguren^460000-0002-8240-7300,
O. Ozcelik^570000-0003-3227-9248,
F. Paciolla^33,v0000-0002-6001-600X,
A. Padee^400000-0002-5017-7168,
K.O. Padeken^170000-0001-7251-9125,
B. Pagare^550000-0003-3184-1622,
P.R. Pais^200009-0005-9758-742X,
T. Pajero^470000-0001-9630-2000,
A. Palano^220000-0002-6095-9593,
M. Palutan^260000-0001-7052-1360,
G. Panshin^420000-0001-9163-2051,
L. Paolucci^550000-0003-0465-2893,
A. Papanestis^560000-0002-5405-2901,
M. Pappagallo^22,g0000-0001-7601-5602,
L.L. Pappalardo^24,k0000-0002-0876-3163,
C. Pappenheimer^640000-0003-0738-3668,
C. Parkes^610000-0003-4174-1334,
B. Passalacqua^240000-0003-3643-7469,
G. Passaleva^250000-0002-8077-8378,
D. Passaro^33,r0000-0002-8601-2197,
A. Pastore^220000-0002-5024-3495,
M. Patel^600000-0003-3871-5602,
J. Patoc^620009-0000-1201-4918,
C. Patrignani^23,i0000-0002-5882-1747,
A. Paul^670009-0006-7202-0811,
C.J. Pawley^770000-0001-9112-3724,
A. Pellegrino^360000-0002-7884-345X,
J. Peng^5,70009-0005-4236-4667,
M. Pepe Altarelli^260000-0002-1642-4030,
S. Perazzini^230000-0002-1862-7122,
D. Pereima^420000-0002-7008-8082,
H. Pereira Da Costa^660000-0002-3863-352X,
A. Pereiro Castro^450000-0001-9721-3325,
P. Perret^110000-0002-5732-4343,
A. Perro^470000-0002-1996-0496,
K. Petridis^530000-0001-7871-5119,
A. Petrolini^27,m0000-0003-0222-7594,
J. P. Pfaller^640009-0009-8578-3078,
H. Pham^670000-0003-2995-1953,
L. Pica^33,r0000-0001-9837-6556,
M. Piccini^320000-0001-8659-4409,
B. Pietrzyk^100000-0003-1836-7233,
G. Pietrzyk^130000-0001-9622-820X,
D. Pinci^340000-0002-7224-9708,
F. Pisani^470000-0002-7763-252X,
M. Pizzichemi^29,o,470000-0001-5189-230X,
V. Placinta^410000-0003-4465-2441,
M. Plo Casasus^450000-0002-2289-918X,
T. Poeschl^470000-0003-3754-7221,
F. Polci^15,470000-0001-8058-0436,
M. Poli Lener^260000-0001-7867-1232,
A. Poluektov^120000-0003-2222-9925,
N. Polukhina^420000-0001-5942-1772,
I. Polyakov^470000-0002-6855-7783,
E. Polycarpo^30000-0002-4298-5309,
S. Ponce^470000-0002-1476-7056,
D. Popov^70000-0002-8293-2922,
S. Poslavskii^420000-0003-3236-1452,
K. Prasanth^570000-0001-9923-0938,
C. Prouve^450000-0003-2000-6306,
V. Pugatch^510000-0002-5204-9821,
G. Punzi^33,s0000-0002-8346-9052,
S. Qasim^490000-0003-4264-9724,
Q. Q. Qian^60000-0001-6453-4691,
W. Qian^70000-0003-3932-7556,
N. Qin^40000-0001-8453-658X,
S. Qu^40000-0002-7518-0961,
R. Quagliani^470000-0002-3632-2453,
R.I. Rabadan Trejo^550000-0002-9787-3910,
J.H. Rademacker^530000-0003-2599-7209,
M. Rama^330000-0003-3002-4719,
M. Ramírez García^810000-0001-7956-763X,
V. Ramos De Oliveira^680000-0003-3049-7866,
M. Ramos Pernas^550000-0003-1600-9432,
M.S. Rangel^30000-0002-8690-5198,
F. Ratnikov^420000-0003-0762-5583,
G. Raven^370000-0002-2897-5323,
M. Rebollo De Miguel^460000-0002-4522-4863,
F. Redi^28,h0000-0001-9728-8984,
J. Reich^530000-0002-2657-4040,
F. Reiss^610000-0002-8395-7654,
Z. Ren^70000-0001-9974-9350,
P.K. Resmi^620000-0001-9025-2225,
R. Ribatti^480000-0003-1778-1213,
G. R. Ricart^14,820000-0002-9292-2066,
D. Riccardi^33,r0009-0009-8397-572X,
S. Ricciardi^560000-0002-4254-3658,
K. Richardson^630000-0002-6847-2835,
M. Richardson-Slipper^570000-0002-2752-001X,
K. Rinnert^590000-0001-9802-1122,
P. Robbe^130000-0002-0656-9033,
G. Robertson^580000-0002-7026-1383,
E. Rodrigues^590000-0003-2846-7625,
E. Rodriguez Fernandez^450000-0002-3040-065X,
J.A. Rodriguez Lopez^730000-0003-1895-9319,
E. Rodriguez Rodriguez^450000-0002-7973-8061,
J. Roensch^18,
A. Rogachev^420000-0002-7548-6530,
A. Rogovskiy^560000-0002-1034-1058,
D.L. Rolf^470000-0001-7908-7214,
P. Roloff^470000-0001-7378-4350,
V. Romanovskiy^420000-0003-0939-4272,
M. Romero Lamas^450000-0002-1217-8418,
A. Romero Vidal^450000-0002-8830-1486,
G. Romolini^240000-0002-0118-4214,
F. Ronchetti^480000-0003-3438-9774,
T. Rong^60000-0002-5479-9212,
M. Rotondo^260000-0001-5704-6163,
S. R. Roy^200000-0002-3999-6795,
M.S. Rudolph^670000-0002-0050-575X,
M. Ruiz Diaz^200000-0001-6367-6815,
R.A. Ruiz Fernandez^450000-0002-5727-4454,
J. Ruiz Vidal^80,z0000-0001-8362-7164,
A. Ryzhikov^420000-0002-3543-0313,
J. Ryzka^380000-0003-4235-2445,
J. J. Saavedra-Arias^90000-0002-2510-8929,
J.J. Saborido Silva^450000-0002-6270-130X,
R. Sadek^140000-0003-0438-8359,
N. Sagidova^420000-0002-2640-3794,
D. Sahoo^750000-0002-5600-9413,
N. Sahoo^520000-0001-9539-8370,
B. Saitta^30,j0000-0003-3491-0232,
M. Salomoni^29,o,470009-0007-9229-653X,
C. Sanchez Gras^360000-0002-7082-887X,
I. Sanderswood^460000-0001-7731-6757,
R. Santacesaria^340000-0003-3826-0329,
C. Santamarina Rios^450000-0002-9810-1816,
M. Santimaria^26,470000-0002-8776-6759,
L. Santoro ^20000-0002-2146-2648,
E. Santovetti^350000-0002-5605-1662,
A. Saputi^24,470000-0001-6067-7863,
D. Saranin^420000-0002-9617-9986,
A. Sarnatskiy^760009-0007-2159-3633,
G. Sarpis^570000-0003-1711-2044,
M. Sarpis^610000-0002-6402-1674,
C. Satriano^34,t0000-0002-4976-0460,
A. Satta^350000-0003-2462-913X,
M. Saur^60000-0001-8752-4293,
D. Savrina^420000-0001-8372-6031,
H. Sazak^160000-0003-2689-1123,
F. Sborzacchi^47,260009-0004-7916-2682,
L.G. Scantlebury Smead^620000-0001-8702-7991,
A. Scarabotto^180000-0003-2290-9672,
S. Schael^160000-0003-4013-3468,
S. Scherl^590000-0003-0528-2724,
M. Schiller^580000-0001-8750-863X,
H. Schindler^470000-0002-1468-0479,
M. Schmelling^190000-0003-3305-0576,
B. Schmidt^470000-0002-8400-1566,
S. Schmitt^160000-0002-6394-1081,
H. Schmitz^17,
O. Schneider^480000-0002-6014-7552,
A. Schopper^470000-0002-8581-3312,
N. Schulte^180000-0003-0166-2105,
S. Schulte^480009-0001-8533-0783,
M.H. Schune^130000-0002-3648-0830,
R. Schwemmer^470009-0005-5265-9792,
G. Schwering^160000-0003-1731-7939,
B. Sciascia^260000-0003-0670-006X,
A. Sciuccati^470000-0002-8568-1487,
S. Sellam^450000-0003-0383-1451,
A. Semennikov^420000-0003-1130-2197,
T. Senger^490009-0006-2212-6431,
M. Senghi Soares^370000-0001-9676-6059,
A. Sergi^27,m,470000-0001-9495-6115,
N. Serra^490000-0002-5033-0580,
L. Sestini^310000-0002-1127-5144,
A. Seuthe^180000-0002-0736-3061,
Y. Shang^60000-0001-7987-7558,
D.M. Shangase^810000-0002-0287-6124,
M. Shapkin^420000-0002-4098-9592,
R. S. Sharma^670000-0003-1331-1791,
I. Shchemerov^420000-0001-9193-8106,
L. Shchutska^480000-0003-0700-5448,
T. Shears^590000-0002-2653-1366,
L. Shekhtman^420000-0003-1512-9715,
Z. Shen^60000-0003-1391-5384,
S. Sheng^5,70000-0002-1050-5649,
V. Shevchenko^420000-0003-3171-9125,
B. Shi^70000-0002-5781-8933,
Q. Shi^70000-0001-7915-8211,
Y. Shimizu^130000-0002-4936-1152,
E. Shmanin^420000-0002-8868-1730,
R. Shorkin^420000-0001-8881-3943,
J.D. Shupperd^670009-0006-8218-2566,
R. Silva Coutinho^670000-0002-1545-959X,
G. Simi^31,p0000-0001-6741-6199,
S. Simone^22,g0000-0003-3631-8398,
N. Skidmore^550000-0003-3410-0731,
T. Skwarnicki^670000-0002-9897-9506,
M.W. Slater^520000-0002-2687-1950,
J.C. Smallwood^620000-0003-2460-3327,
E. Smith^630000-0002-9740-0574,
K. Smith^660000-0002-1305-3377,
M. Smith^600000-0002-3872-1917,
A. Snoch^360000-0001-6431-6360,
L. Soares Lavra^570000-0002-2652-123X,
M.D. Sokoloff^640000-0001-6181-4583,
F.J.P. Soler^580000-0002-4893-3729,
A. Solomin^42,530000-0003-0644-3227,
A. Solovev^420000-0002-5355-5996,
I. Solovyev^420000-0003-4254-6012,
R. Song^10000-0002-8854-8905,
Y. Song^480000-0003-0256-4320,
Y. Song^40000-0003-1959-5676,
Y. S. Song^60000-0003-3471-1751,
F.L. Souza De Almeida^670000-0001-7181-6785,
B. Souza De Paula^30009-0003-3794-3408,
E. Spadaro Norella^27,m0000-0002-1111-5597,
E. Spedicato^230000-0002-4950-6665,
J.G. Speer^180000-0002-6117-7307,
E. Spiridenkov^42,
P. Spradlin^580000-0002-5280-9464,
V. Sriskaran^470000-0002-9867-0453,
F. Stagni^470000-0002-7576-4019,
M. Stahl^470000-0001-8476-8188,
S. Stahl^470000-0002-8243-400X,
S. Stanislaus^620000-0003-1776-0498,
E.N. Stein^470000-0001-5214-8865,
O. Steinkamp^490000-0001-7055-6467,
O. Stenyakin^42,
H. Stevens^180000-0002-9474-9332,
D. Strekalina^420000-0003-3830-4889,
Y. Su^70000-0002-2739-7453,
F. Suljik^620000-0001-6767-7698,
J. Sun^300000-0002-6020-2304,
L. Sun^720000-0002-0034-2567,
Y. Sun^650000-0003-4933-5058,
D. Sundfeld^20000-0002-5147-3698,
W. Sutcliffe^49,
P.N. Swallow^520000-0003-2751-8515,
F. Swystun^540009-0006-0672-7771,
A. Szabelski^400000-0002-6604-2938,
T. Szumlak^380000-0002-2562-7163,
Y. Tan^40000-0003-3860-6545,
M.D. Tat^620000-0002-6866-7085,
A. Terentev^420000-0003-2574-8560,
F. Terzuoli^33,v,470000-0002-9717-225X,
F. Teubert^470000-0003-3277-5268,
E. Thomas^470000-0003-0984-7593,
D.J.D. Thompson^520000-0003-1196-5943,
H. Tilquin^600000-0003-4735-2014,
V. Tisserand^110000-0003-4916-0446,
S. T'Jampens^100000-0003-4249-6641,
M. Tobin^5,470000-0002-2047-7020,
L. Tomassetti^24,k0000-0003-4184-1335,
G. Tonani^28,n,470000-0001-7477-1148,
X. Tong^60000-0002-5278-1203,
D. Torres Machado^20000-0001-7030-6468,
L. Toscano^180009-0007-5613-6520,
D.Y. Tou^40000-0002-4732-2408,
C. Trippl^430000-0003-3664-1240,
G. Tuci^200000-0002-0364-5758,
N. Tuning^360000-0003-2611-7840,
L.H. Uecker^200000-0003-3255-9514,
A. Ukleja^380000-0003-0480-4850,
D.J. Unverzagt^200000-0002-1484-2546,
E. Ursov^420000-0002-6519-4526,
A. Usachov^370000-0002-5829-6284,
A. Ustyuzhanin^420000-0001-7865-2357,
U. Uwer^200000-0002-8514-3777,
V. Vagnoni^230000-0003-2206-311X,
G. Valenti^230000-0002-6119-7535,
N. Valls Canudas^470000-0001-8748-8448,
H. Van Hecke^660000-0001-7961-7190,
E. van Herwijnen^600000-0001-8807-8811,
C.B. Van Hulse^45,x0000-0002-5397-6782,
R. Van Laak^480000-0002-7738-6066,
M. van Veghel^360000-0001-6178-6623,
G. Vasquez^490000-0002-3285-7004,
R. Vazquez Gomez^440000-0001-5319-1128,
P. Vazquez Regueiro^450000-0002-0767-9736,
C. Vázquez Sierra^450000-0002-5865-0677,
S. Vecchi^240000-0002-4311-3166,
J.J. Velthuis^530000-0002-4649-3221,
M. Veltri^25,w0000-0001-7917-9661,
A. Venkateswaran^480000-0001-6950-1477,
M. Vesterinen^550000-0001-7717-2765,
D. Vico Benet^620009-0009-3494-2825,
M. Vieites Diaz^470000-0002-0944-4340,
X. Vilasis-Cardona^430000-0002-1915-9543,
E. Vilella Figueras^590000-0002-7865-2856,
A. Villa^230000-0002-9392-6157,
P. Vincent^150000-0002-9283-4541,
F.C. Volle^520000-0003-1828-3881,
D. vom Bruch^120000-0001-9905-8031,
N. Voropaev^420000-0002-2100-0726,
K. Vos^770000-0002-4258-4062,
G. Vouters^10,470009-0008-3292-2209,
C. Vrahas^570000-0001-6104-1496,
J. Wagner^180000-0002-9783-5957,
J. Walsh^330000-0002-7235-6976,
E.J. Walton^1,550000-0001-6759-2504,
G. Wan^60000-0003-0133-1664,
C. Wang^200000-0002-5909-1379,
G. Wang^80000-0001-6041-115X,
J. Wang^60000-0001-7542-3073,
J. Wang^50000-0002-6391-2205,
J. Wang^40000-0002-3281-8136,
J. Wang^720000-0001-6711-4465,
M. Wang^280000-0003-4062-710X,
N. W. Wang^70000-0002-6915-6607,
R. Wang^530000-0002-2629-4735,
X. Wang^8,
X. Wang^700000-0002-2399-7646,
X. W. Wang^600000-0001-9565-8312,
Y. Wang^60009-0003-2254-7162,
Z. Wang^130000-0002-5041-7651,
Z. Wang^40000-0003-0597-4878,
Z. Wang^280000-0003-4410-6889,
J.A. Ward^55,10000-0003-4160-9333,
M. Waterlaat^47,
N.K. Watson^520000-0002-8142-4678,
D. Websdale^600000-0002-4113-1539,
Y. Wei^60000-0001-6116-3944,
J. Wendel^790000-0003-0652-721X,
B.D.C. Westhenry^530000-0002-4589-2626,
C. White^540009-0002-6794-9547,
M. Whitehead^580000-0002-2142-3673,
E. Whiter^520009-0003-3902-8123,
A.R. Wiederhold^550000-0002-1023-1086,
D. Wiedner^180000-0002-4149-4137,
G. Wilkinson^620000-0001-5255-0619,
M.K. Wilkinson^640000-0001-6561-2145,
M. Williams^630000-0001-8285-3346,
M.R.J. Williams^570000-0001-5448-4213,
R. Williams^540000-0002-2675-3567,
Z. Williams^530009-0009-9224-4160,
F.F. Wilson^560000-0002-5552-0842,
W. Wislicki^400000-0001-5765-6308,
M. Witek^390000-0002-8317-385X,
L. Witola^200000-0001-9178-9921,
C.P. Wong^660000-0002-9839-4065,
G. Wormser^130000-0003-4077-6295,
S.A. Wotton^540000-0003-4543-8121,
H. Wu^670000-0002-9337-3476,
J. Wu^80000-0002-4282-0977,
Y. Wu^60000-0003-3192-0486,
Z. Wu^70000-0001-6756-9021,
K. Wyllie^470000-0002-2699-2189,
S. Xian^70,
Z. Xiang^50000-0002-9700-3448,
Y. Xie^80000-0001-5012-4069,
A. Xu^330000-0002-8521-1688,
J. Xu^70000-0001-6950-5865,
L. Xu^40000-0003-2800-1438,
L. Xu^40000-0002-0241-5184,
M. Xu^550000-0001-8885-565X,
Z. Xu^110000-0002-7531-6873,
Z. Xu^70000-0001-9558-1079,
Z. Xu^50000-0001-9602-4901,
D. Yang^0009-0002-2675-4022,
K. Yang^600000-0001-5146-7311,
S. Yang^70000-0003-2505-0365,
X. Yang^60000-0002-7481-3149,
Y. Yang^27,m0000-0002-8917-2620,
Z. Yang^60000-0003-2937-9782,
Z. Yang^650000-0003-0572-2021,
V. Yeroshenko^130000-0002-8771-0579,
H. Yeung^610000-0001-9869-5290,
H. Yin^80000-0001-6977-8257,
C. Y. Yu^60000-0002-4393-2567,
J. Yu^690000-0003-1230-3300,
X. Yuan^50000-0003-0468-3083,
Y Yuan^5,70009-0000-6595-7266,
E. Zaffaroni^480000-0003-1714-9218,
M. Zavertyaev^190000-0002-4655-715X,
M. Zdybal^390000-0002-1701-9619,
F. Zenesini^23,i0009-0001-2039-9739,
C. Zeng^5,70009-0007-8273-2692,
M. Zeng^40000-0001-9717-1751,
C. Zhang^60000-0002-9865-8964,
D. Zhang^80000-0002-8826-9113,
J. Zhang^70000-0001-6010-8556,
L. Zhang^40000-0003-2279-8837,
S. Zhang^690000-0002-9794-4088,
S. Zhang^620000-0002-2385-0767,
Y. Zhang^60000-0002-0157-188X,
Y. Z. Zhang^40000-0001-6346-8872,
Y. Zhao^200000-0002-8185-3771,
A. Zharkova^420000-0003-1237-4491,
A. Zhelezov^200000-0002-2344-9412,
S. Z. Zheng^60009-0001-4723-095X,
X. Z. Zheng^40000-0001-7647-7110,
Y. Zheng^70000-0003-0322-9858,
T. Zhou^60000-0002-3804-9948,
X. Zhou^80009-0005-9485-9477,
Y. Zhou^70000-0003-2035-3391,
V. Zhovkovska^550000-0002-9812-4508,
L. Z. Zhu^70000-0003-0609-6456,
X. Zhu^40000-0002-9573-4570,
X. Zhu^80000-0002-4485-1478,
V. Zhukov^160000-0003-0159-291X,
J. Zhuo^460000-0002-6227-3368,
Q. Zou^5,70000-0003-0038-5038,
D. Zuliani^31,p0000-0002-1478-4593,
G. Zunica^480000-0002-5972-6290.
^1School of Physics and Astronomy, Monash University, Melbourne, Australia
^2Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
^3Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
^4Center for High Energy Physics, Tsinghua University, Beijing, China
^5Institute Of High Energy Physics (IHEP), Beijing, China
^6School of Physics State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China
^7University of Chinese Academy of Sciences, Beijing, China
^8Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China
^9Consejo Nacional de Rectores (CONARE), San Jose, Costa Rica
^10Université Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France
^11Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France
^12Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France
^13Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
^14Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
^15LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3, Paris, France
^16I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany
^17Universität Bonn - Helmholtz-Institut für Strahlen und Kernphysik, Bonn, Germany
^18Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
^19Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
^20Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
^21School of Physics, University College Dublin, Dublin, Ireland
^22INFN Sezione di Bari, Bari, Italy
^23INFN Sezione di Bologna, Bologna, Italy
^24INFN Sezione di Ferrara, Ferrara, Italy
^25INFN Sezione di Firenze, Firenze, Italy
^26INFN Laboratori Nazionali di Frascati, Frascati, Italy
^27INFN Sezione di Genova, Genova, Italy
^28INFN Sezione di Milano, Milano, Italy
^29INFN Sezione di Milano-Bicocca, Milano, Italy
^30INFN Sezione di Cagliari, Monserrato, Italy
^31INFN Sezione di Padova, Padova, Italy
^32INFN Sezione di Perugia, Perugia, Italy
^33INFN Sezione di Pisa, Pisa, Italy
^34INFN Sezione di Roma La Sapienza, Roma, Italy
^35INFN Sezione di Roma Tor Vergata, Roma, Italy
^36Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands
^37Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands
^38AGH - University of Krakow, Faculty of Physics and Applied Computer Science, Kraków, Poland
^39Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland
^40National Center for Nuclear Research (NCBJ), Warsaw, Poland
^41Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania
^42Affiliated with an institute covered by a cooperation agreement with CERN
^43DS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain
^44ICCUB, Universitat de Barcelona, Barcelona, Spain
^45Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
^46Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain
^47European Organization for Nuclear Research (CERN), Geneva, Switzerland
^48Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
^49Physik-Institut, Universität Zürich, Zürich, Switzerland
^50NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
^51Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine
^52School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom
^53H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom
^54Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
^55Department of Physics, University of Warwick, Coventry, United Kingdom
^56STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
^57School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom
^58School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom
^59Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
^60Imperial College London, London, United Kingdom
^61Department of Physics and Astronomy, University of Manchester, Manchester, United Kingdom
^62Department of Physics, University of Oxford, Oxford, United Kingdom
^63Massachusetts Institute of Technology, Cambridge, MA, United States
^64University of Cincinnati, Cincinnati, OH, United States
^65University of Maryland, College Park, MD, United States
^66Los Alamos National Laboratory (LANL), Los Alamos, NM, United States
^67Syracuse University, Syracuse, NY, United States
^68Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^3
^69School of Physics and Electronics, Hunan University, Changsha City, China, associated to ^8
^70Guangdong Provincial Key Laboratory of Nuclear Science, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to ^4
^71Lanzhou University, Lanzhou, China, associated to ^5
^72School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^4
^73Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^15
^74Ruhr Universitaet Bochum, Fakultaet f. Physik und Astronomie, Bochum, Germany, associated to ^18
^75Eotvos Lorand University, Budapest, Hungary, associated to ^47
^76Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to ^36
^77Universiteit Maastricht, Maastricht, Netherlands, associated to ^36
^78Tadeusz Kosciuszko Cracow University of Technology, Cracow, Poland, associated to ^39
^79Universidade da Coruña, A Coruna, Spain, associated to ^43
^80Department of Physics and Astronomy, Uppsala University, Uppsala, Sweden, associated to ^58
^81University of Michigan, Ann Arbor, MI, United States, associated to ^67
^82Département de Physique Nucléaire (DPhN), Gif-Sur-Yvette, France
^aUniversidade de Brasília, Brasília, Brazil
^bCentro Federal de Educacão Tecnológica Celso Suckow da Fonseca, Rio De Janeiro, Brazil
^cHangzhou Institute for Advanced Study, UCAS, Hangzhou, China
^dSchool of Physics and Electronics, Henan University , Kaifeng, China
^eLIP6, Sorbonne Université, Paris, France
^fUniversidad Nacional Autónoma de Honduras, Tegucigalpa, Honduras
^gUniversità di Bari, Bari, Italy
^hUniversità di Bergamo, Bergamo, Italy
^iUniversità di Bologna, Bologna, Italy
^jUniversità di Cagliari, Cagliari, Italy
^kUniversità di Ferrara, Ferrara, Italy
^lUniversità di Firenze, Firenze, Italy
^mUniversità di Genova, Genova, Italy
^nUniversità degli Studi di Milano, Milano, Italy
^oUniversità degli Studi di Milano-Bicocca, Milano, Italy
^pUniversità di Padova, Padova, Italy
^qUniversità di Perugia, Perugia, Italy
^rScuola Normale Superiore, Pisa, Italy
^sUniversità di Pisa, Pisa, Italy
^tUniversità della Basilicata, Potenza, Italy
^uUniversità di Roma Tor Vergata, Roma, Italy
^vUniversità di Siena, Siena, Italy
^wUniversità di Urbino, Urbino, Italy
^xUniversidad de Alcalá, Alcalá de Henares , Spain
^yFacultad de Ciencias Fisicas, Madrid, Spain
^zDepartment of Physics/Division of Particle Physics, Lund, Sweden
^†Deceased
|
http://arxiv.org/abs/2409.02293v1 | 20240903210936 | Types of Size-Dependent Melting in Fe Nanoclusters: a Molecular Dynamics Study | [
"Louis E. S. Hoffenberg",
"Alexander Khrabry",
"Yuri Barsukov",
"Igor D. Kaganovich",
"David B. Graves"
] | physics.atm-clus | [
"physics.atm-clus",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08540
Andlinger Center for Energy and the Environment, Princeton University, Princeton, New Jersey 08540
Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540
Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540
Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08540
§ ABSTRACT
Metallic nanoclusters are of interest in many fields because of their size-dependent catalytic activity. This activity can, in part, be influenced by their melting properties. In this work, the melting phase transitions of Fe_n nanoclusters with n ≤ 100 atoms were investigated using classical many-body molecular dynamics simulations.
Adding a single atom to many cluster sizes induced strong variations in melting point (T_melt), latent heat of melting (Δ H_melt), and onset temperature of isomerization (T_iso). Clusters with size-dependent melting behavior were classified into 3 distinct cluster types: closed-shell, near-closed-shell, and far-from-closed-shell clusters. First-order-like phase transitions were observed only for cluster sizes with particularly symmetric closed shells and near-closed shells with up to a few missing or extra atoms. Near-closed-shell clusters had very low T_iso relative to their T_melt. Far-from-closed-shell clusters exhibited second-order-like phase transitions. Variations in the melting and isomerization behavior of neighboring cluster sizes may have implications for catalytic systems such as the growth of single-wall carbon nanotubes.
Types of Size-Dependent Melting in Fe Nanoclusters:
a Molecular Dynamics Study
David B. Graves
September 9, 2024
===============================================================================
§ INTRODUCTION
Nanoparticles (NPs) – i.e., particulate material with characteristic dimensions under 100 nm – have interesting properties that make them desirable for catalytic<cit.>, optoelectronic<cit.>, and biomedical<cit.> applications, among many others. These properties can depend on many factors like nanoparticle size, composition, degree of crystallinity, and other structural elements. The phase of the NP can have a significant effect on properties since atoms in liquid particles have considerably more mobility than in solid form. For example, it has been proposed that carbon nanotube (CNT) growth on catalytic iron NPs depends in part on carbon precursor adsorption, surface diffusion, and dissolution into the NP<cit.> – all of which can be influenced by the NP phase<cit.>. The transition between solid and liquid NP phases is therefore of potential importance in multiple applications.
The melting points for NPs are known to differ from those of their corresponding bulk materials<cit.>. NP melting temperatures are lower than the bulk melting point due to larger surface-atom-to-volume-atom ratios<cit.>. Surface atoms are bonded to fewer atoms than the inner atoms, so smaller NPs require less energy to melt than larger ones. Their melting points scale according to the Gibbs-Thomson equation:
T_m,NP = T_m,bulk( 1 - 2 σ_sl / (Δ H_m ρ_s r) )
where σ_sl is the solid-liquid interfacial energy, Δ H_m is the bulk latent heat of melting, ρ_s is the bulk solid density, and r is the radius of the NP<cit.>.
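As a rough numerical illustration of this scaling, the expression above can be evaluated for Fe particles of a few radii. In the sketch below the bulk melting point, latent heat, solid density, and solid-liquid interfacial energy are generic order-of-magnitude estimates assumed only for demonstration; they are not parameters taken from this work.

# Minimal sketch of the Gibbs-Thomson size dependence for Fe nanoparticles.
# All material parameters are illustrative assumptions, not values from this study.
T_M_BULK = 1811.0   # K, approximate bulk Fe melting point
DH_M     = 2.5e5    # J/kg, approximate bulk latent heat of melting
RHO_S    = 7.0e3    # kg/m^3, approximate solid density near melting
SIGMA_SL = 0.2      # J/m^2, assumed solid-liquid interfacial energy

def t_melt_np(radius_m):
    """Gibbs-Thomson melting point of a particle of radius r (in meters)."""
    return T_M_BULK * (1.0 - 2.0 * SIGMA_SL / (DH_M * RHO_S * radius_m))

for r_nm in (1, 2, 5, 10, 50):
    print(f"r = {r_nm:3d} nm  ->  T_melt ~ {t_melt_np(r_nm * 1e-9):6.0f} K")

At radii of only a nanometer or so the correction term becomes large and the expression loses meaning, consistent with the breakdown of this scaling discussed next.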
It is also known that when the NP radius is below some threshold size range (i.e., approaching the nanocluster regime), melting points no longer follow the Gibbs-Thomson equation. Instead, melting temperatures can fluctuate strongly with cluster size, changing with the addition or subtraction of a single atom (cf. Fig.<ref>). Sometimes, the nanocluster melting points can exceed those of the bulk solid<cit.>.
This variation in melting points is attributed to interrelated geometric and electronic quantum size effects<cit.>, collectively called magic number effects. Magic number clusters are nanoclusters with particularly stable structures due to either configurational symmetry that maximizes bonding between atoms (corresponding to geometric magic numbers) or electronic effects that stabilize certain cluster geometries (i.e., electronic magic numbers)<cit.>. Magic numbers have been documented to affect melting behavior and the Gibbs free energies of formation for small metal clusters<cit.>.
In larger systems and NPs, atoms are frequently approximated as either surface or bulk. The transition between Gibbs-Thomson NP scaling and the nanocluster fluctuation regime illustrated in Fig.<ref> is thought to occur when the cluster's atoms can no longer be cleanly divided into surface and bulk<cit.>. In this regime there are few true bulk atoms, and several distinct types of surface atoms with different binding energies determine the cluster energetics. Because certain structures have perfectly closed atomic shells (geometric magic numbers)<cit.>, nanoclusters with more or fewer atoms have notably different binding energies per atom and therefore different melting temperatures<cit.>. Ion calorimetry measurements of Al nanoclusters have revealed that the transition between Gibbs-Thomson NP scaling and nanocluster variation of melting temperatures occurs between clusters of 150 and 342 atoms<cit.>. Simulations of Ni nanoclusters observed Gibbs-Thomson NP scaling in clusters as small as 90 atoms<cit.>.
The nanocluster size range, which is the focus of this work, is particularly interesting for some catalysis applications such as the catalytic growth of carbon nanotubes with Fe or Fe-containing alloys in floating catalyst chemical vapor deposition (FCCVD)<cit.>. Fe nanoclusters of up to ∼100 atoms (∼1.2 nm in diameter) are most relevant to the growth of single-wall CNTs (SWCNTs)<cit.>. This work focuses on Fe nanoclusters. Despite interest in iron nanoparticles for the catalytic growth of CNTs, among other applications, the melting behavior of Fe nanoclusters has garnered few dedicated studies<cit.>. Furthermore, no study has analyzed the majority of the Fe nanocluster size range. The caloric curves describing Fe cluster melting behavior in this work are used in the accompanying paper<cit.> to determine the free energies of cluster formation in the kinetic modeling of nucleation and growth from condensing vapor.
The process of melting in nanoclusters differs from bulk material melting and NP melting. Bulk melting is described by a sharp increase in atomic mobility of all atoms<cit.> and a steep rise in a caloric curve (a graph of cluster energy vs. temperature) at the melting point, indicating a first-order phase transition. NP melting, on the other hand, is often characterized by surface melting followed by melting of the NP core<cit.>. For nanoclusters, the process of “melting” involves a dynamic coexistence between ordered and disordered phases, a phenomenon generally not seen in nanoparticles or bulk materials<cit.>.
Because NP and nanocluster melting generally occurs on length scales and time scales that are difficult to resolve experimentally, molecular simulation is often employed. Monte Carlo (MC) methods can efficiently sample configurational potential energy surfaces and construct caloric curves to describe cluster phase transitions<cit.>. Molecular dynamics (MD) simulations construct time trajectories of atoms by directly integrating Newton’s equations of motion. Forces between atoms are calculated with an interatomic potential (F = -∇ E). Classical molecular dynamics uses models of the interatomic potential with parameters fit to some combination of experimental data and quantum mechanical calculations, such as density functional theory (DFT).
It is possible to use DFT to compute interatomic potentials at each time step in an MD simulation – sometimes referred to as Born-Oppenheimer MD (BOMD) or ab initio MD<cit.>. This method is more accurate but is considerably more computationally expensive (prohibitively expensive for clusters of tens of atoms). BOMD has been used to simulate the melting of palladium clusters<cit.> and gallium clusters with changing electronic properties or competing stable solid phases<cit.>.
This study uses classical MD simulation because of its accessibility and computational feasibility. Moreover, classical MD lends itself more readily to subsequent analyses involving more complex processes relevant to CNT growth (e.g., surface adsorption/desorption, diffusion, carbon dissolution, and formation of graphitic carbon). Magic numbers in Fe have been studied in small nanoclusters with both experiments<cit.> and spin polarization DFT simulations to capture magnetic properties <cit.>. Although classical MD simulations do not capture detailed electronic magic number effects (e.g., Fe_7 and Fe_15), geometric magic numbers (e.g., Fe_13) and their properties can be extracted and may be relevant to other transition metal atoms apart from Fe.
The paper is organized as follows. Section <ref> details the MD simulation method and the associated nanocluster structural and thermodynamic analysis. Section <ref> summarizes the results of the calculations of Fe nanocluster melting and phase transition characteristics. Section <ref> discusses the relationship between nanocluster structure and melting behavior. Finally, concluding remarks are summarized in Section <ref>.
§ METHODS
Molecular dynamics (MD) simulations were used to investigate cluster melting. In MD, discrete atoms are simulated in a periodic box and their motion is integrated forward in time, abiding by Newton's equations of motion (F = m a).
Classical MD, which uses simple and cheap interatomic potentials, can access larger lengthscales and timescales than quantum mechanical methods. However, some long-time- and length-scale phenomena (e.g., vapor condensation, leading to nucleation and growth of large numbers of NPs) are still prohibitively expensive because MD must account for every atom's movement. Despite these limitations, MD is still useful for gaining insights into atomic-scale phenomena that cannot be experimentally observed, such as the individual nanocluster phase transitions analyzed in this work.
Cluster melting data was obtained with classical MD simulations using the open-source LAMMPS (Large Atomic/Molecular Massively Parallel Simulator) software<cit.> with an embedded atom method Finnis-Sinclair (EAM-FS)<cit.> many-body interatomic potential for Fe. The potential energy of a given atom i is given by:
E_i = F_α( ∑_j ≠ i ρ_αβ(r_ij) ) + (1/2) ∑_j ≠ i ϕ_αβ(r_ij)
where F_α is the embedding energy, a function of the (modeled) electron density, ρ_αβ, contributed by neighboring atom j of element β at the site of atom i of element α. ϕ_αβ is a simple pair potential between atoms i and j. The potential was parameterized to reproduce solid and liquid characteristics of Fe<cit.>. Solid state binding energies agreed with those calculated using density functional theory (DFT) (supplementary material Fig.S1-3). Furthermore, the boiling point and latent heat of vaporization were validated using MD simulations of direct vapor-liquid co-existence (supplementary material Fig.S4).
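To make the bookkeeping in this expression explicit, the toy sketch below evaluates an EAM-FS-type energy for an arbitrary configuration. The functional forms chosen for F, ρ_αβ, and ϕ_αβ are placeholders for illustration only and are not the actual Fe parameterization used in this work; only the 5.6 Å cutoff is taken from the text.

import numpy as np

# Toy stand-ins for the EAM-FS functions (not the real Fe parameterization).
def rho(r):            # electron-density contribution from a neighbor at distance r
    return np.exp(-r)
def embed(rho_total):  # embedding energy F(rho); Finnis-Sinclair-type square-root form
    return -np.sqrt(rho_total)
def phi(r):            # pair term
    return np.exp(-2.0 * r) / r

def eam_energy(pos, cutoff=5.6):
    """Total EAM energy of a configuration pos with shape (n_atoms, 3), distances in Angstrom."""
    n = len(pos)
    e_total = 0.0
    for i in range(n):
        rho_i, pair_i = 0.0, 0.0
        for j in range(n):
            if j == i:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            if r < cutoff:
                rho_i += rho(r)          # sum of density contributions at atom i
                pair_i += 0.5 * phi(r)   # half of each pair interaction assigned to atom i
        e_total += embed(rho_i) + pair_i
    return e_total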
Global minimum energy configurations for Fe_n clusters for up to 100 atoms were collected from Elliot et al.<cit.> (obtained using basin-hopping energy minimization with the EAM-FS potential). The EAM-FS potential used in this work was slightly different from the one employed in Ref.<cit.>, with a smaller cutoff distance of 5.6 Å. For this reason, the cluster configurations were further minimized in LAMMPS with its energy-minimization routine. MD simulations of individual Fe_n clusters were run in the microcanonical ensemble (NVE – constant number of atoms, N; volume, V; and energy, E) for 10 ns with a 1 fs timestep. Thermodynamic data was collected from 75 NVE runs for each cluster size at its minimum energy configuration, initiated at temperatures from 100 to 7500 K (without whole-cluster translation and rotation), spanning the solid-liquid phase transition. Time-averaged cluster temperatures were ∼50 K to 3500 K for most cluster sizes because around half of the energy is transformed into potential energy. Canonical ensemble (NVT – constant number of atoms, volume, and temperature) simulations could have been used instead for thermodynamic sampling; however, NVE simulations were sufficient. Sampling with the NVE MD simulations was benchmarked with data from Frantz et al.<cit.> on Lennard-Jones Ar clusters, obtained with a parallel-tempering Monte Carlo approach (supplementary material, Fig.S5).
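For concreteness, one way a single-cluster NVE run of the kind just described could be set up is through the LAMMPS Python interface, roughly as follows. This is a schematic sketch: the data-file and potential-file names, the random seed, and the initialization temperature are placeholders, not the exact inputs used to produce the results in this work.

# Hedged sketch of one single-cluster NVE run driven from Python.
from lammps import lammps

lmp = lammps()
lmp.commands_string("""
units           metal
atom_style      atomic
boundary        p p p
read_data       fe13_minimum.data         # minimum-energy cluster configuration (placeholder file)
pair_style      eam/fs
pair_coeff      * * Fe.eam.fs Fe          # EAM-FS potential file (placeholder name)
minimize        1.0e-10 1.0e-10 10000 100000   # re-minimize with this potential's 5.6 A cutoff
velocity        all create 600.0 4928459 mom yes rot yes   # remove net translation and rotation
fix             integrate all nve
timestep        0.001                      # 1 fs in metal units
thermo          1000
thermo_style    custom step temp pe ke etotal
run             10000000                   # 10 ns
""")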
§ RESULTS
Molecular dynamics simulations were performed for individual clusters with up to 100 atoms at temperatures spanning the melting transition. Upon analysis of cluster melting, three types of clusters were delineated (Fig.<ref>) and are used throughout this work: 1) Closed-shell clusters have highly symmetric, closed-shell global minimum energy structures. 2) Near-closed-shell clusters are one to a few atoms away from closed-shell structures. Both closed-shell and near-closed-shell clusters are considered magic number clusters. 3) Far-from-closed-shell clusters are distant in size from highly symmetric minimum-energy structures and are considered non-magic number clusters.
Snapshots of two closed-shell clusters, Fe_13 and Fe_78, before, in the middle of, and after melting are shown in Fig.<ref>a. Closed-shell (magic number) sizes – such as Fe_13 and Fe_78 – can have deeper global minimum energy isomers in configuration space than far-from-closed-shell nanoclusters like Fe_8 (Fig.<ref>b). For this reason, the range of temperatures over which a cluster melts (escapes the global minimum potential energy well) can vary strongly with cluster size. Moreover, due to the presence of many configurations that are close in energy and the dynamic-coexistence (isomerization) nature of nanocluster melting<cit.>, a single unambiguous melting point temperature can be difficult to identify.
To quantify the melting behavior of iron clusters, caloric curves of specific [total = potential plus kinetic] energy (eV/atom) vs. temperature (K) were obtained for each cluster size, with each data point corresponding to an NVE MD simulation at one cluster size and temperature (Fig.<ref>). Temperature was determined from time-averaged cluster kinetic energies (T_MD = 2 E_kin/k_B), corrected for the whole-cluster rotational and translational degrees of freedom absent in the single-cluster MD simulations: T = T_MD/(3n - 6) for n ≥ 3 and T = T_MD/(3n - 5) for n = 2.
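In post-processing this correction amounts to dividing the kinetic-energy temperature by the number of internal degrees of freedom. A minimal sketch, assuming per-frame kinetic energies in eV (the function and variable names here are illustrative):

import numpy as np

K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K

def cluster_temperature(kinetic_energy_ev, n_atoms):
    """Time-averaged cluster temperature with whole-cluster translation/rotation removed."""
    t_md = 2.0 * np.mean(kinetic_energy_ev) / K_B_EV           # "raw" T_MD as defined in the text
    n_dof = 3 * n_atoms - 6 if n_atoms >= 3 else 3 * n_atoms - 5   # 3n-5 for the dimer
    return t_md / n_dof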
Caloric curves qualitatively differ by cluster size. Closed-shell magic number clusters (Fe_13, Fe_19, Fe_23, and more not shown – all have perfect or interpenetrating icosahedral structures) exhibit phase transitions with large increases in specific energy upon melting relative to neighboring clusters. This phenomenon can be seen in the large vertical spaces between the curves in the solid region (left) of Fig.<ref>. Many other cluster sizes exhibit no extra increase in specific energy upon melting.
In general, specific energy decreases with cluster size, n: E_n-1 > E_n > E_n+1. A cluster with a higher n has more bonds per atom due to a smaller surface-to-volume ratio, leading to a lower specific energy. The difference in specific energy between adjacent sizes (spacing between curves) decreases with increasing n, as the difference in surface-to-volume ratios decreases with greater n. The trend is violated in the solid phase regime (left) for closed-shell magic number clusters, whose energy curves then cross their n -1 neighbor cluster (Fe_12, Fe_18, and Fe_22 in Fig.<ref>) upon phase transition into the liquid phase regime (right), restoring the trend.
The melting point (T_melt) and the latent heat of melting (Δ H_melt) were calculated from individual caloric curves. Figs.<ref>a-c show caloric curves for Fe_13, Fe_96, and Fe_15. The curve for Fe_13 (<ref>a) is representative of magic number cluster sizes, consisting of both closed-shell clusters and near-closed-shell clusters. Magic number clusters have first-order-like phase transitions with a relatively large T_melt (1415 K for Fe_13) and a large Δ H_melt. The curve for Fe_96 (<ref>b) represents far-from-closed-shell cluster sizes, which have a second-order-like melting transition featuring an abrupt change in slope. Far-from-closed-shell clusters have a small T_melt (555 K for Fe_96) and a negligible Δ H_melt. The caloric curve for Fe_15 represents those of 3 anomalous clusters constituting a subset of near-closed-shell cluster sizes (Fe_15, Fe_16, and Fe_17) with qualitatively different caloric curves. Anomalous caloric curves feature an elongated second-order phase transition with a gradual shift from the solid slope (C_v,solid) to the liquid slope (C_v,liquid). Anomalous clusters have ambiguous but high melting points (932 K for Fe_15) and small latent heats.
Because cluster caloric curves appear different from bulk ones (which usually feature a discontinuity at a first-order phase transition temperature), there is some freedom in determining cluster melting points and latent heat of melting. Linear fits were used on the solid and liquid ends of the caloric curves (red and orange lines in Figs.<ref>a-c) to help define T_melt and Δ H_melt. The fit lines were constructed to incorporate the number of data points (from each end) that minimized the uncertainty on the fit parameters. T_melt was defined as the point on the curve with the maximum distance from the solid and liquid fit lines. This definition worked well for caloric curves with first-order-like transitions. The Δ H_melt was defined as the difference in energy between the solid and liquid fit lines at T_melt. If the maximum distance between the data and the fit lines was below the noise threshold in the data (i.e., outliers determined the melting point of the curve), then the melting point was taken to be the data point associated with the intersection of the fit lines. This definition worked well for second-order-like phase transitions. Uncertainties for T_melt and Δ H_melt were calculated via propagation of the standard deviations of the best-fit line parameters interpolated at T_melt.
These two definitions for T_melt, while sensitive to the number of points taken in the liquid fit line (red), are a reasonable proxy for finding the maximum in C_v for a first-order-like transition and a maximum in dC_v/dT (which is d^2E/dT^2) for a second-order-like transition. C_v calculated directly from the MD data was very noisy when differentiated, especially at higher temperatures for smaller clusters. Clusters below 10 atoms showed more ambiguous melting behavior due to noise in high-temperature MD simulations, so the melting data for Fe_2-Fe_9 can be found in the supplementary material (caloric-curves folder).
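A schematic version of this extraction is sketched below, assuming the caloric curve is available as arrays of temperature and specific energy. It glosses over the uncertainty-minimizing choice of how many points enter each fit (fixed point counts and a noise threshold are used as placeholders) and reads the "maximum distance from the fit lines" criterion as the point farthest from whichever fit line is nearer.

import numpy as np

def melting_from_caloric_curve(temp, energy, n_solid=10, n_liquid=10, noise=1e-3):
    """Estimate T_melt (K) and dH_melt (eV/atom) from one caloric curve (temp ascending)."""
    solid = np.polyfit(temp[:n_solid], energy[:n_solid], 1)        # linear fit to the solid end
    liquid = np.polyfit(temp[-n_liquid:], energy[-n_liquid:], 1)   # linear fit to the liquid end
    dist = np.minimum(np.abs(energy - np.polyval(solid, temp)),
                      np.abs(energy - np.polyval(liquid, temp)))
    if dist.max() > noise:            # first-order-like: point farthest from both fit lines
        i_melt = int(np.argmax(dist))
    else:                             # second-order-like: point nearest the line intersection
        t_cross = (liquid[1] - solid[1]) / (solid[0] - liquid[0])
        i_melt = int(np.argmin(np.abs(temp - t_cross)))
    t_melt = temp[i_melt]
    dh_melt = np.polyval(liquid, t_melt) - np.polyval(solid, t_melt)
    return t_melt, dh_melt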
Melting points and latent heat had strong variability between cluster sizes (T_melt ranged from 500 K to 1500 K) and generally correlated with each other. Scatter-plotted data of T_melt and Δ H_melt (Fig.<ref>a) show distinct separation between non-magic number clusters, clustered near the origin, and magic number clusters, which have positively correlated T_melt with Δ H_melt. The anomalous clusters are separate, spread just above the T_melt axis with a high T_melt, but a small Δ H_melt. Fig.<ref>b exhibits mountain-like “shelves” of magic number cluster sizes with high melting points and latent heats (Fe_19 - Fe_30, Fe_55 - Fe_69, and Fe_76 - Fe_83).
To understand the transitions in melting behavior in the T_melt and Δ H_melt shelves, the structures of the initial cluster configurations were inspected at the start and ends of the Fe_55 - Fe_69 and Fe_76 - Fe_83 shelves (Fig.<ref>). The increase of latent heat of clusters in the Fe_55 - Fe_69 shelf may be explained by a change away from a 180° symmetric structure in Fe_54 (right) to a configuration with 120° symmetry in Fe_55 (left), which is conserved until Fe_64. Images of these clusters can be found in the supplementary material (cluster-snapshots folder).
The increase in latent heat of the Fe_76 - Fe_83 shelf may be due to a strong shift from Fe_75's staggered 4-ring structure (left) and 120° symmetric structure (right) to Fe_76's stacked 4-ring structure and polyicosahedral-like structure (right – this polyicosahedral structure is complete in Fe_78). The decreases in melting point and latent heat at the ends of the shelves are not readily explained by the structural motifs of the minimum-energy clusters.
Further investigating the cluster phase transitions, the Lindemann index<cit.> (δ) was calculated from each MD trajectory. δ is often used to quantify the melting of the clusters because it measures changes in atomic mobility in a solid, increasing rapidly with an increase in atom movement. In the case of nanoclusters, δ effectively measures the onset of cluster isomerization:
δ = 2/N(N-1)∑_i < j√(⟨ r_ij^2⟩_t - ⟨ r_ij⟩_t^2)/⟨ r_ij⟩_t,
where N is the number of atoms in the cluster, r_ij is the pairwise distance between two atoms i and j, and ⟨⟩_t indicates an average over the entire MD trajectory. The temperature at which a cluster's δ increases sharply is defined here as the cluster's isomerization temperature, T_iso.
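The index defined above can be evaluated directly from a stored trajectory as sketched below; the function name and the array layout (frames × atoms × Cartesian coordinates) are illustrative assumptions rather than details of the original analysis. Repeating the calculation for trajectories at increasing energy and locating the sharp rise of δ gives T_iso.

```python
import numpy as np

def lindemann_index(positions):
    """Lindemann index from an MD trajectory.

    positions : array of shape (n_frames, N, 3) holding atomic coordinates
                sampled along one constant-energy trajectory.
    """
    n_frames, N, _ = positions.shape
    iu, ju = np.triu_indices(N, k=1)                   # unique pairs i < j
    r = np.linalg.norm(positions[:, iu, :] - positions[:, ju, :], axis=-1)

    r_mean = r.mean(axis=0)                            # <r_ij>_t
    r2_mean = (r ** 2).mean(axis=0)                    # <r_ij^2>_t
    delta_ij = np.sqrt(r2_mean - r_mean ** 2) / r_mean

    # 2 / [N(N-1)] times the sum over pairs, i.e. the mean over unique pairs.
    return 2.0 / (N * (N - 1)) * delta_ij.sum()
```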
The plots of the Lindemann index vs. temperature for Fe_13, Fe_96, and Fe_15 are given in Figs.<ref>a-c. The onset of atomic mobility described by a stark increase in δ indicates the beginning of cluster isomerization, where atoms move around while maintaining a structure similar to the solid cluster. For these 3 (and most other) clusters, this Lindemann isomerization temperature, T_iso, is much lower than the previously defined T_melt.
The different melting temperatures, T_melt, T_iso, and the solid fit limit of the caloric curve, T_melt,start (the temperature at which the energy begins to rise faster due to melting), are discussed below. T_iso is much lower than T_melt for most sizes, especially anomalous and near-closed-shell cluster sizes (Fig.<ref>a). This is expected, as cluster isomerization can occur without accessing the higher potential energy configurations associated with the melted cluster. Furthermore, T_iso does not have the same “shelf”-like behavior as T_melt (Fig.<ref>b); instead, T_iso only has sharp peaks for the closed-shell magic numbers (Fe_n, n = 13, 19, 23, etc.). Although T_melt,start is closer on average to T_iso, T_melt,start remains far higher for smaller near-closed-shell clusters (e.g., n = 12, 14, 20, 24, etc.).
§ DISCUSSION
As mentioned above, the solid-liquid phase transition in Fe nanoclusters differs from melting in bulk Fe: the nanocluster version of melting initially consists of a crystalline cluster with atoms vibrating in place. Upon heating, stronger vibrations initiate atoms swapping with each other within the solid structure. Eventually, the cluster forms different structures (isomers) with higher potential energies and shorter lifetimes. As the cluster heats further, it enters a regime of dynamic co-existence, where higher potential energy isomers become longer-lived, eventually converting to a melted cluster with no memory of any crystalline configuration.
§.§ Cluster classes
The isomerization process that takes a cluster from solid to liquid also varies by cluster size. The clusters can be separated into 4 categories in terms of melting behavior observed from their caloric curves (energy vs. temperature, Fig.<ref>) and their Lindemann index transition indicating isomerization (Fig.<ref>).
* Closed-shell clusters are magic number clusters with highly symmetric (icosahedral or interpenetrated icosahedral) structures that resist the onset of isomerization until high temperatures (high T_melt and T_iso) and exhibit first-order-like phase transitions with large latent heats of melting.
* Near-closed-shell clusters are magic number clusters 1 to a few atoms away from closed-shell structures and isomerize at low temperatures (low T_iso). Early isomerization may delocalize the extra atoms or dangling bonds to enhance local stability. Near-closed-shell clusters still exhibit a first-order-like phase transition at a higher temperature (high T_melt) with large latent heat.
* Far-from-closed-shell clusters have low resistance to isomerization (low T_iso), rapidly lose their memory of the solid structure at low temperature, and undergo a short, kink-shaped second-order-like phase transition (low T_melt) with no latent heat.
* Anomalous clusters isomerize at very low temperatures (low T_iso) and begin visiting higher energy isomers at low temperatures, starting a stretched-out second-order-like phase transition (high T_melt). Instead of a kink-shaped phase transition, anomalous clusters have a slow, continuous increase in slope from C_v,solid to C_v,liquid, with no latent heat. Note: Despite the distinct appearance of their caloric curves, anomalous clusters may be misclassified near-closed-shell clusters with smaller first-order phase transitions not identified by the sensitive fitting procedure.
§.§ Cluster melting points and latent heats
Stark size-to-size variation in Fe cluster melting points and latent heats is broadly attributed to geometric magic number effects rooted in structural differences in the optimal cluster configuration. While the symmetry of the optimal cluster configuration is suspected to underlie the variation in melting characteristics, these structural differences do not obviously explain the larger-size ends of the latent heat “shelves” (Fig.<ref>). More subtle changes in structure may be responsible for the drops in latent heat.
Nonetheless, magic number effects in melting behavior may imply size-dependent disorder in real catalytic nanocluster systems at temperatures where some clusters are melted and others are not. Melted clusters with more disorder may have different catalytic activity from those that are solid (or more solid). For example, differences in disorder can influence carbon adsorption, diffusion, and dissolution in CNT growth, leading to different growth rates or modes of CNT growth (tangential vs. perpendicular growth<cit.>).
The presence of other species (e.g. carbon, hydrogen, sulfur, etc.) may also influence the melting behavior of clusters, potentially even increasing the melting point beyond that of the bulk (superheating<cit.>). Furthermore, electronic magic numbers (and electronic or magnetic properties, not modeled in this EAM-FS interatomic potential) may play a role in nanocluster melting behavior. Further investigation using molecular simulation in these systems would be of interest.
§.§ Isomerization temperature
There are key differences between the calculated melting points: T_melt, derived from the MD caloric curves, and T_iso, derived from the Lindemann index jump (Fig.<ref>). First, the melting points quantitatively disagree even for closed-shell magic number clusters. This is expected, since T_melt corresponds to an increase in potential energy from longer lifetimes of higher potential-energy isomers in the cluster isomerization phase, while T_iso is a measure of the onset of isomerization. Closed-shell magic number clusters with first-order-like phase transitions melt over a range of temperatures, with T_melt,start as the range's start. So T_iso should be close to T_melt,start. For most cluster sizes, T_iso lies near T_melt,start.
This trend is violated for small near-closed-shell clusters (e.g., Fe_n for n = 12, 14, 20, 24, etc.). Melting and isomerization behave qualitatively differently, with T_iso much lower than both T_melt and T_melt,start. The structural similarity of near-closed-shell clusters to closed-shell clusters may confer a deep PE well that requires a higher temperature to escape, leading to a high T_melt. Near-closed-shell clusters isomerize at lower temperatures, potentially delocalizing extra atoms or defects (dangling bonds).
T_melt and T_iso also differ qualitatively for larger clusters where “shelves” of high T_melt are observed. This may be because larger clusters have more atomic degrees of freedom and therefore a lower barrier to isomerization.
Both T_melt and T_iso may affect the catalytic activity of Fe nanoclusters. In CNT growth, mobilization of Fe atoms (T_iso) could accelerate the diffusion of C atoms on the NP surface, increasing the CNT growth rate. However, an increase in potential energy (T_melt) associated with longer lifetimes of new isomers with different structures – and therefore different crystal facets or disorder in the outer layer – may change both the adsorption energy of other species and their transport characteristics, likely influencing the rate of CNT growth. Investigation of adsorption energies and diffusion characteristics would be useful in determining the direction and magnitude of the effect on CNT growth rate.
§ CONCLUSION
Classical molecular dynamics calculations in the microcanonical ensemble (NVE) were used to calculate the melting properties of Fe nanoclusters up to 100 atoms (1.2 nm) in size. Cluster-to-cluster variations (magic number effects) were observed. The key takeaways from this work are the following:
* Addition of one to two atoms in a cluster can cause strong variations in the melting point, latent heat of melting, and onset temperature of isomerization. There may be implications for enhanced or suppressed catalytic activity through changes in species adsorption or transport on the cluster surface.
* Three types of nanoclusters with qualitatively distinct caloric curves are identified. Most clusters can be categorized as closed-shell, near-closed-shell, or far-from-closed-shell clusters.
* Cluster caloric curves revealed first-order-like phase transitions for closed-shell or near-closed-shell cluster sizes. Far-from-closed-shell sizes had second-order-like phase transitions. Second-order-like phase transitions in clusters have been reported before, but only in systems with concurrent transitions in electronic behavior<cit.> not modeled in the classical interatomic potential used in this work.
* The temperatures at which cluster isomerization begins (T_iso) and where potential energy increases (T_melt,start) differed strongly in near-closed-shell clusters.
* Geometric magic number effects alone conferred a deviation from the Gibbs-Thomson melting point depression scaling followed by nanoparticles.
§ SUPPLEMENTARY MATERIALS
Additional materials such as data used in method benchmarking can be found in SI.pdf. Caloric curves for all cluster sizes, Lindemann index curves for all cluster sizes, and snapshots of minimum energy configurations for most cluster sizes can be found in the caloric-curves, lindemann-curves, and cluster-snapshots folders, respectively.
§ ACKNOWLEDGEMENTS
The support of Princeton University's Andlinger Center for Energy and the Environment, and the Program in Plasma Science and Technology at the Princeton Plasma Physics Laboratory is gratefully acknowledged. In addition, this research utilized computing resources on the Princeton University Della and Stellar clusters.
COI statement: The authors have no conflicts of interest to disclose.
Data availability statement: The data are contained within the article and supplementary material.
Author contribution statement:
Louis E. S. Hoffenberg: conceptualization (equal); formal analysis (lead); writing – original draft preparation (lead). Alexander Khrabry: conceptualization (equal); formal analysis (supporting); review and editing (equal). Yuri Barsukov: conceptualization (equal); review and editing (equal). Igor D. Kaganovich: funding acquisition (supporting); conceptualization (equal); supervision (equal). David B. Graves: funding acquisition (lead); conceptualization (equal); review and editing (equal); supervision (equal).
|
http://arxiv.org/abs/2409.03445v1 | 20240905115042 | Neural HD Map Generation from Multiple Vectorized Tiles Locally Produced by Autonomous Vehicles | [
"Miao Fan",
"Yi Yao",
"Jianping Zhang",
"Xiangbo Song",
"Daihui Wu"
] | cs.RO | [
"cs.RO"
] |
Neural HD Map Generation from Vehicle-produced Vectorized Tiles
Fan et al.
NavInfo Co., Ltd., Beijing 100094, China
<https://en.navinfo.com/>
[email protected]
Neural HD Map Generation from Multiple Vectorized Tiles Locally Produced by Autonomous VehiclesThis work is supported by the National Natural Science Foundation of China under Grant No. U22A20104.
For more details about our recent studies, please visit corresponding author's website: <https://godfanmiao.github.io/homepage-en/>.
Miao Fan, Yi Yao, Jianping Zhang, Xiangbo Song, Daihui Wu
September 9, 2024
§ ABSTRACT
High-definition (HD) maps are a fundamental component of autonomous driving systems, as they provide precise environmental information about driving scenes. Recent work on vectorized map generation can produce merely 65% of the local map elements around the ego-vehicle at runtime in a single tour with onboard sensors, leaving open the question of how to construct a global HD map projected in the world coordinate system under high-quality standards. To address the issue, we present GNMap, an end-to-end generative neural network that automatically constructs HD maps from multiple vectorized tiles locally produced by autonomous vehicles through several tours. It leverages a multi-layer, attention-based autoencoder as the shared network, whose parameters are learned from two different tasks (i.e., pretraining and finetuning, respectively) to ensure both the completeness of the generated maps and the correctness of element categories. Extensive quantitative evaluations are conducted on a real-world dataset, and experimental results show that GNMap surpasses the SOTA method by more than 5% F1 score, reaching the level of industrial usage with a small amount of manual modification. We have already deployed it at Navinfo Co., Ltd., where it serves as indispensable software for automatically building HD maps for autonomous driving systems.
§ INTRODUCTION
High-definition (HD) map <cit.> plays a pivotal role in autonomous driving <cit.>. Illustrated by Fig. <ref>, it provides high-precision vectorized elements (including pedestrian crossings, lane dividers, road boundaries, etc.) about road topologies and traffic rules, which are quite essential for the navigation of self-driving vehicles. Vectorized map elements are geometrically discretized into polylines or polygons, and conventionally produced offline by SLAM-based methods <cit.> with heavy reliance on human labor of annotation, facing both scalability and up-to-date issues.
To address the issues, recent studies <cit.> focus on developing online approaches for vectorized map construction. These methods aim at devising vehicle-mounted models that learn to generate local elements around the ego-vehicle at runtime with onboard sensors such as LiDARs <cit.> and cameras. Learning-based approaches have drawn ever-increasing attention as they can alleviate human efforts to some extent. However, even the SOTA methods <cit.> among them can produce merely 65% of the map elements around the vehicle in a single tour, leaving open the question of how to construct a global HD map projected in the world coordinate system under high-quality standards.
As the first attempt to solve the puzzle, we present GNMap in this paper. It is an end-to-end generative neural network which takes vehicle-produced vectorized tiles through multiple tours as inputs and automatically generates a globalized HD map under the world coordinates as the output.
Specifically, GNMap adopts a multi-layer and attention-based autoencoder as the shared network, of which parameters are learned from two different tasks (i.e., pretraining and finetuning, respectively). At pretraining phase, the shared autoencoder is responsible for completing the masked vectorized tiles. The pretrained parameters are further leveraged as the initial weights for finetuning, which aims at assigning each pixel of map elements to the correct category. In this way, we ensure both the completeness of generated maps and the correctness of element categories.
Additionally, we build a real-world dataset to conduct quantitative assessments offline. Each instance of the dataset is a vectorized tile mainly composed of three kinds of map elements, i.e., pedestrian crossings, lane dividers, and road boundaries. Moreover, each tile is traversed by autonomous vehicles through multiple tours, with a street view captured on each tour. Ablation studies demonstrate that pretraining GNMap is vital for achieving the best performance. Extensive experimental results also show that it surpasses the SOTA method by more than 5% F1 score. So far, GNMap has already been deployed at Navinfo Co., Ltd. for industrial usage, serving as indispensable software to automatically build HD maps of Mainland China for autonomous driving.
§ RELATED WORK
§.§ SLAM-based Methods (Offline)
HD maps are conventionally annotated manually on LiDAR point clouds of the environment. These point clouds are collected from LiDAR scans of survey vehicles with GPS <cit.> and IMU <cit.>. In order to fuse LiDAR scans into an accurate and consistent point cloud, SLAM methods <cit.> are mostly used, and they generally adopt a decoupled pipeline as follows. Pairwise alignment algorithms like ICP <cit.> and NDT <cit.> are firstly employed to match LiDAR data between two nearby timestamps. And for the purpose of constructing a globally consistent map, it is critical to estimate the accurate pose of ego-vehicle by GTSAM <cit.>. Although several machine learning methods <cit.> are further devised to extract static map elements such as pedestrian crossings, lane dividers and road boundaries from fused LiDAR point clouds, it is still laborious and costly to maintain a scalable HD map since it requires timely update for autonomous driving.
§.§ Learning-based Approaches (Online)
To get rid of offline human efforts, learning-based HD map construction has attracted ever-increasing interests. These approaches <cit.> propose to build local maps at runtime based on surround-view images captured by vehicle-mounted cameras. Specifically, HDMapNet <cit.> first produces semantic map and then groups pixel-wise semantic segmentation results in the post-processing. VectorMapNet <cit.> adopts a two-stage coarse-to-fine framework and utilizes auto-regressive decoder to predict points sequentially, leading to long inference time and the ambiguity about permutation. To alleviate the problem, BeMapNet <cit.> adopts a unified piece-wise Bezier curve to describe the geometrical shape of map elements.
InstaGraM <cit.> proposes a novel graph modeling for vectorized polylines of map elements that models geometric, semantic and instance-level information as graph representations.
MapTR <cit.> uses a fixed number of points to represent a map element, regardless of its shape complexity. PivotNet <cit.> models map elements through pivot-based representation in a set prediction framework.
However, even the SOTA methods among them could merely produce 65% vehicle-around map elements by one tour, leaving a puzzle of how to build a global HD map projected under the world coordinates.
§ MODEL
§.§ Problem Formulation
The objective of GNMap is to generate a globalized HD map under the world coordinates from several vehicle-produced tiles. The vehicle-produced tiles are represented by RGB images, and we use 𝒳 to denote the set of the images as inputs. As shown by Eq. <ref>, GNMap is formulated as ℱ(𝒳; Θ) which learns to fuse the images 𝒳 and to generate a globalized HD map as the output denoted by 𝒴:
𝒴 = ℱ(𝒳; Θ),
where Θ represents the set of best parameters that GNMap needs to explore.
§.§ Shared Autoencoder
To realize ℱ(Θ), we devise an autoencoder that is structured into two parts: a neural encoder E(𝒳; θ_e) and a neural decoder D(𝒵; θ_d). The relationship between the encoder and the decoder is shown by Eq. <ref> and Eq. <ref>:
𝒵 = E(𝒳; θ_e)
and
𝒴 = D(𝒵; θ_d),
where E(𝒳; θ_e) takes 𝒳 as inputs to produce the intermediate feature representation 𝒵 by means of the parameters θ_e of encoder, and D(𝒵; θ_d) takes intermediate feature 𝒵 as the input to generate the output 𝒴 by means of the parameters θ_d of decoder. Both θ_e and θ_d belong to Θ:
Θ = (θ_e, θ_d).
Illustrated by Fig. <ref>, both the encoder E(𝒳; θ_e) and the decoder D(𝒵; θ_d) are multi-layer networks mainly composed of multi-head self-attention functions. We will elaborate on them in the following paragraphs.
Encoder:
E(𝒳; θ_e) is composed of M layer neural blocks with the same structure. Each block includes a multi-head self-attention (MSA <cit.>), a multi-layer perceptron (MLP), and a layer normalization (LN) module. Here we use U_i to denote the intermediate output of the block at the i-th layer of encoder, and U_i is calculated by Eq. <ref> and Eq. <ref>:
U_i^' = MSA ( U_i-1 )+U_i-1, i∈{1,2,...,M}
and
U_i = LN (MLP (U_i^' )+U_i^'), i∈{1,2,...,M},
where 𝒳 = U_0 and 𝒵 = U_M.
Decoder:
D(𝒵; θ_d) has N stacked blocks with the same structure. Each block likewise includes a multi-head self-attention (MSA <cit.>), a multi-layer perceptron (MLP), and a layer normalization (LN) module. Using V_j to denote the intermediate output of the block at the j-th layer of the decoder, V_j is calculated by Eq. <ref> and Eq. <ref>:
V_j^' = MSA ( V_j-1 )+V_j-1, j∈{1,2,...,N}
and
V_j = LN (MLP (V_j^' )+V_j^'), j∈{1,2,...,N},
where 𝒵 = V_0 and 𝒴 = V_N.
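A minimal PyTorch-style sketch of one such block and of the stacked encoder-decoder is given below. It is not the released implementation of the paper; the embedding width, head count, MLP expansion ratio, and the depths M and N are illustrative assumptions, since these hyper-parameters are not restated here.

```python
import torch.nn as nn

class AttentionBlock(nn.Module):
    """One encoder/decoder block: U'_i = MSA(U_{i-1}) + U_{i-1}; U_i = LN(MLP(U'_i) + U'_i)."""
    def __init__(self, dim=256, num_heads=8, mlp_ratio=4):
        super().__init__()
        self.msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))
        self.ln = nn.LayerNorm(dim)

    def forward(self, u):
        u_prime = self.msa(u, u, u, need_weights=False)[0] + u
        return self.ln(self.mlp(u_prime) + u_prime)

class SharedAutoencoder(nn.Module):
    """Encoder E with M blocks followed by decoder D with N blocks."""
    def __init__(self, dim=256, M=6, N=6):
        super().__init__()
        self.encoder = nn.Sequential(*[AttentionBlock(dim) for _ in range(M)])
        self.decoder = nn.Sequential(*[AttentionBlock(dim) for _ in range(N)])

    def forward(self, x):        # x: (batch, num_tokens, dim) patch/feature tokens
        z = self.encoder(x)      # Z = E(X; theta_e)
        return self.decoder(z)   # Y = D(Z; theta_d)
```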
In order to obtain the best parameters θ_e and θ_d, we adopt a "pretraining & finetuning" scheme that divides the training procedure into two phases, corresponding to different tasks and learning objectives. Details about the two phases are elaborated in Section <ref> and Section <ref>.
§.§ Pretraining Phase
In the pretraining phase, the learning objective of the shared autoencoder is to complete masked vectorized tiles, and the pretrained parameters are further leveraged as the initial weights for finetuning. As illustrated in Fig. <ref>, we elaborate on the pretraining phase in terms of input, output, ground truth, and loss function in the following paragraphs.
Input:
We split the manually annotated HD maps into multiple vectorized tiles. Each of the vectorized tiles can be converted into a gray-scale image denoted by 𝒳∈ℝ^h × w × 1, where h and w represent the height and the width of the image, respectively. In 𝒳, each pixel belonging to any map element is set to 255 and each background pixel is set to 0.
Then the image is divided into non-overlapping patches of shape k × l, yielding (h × w)/(k × l) patches (each p ∈ℝ^k × l). We sample a subset of patches and mask (i.e., remove) the remaining ones. Our strategy is straightforward: we sample random patches without replacement from a uniform distribution, using a high masking ratio (i.e., the fraction of removed patches). In this way, we create a task that cannot be easily solved by extrapolation from visible neighboring patches.
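The masking step can be sketched as follows; the patch shape, masking ratio, and helper name are illustrative assumptions rather than values reported in the paper.

```python
import numpy as np

def mask_tile(tile, k=16, l=16, mask_ratio=0.75, rng=None):
    """Randomly mask patches of a gray-scale tile for pretraining.

    tile : (h, w) array with map-element pixels set to 255 and background to 0.
    Returns the masked tile and a boolean keep-mask over patches.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = tile.shape
    n_rows, n_cols = h // k, w // l
    n_patches = n_rows * n_cols

    keep = np.zeros(n_patches, dtype=bool)
    n_keep = int(n_patches * (1.0 - mask_ratio))
    keep[rng.choice(n_patches, size=n_keep, replace=False)] = True  # sample without replacement

    masked = np.zeros_like(tile)
    for p in np.flatnonzero(keep):
        r, c = divmod(p, n_cols)
        masked[r * k:(r + 1) * k, c * l:(c + 1) * l] = tile[r * k:(r + 1) * k, c * l:(c + 1) * l]
    return masked, keep
```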
Output: We expect to obtain a completed gray-scale tile as the output through the shared autoencoder which takes the masked patches as inputs. The completed image is denoted by 𝒴∈ℝ^h × w × 1, where h and w represent the height and the width of the completed image, respectively. The value of each predicted pixel y_i where i ∈{1, 2, ..., h × w} ranges from 0.0 to 1.0 since it is scaled by the softmax function.
Ground Truth: Correspondingly, the ground-truth image is the original unmasked tile (i.e., 𝒳). We denote it by 𝒴̂∈ℝ^h × w × 1, where each pixel of 𝒴̂ is set to either 0 or 1 to indicate whether it belongs to the background or to a vectorized map element.
Loss Function: We employ the mean squared error (MSE) as the loss function (denoted by ℒ) for pretraining.
ℒ = 1/(h × w)∑_i=1^h × w (y_i - ŷ_i)^2.
As shown by Eq. <ref>, it measures the overall difference between 𝒴 and 𝒴̂, by calculating the squared errors between the predicted pixels and the ground-truth pixels at the same coordinates.
§.§ Finetuning Phase
In the finetuning phase, the learning objective of the shared autoencoder changes to assigning each pixel of the generated map elements to the correct category, leveraging the pretrained parameters as initial weights. As illustrated in Fig. <ref>, we elaborate on the finetuning phase in terms of input, output, ground truth, and loss function in the following paragraphs.
Input:
In this work, a tile is traversed by autonomous vehicles through T tours, with a street view captured on each tour. The original street views collected by the cameras mounted on survey vehicles are RGB images, and learning-based approaches <cit.> generally convert them into vectorized images where each pixel belongs to a certain category, such as the background or a lane divider.
Thus, T vectorized images are available at the beginning of the finetuning phase. We use a shared CNN to extract features from the T images and concatenate them as the input to the shared autoencoder.
Output: We expect to obtain a fused tile from GNMap as the output of the finetuning phase. The generated image is denoted by 𝒴∈ℝ^h × w × c, where h and w represent the height and the width of the image, respectively, and c stands for the number of map element categories. Each predicted pixel y_i is represented by a c-dimensional vector whose values range from 0.0 to 1.0, indicating the probability of each category, and sum to 1.0.
Ground Truth: Correspondingly, the ground-truth image is denoted by 𝒴̂∈ℝ^h × w × c. Each pixel of 𝒴̂ is represented by a c-dimensional one-hot vector, where exactly one value is set to 1.0, indicating that the pixel belongs to a certain category such as the background or pedestrian crossing.
Loss Function: We employ the cross-entropy (CE) function as the loss (denoted by ℒ') of the finetuning phase.
ℒ' = -1/(h × w)∑_i=1^h × w ŷ_i·log(y_i).
As shown by Eq. <ref>, it measures the divergence between 𝒴 and 𝒴̂, by summing up the log-likelihood at ground-truth pixels.
§ EXPERIMENTS
§.§ Dataset and Metrics
In order to conduct an offline assessment on methods of HD map generation, we build a real-world dataset that contains street views and vectorized tiles produced by autonomous vehicles through multiple tours. We randomly split the dataset into three subsets. As shown by Table <ref>, they are separately leveraged for the purpose of model training (abbr. Train), hyper-parameter tuning (abbr. Valid), and performance testing (abbr. Test). Each subset is composed of many exclusive tiles, each of which is passed through multiple tours by autonomous vehicles. For each tour, a street view is collected and a vectorized tile is produced simultaneously online by vehicle-mounted models. Following up previous work, we mainly focus on three kinds of map elements, including pedestrian crossings (abbr. as ped.), lane dividers (abbr. as div.), and road boundaries (abbr. as bou.).
For each generated tile, we use precision (P) and recall (R) to evaluate the quality of HD map reconstruction at the pixel level for one instance. As illustrated in Fig. <ref>, a predicted point is accepted as a true positive when it lies within a Euclidean distance of 0.5 meters of a ground-truth (GT) point and belongs to the same category. Moreover, a GT point can accept only one nearest predicted point for evaluation. Assuming that the test set contains n instances, the average precision (AP) and average recall (AR) are formulated by Eq. <ref> and Eq. <ref> as follows,
AP=1/n∑P
and
AR=1/n∑R
Then mAP and mAR represent the mean average precision and recall over all categories (i.e., pedestrian crossing, lane divider, and road boundary), which are shown by Eq. <ref> and Eq. <ref>.
mAP = AP_ped.+AP_div.+AP_bou./3
mAR = AR_ped.+AR_div.+AR_bou./3
To measure the overall performance of approaches on HD map generation, we adopt F1 score, as shown by Eq. <ref>, which calculates the harmonic mean of mAP and mAR.
F1 = 2 × mAP × mAR/mAP+mAR
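The matching rule and the final score can be sketched as follows. This is an illustrative, greedy nearest-point implementation of the 0.5 m, same-category, one-to-one criterion described above, not the evaluation code of the paper; the function names and the use of a k-d tree are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def precision_recall(pred_pts, pred_cls, gt_pts, gt_cls, tol=0.5):
    """Pixel-level P/R for one instance; points are (x, y) coordinates in meters."""
    tree = cKDTree(gt_pts)
    matched_gt, tp = set(), 0
    for p, c in zip(pred_pts, pred_cls):
        # GT candidates within tol, closest first; each GT point matches at most once.
        idx = sorted(tree.query_ball_point(p, r=tol),
                     key=lambda j: np.linalg.norm(gt_pts[j] - p))
        for j in idx:
            if j not in matched_gt and gt_cls[j] == c:
                matched_gt.add(j)
                tp += 1
                break
    return tp / max(len(pred_pts), 1), tp / max(len(gt_pts), 1)

def f1_score(mAP, mAR):
    """Harmonic mean of mAP and mAR, as in the equation above."""
    return 2.0 * mAP * mAR / (mAP + mAR)
```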
§.§ Comparison Details
We mainly compare GNMap with two groups of approaches. One group contains vehicle-mounted models (including HDMapNet <cit.>, VectorMapNet <cit.>, InstaGraM <cit.>, BeMapNet <cit.>, MapTR <cit.>, and PivotNet <cit.>) which infer vectorized tiles online from real-time street views captured by onsite cameras. The other group represents approaches (i.e., GMM <cit.> and our GNMap) on fusing the vehicle-produced tiles to construct a global HD map.
Table <ref> reports the experimental results of these two groups of methods for HD map construction. All the approaches are tested on the real-world dataset shown in Table <ref> and measured by the metrics described in Section <ref>. Based on our results, MapTR and PivotNet achieve comparable performance in online map learning through only one tour. Our GNMap outperforms GMM by over 10.0% F1 score.
Even compared with the existing SOTA method of online map learning, GNMap achieves over 5.0% higher F1, demonstrating advanced performance on HD map construction.
§.§ Ablation Study
We report ablation experiments in Table <ref>, to validate the effectiveness of employing the pretraining phase, and the robustness of using different vehicle-mounted models. We select MapTR <cit.> and PivotNet <cit.>, as the SOTA one-tour vehicle-mounted models, to produce vectorized tiles for GMM <cit.> and our GNMap. Experimental results demonstrate that GNMap achieves consistent improvements over GMM regardless of the vehicle-mounted models. Moreover, the pretrained GNMap can provide at least 8.0% higher F1 score than those without pretraining.
§ CONCLUSION
In this paper, we present GNMap as an end-to-end generative framework for HD map construction, which is distinguished from recent studies on producing vectorized tiles locally by autonomous vehicles with onboard sensors such as LiDARs and cameras. GNMap is an essential research to follow up those studies, as it first attempts to fuse multiple vehicle-produced tiles to automatically build a globalized HD map under the world coordinates. To be specific, it adopts a multi-layer autoencoder purely composed of multi-head self-attentions as the shared network, where the parameters are learned from two different tasks (i.e., pretraining and finetuning, respectively) to ensure both the completeness of map generation and the correctness of element categories. Ablation studies demonstrate that it is vital to conduct pretraining on GNMap for the sake of achieving the best performance for industrial usage. And experimental results of abundant evaluations on a real-world dataset show that GNMap can surpass the SOTA method by more than 5% F1 score. So far, it has already been deployed at Navinfo Co., Ltd., serving as an indispensable software to automatically build HD maps of Mainland China for autonomous driving.
|
http://arxiv.org/abs/2409.03090v1 | 20240904213239 | Emergence of two inertial sub-ranges in solar wind turbulence: dependence on heliospheric distance and solar activity | [
"Shiladittya Mondal",
"Supratik Banerjee",
"Luca Sorriso-Valvo"
] | physics.space-ph | [
"physics.space-ph",
"astro-ph.SR",
"physics.plasm-ph"
] |
Shiladittya Mondal (ORCID: 0009-0001-4841-1103)
Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India
Supratik Banerjee (ORCID: 0000-0002-3746-0989)
Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India
Luca Sorriso-Valvo (ORCID: 0000-0002-5981-7758)
Institute for Plasma Science and Technology (ISTP), CNR, Bari, Italy
Space and Plasma Physics, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
§ ABSTRACT
The solar wind is highly turbulent, and intermittency effects are observed for fluctuations within the inertial range. By analyzing magnetic field spectra and fourth-order moments, we perform a comparative study of intermittency in different types of solar wind measured during periods of solar minima and a maximum.
Using eight fast solar wind intervals measured during solar minima between 0.3 au and 3.16 au, we found a clear signature of two inertial sub-ranges with f^-3/2 and f^-5/3 power laws in the magnetic power spectra.
The intermittency, measured through the scaling law of the kurtosis of magnetic field fluctuations, further confirms the existence of two different power laws separated by a clear break.
A systematic study on the evolution of the said sub-ranges as a function of heliospheric distance shows correlation of the break scale with both the turbulence outer scale and the typical ion scales.
During solar maximum, we analyzed five intervals for each of Alfvénic fast, Alfvénic slow and non-Alfvénic slow solar wind. Unlike the case during the solar minima, the two sub-ranges are no longer prominent and
the Alfvénic slow wind is found to be in an intermediate state of turbulence compared to that of the fast wind and the usual non-Alfvénic slow wind.
§ INTRODUCTION
The solar wind is the most accessible natural laboratory for studying turbulence in space plasmas <cit.>.
The dynamic solar activity and the diversity of the originating regions produce solar wind with a variety of characteristics, the most evident being the plasma speed.
While the fast solar wind (FSW, >550 km s^-1) mainly emanates from the polar coronal holes, the slow solar wind (SSW, <400 km s^-1) is believed to be originated from equatorial streamers <cit.>.
During high solar activity, however, both FSW and SSW are distributed at all latitudes instead of being confined exclusively to polar and equatorial regions, respectively.
Another interesting feature of FSW is the high Alfvénicity, i.e., a high correlation (or anti-correlation) between the velocity fluctuations (𝐯) and the magnetic fluctuations in velocity units (𝐛 = 𝐁/√(μ_0ρ), where 𝐁 is the magnetic field fluctuation and ρ the mass density), in contrast with the SSW, which exhibits weak v-b correlations.
This one-to-one correspondence, however, does not strictly hold during high solar activity, as a third type of wind is also observed. This wind, termed as Alfvénic slow solar wind (ASSW), has speed similar to that of the slow wind but is surprisingly permeated with high Alfvénicity <cit.>.
The degree of Alfvénicity influences the nature of turbulence in different types of solar winds.
A high degree of Alfvénicity represents an imbalance between the Elsässer variables, 𝐳^± = 𝐯±𝐛, thus leading to less developed turbulence, whereas low Alfvénicity corresponds to comparatively more developed turbulence owing to the balance between them.
At scales greater than the ion-inertial length (d_i), a longer k^-5/3 energy power spectrum is therefore observed for the slow wind whereas a comparatively shorter k^-5/3 spectrum is observed in the Alfvénic fast wind <cit.>.
These observed spectra are universal in solar wind turbulence and are consistent with self-similar energy cascade (Kolmogorov phenomenology) within the inertial range. In physical space, universal energy cascade is obtained in terms of the linear scaling law for the third-order moments of velocity and magnetic field fluctuations <cit.>. In order to assure a self-similar cascade, the kurtosis K (the normalised fourth-order moment of the fluctuations) should be scale invariant. For a turbulent flow, one such possibility is the case of quasi-Gaussian PDFs where the third-order moment (skewness) is non-zero but the K is roughly equal to that of a Gaussian distribution.
However, careful studies in turbulent fluids and plasmas consistently show a departure from self-similarity as one moves towards the smaller length scales within the inertial range.
This departure, known as inertial-range intermittency, is characterised by the large tails of the PDFs at those scales.
In particular, intermittency effects are quantified by the deviation from the self-similar scaling laws of the higher-order moments <cit.>.
Instead of using arbitrary higher-order moments, the kurtosis is often used as a practical measure of intermittency and a higher probability of extreme events leads to its increase with decreasing length scale ℓ.
From a physical point of view, this implies that the small-scale coherent structures, such as the vortices, current sheets, etc., generated due to nonlinear interactions, do not fill the available space in a self-similar way nor are randomly distributed, but rather tend to form inhomogeneously distributed clusters of bursts <cit.>.
While the solar wind expands and accelerates through the heliosphere, the turbulence becomes more developed, with the fluctuations being majorly energized by the nonlinear interactions between the oppositely propagating Alfvén waves <cit.>, switchbacks <cit.>, large-scale structures, and instabilities <cit.>.
Studies based on spacecraft observations have shown a variation in spectral indices of the magnetic and velocity power spectra <cit.>, a decrease in - correlations and a broadening of the inertial range <cit.>.
Recently, using high resolution in-situ data of the Parker Solar Probe it has been suggested that the magnetic spectral index evolves from -3/2 near the Sun (as close as 0.17 au) to a more developed -5/3 at 1 au <cit.>.
These observations are consistent with the idea of radial evolution of solar wind turbulence into more developed states and the non-adiabatic heating of the medium with increasing heliospheric distance <cit.>.
In addition, a power law behaviour for the kurtosis <cit.> and an increase in intermittency in solar wind turbulence have been observed at greater heliospheric distances <cit.>.
Using the magnetic data of Helios 2, <cit.> observed a break in the scaling of K of the magnetic field fluctuations in FSW, during solar minimum. They provided a plausible explanation suggesting this observed break to be associated with the f^-1 break in the magnetic power spectrum of FSW. However, clear disparity is observed between the scales corresponding to the breaks in kurtosis scaling and the power spectra.
A break in both the spectral density and the higher-order structure functions has also been observed <cit.>.
However, the nature of such break and its implications on the dynamics of the solar wind turbulence have not been investigated in detail yet.
In this paper, we revisit the aforementioned problem and carry out a systematic study to provide an explanation for the break observed in the kurtosis scaling.
Using the in-situ data of Helios and Ulysses during solar minima, we show the kurtosis break is primarily associated with an observed break between two inertial sub-regimes of magnetic power spectra, having -3/2 and -5/3 spectral indices, respectively.
In addition, we also study the radial evolution of the break scale to characterise the solar wind turbulence as a function of the heliospheric distance.
During a solar maximum, however, breaks are not prominent in the scaling of K. Nevertheless, a comparative study of FSW, ASSW and SSW shows a clear distinction of these three types of wind according to the degree of turbulence and the degree of intermittency, along with some insights on the origin of ASSW.
In Sections <ref> and <ref>, we briefly describe the data and methodologies used for the analysis. Section <ref> provides the results obtained in our study during solar minima (<ref>) and maxima (<ref>), respectively. Finally, in Section <ref>, we summarize our findings and conclude.
§ DATA SELECTION
For our analysis, we have used in-situ data from the Helios and Ulysses spacecraft data repository publicly available at NASA CDAWeb (https://cdaweb.gsfc.nasa.gov/https://cdaweb.gsfc.nasa.gov) and AMDA science analysis system (https://amda.irap.omp.eu/https://amda.irap.omp.eu).
The plasma data for Helios and Ulysses have been obtained from the E1 Plasma Experiment instrument and the Solar Wind Observations Over the Poles of the Sun (SWOOPS) instrument, respectively. For magnetic power spectrum and kurtosis scaling, we use 6 s resolution magnetic-field data from the E3 Flux-gate Magnetometer (FGM) onboard Helios and 1 s resolution magnetic field data from the Vector Helium Magnetometer (VHM) onboard Ulysses spacecraft.
During a declining phase of solar activity near a solar minimum between 1975 and 1976, Helios 1 and 2 recorded several streams of FSW from a coronal hole (i.e., the same source), which persisted through nearly two solar rotations <cit.>.
Several intervals of the fast wind expelled from this coronal hole were also identified by <cit.>.
In particular, for our current analysis, we use the streams - A3, A6, A7, and A8, ranging from 0.3 au to 1 au, mentioned therein.
Each chosen interval (i) contains a negligibly small amount of data gaps, (ii) is free of any considerable mean trend, and (iii) is reasonably stationary.
The stationarity is verified by the approximately constant averages over sub-intervals of different lengths. Extending our analysis beyond 1 au, we use four intervals of FSW at varying heliospheric distances (F1 - F4, as listed in Table <ref>), recorded by Ulysses during the years 1995-1996. Typical features of FSW intervals used in our analysis in the inner and outer heliosphere, all with high v-b correlations, are shown in Fig. <ref>.
In order to interpret our findings, we also need to compute the co-spectra of cross-helicity σ_c (see Section <ref>), for which we have used the 40.5 s resolution magnetic field and proton velocity data from the E3 FGM and the E1 Plasma Experiment instrument onboard Helios. We use a degraded resolution for the magnetic field data in order to keep coherence with the available plasma data from the data repository.
A similar analysis cannot be done using the plasma data of Ulysses where the data resolution is 240 s, and hence cannot be used to capture the required length scales of our interest.
During solar maximum, five Ulysses intervals each for the three types of solar wind were selected following similar methods prescribed in <cit.> based on their speed, proton density, and Alfvénic correlations (see Table <ref>).
A particular case study represents several properties of the different types of wind within a 20-day interval (see Fig. <ref>). While ASSW looks very similar to SSW with respect to the flow speed (< 400 km/sec), it is characterised by low proton density (∼ 1 particle/cm^3) and high Alfvénicity (∼ 0.6) similar to FSW.
These findings are in agreement with previous studies <cit.>.
§ ANALYSIS METHOD
Our analysis is mainly based on the computation of (i) the kurtosis (K), or the normalized fourth-order moment of magnetic field fluctuations, (ii) the magnetic power spectral density (PSD), and (iii) the cross-helicity co-spectra (σ̂_c). All the data sets were made evenly sampled by interpolating over the data gaps before being used for any of the computations.
Since all the intervals used in our study contain super-Alfvénic solar wind, one can practically use Taylor's hypothesis, which states that if the turbulent fluctuations are much smaller than the bulk speed, they can be considered frozen (or slowly evolving) as the flow sweeps past the probe <cit.>.
When using single-point measurements in the form of a time series, the only accessible direction for the increments is along the bulk flow.
This provides an equivalence between the longitudinal (along the flow) length scale ℓ and the corresponding time scale τ as ℓ = V_swτ, where V_sw is the mean solar wind speed.
Therefore, we define the increments of the i^th component (with i=r,t,n) of the magnetic field as Δ b_i(t,τ)=b_i(t+τ)-b_i(t).
In order to capture both magnitudinal and directional fluctuations of , we define the n^th order structure function as:
S_n(τ) = ⟨[ ∑_i (Δ b_i )^2 ]^n/2⟩,
where ⟨·⟩ represents the ensemble average <cit.>.
The corresponding kurtosis (K) is then calculated using the standard expression:
K(τ) = S_4(τ)/[S_2(τ)]^2.
Note that, when each Δ b_i follows a Gaussian distribution with zero mean, K (τ) is equal to 5/3 (see appendix Section <ref>). For a self-similar, non intermittent flow, in the inertial range of scales (namely much smaller than the energy-injection scales and larger than the dissipative scales) the n^th order structure function is expected to scale as S_n(τ) ∝τ^np, where p is a phenomenological constant <cit.>.
It is therefore straightforward to see that K becomes independent of τ.
However, in the presence of intermittency, this linear scaling does not hold any longer and the simplest intermittency model can be given as S_n(τ) ∝τ^np + q(n), where q(n) is a nonlinear correction accounting for the intermittent structures.
For the kurtosis, this leads to a power-law scaling K(τ) ∼τ^-κ, with κ = 2q(2) - q(4).
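This value of κ follows directly from the assumed scaling: K(τ) = S_4(τ)/[S_2(τ)]^2 ∝τ^(4p+q(4))/τ^(4p+2q(2)) = τ^(q(4)-2q(2)), so that κ = 2q(2) - q(4); it vanishes, recovering a scale-independent kurtosis, when the intermittency corrections q(n) are zero.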
Such a scaling, universally observed in fluid turbulence, has recently been described in the case of solar wind turbulence as well <cit.>.
In this work, we study the scaling properties of K of the magnetic field fluctuations at different heliospheric distances.
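In practice, K(τ) can be estimated directly from an evenly sampled time series as sketched below; the function name and array layout are illustrative assumptions, and the lags τ can be converted to length scales ℓ = V_swτ through Taylor's hypothesis.

```python
import numpy as np

def kurtosis_vs_lag(b, dt, taus):
    """Kurtosis of magnetic-field fluctuations as a function of time lag.

    b    : (n_samples, 3) array of the r, t, n components (evenly sampled).
    dt   : sampling period in seconds.
    taus : iterable of time lags in seconds.
    """
    K = np.empty(len(taus))
    for m, tau in enumerate(taus):
        lag = max(1, int(round(tau / dt)))
        db = b[lag:] - b[:-lag]              # vector increments Delta b_i(t, tau)
        mag2 = np.sum(db ** 2, axis=1)       # sum_i (Delta b_i)^2
        S2 = np.mean(mag2)                   # S_2(tau)
        S4 = np.mean(mag2 ** 2)              # S_4(tau)
        K[m] = S4 / S2 ** 2
    return K
```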
Finally, the magnetic energy spectra and cross-helicity co-spectra are defined by PSD = b̂_i^†b̂_i and σ̂_c = (b̂_i^†v̂_i + v̂_i^†b̂_i)/2, respectively, where b̂_i and v̂_i are the fast Fourier transforms (FFT) of the magnetic and velocity field components b_i and v_i, with summation intended over the repeated index (i = r, t, n).
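These spectra can be estimated with a plain FFT as sketched below. The sketch is schematic: windowing, ensemble averaging over sub-intervals, and the exact one-sided normalisation convention are omitted, and 𝐛 is assumed to be already expressed in Alfvén units on the same even time grid as 𝐯.

```python
import numpy as np

def trace_spectra(b, v, dt):
    """Trace magnetic PSD and cross-helicity co-spectrum versus frequency.

    b, v : (n, 3) arrays of the r, t, n components (gaps already interpolated).
    """
    n = b.shape[0]
    freqs = np.fft.rfftfreq(n, d=dt)
    bh = np.fft.rfft(b - b.mean(axis=0), axis=0)   # \hat{b}_i
    vh = np.fft.rfft(v - v.mean(axis=0), axis=0)   # \hat{v}_i

    norm = dt / n                                   # schematic normalisation
    psd = norm * np.sum(np.abs(bh) ** 2, axis=1)
    sigma_c = norm * 0.5 * np.sum((np.conj(bh) * vh + np.conj(vh) * bh).real, axis=1)
    return freqs, psd, sigma_c
```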
§ RESULTS AND DISCUSSIONS
§.§ Observations during Solar Minimum
During a period of solar minimum in 1976, using data from Helios spacecraft, we study FSW streams in the inner heliosphere (at 0.3, 0.41, 0.65, and 0.98 au) from a sustained coronal hole near the ecliptic plane.
Beyond 1 au, FSW streams are studied using Ulysses data collected during the 1995-1996 solar minimum, at varying heliospheric distances (at 1.44, 2.1, 2.75, and 3.16 au), which were also measured at different latitudes.
In Fig. <ref>, we have drawn the magnetic power spectral traces, estimated using nearly equispaced frequencies in logarithmic scales.
Top panels refer to Helios intervals, while bottom panels to Ulysses.
As typically observed in the Alfvénic solar wind, at low frequencies we can identify a large-scale, energy-containing range (white background in the figure), where the power decays as ∼ f^-1. Fitted power laws and the corresponding scaling exponents are shown as green lines.
A break identifies a clear change in the power-law scaling exponent, as indicated by vertical dashed lines.
Such break can be associated with the correlation scale of the turbulence.
The low-frequency range is clearly visible in Helios data, while it is only indicatively present in the Ulysses intervals. This is consistent with the well-known shift of the correlation scale towards lower frequency with increasing R in the solar wind <cit.>.
The f^-1 range is followed by the usual inertial range of turbulence, where the spectrum roughly follows an f^-5/3 power law dependence <cit.>.
However, a more accurate inspection shows that a further break emerges within such range, indicated by the vertical dot-dashed lines separating the light and deeper blue shaded areas in Fig. <ref>.
Although the dynamical range of frequencies is relatively short, for intervals other than that at 3.16 au, it is possible to identify two different sub-ranges with different power laws as demonstrated by the red and blue lines, with the associated scaling exponents indicated nearby.
In the lower-frequency range (light blue background), the spectral index approaches -3/2, whereas at larger frequencies (deep blue background), the spectra show a transition to a -5/3 spectral index usually observed in non-Alfvénic solar wind <cit.>.
For isotropic turbulence, an f^-5/3 scaling often represents an energy cascade by eddy fragmentation in strong turbulence, whereas an f^-3/2 scaling can possibly be explained by an energy cascade through the sporadic interaction of Alfvénic wave packets in MHD turbulence <cit.>.
However, -5/3 and -3/2 power laws can also be obtained under various circumstances if anisotropy is taken into account <cit.>.
Irrespective of the true nature of energy cascade, a single power law is often assumed for the magnetic power spectra in the frequency range 10^-4 - 10^-1 Hz <cit.>,
although a few studies have found variation in the power law exponents in the inertial range of magnetic power spectra <cit.> as well as the scaling of higher order structure functions <cit.>.
In our study, the co-existence of the two sub-regimes (with -3/2 and -5/3 spectral indices) within the turbulence spectra of FSW has been consistently observed at various heliospheric distances both in the inner as well as the outer heliosphere.
The break scale between those two sub-ranges, f_b, appears to shift towards lower frequencies (approaching the correlation scale) with increasing heliospheric distance.
This is consistent with the fact that a -3/2 scaling has been observed for solar wind close to the sun, whereas a steeper -5/3 power law is obtained at and beyond 1 au <cit.>.
Finally, in the Ulysses intervals, the ion-scale breaks are visible, separating the MHD range from the sub-ion range, where Hall effects and other kinetic effects start to affect the cascade (white background) <cit.>.
Such break is usually observed at frequencies ∼ 10^-1 Hz, which is the upper cut-off for the MHD range.
However, similar breaks do not turn up in the Helios intervals, due to the low cadence of the data used here.
To further investigate on the sub-inertial range spectral break, f_b, we study the kurtosis K(τ) for all of the eight FSW intervals.
The scaling of K(τ) defined in Section <ref> for Helios and Ulysses data are depicted in Fig. <ref> top and bottom panels, respectively, for each R.
To inspect the general radial trend of intermittency, we have drawn a consolidated plot for the Helios and Ulysses intervals (see Fig. <ref>). From this figure, one can conclude that the value of K at all scales increases with increasing R, implying higher intermittency with increasing heliospheric distance, in agreement with previous studies <cit.>.
At each given distance R, K is systematically found to decrease as one moves towards the larger scales. This is consistent with the notion that deviation from Gaussian statistics increases at smaller scales
<cit.>.
Upon reaching the typical correlation scales of the flow (τ≃ 10^4 s), corresponding to the f^-1 power law in energy spectrum (see Fig. <ref>), the kurtosis saturates to a constant value K≃ 1.67, representing a quasi-Gaussian distribution (with a non-zero skewness) of the fluctuations of the magnetic field components (see appendix Section <ref>).
Within the inertial range, from the nature of K(τ) in Fig. <ref>, a clear signature of broken power law is observed.
While two breaks are visible for Ulysses data (with 1 s resolution), the small-scale break at around τ∼ 10 s is missing for the intervals using Helios magnetic field data with 6 s resolution.
This break, corresponding to a frequency of ∼ 10^-1 Hz, is associated with the transition from the ordinary MHD range to the sub-ion kinetic or Hall MHD regime <cit.>.
The other break which occurs at a larger τ (solid vertical lines) is clearly visible for both Helios and Ulysses data.
In particular this break scale (τ_K) shifts towards larger τ as R increases.
Within the distance range of 0.3 - 2.75 au, τ_K is found to increase from ∼ 100 s to ∼ 1500 s.
It is to be emphasized here that except for certain cases, the appearance of the break τ_K is persistent in the component-wise K scaling as well (see Figs. <ref> and <ref> in appendix Section <ref>).
A detailed list of the break scale τ_K as a function of R is given in Table <ref>.
As is evident from Figs. <ref> and <ref>, τ_K separates the steeper power law (K∼τ^-κ with κ≃ 0.37 averaged over the eight intervals) at smaller scales (dashed lines) from the shallower one (κ≃ 0.11 on average) at large scales (dotted lines), with one exception.
Note that for the Ulysses interval at R=3.16 au, K(τ) reaches the Gaussian regime without going through the large τ break, suggesting that the turbulence has fully developed that transforms the shallower scaling range at large scale into the steeper power law at smaller scales.
We will elucidate this point in the following.
As mentioned in the introduction, similar broken power law behaviour for K(τ) in FSW has already been observed by <cit.>.
However, those authors suggested that τ_K might correspond to the break between low-frequency f^-1 regime to Kolmogorov f^-5/3 regime in the magnetic power spectra.
This was inspired by the fact that f^-1 regime is exclusively found in FSW intervals and the f^-1 break also shows nearly similar behaviour to 1/τ_K as R changes <cit.>.
Instead, for all the intervals where the break is observed, it is systematically found in our study that 1/τ_K occurs at a higher frequency (roughly by a factor ∼ 10) than the f^-1 break scale (see Fig. <ref>).
The inverse of τ_K is typically corresponding to f_b, although with some consistent small discrepancy that could be due to the different frequency response of Fourier transform and scale-dependent increments (see Fig. <ref> where both τ_K, solid lines, and 1/f_b, dashed lines, are drawn).
The two scaling ranges in the kurtosis therefore approximately correspond to the two inertial sub-ranges observed in the spectrum.
Since PSD and kurtosis are related quantities, the observation of a double power law in both supports the robustness of the break, and therefore indicates the emergence of a new characteristic scale in the inertial range that marks the transition from f^-3/2 to f^-5/3 regime.
Summarizing, from the existence of the two turbulent inertial sub-regimes it is clear that as we move from the larger towards the smaller scales the nature of turbulence also varies.
This variation becomes more apparent when we examine the cross-helicity spectrum for the FSW intervals within the inner heliosphere (Fig. <ref> top).
The same could not be computed for the FSW beyond 1 au due to the limitation in terms of low plasma data resolution, as mentioned in Section <ref>.
Nevertheless, for all the FSW intervals in the inner heliosphere we see that the σ̂_c power decreases as we move from larger to smaller scales (see Fig. <ref>). Thus, as the turbulent cascade proceeds towards smaller scales, the imbalance between the inward and outward Alfvén modes propagating along the mean magnetic field decreases towards a more balanced state.
While recent studies have shown the transition from a weak to a strong turbulence regime on moving towards smaller scales <cit.>, a transition from imbalanced (|z^+2|≫ |z^-2|, or vice-versa) to a balanced (|z^+2|∼|z^-2|) turbulent state could as well be associated with the steepening of the spectra from the low frequency f^-3/2 regime to the higher frequency f^-5/3 regime.
A similar gradual change from an imbalanced towards a relatively balanced state is also evident with increasing heliospheric distance R. Even though σ_c retains the high values expected for FSW, it declines slowly with R, as indicated by the straight-line fit with slope α = -0.05 (see Fig. <ref>, bottom).
This is again consistent with the absence of the f^-3/2 regime at R=3.16 au and recent observations of change in the inertial range spectral index from -3/2 to -5/3 with increasing R <cit.>.
We further determine the evolutionary nature of the break scale, τ_K, with R and have investigated its relationship with the typical ion and correlation scales.
In Fig. <ref> (top), we show the radial evolution of τ_K, appearing in the scaling of K, converted from time scale to length scale (l_K) via Taylor's hypothesis as mentioned in Section <ref>.
Clearly, l_K shifts towards larger scales with R, as is evident from Fig. <ref> and Table <ref>.
We see that a strong power-law relation exists between R and l_K, with l_K evolving as l_K∝ R^ 1.18 for R<1 au and l_K∝ R^ 1.87 for R>1 au.
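The conversion and the power-law fits quoted here can be reproduced along the following lines; the only ingredients are Taylor's hypothesis, l_K = V_sw τ_K, and an ordinary least-squares fit in log-log space, with the input arrays left as placeholders.

import numpy as np

def break_length_km(tau_K_s, V_sw_kms):
    return V_sw_kms * tau_K_s                        # Taylor's hypothesis: l_K = V_sw * tau_K

def radial_exponent(R_au, l_K_km):
    slope, _ = np.polyfit(np.log10(R_au), np.log10(l_K_km), 1)
    return slope                                     # l_K ~ R**slope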
The central panel in Fig. <ref> shows how the break scale behaves with R when normalized to the ion-inertial length scale, d_i = c/ω_pi (where ω_pi = √(n e^2/(ϵ_0 m_i)) is the ion plasma frequency).
The ion-inertial scale has been found to vary between ∼45 to ∼500 km for R ranging from R ≃ 0.3—3.2 au.
After normalization, we find that the evolutionary nature is nearly lost for FSW intervals in the inner heliosphere near the ecliptic plane, with a residual weak R^ 0.13 dependence, and l_K is ∼10^3 times d_i.
A similar pattern was observed (but not shown) after normalization with the ion gyro-radius ρ_i = v_th^⊥ / Ω_i in the inner heliosphere (ρ_i could not be computed in the outer heliosphere, again owing to data limitations).
Note that the typical ion scales have an approximately linear radial increase up to 5 au <cit.> which might explain the constant radial trend of the normalized break scale.
However, beyond 1 au, it is to be noted that even after normalization, the evolutionary nature of l_K still persists, so that only the radial trend of the break decouples from that of the ion scales.
The residual power law could be associated with the variation in heliospheric latitude (and to the associated variation of the angle between the large-scale magnetic field B and the bulk speed V_sw) at which the FSW streams were sampled, indicated in the labels in Fig. <ref>.
Understanding this variation of l_K with latitude and V_sw-B angle would be interesting for a future study but is currently beyond the scope of this paper.
In order to compare the break scale l_K and the correlation scale L_c, we have drawn l_K normalized to L_c as a function of R (see Fig. <ref> bottom).
It is evident from the plot that, for R<1, a small power-law exponent is observed, l_K/L_c ∼ R^ -0.14, so that the normalization to the correlation scale removes the radial dependence, similar to what we observe when normalized to the ion scale.
Moreover, in this case, l_K is ∼ 0.4 times L_c and certainly does not correspond to scales within the f^-1 power law regime in the spectrum, contrary to what has been suggested previously <cit.>.
For R>1, l_K approaches L_c, thereby explaining the absence of the f^-3/2 regime in the R=3.16 au interval and supporting recent observations of spectral steepening of the inertial range with increasing R <cit.>.
Note that, in the inner heliosphere, break scales normalized to both the characteristic ion scale and the correlation scale follow weak radial dependence of R^ 0.13 and R^ -0.14, respectively.
§.§ Observations during Solar Maximum
We now perform a similar spectral and intermittency analysis using the set of intervals recorded during the solar maximum (see Table <ref>).
While the previous section was confined to only analyzing FSW, in this section we take into consideration the three main solar wind types, namely FSW, SSW and the ASSW.
Previous studies on spectra and intermittency mostly focused on FSW and SSW <cit.>.
More recently, the spectral properties of ASSW, which exclusively permeates the heliosphere during periods of high solar activity, were also examined <cit.>. However, such studies did not include intermittency. Moreover, a comparative analysis between FSW, SSW and ASSW at solar maxima has not yet been conducted.
Thus, in this section, we examine the intermittency properties of ASSW <cit.> in comparison with the other two types of wind using Ulysses data, during the ascending phase of solar cycle 23 (year 2001), at R≃1.5 au.
In Fig. <ref>, we show examples of the magnetic field power spectral density for three intervals at solar maximum, one for each type of solar wind.
Unlike the solar-minimum case, here there is no clear emergence of two sub-ranges, and the inertial range consistently shows the typical Kolmogorov scaling, close to -5/3. Only in the fast stream is it possible to see a shallower range at low frequency, compatible with the usual f^-1 range; this range is not visible in the two slow wind types. Similar spectra were observed for all the other intervals (not shown).
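For reference, the power spectral density and the inertial-range spectral index of the kind shown in Fig. <ref> can be estimated along the following lines; the use of Welch's method, the cadence fs, and the fitting band are illustrative choices rather than the exact settings adopted here.

import numpy as np
from scipy.signal import welch

def spectral_index(b_component, fs, f_lo, f_hi, nperseg=4096):
    # fit PSD ~ f**alpha over [f_lo, f_hi]; alpha is close to -5/3 for Kolmogorov scaling
    f, psd = welch(b_component, fs=fs, nperseg=nperseg)
    band = (f >= f_lo) & (f <= f_hi)
    alpha, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return alpha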
The variation of K (defined in Section <ref>) as a function of τ is shown in Fig. <ref> for all the intervals of FSW, ASSW and SSW listed in Table <ref>.
Similar to what has been observed during solar minima, K is found to be scale dependent, decreasing with the time scale τ and approaching the Gaussian value K≃1.67 at τ≳10^4.
This is again a clear indication of the non-universal nature of the distribution function of the magnetic field increments.
However, just like for the spectra, in this case no clear break within the inertial range is evident for any of the three types of solar wind.
The consolidated plot shown in Fig. <ref> allows us to perform a comparative study of intermittency among the three types of solar wind.
As evident from the plots, turbulence in ASSW is moderately intermittent, characterized by a value of K which is intermediate between that of the SSW with the strongest intermittency and that of the FSW having the weakest intermittency.
Our observations are in agreement with the fact that, in the outer heliosphere, the SSW is in a state of more developed turbulence than ASSW and FSW.
This can also be inferred from the broad inertial range in the magnetic power spectra exhibited by SSW extending to much lower frequencies compared to FSW (see Fig. <ref>), for which we observe a f^-1 break <cit.>.
Several studies have observed similar spectral characteristics for these two types of wind <cit.>.
The power spectrum of the ASSW shows a similar nature to that of the SSW, with a broad inertial range. However, it should be noted that a recent study by <cit.> did observe an f^-1 break in the spectra of ASSW at 1 au, which hints at how turbulence develops in the ASSW, the inertial range broadening as the wind evolves with R.
While studies by <cit.> explain the high Alfvénicity of the ASSW as due to its generation from coronal-hole boundaries, based on its composition and micro-physics, our finding that the ASSW is in an intermediate state of turbulence hints that its low speed may be due to the intermixing of FSW and SSW inside the heliosphere.
§ SUMMARY AND CONCLUSION
In this paper, we report the existence of two distinct sub-regimes for the inertial range in the magnetic power spectrum of solar wind turbulence within and beyond 1 au.
Although a single inertial range spectral power law has been traditionally observed <cit.>, a few studies have also identified variations in the spectral indices of the magnetic power spectrum <cit.> and in the scaling exponents of higher-order structure functions <cit.>.
Additionally, <cit.> observed a break (τ_K) in the scaling of kurtosis (K) within FSW intervals, suggesting a possible connection between this break and the f^-1 break due to their similar behavior with R as discussed by <cit.>.
However, our findings show that τ_K in the kurtosis scaling closely coincides with the break (f_b) observed in magnetic spectra separating the two sub-regimes characterized by f^-3/2 and f^-5/3 spectral power laws in both the inner as well as the outer heliosphere (see Fig. <ref>).
The appearance of a double power-law in both the magnetic power spectrum and kurtosis (or normalized fourth-order moments) supports the robustness of this break, indicating the existence of a previously unidentified characteristic scale within the inertial range.
Whereas the most probable explanation for the f^-5/3 regime is provided by isotropic Kolmogorov phenomenology, or by anisotropic MHD turbulence with weak v-b alignment in non-Alfvénic solar wind, the f^-3/2 regime can reasonably be associated with the anisotropic spectra arising under strong v-b alignment <cit.>. Note that we consciously exclude the possibility of a -3/2 spectrum from Iroshnikov-Kraichnan phenomenology, which is valid only for balanced MHD and cannot explain the emergence of a -3/2 spectrum when there is a strong v-b correlation.
A recent study by <cit.> provided evidence of a transition from a weak to a strong turbulence regime as one moves from larger to smaller scales.
In our study, an inspection of the cross-helicity co-spectra revealed that the turbulence in FSW shifts from a highly imbalanced state (|z^+2|≫|z^-2|, or vice-versa) at larger scales to a relatively balanced one (|z^+2|∼|z^-2|) on moving towards the smaller scales (see Fig. <ref>).
These observations may explain the broken power-law behavior of the spectrum and the kurtosis indicating a transition in the nature of turbulence as the cascade progresses towards the smaller scales.
We have further investigated the dependence of the sub-inertial regime break (τ_K) on the heliospheric distance (R) in comparison with the ion and correlation scales. Our findings indicate a power-law behavior for l_K <cit.> with R, which upon normalization with the typical ion scales (e.g. the ion-inertial scale d_i and the ion gyro-radius ρ_i) and the correlation scale (L_c) practically disappears in the inner heliosphere (see Fig. <ref>). Therefore, both the correlation scale and the characteristic ion scale appear to control the location of the break.
Interestingly, though, l_K appears to approach the correlation scale shifting towards larger scales as R increases, resulting in the absence of the f^-3/2 regime at 3.16 au.
This observation could explain the transition of the inertial range magnetic spectral slope from -3/2 near the Sun to -5/3 farther away <cit.>.
Note that a residual power-law radial dependence of the break scale still persists in the outer heliosphere, possibly due to variations in the latitude at which the FSW streams were sampled.
This residual behaviour of the normalized l_K should be studied in depth in future work as a function of latitude and of the large-scale magnetic field angle, which determines the degree of anisotropy in the measured turbulence.
Our analysis, extended to intervals of different solar wind types from the year 2001, shows that the two inertial sub-ranges do not survive during the period of high solar activity.
Nevertheless, the study enables us to characterize the state of turbulence in the Alfvénic slow solar wind <cit.>, as compared to traditional fast and slow winds.
We showed that during this period, when ASSW is found in abundance near the ecliptic plane, it is in an intermediate state of turbulence between those typical of fast and slow streams.
This also gives us insights on the position of the break separating the integral (f^-1) and inertial ranges <cit.> in the case of ASSW.
While <cit.> found the f^-1 break to occur at the same frequency for FSW and ASSW at 1 au, in our study it occurs at a much lower frequency for the ASSW than for the FSW at distances greater than 1.5 au (see Fig. <ref>), again suggesting that the `slowness' of the ASSW plausibly results from strong intermixing between the FSW and the SSW during high solar activity.
§ ACKNOWLEDGMENTS
S.M. was supported by Students-Undergraduate Research Graduate Excellence (SURGE) summer internship program at Indian Institute of Technology Kanpur.
S.B. acknowledges the financial support from the grant by Space Technology Cell-ISRO (STC/PHY/2023664O).
L.S.-V. received support by the Swedish Research Council (VR) Research Grant N. 2022-03352 and by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #23-591 (Evolution of Turbulence in the Expanding Solar Wind).
§ DATA AVAILABILITY
For our study, we have used publicly available data from NASA CDAWeb (https://cdaweb.gsfc.nasa.gov/https://cdaweb.gsfc.nasa.gov) and AMDA science analysis system (https://amda.irap.omp.eu/https://amda.irap.omp.eu).
§ ESTIMATION OF KURTOSIS OF MAGNETIC FIELD FLUCTUATIONS FOLLOWING A GAUSSIAN DISTRIBUTION
Following the definition of the n^th order structure function (S_n) given by eqn. (<ref>), the expressions of S_4 and S_2 take the form:
S_4 = ⟨ ( (Δ b_r)^2 + (Δ b_t)^2 + (Δ b_n)^2 )^2 ⟩ ,
and
S_2 = ⟨ (Δ b_r)^2 + (Δ b_t)^2 + (Δ b_n)^2 ⟩ ,
respectively. Now, considering that the fluctuations follow a zero-mean Gaussian distribution f(Δ b_i) with standard deviation σ, such that
f(Δ b_i) = 1/√(2πσ^2) exp[ - (Δ b_i)^2/(2σ^2) ] ,
we have
S_4 = ∫∫∫ ((Δ b_r)^2 + (Δ b_t)^2 + (Δ b_n)^2)^2 f(Δ b_r) f(Δ b_t) f(Δ b_n) d(Δ b_r) d(Δ b_t) d(Δ b_n),
and
S_2 = ∫∫∫ ((Δ b_r)^2 + (Δ b_t)^2 + (Δ b_n)^2) f(Δ b_r) f(Δ b_t) f(Δ b_n) d(Δ b_r) d(Δ b_t) d(Δ b_n),
which look somewhat laborious but are straightforward to evaluate, giving S_4 = 15σ^4 and S_2 = 3σ^2. Thus, the kurtosis defined by eqn. (<ref>) takes the value K = 5/3 ≃ 1.67.
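A quick Monte Carlo cross-check of this value, assuming only independent zero-mean Gaussian components of unit variance, is sketched below.

import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(0.0, 1.0, size=(2_000_000, 3))       # three Gaussian increment components
db2 = np.sum(db**2, axis=1)
S2, S4 = db2.mean(), (db2**2).mean()                  # expect 3*sigma^2 and 15*sigma^4
print(S4 / S2**2)                                     # converges to 5/3 ~ 1.67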
§ COMPONENT-WISE KURTOSIS OF THE MAGNETIC FIELD FLUCTUATIONS IN FSW INTERVALS DURING SOLAR MINIMUM
In this appendix we show the kurtosis K for each individual RTN magnetic field component, using four example intervals each from the Helios and Ulysses databases, at eight different distances from the Sun. Different colors refer to the different intervals.
Whenever present, a power law is shown as colored dashed line, and the corresponding scaling exponents are indicated in each panel.
Two power laws can be identified in all of the Helios and most of the Ulysses intervals, with the exception of the radial component at 2.75 au and of all components at 3.16 au.
The timescale τ_K of the break between the two power laws is indicated by a solid vertical grey line, while the dashed grey vertical lines indicate the location of the spectral break, 1/f_b.
|
http://arxiv.org/abs/2409.03172v2 | 20240905020633 | Further study of the maximally symmetry breaking patterns in an ${\rm SU}(8)$ theory | [
"Ning Chen",
"Zhiyuan Chen",
"Zhanpeng Hou",
"Zhaolong Teng",
"Bin Wang"
] | hep-ph | [
"hep-ph"
] |
Further study of the maximally symmetry breaking patterns in an SU(8) theory
Ning Chen 0000-0002-0032-9012, Zhiyuan Chen, Zhanpeng Hou, Zhaolong Teng 0000-0002-7141-2331, Bin Wang
School of Physics, Nankai University, Tianjin, 300071, China
===========================================================================================================
§ ABSTRACT
SU(8) was previously found to be the minimal simple gauge group in which all three generations of Standard Model fermions can be non-trivially embedded, and it is maximally broken as SU(8)→𝒢_441 ≡ SU(4)_s ⊗ SU(4)_W ⊗ U(1)_X_0 at the GUT scale by the SU(8) adjoint Higgs field.
Gauge symmetries in the strong and the weak sectors are extended by one and two ranks, respectively.
The sequential strong-weak-weak (SWW) symmetry breaking stages were found to generate the observed hierarchical SM quark/lepton masses as well as the Cabibbo-Kobayashi-Maskawa (CKM) mixing pattern with the precise flavor identifications <cit.>.
We further study the possible weak-strong-weak (WSW) and weak-weak-strong (WWS) symmetry breaking patterns, and compare with the results that we have obtained by following the SWW sequence.
The two-loop RGEs following both patterns are derived, where we cannot achieve the gauge coupling unification in the field theory framework.
Based on these analyses, we suggest the gauge coupling unification to be interpreted in the context of the Kač-Moody Lie algebra.
Emails:
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
<[email protected]>
empty
§ INTRODUCTION
The Grand Unified Theory (GUT) was proposed to unify all three fundamental symmetries described by the Standard Model (SM).
Two original versions that have been widely studied over decades are the (5) Georgi-Glashow theory <cit.> and the (10) Fritzsch-Minkowski theory <cit.>, as well as their supersymmetric extensions <cit.>.
However, the minimal unified frameworks can not address much on the existing flavor puzzles of all three-generational SM fermions, unless further ingredients are included <cit.>.
From the experimental perspective, the indispensable guidance to unveil the flavor puzzle (particularly for quarks and leptons) to date is the discovery <cit.> and the measurements <cit.> of one single SM Higgs boson.
The LHC measurements have confirmed that the Yukawa couplings at the EW scale between the SM Higgs boson and the (t ,b ,τ ,μ) are consistent with the SM predictions.
Based on the experimental facts, one natural conjecture is that the hierarchical Yukawa couplings of the SM Higgs boson originate from the flavor non-universalities to all three generations beyond the SM.
Obviously, the minimal (5) or (10) theories only allows the simply repetitive flavor structure.
In an earlier study by Georgi <cit.>, he proposed to extend the gauge group beyond the minimal SU(5) so that three-generational SM fermions can be non-trivially embedded.
Consequently, three-generational SM fermions must transform differently in the UV-complete theory.
In the original third law of the SU(N) unified theory, Georgi proposed to include only non-repetitive SU(N) irreps, which leads to a minimal three-generational SU(11) framework with the chiral fermions of
{ f_L }_SU(11)^n_g=3 = [ 11 , 4 ]_F ⊕ [ 11 , 8 ]_F ⊕ [ 11 , 9 ]_F ⊕ [ 11 , 10 ]_F .
In the recent studies, we introduce a concept of chiral irreducible anomaly-free fermion sets (IRAFFSs), which reads <cit.>
a chiral IRAFFS is a set of left-handed anti-symmetric fermions of ∑_ m_ _L(), with m_ being the multiplicities of a particular fermion representation of .
Obviously, the anomaly-free condition reads ∑_ m_ Anom( _L() ) =0.
We also require the following conditions to be satisfied for a chiral IRAFFS:
* the greatest common divisor (GCD) of the { m_} should satisfy that GCD{ m_} =1;
* the fermions in a chiral IRAFFS can no longer be removed, which would otherwise bring non-vanishing gauge anomalies;
* there should not be any singlet, self-conjugate, adjoint fermions, or vectorial fermion pairs in a chiral IRAFFS.
Correspondingly, Georgi's 1979 third law <cit.> can be reformulated as follows <cit.>
only distinctive chiral IRAFFSs are allowed in the GUT.
This leads to an SU(8) theory with the minimal chiral fermions of
{ f_L }_ SU(8)^n_g=3 = [ 8_F^ω⊕28_F] ⊕[ 8_F^ω̇⊕56_F] , dim_𝐅= 156 , Ω≡ ( ω , ω̇) , ω = ( 3 , IV , V , VI) , ω̇= (1̇ , 2̇ , V̇İİ , V̇İİİ , İẊ ) ,
with undotted/dotted indices for the 8_F's in the rank-2 chiral IRAFFS and the rank-3 chiral IRAFFS, respectively.
The Roman numbers and the Arabic numbers are used for the heavy partner fermions and the SM fermions.
The SU(8) gauge group has rank seven, which predicts three intermediate symmetry breaking stages between the GUT scale and the EW scale.
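As a simple sanity check of the anomaly-free structure of the two chiral IRAFFSs above, the sketch below uses the standard anomaly coefficients of the k-index antisymmetric irreps of SU(N) (A = 1, N-4, and (N-3)(N-6)/2 for k = 1, 2, 3, with a sign flip for conjugates, so that the 8_F^Ω are counted as anti-fundamentals) together with the dimension count; it is an illustration, not part of the model construction.

from math import comb

def anomaly(N, k):
    # anomaly coefficient of the k-index antisymmetric irrep of SU(N)
    return {1: 1, 2: N - 4, 3: (N - 3) * (N - 6) // 2}[k]

N = 8
print(4 * (-anomaly(N, 1)) + anomaly(N, 2))                        # rank-2 IRAFFS: 4 x 8bar_F + 28_F -> 0
print(5 * (-anomaly(N, 1)) + anomaly(N, 3))                        # rank-3 IRAFFS: 5 x 8bar_F + 56_F -> 0
print(4 * comb(N, 1) + comb(N, 2) + 5 * comb(N, 1) + comb(N, 3))   # total chiral dimension = 156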
The adjoint Higgs field 63_H achieves the maximal symmetry breaking pattern SU(8)→𝒢_441 through its VEV of
⟨63_H⟩ = 1/ 4 diag(- 𝕀_4× 4 , +𝕀_4× 4 ) v_U ,
according to L.F.Li <cit.>.
The U(1)_X_0 charges of the SU(8) fundamental representation are defined as follows
X̂_0( 8 ) ≡ diag ( - 1/4𝕀_4× 4_4_s , + 1/4𝕀_4× 4_4_W ) .
In this pattern, the gauge symmetries in the strong and the weak sectors are extended by one and two ranks beyond the SM gauge groups, respectively.
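A trivial numerical check, with the 8×8 matrices written out explicitly, confirms that the ⟨63_H⟩ direction is traceless, as an SU(8) adjoint must be, and that X̂_0 is exactly the unbroken U(1)_X_0 direction it leaves invariant; the snippet is purely illustrative and sets v_U = 1 in arbitrary units.

import numpy as np

v_U = 1.0                                              # arbitrary units for illustration
vev63 = 0.25 * np.diag([-1.0] * 4 + [+1.0] * 4) * v_U  # <63_H> = (1/4) diag(-1_4, +1_4) v_U
X0 = np.diag([-0.25] * 4 + [+0.25] * 4)                # U(1)_X0 charges of the fundamental 8

print(np.trace(vev63), np.trace(X0))                   # both traces vanish
print(np.allclose(vev63, X0 * v_U))                    # <63_H> points along the X_0 generator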
The sequential symmetry breaking patterns may follow the strong-weak-weak (SWW) <cit.>, the weak-strong-weak (WSW), and the weak-weak-strong (WWS) sequences.
These cannot be determined purely by group theory.
Other different symmetry breaking patterns of the SU(8) group were previously described in Refs. <cit.>.
Through the previous analyses of the SWW symmetry breaking pattern <cit.>, we found (i) one unique SM Higgs boson by looking for the origin of the global U(1)_B-L symmetry, (ii) the hierarchical Yukawa couplings to all three-generational SM quarks/leptons due to the intrinsic symmetry properties, and (iii) the reasonable Cabibbo-Kobayashi-Maskawa (CKM) mixing pattern <cit.> in the quark sector.
Three intermediate symmetry breaking scales beyond the EW scale enter into the SM quark/lepton mass matrices, and hence are set by requiring a reasonable matching with the experimental measurements.
Meanwhile, the renormalization group (RG) evolutions of three gauge couplings in the SWW symmetry breaking pattern together with three intermediate scales could not achieve the unification in the field theory context <cit.>.
In this work, we further analyze the two other possible WSW and WWS symmetry breaking patterns of the SU(8) theory, where the extended weak symmetry breaking 𝒢_441→𝒢_431 [an effective 𝒢_431 was previously studied in Ref. <cit.>] happens at the first stage.
Within each symmetry breaking pattern, we examine whether the hierarchical SM quark/lepton masses, as well as the CKM mixing pattern obtained by following the SWW sequence in Ref. <cit.>, can be reproduced.
Furthermore, we shall derive the RG evolutions in both WSW and WWS symmetry breaking patterns to look for the gauge coupling unification.
The rest of the paper is organized as follows.
In Sec. <ref>, we review the SU(8) framework, where three-generational SM fermions are non-trivially embedded, and hence they transform differently in the UV theory.
We focus on the WSW and the WWS symmetry breaking patterns, define various gauge U(1) and non-anomalous global U(1)_T charges at different stages, and decompose all SU(8) chiral fermions and Higgs fields accordingly.
A set of d=5 operators that generate all light SM quark/lepton mass terms other than the top quark mass are collected according to Ref. <cit.>.
In Secs. <ref> and <ref>, we analyze both WSW and WWS symmetry breaking patterns in details, where we describe the procedures to integrate out the massive vectorlike fermions, and also derive the SM quark/lepton mass terms based on a set of d=5 fermion bi-linear operators and the irreducible Higgs mixing operators.
In order to reproduce the SM quark/lepton mass hierarchies and the CKM mixing pattern in Ref. <cit.>, the SM flavor identifications are modified for the first and second generations accordingly.
Accordingly, we obtain two separate benchmark points in both patterns with the suggested intermediate symmetry breaking scales.
In Sec. <ref>, we obtain the RG evolutions of the minimal SU(8) theory by following both WSW and WWS sequences based on the suggested benchmark points.
The RG behavior in neither symmetry breaking pattern is likely to achieve the conventional gauge coupling unification, and both results largely match what we found in the SWW sequence <cit.>.
We conclude and make future perspective in Sec. <ref>, where we suggest that the gauge coupling unification in the minimal SU(8) theory to be interpreted in terms of the Kač-Moody Lie algebra.
In App. <ref>, we collect the SM quark/lepton mass matrices from the SWW symmetry breaking pattern that we have obtained in Refs. <cit.>.
§ THE SU(8) THEORY AND POSSIBLE SYMMETRY BREAKING PATTERNS
§.§ Overview
The SU(8) theory was formulated by requiring several distinctive chiral IRAFFSs that can lead to three-generational SM fermions at the electroweak scale <cit.>.
The non-anomalous global DRS symmetries from fermions in Eq. (<ref>) are
_ DRS
SU(8) , n_g=3
= [ SU(4)_ω⊗ U(1)_T_2] ⊗[ SU(5)_ω̇⊗ U(1)_T_3] ,
and we also denote the anomalous global Peccei-Quinn symmetries <cit.> as
_ PQ
SU(8) , n_g=3
= U(1)_ PQ_2⊗ U(1)_ PQ_3 .
Since the global B-L symmetry should be identical for all three generations, we further require a common non-anomalous U(1)_T≡ U(1)_T_2 = U(1)_T_3 between two chiral IRAFFSs.
The anomalous global U(1)_ PQ charges are assigned such that
p : q_2 ≠ -3 : +2 , p : q_3 ≠ -3 : +1 .
The most general gauge-invariant Yukawa couplings at least include the following renormalizable and non-renormalizable terms [The term of 56_F56_F28_H + H.c. vanishes due to the anti-symmetric property <cit.>. Instead, only a d=5 non-renormalizable term of 1/ M_ pl56_F56_F28_H_ ,ω̇^†63_H is possible to generate masses for vectorlike fermions in the 56_F. Since it transforms as an SU(5)_ω̇ vector and carries non-vanishing U(1)_ PQ charge of p+3q_3 ≠ 0 from Eq. (<ref>), it is only possible due to the gravitational effect.]
-_Y = Y_8_F^ω28_F8_H_ ,ω + Y_28_F28_F70_H + Y_8_F^ω̇56_F28_H_ ,ω̇ + c_4 / M_ pl56_F56_F28_H_ ,ω̇^†63_H + H.c. ,
with the reduced Planck scale of M_ pl= ( 8 π G_N)^-1/2= 2.4 × 10^18 GeV.
All renormalizable Yukawa couplings and non-renormalizable Wilson coefficients are expected to be of 𝒪(1).
Altogether, we collect the SU(8) Higgs fields as follows
{ H }_ SU(8)^n_g=3 = 8_H_ , ω⊕28_H_ , ω̇⊕70_H⊕63_H , dim_𝐇= 547 ,
where the adjoint Higgs field of 63_H is real while all others are complex.
Accordingly, the non-anomalous global U(1)_T charges and the anomalous global U(1)_ PQ charges for all fermions and Higgs fields are assigned in Tab. <ref>.
§.§ The SWW symmetry breaking pattern
The SWW symmetry breaking pattern of the SU(8) theory follows the sequence of
SU(8) → 𝒢_441 → 𝒢_341 → 𝒢_331 → 𝒢_SM → SU(3)_c ⊗ U(1)_EM ,
𝒢_441 ≡ SU(4)_s ⊗ SU(4)_W ⊗ U(1)_X_0 , 𝒢_341 ≡ SU(3)_c ⊗ SU(4)_W ⊗ U(1)_X_1 ,
𝒢_331 ≡ SU(3)_c ⊗ SU(3)_W ⊗ U(1)_X_2 , 𝒢_SM ≡ SU(3)_c ⊗ SU(2)_W ⊗ U(1)_Y ,
with v_U ≫ v_441 ≫ v_341 ≫ v_331 ≫ v_EW ,
which was previously studied in Refs. <cit.>.
Once one assumes that the extended gauge symmetries in the strong sector break first, there is no ambiguity of the sequential symmetry breaking patterns.
Sequentially, the U(1)_X_1, U(1)_X_2, and U(1)_Y charges are defined according to the SU(4)_s and the SU(4)_W fundamental representations as follows
X̂_1(4_s) ≡ diag ( (- 1/12 + 𝒳_0 ) 𝕀_3× 3_3_c , 1/4 + 𝒳_0 ) ,
X̂_2 ( 4_W ) ≡ diag ( ( 1/12 + 𝒳_1 ) 𝕀_3× 3_3_W , -1/4 + 𝒳_1 ) ,
Ŷ ( 4_W ) ≡ diag ( ( 1/6 + 𝒳_2 ) 𝕀_2× 2 , - 1/3 + 𝒳_2 , 𝒳_2 ) = diag ( ( 1/4 + 𝒳_1 ) 𝕀_2× 2_2_W , ( - 1/4 + 𝒳_1 ) 𝕀_2× 2 ) ,
Q̂_e ( 4_W ) ≡ T_ SU(4)^3 + Ŷ ( 4_W ) = diag ( 3/4 + 𝒳_1 , ( - 1/4 + 𝒳_1 ) 𝕀_3× 3 ) ,
with 𝒳_0 , 𝒳_1 , and 𝒳_2 denoting the corresponding U(1)_X_0 , U(1)_X_1 , and U(1)_X_2 charges.
The non-anomalous global U(1)_T symmetry becomes the global U(1)_B-L at the EW scale according to the following sequence <cit.>
_441 : ^'≡ - 4t _0 , _341 : ^''≡^' + 8t_1 ,
_331 : ^'''≡^'' , _ SM : - ≡^''' .
§.§ The WSW symmetry breaking pattern
The WSW symmetry breaking pattern of the SU(8) theory follows the sequence of
SU(8) → 𝒢_441 → 𝒢_431 → 𝒢_331 → 𝒢_SM → SU(3)_c ⊗ U(1)_EM ,
𝒢_441 ≡ SU(4)_s ⊗ SU(4)_W ⊗ U(1)_X_0 , 𝒢_431 ≡ SU(4)_c ⊗ SU(3)_W ⊗ U(1)_X_1 ,
𝒢_331 ≡ SU(3)_c ⊗ SU(3)_W ⊗ U(1)_X_2 , 𝒢_SM ≡ SU(3)_c ⊗ SU(2)_W ⊗ U(1)_Y ,
with v_U ≫ v_441 ≫ v_431 ≫ v_331 ≫ v_EW .
The U(1)_X_1, U(1)_X_2, and U(1)_Y charges along the WSW sequence are defined according to the SU(4)_s and the SU(4)_W fundamental representations as follows
X̂_1 ( 4_W ) ≡ diag ( ( 1/12 + 𝒳_0 ) 𝕀_3× 3_3_W , -1/4 + 𝒳_0 ) ,
X̂_2(4_s) ≡ diag ( (- 1/12 + 𝒳_1 ) 𝕀_3× 3_3_c , 1/4 + 𝒳_1 ) ,
Ŷ ( 4_W ) ≡ diag ( ( 1/6 + 𝒳_2 ) 𝕀_2× 2 , - 1/3 + 𝒳_2 , 𝒳_2 ) = diag ( ( 1/4 + 𝒳_0 ) 𝕀_2× 2_2_W , ( - 1/4 + 𝒳_0 ) 𝕀_2× 2 ) ,
Q̂_e ( 4_W ) ≡ T_ SU(4)^3 + Ŷ ( 4_W ) = diag ( 3/4 + 𝒳_0 , ( - 1/4 + 𝒳_0 ) 𝕀_3× 3 ) .
The non-anomalous global U(1)_T symmetry becomes the global U(1)_B-L at the EW scale according to the following sequence <cit.>
_441 : ^'≡ + 4t _0 , _431 : ^''≡^' -8t_1 ,
_331 : ^'''≡^'' + 8 t _2 , _ SM : - ≡^''' .
By following the symmetry breaking pattern in Eq. (<ref>), we tabulate the fermion representations at various stages of the SU(8) theory in Tabs. <ref>, <ref>, and <ref>.
For the right-handed down-type quarks of _R^Ω^c, they are named as follows
_R^1̇^c ≡d_R^c , _R^2̇^c ≡s_R^c , _R^V̇İİ^c ≡_R^'''''^c , _R^V̇İİİ^c ≡_R^'''^c , _R^İẊ^c ≡_R^''''^c ,
_R^3^c ≡b_R^c , _R^ IV ^c ≡_R^''^c , _R^ V ^c ≡_R^c , _R^ VI ^c ≡_R^'^c .
For the left-handed SU(2)_W lepton doublets of (_L^Ω , - _L^Ω ), they are named as follows
( _L^1̇ , - _L^1̇) ≡ (e_L , - ν_e L ) , ( _L^2̇ , - _L^2̇) ≡( μ_L , - ν_μ L ) ,
( _L^V̇İİ , - _L^V̇İİ) ≡ ( _L^'''' , - _L^'''' ) , ( _L^V̇İİİ , - _L^V̇İİİ ) ≡ ( _L^''' , - _L^''' ) , ( _L^İẊ , - _L^İẊ ) ≡ ( _L^''''' , - _L^''''' ) ,
( _L^ 3 , - _L^3) ≡ ( τ_L , - ν_τ L) , ( _L^ IV , - _L^ IV ) ≡ ( _L , - _L ) ,
( _L^ V , - _L^ V ) ≡ ( _L^'' , - _L^'' ) , ( _L^ VI , - _L^ VI ) ≡ ( _L^' , - _L^' ) .
Through the analysis in Sec. <ref>, we shall see that all heavy (^Ω , ^Ω , ^Ω) (with Ω= IV , … ,İẊ) acquire vectorlike masses during the intermediate symmetry breaking stages.
For the remaining left-handed sterile neutrinos of ( _L^Ω , _L^Ω^' , _L^Ω^'' ), several of them are massive and they are named as follows
_L^ IV≡_L^'' , _L^ IV^'≡_L^ , _L^ V^'≡_L^' , _L^V̇İİ^'≡_L^''' .
Notice that, the first- and second-generational SM fermions are named differently in Tab. <ref> as compared to the names following the SWW symmetry breaking pattern.
This will be elaborated in our derivation of the SM quark/lepton mass matrices in Sec. <ref>.
We decompose the Higgs fields in Yukawa term into components that can be responsible for the sequential symmetry breaking pattern in Eq. (<ref>).
All possible Higgs components that are likely to develop VEVs for the corresponding symmetry breaking stages are marked by ⟨ ... ⟩, while their UV origins are denoted by underlines.
For Higgs fields of 8_H_ , ω they read
8_H_ ,ω ⊃ ( 4 , 1 , +1/4 )_𝐇 , ω⊕⟨ ( 1 , 4 , -1/4 )_𝐇 , ω⟩
⊃ ⟨ ( 4 , 1 , +1/4 )_𝐇 , ω⟩⊕ ( 1 , 3 , -1/3 )_𝐇 , ω
⊃ ⟨ ( 1 , 3 , -1/3 )_𝐇 , ω⟩⊃⟨ ( 1 , 2 , -1/2 )_𝐇 , ω⟩ .
For Higgs fields of 28_H_ ,ω̇, they read
28_H_ ,ω̇ ⊃ ( 6 , 1 , +1/2 )_𝐇 , ω̇⊕ ( 1 , 6 , -1/2 )_𝐇 , ω̇⊕ ( 4 , 4 , 0 )_𝐇 , ω̇
⊃ [ ( 1 , 3 , -1/3 )_𝐇 , ω̇^'⊕ ( 1 , 3 , -2/3 )_𝐇 , ω̇] ⊕ [ ( 4 , 3 , -1/12 )_𝐇 , ω̇⊕⟨ ( 4 , 1 , +1/4 )_𝐇 , ω̇⟩ ]
⊃ [ ⟨ ( 1 , 3 , -1/3 )_𝐇 , ω̇^'⟩⊕ ( 1 , 3 , -2/3 )_𝐇 , ω̇ ] ⊕⟨ ( 1 , 3 , -1/3 )_𝐇 , ω̇⟩
⊃ [ ⟨ ( 1 , 2 , -1/2 )_𝐇 , ω̇^'⟩⊕⟨ ( 1 , 2 , -1/2 )_𝐇 , ω̇⟩ ] ⊕⟨ ( 1 , 2 , -1/2 )_𝐇 , ω̇⟩ .
For Higgs field of 70_H, they read
70_H ⊃ ( 1 , 1 , -1 )_𝐇^''⊕ ( 1 , 1 , +1 )_𝐇^''''⊕ ( 4 , 4 , -1/2 )_𝐇⊕ ( 6 , 6 , 0 )_𝐇⊕ ( 4 , 4 , +1/2 )_𝐇
⊃ ( 4 , 3 , +5/12 )_𝐇⊃ ( 1 , 3 , +2/3 )_𝐇^'''⊃⟨ ( 1 , 2 , +1/2 )_𝐇^'''⟩ .
Schematically, we assign the Higgs VEVs according to the decompositions in Eqs. (<ref>), (<ref>), and (<ref>) as follows
_441→_431 : ⟨ ( 1 , 4 , -1/4 )_𝐇 , IV⟩≡1/√(2)W_4 , IV ,
_431→_331 : ⟨ ( 4 , 1 , +1/4 )_𝐇 , V⟩≡1/√(2) w_4 , V , ⟨ ( 4 , 1 , +1/4 )_𝐇 ,1̇, V̇İİ⟩≡1/√(2) w_4 , 1̇,V̇İİ ,
_331→_ SM : ⟨ ( 1 , 3 , -1/3 )_𝐇 , 3,VI⟩≡1/√(2) V_3 , 3,VI ,
⟨ ( 1 , 3 , -1/3 )_𝐇 ,İẊ⟩≡1/√(2) V_3 ,İẊ , ⟨ ( 1 , 3 , -1/3 )_𝐇 ,2̇, V̇İİİ^'⟩≡1/√(2) V_3 , 2̇,V̇İİİ^' ,
EWSB : ⟨ ( 1 , 2 , +1/2 )_𝐇^'''⟩≡1/√(2) v_ EW .
For our later convenience, we also parametrize different symmetry breaking VEVs
ζ_0 ≡ v_U / M_ pl , ζ_1 ≡ W_4 , IV/ M_ pl , ζ_2 ≡ w_4 , V/ M_ pl , ζ̇_2 ≡ w_4 ,1̇,V̇İİ/ M_ pl ,
ζ_3 ≡ V_3 ,3, VI/ M_ pl , ζ̇_3^'≡ V_3 , 2̇,V̇İİİ^'/ M_ pl , ζ̇_3 ≡ V_3 , İẊ/ M_ pl ,
ζ_0 ≫ζ_1 ≫ζ_2 ∼ζ̇_2 ≫ζ_3 ∼ζ̇_3^'∼ζ̇_3 , ζ_i j≡ζ_j /ζ_i , ( i < j ) ,
in terms of dimensionless quantities.
Here, we adopt the conventions as follows
* The notations of (W , w , V) and the dimensionless quantities of ( ζ_1 , ζ_2 ∼ζ̇_2 , ζ_3 ∼ζ̇_3^'∼ζ̇_3) are used for the Higgs VEVs at the first, second, and the third symmetry breaking stages, regardless of the specific symmetry breaking patterns.
* W_4 and w_4 represent the VEVs for the _441→_431 and the _431→_331 stages in Eqs. (<ref>).
In the SWW sequence <cit.>, W_4 and w_4 represent the VEVs for the _441→_341 and the _341→_331 stages instead.
§.§ The WWS symmetry breaking pattern
The WWS symmetry breaking pattern of the SU(8) theory follows the sequence of
SU(8) → 𝒢_441 → 𝒢_431 → 𝒢_421 → 𝒢_SM → SU(3)_c ⊗ U(1)_EM ,
𝒢_441 ≡ SU(4)_s ⊗ SU(4)_W ⊗ U(1)_X_0 , 𝒢_431 ≡ SU(4)_s ⊗ SU(3)_W ⊗ U(1)_X_1 ,
𝒢_421 ≡ SU(4)_s ⊗ SU(2)_W ⊗ U(1)_X_2 , 𝒢_SM ≡ SU(3)_c ⊗ SU(2)_W ⊗ U(1)_Y ,
with v_U ≫ v_441 ≫ v_431 ≫ v_421 ≫ v_EW .
The U(1)_X_1, U(1)_X_2, and U(1)_Y charges along the WWS sequence are defined according to the SU(4)_s and the SU(4)_W fundamental representations as follows
X̂_1 ( 4_W ) ≡ diag ( ( 1/12 + 𝒳_0 ) 𝕀_3× 3_3_W , -1/4 + 𝒳_0 ) ,
X̂_2(3_W) ≡ diag ( ( 1/6 + 𝒳_1 ) 𝕀_2 × 2_2_W , - 1/3 + 𝒳_1 ) ,
Ŷ ( 4_s ) ≡ diag ( (- 1/12 + 𝒳_2 ) 𝕀_3× 3_3_c , 1/4 + 𝒳_2 ) .
The non-anomalous global U(1)_T symmetry becomes the global U(1)_B-L at the EW scale according to the following sequence <cit.>
_441 : ^'≡ + 4t _0 , _431 : ^''≡^' ,
_421 : ^'''≡^'' - 8 t _2 , _ SM : - ≡^''' + 8 t .
By following the symmetry breaking pattern in Eq. (<ref>), we tabulate the fermion representations at various stages of the SU(8) theory in Tabs. <ref>, <ref>, and <ref>.
For the right-handed down-type quarks of _R^Ω^c, they are named as follows
_R^1̇^c ≡d_R^c , _R^2̇^c ≡s_R^c , _R^V̇İİ^c ≡_R^'''''^c , _R^V̇İİİ^c ≡_R^'''^c , _R^İẊ^c ≡_R^''''^c ,
_R^3^c ≡b_R^c , _R^ IV ^c ≡_R^''^c , _R^ V ^c ≡_R^^c , _R^ VI ^c ≡_R^'^c .
For the left-handed SU(2)_W lepton doublets of (_L^Ω , - _L^Ω ), they are named as follows
( _L^1̇ , - _L^1̇) ≡ (e_L , - ν_e L ) , ( _L^2̇ , - _L^2̇) ≡( μ_L , - ν_μ L ) ,
( _L^V̇İİ , - _L^V̇İİ) ≡ ( _L^'''' , - _L^'''' ) , ( _L^V̇İİİ , - _L^V̇İİİ ) ≡ ( _L^''' , - _L^''' ) , ( _L^İẊ , - _L^İẊ ) ≡ ( _L^''''' , - _L^''''' ) ,
( _L^ 3 , - _L^3) ≡ ( τ_L , - ν_τ L) , ( _L^ IV , - _L^ IV ) ≡ ( _L , - _L ) ,
( _L^ V , - _L^ V ) ≡ ( _L^'' , - _L^'' ) , ( _L^ VI , - _L^ VI ) ≡ ( _L^' , - _L^' ) .
Through the analysis in Sec. <ref>, we shall see that all heavy (^Ω , ^Ω , ^Ω) (with Ω= IV , … ,İẊ) acquire vectorlike masses during the intermediate symmetry breaking stages.
For the remaining left-handed sterile neutrinos of ( _L^Ω , _L^Ω^' , _L^Ω^'' ), several of them are massive and they are named as follows
_L^ IV≡_L^'' , _L^ IV^'≡_L^ , _L^ V^'≡_L^' , _L^V̇İİ^'≡_L^''' .
Similar to the situation in the WSW pattern, the SM fermion names in Tab. <ref> are exchanged from what we have obtained in the SWW pattern.
The detailed derivation of the corresponding SM quark/lepton mass matrices will be elaborated in Sec. <ref>.
We decompose the Higgs fields in Yukawa term into components that can be responsible for the sequential symmetry breaking pattern in Eq. (<ref>).
For Higgs fields of 8_H_ , ω they read
8_H_ ,ω ⊃ ( 4 , 1 , +1/4 )_𝐇 , ω⊕⟨ ( 1 , 4 , -1/4 )_𝐇 , ω⟩
⊃ ( 4 , 1 , +1/4 )_𝐇 , ω⊕⟨ ( 1 , 3 , -1/3 )_𝐇 , ω⟩
⊃ ⟨ ( 4 , 1 , +1/4 )_𝐇 , ω⟩⊕ ( 1 , 2 , -1/2 )_𝐇 , ω⊃⟨ ( 1 , 2 , -1/2 )_𝐇 , ω⟩ .
For Higgs fields of 28_H_ ,ω̇, they read
28_H_ ,ω̇ ⊃ ( 6 , 1 , +1/2 )_𝐇 , ω̇⊕ ( 1 , 6 , -1/2 )_𝐇 , ω̇⊕ ( 4 , 4 , 0 )_𝐇 , ω̇
⊃
⟨ ( 1 , 3 , -1/3 )_𝐇 , ω̇^'⟩⊕ ( 1 , 3 , -2/3 )_𝐇 , ω̇
⊕
( 4 , 3 , -1/12 )_𝐇 , ω̇⊕ ( 4 , 1 , +1/4 )_𝐇 , ω̇^'
⊃ [ ( 1 , 2 , -1/2 )_𝐇 , ω̇^'⊕ ( 1 , 2 , -1/2 )_𝐇 , ω̇ ] ⊕
( 4 , 2 , -1/4 )_𝐇 , ω̇⊕⟨ ( 4 , 1 , +1/4 )_𝐇 , ω̇⟩⊕⟨ ( 4 , 1 , +1/4 )_𝐇 , ω̇^'⟩
⊃
⟨ ( 1 , 2 , -1/2 )_𝐇 , ω̇^'⟩⊕⟨ ( 1 , 2 , -1/2 )_𝐇 , ω̇⟩
⊕
⟨ ( 1 , 2 , +1/2 )_𝐇 , ω̇⟩
.
For Higgs field of 70_H, they read
70_H ⊃ ( 1 , 1 , -1 )_𝐇^''⊕ ( 1 , 1 , +1 )_𝐇^''''⊕ ( 4 , 4 , +1/2 )_𝐇⊕ ( 4 , 4 , -1/2 )_𝐇⊕ ( 6 , 6 , 0 )_𝐇
⊃ ( 4 , 3 , +5/12 )_𝐇⊃ ( 4 , 2 , +1/4 )_𝐇^⊃⟨ ( 1 , 2 , +1/2 )_𝐇^'''⟩ .
Schematically, we assign the Higgs VEVs according to the symmetry breaking pattern as follows
_441→_431 : ⟨ ( 1 , 4 , -1/4 )_𝐇 , IV⟩≡1/√(2)W_4 , IV ,
_431→_421 : ⟨ ( 1 , 3 , -1/3 )_𝐇 , V⟩≡1/√(2) w_3 , V , ⟨ ( 1 , 3 , -1/3 )_𝐇 , 1̇, V̇İİ^'⟩≡1/√(2) w_3 , 1̇,V̇İİ ,
_421→_ SM : ⟨ ( 4 , 1 , +1/4 )_𝐇 ,3, VI⟩≡1/√(2) V_4 ,3, VI , ⟨ ( 4 , 1 , +1/4 )_𝐇 , İẊ⟩≡1/√(2) V_4 ,İẊ , ⟨ ( 4 , 1 , +1/4 )_𝐇 ,2̇ , V̇İİİ^'⟩≡1/√(2) V_4 ,2̇ , V̇İİİ^' ,
EWSB : ⟨ ( 1 , 2 , +1/2 )_𝐇^'''⟩≡1/√(2) v_ EW .
At the third stage, the Higgs VEVs from the rank-3 sector are due to the components of ( 4 , 1 , +1/4 )_𝐇 , İẊ⊂ ( 4 , 4 , 0 )_𝐇 , İẊ and ( 4 , 1 , +1/4 )_𝐇 ,2̇ , V̇İİİ^'⊂ ( 4 , 4 , 0 )_𝐇 , 2̇ , V̇İİİ.
This is different from the third stage of the WSW pattern, where the Higgs VEVs from the rank-3 sector are from different irreps of ( 1 , 6 , -1/2 )_𝐇 , 2̇ , V̇İİİ and ( 4 , 4 , 0 )_𝐇 , İẊ in Eq. (<ref>), respectively.
For our later convenience, we also parametrize different symmetry breaking VEVs
ζ_0 ≡ v_U / M_ pl , ζ_1 ≡ W_4 , IV/ M_ pl , ζ_2 ≡ w_3 , 3, V/ M_ pl , ζ̇_2 ≡ w_3 ,1̇,V̇İİ/ M_ pl ,
ζ_3 ≡ V_4 , VI/ M_ pl , ζ̇_3^'≡ V_4 ,2̇ , V̇İİİ^'/ M_ pl , ζ̇_3 ≡ V_4 , İẊ/ M_ pl ,
ζ_0 ≫ζ_1 ≫ζ_2 ∼ζ̇_2 ≫ζ_3 ∼ζ̇_3^'∼ζ̇_3 , ζ_i j≡ζ_j /ζ_i , ( i < j ) ,
in terms of dimensionless quantities.
Notice that we use the same notations as in Eq. (<ref>) to manifest the intrinsic hierarchies, while their symmetry breaking patterns should be distinguishable.
§.§ The d=5 operators for the SM quark and lepton masses
In Ref. <cit.>, we found that only the top quark obtains the natural tree-level mass from the Y_28_F28_F70_H + H.c. term with Y_∼(1) in Eq. (<ref>).
All other lighter SM quark/lepton masses are due to the d=5 operators <cit.>, which all explicitly break the emergent global symmetries in Eqs. (<ref>) and (<ref>).
Among them, we have the d=5 direct Yukawa coupling terms of
c_3 _^ (3 ,2) = c_3 8_F^ω̇56_F·28_H_ , κ̇^†·70_H^† ,
c_4 _^ (4 ,1) = c_4 56_F56_F·28_H_ , ω̇·70_H ,
c_5 _^ (5 ,1) = c_5 28_F56_F·8_H_ ,ω·70_H .
The operators of _^ (4 ,1) and _^ (5 ,1) were found to generate the hierarchical up-type quark masses.
To generate the masses for all down-type quarks and charged leptons, we conjecture two sets of Higgs mixing terms
d_𝒜 _𝒜^d=5 ≡ d_𝒜 ϵ_ω_1 ω_2 ω_3 ω_4 8_H_ , ω_1^†8_H_ , ω_2^†8_H_ , ω_3^†8_H_ , ω_4 ^†70_H^† , = 2 ( 2p + 3 q_2 ) ≠ 0 ,
d_ℬ _ℬ^d=5 ≡ d_ℬ ( 28_H_ ,κ̇_1 ^†28_H_ ,κ̇_2 ) ·28_H_ ,ω̇_1^†28_H_ ,ω̇_2^†70_H^† , = 2 ( p + q_2 + q_3) , with κ̇_2 ≠ ( κ̇_1 , ω̇_1 , ω̇_2 ) ,
where one of the Higgs fields will mix with two renormalizable Yukawa couplings of Y_8_F^ω28_F8_H_ ,ω+ Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c. in Eq. (<ref>), as were depicted in Fig. <ref>.
Below, we will re-derive the SM quark/lepton mass terms along two separate WSW and WWS symmetry breaking patterns.
§ THE WSW SYMMETRY BREAKING PATTERN
§.§ The first stage
The first symmetry breaking stage of _441→_431 is achieved by ( 1 , 4 , -1/4 )_H ,ω⊂8_H_ , ω in the rank-2 sector, according to their _431-invariant and U(1)_ T^'-neutral components in Tab. <ref>.
The Yukawa coupling between 8_F^ω and 28_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω28_F8_H_ ,ω + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 4 , 4 , 0 )_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω⊗ ( 1 , 6 , +1/2 )_𝐅] ⊗⟨ ( 1 , 4 , -1/4)_𝐇 ,ω⟩ + H.c.
⇒ 1/√(2)Y_( _L^''_R^''^c + _L^''_R^'' c + _L _R ^c - _L _R ^c+_L _R^c )W_4, IV + H.c. .
Without loss of generality, we choose ω = IV according to the VEV assignment in Eq. (<ref>) at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^ IV ≡_R^''^c , ( 1 , 2 , -1/2 )_𝐅^ IV ≡ ( _L , - _L ), and (_L^ IV , _L^ IV^' ) ≡ ( _L^'' , _L).
After this stage, the remaining massless fermions expressed in terms of the _431 irreps are the following
( 4 , 1 , +1/4 )_𝐅^Ω⊕[ ( 1 , 3 , -1/3 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω^''] ⊕⊂8_F^Ω , Ω = ( ω , ω̇) , ω = (3 , V , VI ) , ω̇= ( 1̇ , 2̇ , V̇İİ ,V̇İİİ , İẊ) , ( 1 , 1 , 0 )_𝐅^ IV^''⊂8_F^ IV ,
( 6 , 1 , -1/2 )_𝐅⊕[ ( 1 , 3 , +1/3 )_𝐅⊕ ( 1 , 3 , +2/3 )_𝐅] ⊕[ ( 4 , 3 , +1/12 )_𝐅⊕ ( 4 , 1 , -1/4 )_𝐅] ⊂28_F ,
[ ( 1 , 3 , +2/3 )_𝐅^'⊕ ( 1 , 1 , -1 )_𝐅^''] ⊕ ( 4 , 1 , -3/4 )_𝐅⊕[ ( 4 , 3 , +1/12 )_𝐅⊕ ( 4 , 3 , +5/12)_𝐅] ⊕ [ ( 6 , 3 , -1/6 )_𝐅⊕ ( 6 , 1 , -1/2)_𝐅^'] ⊂56_F .
Fermions that become massive at this stage are crossed out by slashes.
Loosely speaking, we find from the anomaly-free conditions [ SU(4)_s ]^2 · U(1)_X_1 = 0, [ SU(3)_W ]^2 · U(1)_X_1 = 0, and [ U(1)_X_1 ]^3 = 0 that only one of the 8_F^Ω becomes massive and is integrated out, except for one left-handed sterile neutrino of _L^ IV^''≡ ( 1 , 1 , 0 )_F^ IV^''⊂8_F^ IV.
§.§ The second stage
The second symmetry breaking stage can be achieved by ( 4 , 1 , +1/4 )_H ,ω⊂8_H_ , ω in the rank-2 sector and ( 4 , 1 , +1/4 )_H , ω̇⊂28_H_ , ω̇ in the rank-3 sector, according to their _331-invariant and U(1)_ T^''-neutral components in Tab. <ref>.
The Yukawa coupling between 8_F^ω and 28_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω28_F8_H_ ,ω + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 6 , 1 , -1/2 )_𝐅⊕ ( 1 , 4 , -1/4)_𝐅^ω⊗ ( 4 , 4 , 0)_𝐅]⊗ ( 4 , 1 , +1/4 )_𝐇 ,ω+ H.c.
⊃
Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 6 , 1 , -1/2 )_𝐅⊕ ( 1 , 3 , -1/3)_𝐅^ω⊗ ( 4 , 3 , +1/12)_𝐅 ⊕ ( 1 , 1 , 0)_𝐅^ω^''⊗ ( 4 , 1 , -1/4)_𝐅] ⊗⟨ ( 4 , 1 , +1/4 )_𝐇 ,ω⟩ + H.c.
⇒ 1/√(2)Y_( _L _R^c +_L^''_R^''^c -_L^''_R^''^c+_L^' c_R^' c + _L^V^''_R^'' c) w_4, V + H.c. .
Without loss of generality, we choose ω = V at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^ V ≡_R^c , ( 1 , 2 , -1/2 )_𝐅^ V ≡ ( _L^'' , - _L^'' ), and _L^ V^'≡_L^'.
The Yukawa coupling between 8_F^ω̇ and 56_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 4 , -1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 4 , 6 , +1/4 )_𝐅] ⊗ ( 4 , 4 , 0)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 1 , -1/2)_𝐅^'⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 4 , 3 , +1/12 )_𝐅] ⊗⟨ ( 4 , 1 , +1/4)_𝐇 , ω̇⟩ + H.c.
⇒ 1/√(2) Y_( _L^'''''_R^'''''^c+_L^''''_R^''''^c - _L^''''_R^''''^c + _L^'''_R^''' c ) w_4 , V̇İİ + 1/√(2) Y_( _L^'''''d_R ^c+e_L _R^''''^c - ν_e L_R^''''^c + _L^1̇_R^''' c ) w_4 , 1̇ + H.c. .
Without loss of generality, we choose ω̇= ( 1̇ , V̇İİ ) according to the VEV assignment in Eq. (<ref>) at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^V̇İİ≡_R^'''''^c, ( 1 , 2 , -1/2 )_𝐅^V̇İİ≡ ( _L^'''' , - _L^'''' ), and _L^V̇İİ^'≡_L^'''.
The (10_F , 10_F)-pair within the 56_F can obtain the vectorlike masses through the following d=5 operator
c_4/M_pl56_F56_F⟨63_H⟩28_H_ ,ω̇^† + H.c.
⊃ c_4 v_U /M_pl[ ( 1 , 4 , +3/4 )_𝐅⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 6 , +1 /4 )_𝐅⊗ ( 6 , 4 , -1/4 )_𝐅] ⊗ ( 4 , 4 , 0 )_𝐇 , ω̇^† + H.c.
⊃ c_4 v_U /M_pl[ ( 1 , 1 , +1 )_𝐅^''⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 3 , +5/12 )_𝐅⊗ ( 6 , 3 , -1/6 )_𝐅] ⊗⟨ ( 4 , 1 , +1/4)_𝐇 , ω̇^†⟩ + H.c.
⇒ c_4 /√(2)ζ_0 ( _L _R^c+_L _R^c - _L _R^c+_L _R^c ) w_4 , 1̇ , V̇İİ + H.c. .
After integrating out the massive fermions, the remaining massless fermions expressed in terms of the _331 irreps are the following
[ ( 3 , 1 , +1/3 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω] ⊕[ ( 1 , 3 , -1/3 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω^''] ⊂8_F^Ω , Ω = ( ω , ω̇) , ω = (3 , VI) , ω̇= (1̇ , 2̇ , V̇İİİ , İẊ ) , ( 1 , 1 , 0)_𝐅^ IV^''⊂8_F^ IV , ( 1 , 1 , 0)_𝐅^ V⊕ ( 1 , 1 , 0)_𝐅^ V^''⊂8_F^ V , ( 1 , 1 , 0)_𝐅^V̇İİ⊕ ( 1 , 1 , 0)_𝐅^V̇İİ^''⊂8_F^V̇İİ ,
[ ( 3 , 1 , -1/3 )_𝐅⊕ ( 3 , 1 , -2/3 )_𝐅] ⊕[ ( 1 , 3 , +1/3 )_𝐅⊕ ( 1 , 3 , +2/3 )_𝐅] ⊕ [ ( 3 , 3 , 0 )_𝐅⊕ ( 1 , 3 , +1/3 )_𝐅^''] ⊕[ ( 3 , 1 , -1/3 )_𝐅^''⊕ ( 1 , 1 , 0 )_𝐅^''] ⊂28_F ,
[ ( 1 , 3 , +2/3 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅^''] ⊕[ ( 3 , 1 , -2/3 )_𝐅^'⊕ ( 1 , 1 , -1 )_𝐅] ⊕ [ ( 3 , 3 , 0 )_𝐅^'⊕ ( 1 , 3 , +1/3 )_𝐅^'⊕( 3 , 3 , +1/3 )_𝐅⊕ ( 1 , 3 , +2/3)_𝐅^''] ⊕ [ ( 3 , 3 , 0)_𝐅^''⊕ ( 3 , 3 , -1/3)_𝐅⊕ ( 3 , 1 , -1/3)_𝐅^'''''⊕ ( 3 , 1 , -2/3)_𝐅^'''] ⊂56_F .
We use the slashes and the back slashes to cross out massive fermions at the first and the second stages, respectively.
From the anomaly-free conditions of [ SU(3)_c ]^2 · U(1)_X_2 = 0, [ SU(3)_W ]^2 · U(1)_X_2 = 0, and [ U(1)_X_2 ]^3 = 0, we find that one of the 8_F^ω and one of the 8_F^ω̇ are integrated out.
Without loss of generality, we choose the massive anti-fundamental fermions to be ω= V and ω̇=V̇İİ at this stage.
§.§ The third stage
The third symmetry breaking stage of _331→_ SM can be achieved by Higgs fields of ( 1 , 3 , - 1/3 )_H ,ω⊂8_H_ , ω and ( 1 , 3 , -1/ 3 )_H , ω̇^'⊕ ( 1 , 3 , -1/ 3 )_H , ω̇
⊂28_H_ , ω̇, according to the decompositions in Eqs. (<ref>) and (<ref>), as well as their U(1)_T^'''-neutral components in Tab. <ref>.
The Yukawa coupling between 8_F^ω and 28_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω28_F8_H_ ,ω + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 4 , 4 , 0 )_𝐅⊕ ( 1 , 4 , -1/4)_𝐅^ω⊗ ( 1 , 6 , +1/2)_𝐅^ω]⊗ ( 1 , 4 , -1/4 )_𝐇 ,ω+ H.c. ⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 4 , 3 , +1/12 )_𝐅⊕ ( 1 , 3 , -1/3)_𝐅^ω⊗ ( 1 , 3 , +2/3)_𝐅^ω ⊕ ( 1 , 1 , 0)_𝐅^ω^''⊗ ( 1 , 3 , +1/3)_𝐅]⊗ ( 1 , 3 , -1/3 )_𝐇 ,ω+ H.c.
⊃ Y_[ ( 3 , 1 , +1/3)_𝐅^ω⊗ ( 3 , 3 , 0)_𝐅⊕ ( 1 , 1 , 0)_𝐅^ω⊗ ( 1 , 3 , +1/3)_𝐅^'' ⊕ ( 1 , 3 , -1/3)_𝐅^ω⊗ ( 1 , 3 , +2/3)_𝐅⊕( 1 , 1 , 0)_𝐅^ω^''⊗ ( 1 , 3 , +1/3)_𝐅]⊗⟨ ( 1 , 3 , -1/3 )_𝐇 ,ω⟩ + H.c.
⇒ 1/√(2) Y_(_L^'_R^'^c +_L^VI_R^' c-_L^'_R^'^c + _L^'_R^'^c + _L^VI^''_R^c )V_3, VI + 1/√(2) Y_( _L^'b_R^c +_L^3_R^' c-τ_L _R^'^c + ν_τ L_R^'^c +_L^3^''_R^c ) V_3,3 + H.c. .
Without loss of generality, we choose ω = ( 3 , VI ) at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^ VI ≡_R^'^c, and ( 1 , 2 , -1/2 )_𝐅^ VI ≡ ( _L^' , - _L^' ).
The third-generational SM fermions of (τ , ν_τ , b) only form mass mixing terms with their heavy partner fermions of ( ^' , ^' , ^'), and remain massless at this stage.
The Yukawa coupling between 8_F^ω̇ and 56_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 4 , 6 , +1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 1 , 4 , +3/4 )_𝐅] ⊗ ( 1 , 6 , -1/2)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 4 , 3 , +1/12)_𝐅⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 1 , 3 , +2/3 )_𝐅^'] ⊗ ( 1 , 3 , -1/3)_𝐇 , ω̇^' + H.c.
⊃ Y_[ ( 3 , 1 , +1/3)_𝐅^ω̇⊗ ( 3 , 3 , 0)_𝐅^'⊕( 1 , 1 , 0)_𝐅^ω̇⊗ ( 1 , 3 , +1/3)_𝐅^'
⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 1 , 3 , +2/3 )_𝐅^'] ⊗⟨ ( 1 , 3 , -1/3)_𝐇 , ω̇^'⟩ + H.c.
⇒ 1/√(2) Y_( _L^'''_R^'''^c+ _L^V̇İİİ_R^''' c-_L^'''_R^'''^c + _L^'''_R^'''^c )V_3,V̇İİİ^' + 1/√(2) Y_( _L^'''s_R^c+ _L^2̇_R^''' c-μ_L _R^'''^c + ν_μ L_R^'''^c ) V_3,2̇^' + H.c. ,
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 4 , -1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 4 , 6 , +1/4 )_𝐅] ⊗ ( 4 , 4 , 0)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 3 , -1/6)_𝐅⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 4 , 3 , +5/12 )_𝐅 ⊕ ( 1 , 1 ,0 )_𝐅^ω̇^''⊗ ( 4 , 3 , +1/12 )_𝐅] ⊗ ( 4 , 3 , -1/12)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 3 , 1 , +1/3)_𝐅^ω̇⊗ ( 3 , 3 ,0)_𝐅^''⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 1 , 3 , +2/3 )_𝐅^'' ⊕ ( 1 , 1 ,0 )_𝐅^ω̇^''⊗ ( 1 , 3 , +1/3 )_𝐅^'] ⊗⟨ ( 1 , 3 , -1/3)_𝐇 , ω̇⟩ + H.c.
⇒ 1/√(2) Y_( _L^''''_R^''''^c -_L^'''''_R^'''''^c +_L^'''''_R^''''' c +_L^İẊ^''_R^''' c)V_3,İẊ+ H.c. .
Without loss of generality, we choose ω̇= ( 2̇ , V̇İİİ ) in Eq. (<ref>) and ω̇= İẊ in Eq. (<ref>) at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^V̇İİİ≡_R^'''^c, ( 1 , 2 , -1/2 )_𝐅^V̇İİİ≡ ( _L^''' , - _L^''' ), ( 3 , 1 , +1/3 )_𝐅^İẊ≡_R^''''^c , and ( 1 , 2 , -1/2 )_𝐅^İẊ≡ ( _L^''''' , - _L^''''' ).
There can also be mass mixing terms from the d=5 operator
c_4/M_pl56_F56_F63_H28_H_,ω̇^† + H.c. ⊃ c_4v_U/M_pl[ ( 1 , 3 , +2/3 )_𝐅^'⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 3 , +1/12 )_𝐅⊗ ( 6 , 3 , -1/6 )_𝐅 ⊕ ( 4 , 3 , +5/12 )_𝐅⊗ ( 6 , 1 , -1/2 )_𝐅]⊗ ( 4 , 3 , -1/12)_𝐇 , ω̇^† + H.c. ⊃ c_4 ζ_0 [ ( 1 , 3 , +2/3 )_𝐅^'⊗ ( 1 , 1 , -1)_𝐅⊕ ( 3 , 3 ,0 )_𝐅^'⊗ ( 3 , 3 , -1/3 )_𝐅 ⊕ ( 3 , 3 , +1/3 )_𝐅⊗ ( 3 , 1 , -2/3 )_𝐅^''']⊗ ( 1 , 3 , -1/3)_𝐇 , ω̇^'' † + H.c.
⇒ c_4 ζ_0[ _L e_R^c +u_L _R^c-d_L_R^c+_L u_R^c]V_3,İẊ^*+ H.c.
and
c_4/M_pl56_F56_F63_H28_H_,ω̇^† + H.c. ⊃ c_4v_U/M_pl[ ( 4 , 1 , -3/4 )_𝐅⊗ ( 4 , 6 , +1/4 )_𝐅⊕ ( 6 , 4 , -1/4)_𝐅⊗ ( 6 , 4 , -1/4)_𝐅]⊗ ( 1 , 6 , -1/2)_𝐇 , ω̇^† + H.c. ⊃ c_4 ζ_0 [ ( 4 , 1 , -3/4 )_𝐅⊗ ( 4 , 3 , +5/12 )_𝐅]⊗ ( 1 , 3 , -1/3)_𝐇 , ω̇^† + H.c. ⊃ c_4 ζ_0 [ ( 3 , 1 , -2/3 )_𝐅^'⊗ ( 3 , 3 , +1/3 )_𝐅⊕ ( 1 , 1 , -1 )_𝐅⊗ ( 1 , 3 , +2/3)_𝐅^'']⊗ ( 1 , 3 , -1/3)_𝐇 , ω̇^† + H.c.
⇒ c_4 ζ_0 [ _L c_R^c +_L μ_R^c ](V_3,2̇^' *+V_3,V̇İİİ^' *)+ H.c. .
The remaining massless fermions of the _ SM are listed as follows
[ ( 3 , 1 , +1/3 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω] ⊕[ ( 1 , 2 , -1/2 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω^'⊕ ( 1 , 1 , 0 )_𝐅^Ω^''] ⊂8_F^Ω , Ω = ( 1̇ , 2̇ , 3 ) ,
( 1 , 1 , 0)_𝐅^ V⊕ ... ⊕ ( 1 , 1 , 0)_𝐅^İẊ⊂8_F^Ω ,
( 1 , 1 , 0)_𝐅^ VI^'⊕ ( 1 , 1 , 0)_𝐅^V̇İİİ^'⊕ ( 1 , 1 , 0)_𝐅^İẊ^'⊂8_F^Ω^' ,
( 1 , 1 , 0)_𝐅^ IV^''⊕ ... ⊕ ( 1 , 1 , 0)_𝐅^İẊ^''⊂8_F^Ω^'' ,
[ ( 3 , 1 , -1/3 )_𝐅⊕ ( 3 , 1 , -2/3 )_𝐅] ⊕[ ( 1 , 2 , +1/2 )_𝐅⊕ ( 1 , 1 , 0 )_𝐅⊕ ( 1 , 2 , +1/2 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅]
⊕ [ ( 3 , 2 , +1/6 )_𝐅⊕ ( 3 , 1 , -1/3 )_𝐅^'⊕ ( 1 , 2 , +1/2 )_𝐅^''⊕ ( 1 , 1 , 0 )_𝐅^'⊕ ( 3 , 1 , -1/3 )_𝐅^''⊕ ( 1 , 1 , 0 )_𝐅^''] ⊂28_F ,
[ ( 1 , 2 , +1/2 )_𝐅^'''⊕ ( 1 , 1 , +1 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅^''] ⊕[ ( 3 , 1 , -2/3 )_𝐅^'⊕ ( 1 , 1 , -1 )_𝐅]
⊕ [ ( 3 , 2 , +1/6 )_𝐅^'⊕ ( 3 , 1 , -1/3 )_𝐅^'''⊕ ( 1 , 2 , +1/2 )_𝐅^''''⊕ ( 1 , 1 , 0 )_𝐅^'''
⊕ ( 3 , 2 , +1/6 )_𝐅^''⊕ ( 3 , 1 , +2/3 )_𝐅⊕ ( 1 , 2 , +1/2)_𝐅^'''''⊕ ( 1 , 1 , +1)_𝐅^''']
⊕ [ ( 3 , 2 , +1/6 )_𝐅^'''⊕ ( 3 , 1 , -1/3)_𝐅^''''⊕ ( 3 , 2 , -1/6)_𝐅⊕ ( 3 , 1 , -2/3)_𝐅^''
⊕ ( 3 , 1 , -1/3)_𝐅^'''''⊕ ( 3 , 1 , -2/3)_𝐅^'''] ⊂56_F .
The fermions that become massive at this stage are further crossed outs.
After this stage of symmetry breaking, there are three-generational massless SM fermions together with twenty-three left-handed massless sterile neutrinos [The number of residual left-handed massless sterile neutrinos have been precisely obtained through the `t Hooft anomaly matching in Ref. <cit.>.].
The third-generational SM fermions are from the rank-2 chiral IRAFFS of 8_F^ω⊕28_F, while the first- and second-generational SM fermions are from the rank-3 chiral IRAFFS of 8_F^ω̇⊕56_F.
§.§ A summary of the vectorlike fermion masses
§.§ The d=5 bi-linear fermion operators
We proceed to analyze the d=5 bi-linear fermion operators in Eqs. (<ref>) along the WSW symmetry breaking pattern.
The operator of _^ ( 3 , 2) is decomposed as
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(1 ,4 ,+3/4)_𝐅]
⊗ (4 ,4 ,0)_𝐇,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,3 ,+5/12)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^''⊕(1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,3 ,+2/3)_𝐅^']
⊗ ⟨ (4 ,1 ,+1/4)_𝐇,κ̇^†⟩⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3w_4,V̇İİ/√(2)M_pl[ (3 ,1 ,+1/3)_𝐅^ω̇⊗(3 ,3 ,+1/3)_𝐅⊕ (1 ,1 ,0)_𝐅^ω̇⊗(1 ,3 ,+2/3)_𝐅^''⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^''
⊕ (1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,3 ,+2/3)_𝐅^'] ⊗⟨ (1 ,3 ,+2/3)_𝐇^'''†⟩ +H.c.
⇒ c_3/2ζ̇_̇2̇ ( _L _R^ω̇^c +_L^ω̇_R^'''^c + _L^ω̇_R ^c+_L^ω̇^''_R^'''''^c ) v_ EW+H.c. ,
and
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(1 ,4 ,+3/4)_𝐅]
⊗ (4 ,4 ,0)_𝐇,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,3 ,+1/12)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,3 ,+2/3)_𝐅^']
⊗ (4 ,3 ,-1/12)_𝐇,κ̇^†⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3/M_pl[ (3 ,1 ,+1/3)_𝐅^ω̇⊗(3 ,3 ,0)_𝐅^'⊕ (1 ,1 ,0)_𝐅^ω̇⊗(1 ,3 ,+1/3)_𝐅^'⊕(1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,3 ,+2/3)_𝐅^']
⊗ ⟨ (1 ,3 ,-1/3)_𝐇,κ̇^†⟩⊗⟨ (1 ,3 ,+2/3)_𝐇^''' †⟩+H.c.
⇒ c_3/2ζ̇_̇3̇ ( d_L _R^ω̇^c +_L^ω̇_R^''''^c - _L^ω̇ e_R^c + _L^ω̇^'_R^'''^c ) v_ EW+H.c. ,
and
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c. ⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(6 ,4 ,-1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅]
⊗ (1 , 6 , -1/2)_𝐇 ,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(6 ,3 ,-1/6)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(4 ,3 ,+5/12)_𝐅⊕ (1 ,1 ,0)_𝐅^ω̇^''⊗(4 ,3 ,+1/12)_𝐅]
⊗ (1 ,3 ,-1/3)_𝐇 ,κ̇^' †⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3/M_pl[ (3 ,1 ,+1/3)_𝐅^ω̇⊗(3 ,3 ,0)_𝐅^''⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,3 ,+2/3)_𝐅^''⊕ (1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,3 ,+1/3)_𝐅^']
⊗ ⟨ (1 ,3 ,-1/3)_𝐇 ,κ̇^' †⟩⊗⟨(1 ,3 ,+2/3)_𝐇^''' †⟩+H.c.
⇒ c_3/2ζ̇_3^' ( s_L _R^ω̇^c -_L^ω̇μ_R^c + _L^ω̇^'_R^'''''^c + _L^ω̇^''_R^''''^c ) v_ EW+H.c. .
By taking the possible flavor indices of ω̇= 1̇ , 2̇ in Eqs. (<ref>) and (<ref>), one finds the following set of mass matrices of the (d , s) and (e , μ)
( ℳ_d )_2 × 2^ direct = c_3 /2 ( ζ̇_3 ζ̇_3 ; ζ̇_3^' ζ̇_3^' ) v_ EW ,
( ℳ_e )_2 × 2^ direct = - c_3 /2 ( ζ̇_3 ζ̇_3^' ; ζ̇_3 ζ̇_3^' ) v_ EW ,
which leave the down quark and electron massless.
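The rank-one structure behind this statement can be verified numerically in a few lines; the ζ̇_3 and ζ̇_3^' values below are arbitrary placeholders and not the benchmark point discussed in this work.

import numpy as np

c3, vEW = 1.0, 246.0                                   # GeV; illustrative values only
z3, z3p = 1.0e-14, 3.0e-14                             # placeholder zeta_3-dot, zeta_3-dot-prime
Md = c3 / 2 * np.array([[z3, z3], [z3p, z3p]]) * vEW
Me = -c3 / 2 * np.array([[z3, z3p], [z3, z3p]]) * vEW
print(np.linalg.svd(Md, compute_uv=False))             # one vanishing singular value: massless down quark
print(np.linalg.svd(Me, compute_uv=False))             # one vanishing singular value: massless electron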
For the operator of _^ (4 ,1), it is decomposed as
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (6 ,4 ,-1/4)_𝐅⊗ (6 ,4 ,-1/4)_𝐅] ⊗ (4 ,4 ,0)_𝐇,ω̇⊗ (4 ,4 ,+1/2)_𝐇+H.c.
⊃ c_4/M_pl (4 ,1 ,-3/4)_𝐅⊗ (4 ,3 ,+1/12)_𝐅⊗⟨ (4 ,1 ,+1/4)_𝐇,ω̇⟩⊗ (4 ,3 ,+5/12)_𝐇+H.c.
⊃ c_4 w_4,V̇İİ/√(2)M_pl[ (3 ,1 ,-2/3)_𝐅^'⊗ (3 ,3 ,0)_𝐅^'⊕ (1 ,1 ,-1)_𝐅⊗ (1 ,3 ,+1/3)_𝐅^']⊗⟨(1 ,3 ,+2/3)_𝐇^'''⟩ + H.c.
⇒ c_4/2ζ̇_̇2̇ ( u_L c_R^c + _L _R^''''^c ) v_ EW + H.c. ,
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗ (4 ,3 ,+5/12)_𝐅] ⊗ (4 ,3 ,-1/12)_𝐇,ω̇⊗ (4 ,3 ,+5/12)_𝐇+H.c.
⊃ c_4/M_pl[ (3 ,1 ,-2/3)_𝐅^'⊗ (3 ,3 ,+1/3)_𝐅^'⊕ (1 ,1 ,-1)_𝐅⊗ (1 ,3 ,+2/3)_𝐅^''] ⊗⟨ (1 ,3 ,-1/3)_𝐇,ω̇⟩⊗⟨(1 ,3 ,+2/3)_𝐇^'''⟩ + H.c.
⇒ c_4/2ζ̇_̇3̇ ( -_L c_R^c - _L _R^'''''^c ) v_ EW + H.c. ,
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (1 ,4 ,+3/4)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕ (4 ,6 ,+1/4)_𝐅⊗(6 ,4 ,-1/4)_𝐅] ⊗ (1 ,6 ,-1/2)_𝐇,ω̇⊗ (4 ,4 ,+1/2)_𝐇+H.c.
⊃ c_4/M_pl[ (1 ,3 ,+2/3)_𝐅^'⊗ (4 ,1 ,-3/4)_𝐅⊕ (4 ,3 ,+1/12)_𝐅⊗ (6 ,3 ,-1/6)_𝐅⊕ (4 ,3 ,+5/12)_𝐅⊗(6 ,1 ,-1/2)_𝐅^''] ⊗ (1 ,3 ,-1/3)_𝐇,ω̇^'⊗ (4 ,3 ,+5/12)_𝐇+H.c. ⊃ c_4/M_pl[ (1 ,3 ,+2/3)_𝐅^'⊗ (1 ,1 ,-1)_𝐅⊕ (3 ,3 ,0)_𝐅^'⊗ (3 ,3 ,-1/3)_𝐅⊕ (3 ,3 ,+1/3)_𝐅⊗ (3 ,1 ,-2/3)_𝐅^'''] ⊗⟨ (1 ,3 ,-1/3)_𝐇,ω̇ ^'⟩⊗⟨ (1 ,3 ,+2/3)_𝐇^'''⟩ +H.c.
⇒ c_4 /2ζ̇_3^' ( - _L _R^'''^c + u_L _R^c - _L^'''_R^c - _L u_R^c ) v_ EW + H.c. .
The bi-linear fermion product of (6 ,4 ,-1/4)_𝐅⊗ (6 ,4 ,-1/4)_𝐅⊗ (4 ,4 ,0)_𝐇,ω̇⊗ (4 ,4 ,+1/2)_𝐇+H.c. was previously found to vanish due to their anti-symmetric properties <cit.>.
For the operator of _^ (5 ,1), it is decomposed as
c_5/M_pl28_F56_F·8_H_ , ω·70_H + H.c.
⊃ c_5/M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,6 ,+1/2)_𝐅⊗(1 ,4 ,-3/4)_𝐅⊕(4 ,4 ,0)_𝐅⊗(6 ,4 ,-1/4)_𝐅] ⊗ ⟨ (1 ,4 ,-1/4)_𝐇, ω⟩⊗ (4 ,4 ,+1/2)_𝐇 + H.c.
⊃ c_5W_4,IV/√(2)M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,3 ,+1/12)_𝐅⊕ (1 ,3 ,+1/3)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,3 ,+1/12)_𝐅⊗(6 ,1 ,-1/2)_𝐅^'' ⊕ (4 ,1 ,-1/4)_𝐅⊗ (6 ,3 ,-1/6)_𝐅] ⊗(4 ,3 ,+5/12)_𝐇 + H.c.
⊃ c_5/√(2)ζ_1 [(3 ,1 ,-2/3)_𝐅⊗(3 ,3 ,0)_𝐅^'⊕ (1 ,3 ,+1/3)_𝐅⊗(1 ,1 ,-1)_𝐅⊕(3 ,3 ,0)_𝐅⊗(3 ,1 ,-2/3)_𝐅^''' ⊕ (3 ,1 ,-1/3)_𝐅^''⊗ (3 ,3 ,-1/3)_𝐅] ⊗⟨(1 ,3 ,+2/3)_𝐇^'''⟩ + H.c.
⇒ c_5 /2ζ_1 ( u_L t_R^c + _L _R ^c +t_L u_R^c + _L^''_R^c ) v_ EW + H.c. ,
c_5/M_pl28_F56_F·8_H_ , ω·70_H+H.c.
⊃ c_5/M_pl[ (6 ,1 ,-1/2)_𝐅⊗ (6 ,4 ,-1/4)_𝐅⊕ (4 ,4 ,0)_𝐅⊗ (4 ,1 ,-3/4)_𝐅] ⊗ (4 ,1 ,+1/4)_𝐇 , ω⊗ (4 ,4 ,+1/2)_𝐇 + H.c.
⊃ c_5/M_pl[ (6 ,1 ,-1/2)_𝐅⊗ (6 ,3 ,-1/6)_𝐅⊕ (4 ,3 ,+1/12)_𝐅⊗ (4 ,1 ,-3/4)_𝐅]
⊗ ⟨ (4 ,1 ,+1/4)_𝐇 , ω⟩⊗ (4 ,3 ,+5/12)_𝐇 + H.c.
⊃ c_5w_4,V/√(2)M_pl[ (3 ,1 ,-1/3)_𝐅⊗ (3 ,3 ,-1/3)_𝐅⊕ (3 ,1 ,-2/3)_𝐅⊗ (3 ,3 ,0)_𝐅^''⊕ (3 ,3 ,0)_𝐅⊗ (3 ,1 ,-2/3)_𝐅^'
⊕ (1 , 3 ,+1/3)_𝐅^''⊗ (1 ,1 ,-1)_𝐅]⊗⟨(1 ,3 ,+2/3)_𝐇^'''⟩+ H.c.
⇒ c_5/2ζ_2 ( _L _R^c + c_L t_R^c+t_L c_R^c + _L_R^c ) v_ EW+H.c. ,
c_5/M_pl28_F56_F·8_H_ , ω·70_H+H.c.
⊃ c_5/M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,6 ,+1/2)_𝐅⊗(1 ,4 ,-3/4)_𝐅⊕(4 ,4 ,0)_𝐅⊗(6 ,4 ,-1/4)_𝐅]
⊗ (1 ,4 ,-1/4)_𝐇 , ω⊗(4 ,4 ,+1/2)_𝐇 + H.c.
⊃ c_5/M_pl[(3 ,1 ,-2/3)_𝐅⊗(3 ,3 ,+1/3)_𝐅⊕ (1 ,3 ,+2/3)_𝐅⊗(1 ,1 ,-1)_𝐅⊕(3 ,3 ,0)_𝐅⊗(3 ,3 ,-1/3)_𝐅]
⊗ ⟨ (1 ,3 ,-1/3)_𝐇 , ω⟩⊗⟨(1 ,3 ,+2/3)_𝐇^'''⟩+H.c.
⇒ c_5/2ζ_3 ( - _L t_R^c +_L _R^'^c+ t_L _R^c + _L^'_R^c ) v_ EW+H.c. .
§.§ The d=5 irreducible Higgs mixing operators
We further decompose the d=5 irreducible Higgs mixing operators along the WSW symmetry breaking pattern.
For the Yukawa coupling of 8_F^ω_128_F8_H_ ,ω_1, we find the mass terms of
Y_8_F^ω_128_F8_H_ ,ω_1×d_𝒜/M_plϵ_ω_1 ω_2ω_3ω_4 8_H_ ,ω_1^†8_H_ ,ω_2^†8_H_ ,ω_3^†8_H_ ,ω_4^†70_H^†+H.c.
⊃ Y_[ (4 ,1 ,+1/4)_𝐅^ω_1⊗ ( 4 ,4 ,0)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω_1⊗ (1 ,6 ,+1/2)_𝐅] ⊗ (1 ,4 ,-1/4)_𝐇 ,ω_1
× d_𝒜/M_pl (1 ,4 ,-1/4)_𝐇 ,ω_1^†⊗(1 ,4 ,-1/4)_𝐇 ,ω_2^†⊗(1 ,4 ,-1/4)_𝐇 ,ω_3^†⊗ (4 ,1 ,+1/4)_𝐇 ,ω_4^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ Y_[ (4 ,1 ,+1/4)_𝐅^ω_1⊗ ( 4 ,3 ,+1/12)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω_1⊗ (1 ,3 ,+2/3)_𝐅⊕(1 ,1 ,0)_𝐅^ω_1^''⊗ (1 ,3 ,+1/3)_𝐅]
⊗ (1 ,3 ,-1/3)_𝐇 ,ω_1×d_𝒜W_4 IV/√(2)M_pl (1 ,3 ,-1/3)_𝐇 ,ω_1^†⊗(1 ,3 ,-1/3)_𝐇 ,ω_3^†⊗ (4 ,1 ,+1/4)_𝐇 ,ω_4^†⊗ (4 ,3 ,+5/12)_𝐇^†
+ H.c.
⊃ Y_[ (3 ,1 ,+1/3)_𝐅^ω_1⊗ (3 ,3 ,0)_𝐅⊕(1 ,1 ,0)_𝐅^ω_1⊗ (1 ,3 ,+1/3)_𝐅^''⊕ (1 ,3 ,-1/3)_𝐅^ω_1⊗ (1 ,3 ,+2/3)_𝐅
⊕ (1 ,1 ,0)_𝐅^ω_1^''⊗ (1 ,3 ,+1/3)_𝐅] (1 ,3 ,-1/3)_𝐇 ,ω_1×d_𝒜W_4 IVω_4,V/2M_pl (1 ,3 ,-1/3)_𝐇 ,ω_1^†⊗⟨(1 ,3 ,-1/3)_𝐇 ,ω_3^†⟩
⊗ ⟨(1 ,3 ,+2/3)_𝐇^''' †⟩ +H.c.
⇒ Y_ d_𝒜/ 4 W_4 , IV w_4 , V V_3 , VI/ M_ pl m_( 1 , 4 , - 1/4 )_H ,3^2 ( b_L b_R^c + τ_L τ_R^c + _L^3 _R^''^c - _L^ 3^'_R^'^c +_L^ 3^''_R^c ) v_ EW + H.c. .
If we consider all three possibilities of the propagator masses, each of them leads to the (b ,τ) masses as follows
m_( 1 , 4 , - 1/4 )_H ,3∼(v_441) : m_b = m_τ = Y_ d_𝒜/ 4 ζ_2 ζ_13 v_ EW ,
m_( 1 , 4 , - 1/4 )_H ,3∼(v_431) : m_b = m_τ = Y_ d_𝒜/ 4ζ_1 ζ_23 v_ EW ,
m_( 1 , 4 , - 1/4 )_H ,3∼(v_331) : m_b = m_τ = Y_ d_𝒜/ 4 ζ_1 /ζ_23 v_ EW .
The last choice coincides with our previous result in Ref. <cit.>.
The indirect Yukawa couplings from the operator in Eq. (<ref>) are expected to generate the first- and second-generational down-type quark and charged lepton masses.
The gauge-invariant subset of ( 28_H_ ,1̇^†28_H_ ,V̇İİ) can develop the VEV of ⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ = 1/2 w_4 , 1̇ w_4 , V̇İİ∼ ( v_431^2 ) according to the VEV assignments in Eq. (<ref>).
Similar to the indirect Yukawa couplings in Eq. (<ref>), we should look for the EWSB components from the 28_H_ , 1̇ , 2̇ here.
For the Yukawa coupling of 8_F^ω̇_156_F28_H_ , ω̇_1, we find the mass terms of
Y_8_F^ω̇_156_F28_H_ , ω̇_1 × d_ℬ/ M_ pl28_H_ , ω̇_1 ^†28_H_ , ω̇_2 ^†70_H^†( 28_H_ ,1̇^†28_H_ ,V̇İİ) + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 4 , 6 , + 1 /4 )_F⊕ ( 1 , 4 , - 1 /4 )_F^ω̇_1 ⊗ ( 1 , 4 , + 3 /4 )_F] ⊗ ( 1 , 6 , - 1 /2 )_H , ω̇_1
× d_ℬ/ M_ pl ( 1 , 6 , - 1 /2 )_H , ω̇_1^†⊗ ( 4 , 4 , 0 )_H , ω̇_2 ^†⊗ ( 4 , 4 , + 1 /2 )_H^†⊗⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 4 , 3 , + 1 /12 )_F⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1 ⊗ ( 1 , 3 , + 2 /3 )_F^'] ⊗ ( 1 , 3 , - 1 /3 )_H , ω̇_1^'
× d_ℬ/ M_ pl ( 1 , 3 , - 1 /3 )_H , ω̇_1^' †⊗ ( 4 , 3 , -1/12 )_H , ω̇_2 ^†⊗ ( 4 , 3 , + 5 /12 )_H^†⊗⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_ d_ℬ w_4 , 1̇ w_4 , V̇İİ/ 2 M_ pl m_ ( 1 , 6 , - 1 / 2 )_H , ω̇_1 ^2[ ( 3 , 1 , + 1 /3 )_F^ω̇_1⊗ ( 3 , 3 , 0 )_F^'⊕ ( 1 , 1 , 0 )_F^ω̇_1⊗ ( 1 , 3 , + 1 /3 )_F^'
⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1 ⊗ ( 1 , 3 , +2 /3 )_F^'] ⊗⟨ ( 1 , 3 , - 1 /3 )_H , ω̇_2 ^†⟩⊗⟨ ( 1 , 3 , + 2 /3 )_H^''' †⟩ + H.c.
⇒ Y_ d_ℬ/4 ζ̇_3 [ w_4 , 1̇ w_4 , V̇İİ/ m_ ( 1 , 6 , - 1 / 2 )_H , 1̇^2 ( d_L d_R^c + e_L e_R^c ) + w_4 , 1̇ w_4 , V̇İİ/ m_ ( 1 , 6 , - 1 / 2 )_H , 2̇^2 ( d_L s_R^c + μ_L e_R^c ) ] v_ EW + H.c. ,
where the SM quark/lepton components from the 8_F^ω̇_1= 1̇/8_F^ω̇_1= 2̇ correspond to the (d_R^c , e_L) and (s_R^c , μ_L), respectively.
The other mass terms read
Y_8_F^ω̇_156_F28_H_ , ω̇_1 × d_ℬ/ M_ pl28_H_ , ω̇_1 ^†28_H_ , ω̇_2 ^†70_H^†( 28_H_ ,1̇^†28_H_ ,V̇İİ) + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 6 , 4 , - 1 /4 )_F⊕ ( 1 , 4 , - 1 /4 )_F^ω̇_1 ⊗ ( 4 , 6 , + 1 /4 )_F] ⊗ ( 4 , 4 , 0 )_H , ω̇_1
× d_ℬ/ M_ pl ( 4 , 4 , 0 )_H , ω̇_1^†⊗ ( 1 , 6 , - 1 /2 )_H , ω̇_2 ^†⊗ ( 4 , 4 , + 1 /2 )_H^†⊗⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 6 , 3 , - 1 /6 )_F⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1 ⊗ ( 4 , 3 , + 5 /12 )_F⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 4 , 3 , + 1 /12 )_F]
⊗ ( 4 , 3 , -1/12 )_H , ω̇_1× d_ℬ/ M_ pl ( 4 , 3 , -1/12 )_H , ω̇_1^†⊗ ( 1 , 3 , -1/3 )_H , ω̇_2^' †⊗( 4 , 3 , +5/12 )_H , ω̇_1^†
⊗ ⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩+H.c.
⊃ Y_[ ( 3 , 1 , + 1 /3 )_F^ω̇_1⊗ ( 3 , 3 , 0 )_F^''⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1⊗ ( 1 , 3 , + 2 /3 )_F^''⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 1 , 3 , + 1 /3 )_F^']
⊗ ( 1 , 3 , - 1 / 3 )_H , ω̇_1^× d_ℬw_4 , 1̇ w_4 , V̇İİ/ 2 M_ pl ( 1 , 3 , - 1 / 3 )_H , ω̇_1^†⟨ ( 1 , 3 , - 1 / 3 )_H , ω̇_2^ †⟩⊗ ( 1 , 3 , + 2 /3 )_H^''' † + H.c.
⊃ Y_ d_ℬ w_4 , 1̇ w_4 , V̇İİ V_3 , V̇İİİ^'/ 2 √(2) M_ pl m_ (4 , 4 , 0 )_H , ω̇_1^2 [ ( 3 , 1 , + 1 /3 )_F^ω̇_1⊗ ( 3 , 2 , +1 /6 )_F^'''⊕ ( 1 , 2 , - 1 /2 )_F^ω̇_1⊗ ( 1 , 1 , + 1 )_F^'''
⊕ ( 1 , 1 , 0 )_F^ω̇_1^'⊗ ( 1 , 2 , + 1 /2 )_F^'''''⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 1 , 2 , + 1 /2 )_F^''''] ⊗⟨ ( 1 , 2 , + 1 /2 )_H^''' †⟩ + H.c.
⇒ Y_ d_ℬ/4 ζ̇_3^'[ w_4 , 1̇ w_4 , V̇İİ/ m_ ( 4 , 4 , 0 )_H , 1̇^2 ( s_L d_R^c + e_L μ_R^c ) + w_4 , 1̇ w_4 , V̇İİ/ m_ ( 4 , 4 , 0)_H , 2̇^2 ( s_L s_R^c + μ_L μ_R^c ) ] v_ EW + H.c. .
With the Higgs VEV assignments in (<ref>), one expects the natural propagator masses of
m_ ( 4 , 4 , 0 )_H , 1̇∼ m_ ( 1 , 6 , - 1 /2 )_H , 1̇∼( v_431 ) , m_ ( 1 , 6 , - 1 /2 )_H , 2̇∼ m_ ( 4 , 4 , 0 )_H , 2̇∼( v_331 ) .
For convenience, we parametrize the following ratios of
Δ_ω̇≡w_4 , 1̇ w_4 , V̇İİ/ m_ ( 4 , 4 , 0 )_H , ω̇^2 , Δ_ω̇^'≡ w_4 , 1̇ w_4 , V̇İİ/ m_ ( 1 , 6 , - 1/ 2 )_H , ω̇^2 .
§.§ The SM quark/lepton masses and the CKM mixing
For all up-type quarks with Q_e=+2/3, we write down the following tree-level masses from both the renormalizable Yukawa couplings and the gravity-induced terms in the basis of ≡ (u ,c ,t)
_u = 1/√(2)( ccc
0 c_4 ζ̇_2 /√(2) c_5 ζ_1 /√(2)
0 0 c_5 ζ_2/ √(2)
c_5 ζ_1 /√(2) c_5 ζ_2/√(2) Y_
) v_ EW≈_u^ (0) + _u^ (1 ) + _u^( 2) ,
_u^ (0) = 1/√(2)( ccc
0 0 0
0 0 0
0 0 Y_
) v_ EW ,
_u^ (1) = 1/√(2)( ccc
0 0 c_5 ζ_1 /√(2)
0 0 0
c_5 ζ_1 /√(2) 0 0
) v_ EW ,
_u^ (2) = 1/√(2)( ccc
0 c_4 ζ̇_2 /√(2) 0
0 0 c_5 ζ_2 /√(2)
0 c_5 ζ_2 /√(2) 0
) v_ EW ,
where we have neglected the ∼(ζ_3 v_ EW) terms in the above expansions.
One obvious feature is that the gauge eigenstates of up quark and the charm quark do not obtain tree-level masses through the d=5 operators with the SM Higgs doublet.
Instead, there are only off-diagonal mass mixing terms in Eqs. (<ref>) and (<ref>).
Accordingly, we find that
det^'[ _u^ (0)_u^ (0) †] = 1/2 Y_^2 v_ EW^2 ⇒ m_t^2 ≈ 1 / 2 Y_^2 v_ EW^2 .
Here and below, we use the det^' to denote the matrix determinant that is equal to the products of all non-zero eigenvalues.
Next, we find the charm quark mass squared of
m_c^2 = det^'[ ( _u^ (0) + _u^ (1)) ·( _u^ (0) † + _u^ (1) †) ] / det^'[ _u^ (0)_u^ (0) †] ≈ c_5^4 ζ_1^4 /8 Y_^2 v_ EW^2 .
The up quark mass squared can be similarly obtained by
m_u^2 = det[ ( _u^ (0) + _u^ (1) + _u^ (2)) ·( _u^ (0) † + _u^ (1) † + _u^ (2) †) ] / det^'[ ( _u^ (0) + _u^ (1)) ·( _u^ (0) † + _u^ (1) †) ] ≈ c_4^2 ζ_2^2 ζ̇_2^2 /4 ζ_1^2 v_ EW^2 .
To summarize, all SM up-type quark masses are expressed as follows
m_u ≈ c_4 ζ_2 ζ̇_2 / 2 ζ_1 v_ EW , m_c ≈ c_5^2 ζ_1^2 / 2 √(2) Y_ v_ EW , m_t ≈ Y_/√(2) v_ EW .
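As a quick numerical cross-check of these leading-order expressions, the short Python script below computes the exact singular values of the up-type mass matrix for purely illustrative parameter values with c_4 = c_5 = Y_ = 1 (these are not the benchmark values of Tab. <ref>); the exact values agree with the approximate masses above up to the neglected higher orders in the small ζ parameters.

import numpy as np

c4, c5, Y, vEW = 1.0, 1.0, 1.0, 246.0
z1, z2, z2dot = 0.1, 0.01, 0.005   # illustrative values of zeta_1, zeta_2 and the dotted zeta_2

Mu = (vEW / np.sqrt(2)) * np.array([
    [0.0,                  c4 * z2dot / np.sqrt(2), c5 * z1 / np.sqrt(2)],
    [0.0,                  0.0,                     c5 * z2 / np.sqrt(2)],
    [c5 * z1 / np.sqrt(2), c5 * z2 / np.sqrt(2),    Y                   ],
])

exact = np.sort(np.linalg.svd(Mu, compute_uv=False))
approx = np.array([c4 * z2 * z2dot / (2 * z1),           # m_u formula above
                   c5**2 * z1**2 / (2 * np.sqrt(2) * Y), # m_c formula above
                   Y / np.sqrt(2)]) * vEW                 # m_t formula above
print(exact)    # exact singular values (m_u, m_c, m_t)
print(approx)   # leading-order expressions quoted in the text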
For all down-type quarks with Q_e=-1/3, we find the following tree-level SM mass matrix
( _d )_3× 3≈1/4( ccc
( 2 c_3 + Y_ d_ℬ ) ζ̇_3 ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3 0
( 2 c_3 + Y_ d_ℬ )ζ̇_3^' ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3^' 0
0 0 Y_ d_𝒜ζ_23^-1ζ_1
) v_ EW .
For convenience, we parametrize all _331-breaking VEVs as follows
ζ_3 = ζ̇_3^' = ζ̇_3 /tanλ .
It is straightforward to find the following SM down-type quark masses of
m_b ≈ 1 / 4 Y_ d_𝒜ζ_ 23 ^-1ζ_1 v_ EW ,
m_s ≈ 1 /4 ( 2c_3 + Y_ d_ℬζ_ 23 ^-2 ) ζ̇_3 v_ EW ,
m_d ≈ c_3 ζ̇_3 v_ EW ,
from Eq. (<ref>).
For all charged leptons with Q_e=-1, their tree-level mass matrix is correlated with the down-type quark mass matrix as
( _ℓ)_ 3× 3 = ( _d^T )_3× 3 = 1/4( ccc
( 2 c_3 + Y_ d_ℬΔ_1̇^' ) ζ̇_3 ( 2 c_3 + Y_ d_ℬ )ζ̇_3^' 0
( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3 ( 2 c_3 + Y_ d_ℬΔ_2̇ ) ζ̇_3^' 0
0 0 Y_ d_𝒜ζ_23^-1ζ_1
) v_ EW .
Thus, it is straightforward to find the tree-level mass relations of
m_τ = m_b , m_μ = m_s , m_e = m_d .
§.§ The CKM matrix and the benchmark
The bi-unitary transformations of
_L __R^† = _^ diag , __^† = _L^† ( _^ diag )^2 _L , = ( , ) ,
diagonalize the un-hatted flavor eigenstates into their hatted mass eigenstates.
To obtain the CKM matrix of the quark sector, we derive the left-handed mixing matrices of (_L , _L) of
( û_L , ĉ_L , t̂_L )^T = _L ·( u_L , c_L , t_L )^T , ( d̂_L , ŝ_L , b̂_L )^T = _L ·( d_L , s_L , b_L )^T ,
through their perturbative expansions in Eqs. (<ref>) and (<ref>).
Explicitly, we find that
_L = _L^ (12)·_L^ (23)·_L^ (13)·_L^ID ,
_L^ ID = ( ccc
0 1 0
-1 0 0
0 0 1
) ,
_L^ (12) = ( cosϵ_1 -sinϵ_1 0
sinϵ_1 cosϵ_1 0
0 0 1
) ,
sinϵ_1≃m_u/m_cζ_1/ζ_2-ζ_2/ζ_1∼ (10^-2) ,
_L^(13)≈( ccc
1 0 - c_5 ζ_2/√(2)Y_
0 1 0
c_5 ζ_2/√(2)Y_ 0 1
) ,
_L^ (23)≈( ccc
1 0 0
0 1 c_5 ζ_1/√(2)Y_
0 -c_5 ζ_1/√(2)Y_ 1
) ,
and
_L = ( sinλ cosλ 0
-cosλ sinλ 0
0 0 1
) ≈( λ 1-λ^2/2 0
-1+λ^2/2 λ 0
0 0 1
) .
The CKM matrix can be approximated as the Wolfenstein parametrization
V̂_ CKM|_ SU(8) , WSW = _L _L^†≈( ccc
1-λ^2/2 λ -c_5 ζ_2 /Y_
-λ 1-λ^2/2 c_5 ζ_1 /Y_
c_5 ( λζ_1 + ζ_2 ) /Y_ - c_5 ζ_1/Y_ 1
) ,
which is identical to what we have obtained in the SWW sequence <cit.>.
For completeness, we tabulate the benchmark point of the SU(8) along the WSW sequence, as well as the predicted SM quantities in Tab. <ref>.
Three dimensionless parameters of (ζ_1 , ζ_2 , ζ_3) can be translated into three intermediate symmetry breaking scales in Eq. (<ref>) as below
v_441≃ 1.4 × 10^17 GeV , v_431≃ 4.8× 10^15 GeV , v_331≃ 4.8× 10^13 GeV .
§ THE WWS SYMMETRY BREAKING PATTERN
§.§ The first stage
The first symmetry breaking stage of _441→_431 along the WWS symmetry breaking pattern is completely identical to what we have described in Sec. <ref>.
Thus, the remaining massless fermions can be found in Eq. (<ref>).
§.§ The second stage
The Yukawa coupling between 8_F^ω and 28_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω28_F8_H_ ,ω + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 4 , 4 , 0 )_𝐅⊕ ( 1 , 4 , -1/4)_𝐅^ω⊗ ( 1 , 6 , +1/2 )_𝐅]⊗ ( 1 , 4 , -1/4 )_𝐇 ,ω+ H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 4 , 3 , +1/12 )_𝐅⊕ ( 1 , 3 , -1/3)_𝐅^ω⊗ ( 1 , 3 , +2/3)_𝐅
⊕ ( 1 , 1 , 0)_𝐅^ω^''⊗ ( 1 , 3 , +1/3)_𝐅]⊗⟨ ( 1 , 3 , -1/3 )_𝐇 ,ω⟩ + H.c.
⇒ 1/√(2)Y_( _L^'_R^'^c + _L^'_R^' c - _L^'_R^'^c + _L^'_R^'^c + _L^V^''_R^c) w_3, V + H.c. .
Without loss of generality, we choose ω = V at this stage. Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^ V ≡_R^'^c , ( 1 , 2 , -1/2 )_𝐅^ V ≡ ( _L^' , - _L^' ), and _L^ V≡_L^'.
The Yukawa coupling between 8_F^ω̇ and 56_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c. ⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 4 , 6 , +1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 1 , 4 , +3/4 )_𝐅] ⊗ ( 1 , 6 , -1/2)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 4 , 3 , +1/12)_𝐅^'⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 1 , 3 , +2/3 )_𝐅^'] ⊗⟨ ( 1 , 3 , -1/3)_𝐇 , ω̇^'⟩ + H.c.
⇒ 1/√(2) Y_( _L^'''_R^'''^c + _L^'''_R^''' c - _L^'''_R^'''^c + _L^'''_R^'''^c ) w_3 , V̇İİ
+ 1/√(2) Y_( _L^'''d_R^c + ν_e L_R^''' c - e_L _R^'''^c + _L^1̇_R^'''^c ) w_3 , 1̇ + H.c. .
Without loss of generality, we choose ω̇= ( 1̇ , V̇İİ ) at this stage. Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^V̇İİ≡_R^'''^c, ( 1 , 2 , -1/2 )_𝐅^V̇İİ≡ ( _L^''' , - _L^''' ), and _L^V̇İİ≡_L^'''.
After integrating out the massive fermions, the remaining massless fermions expressed in terms of the _421 irreps are the following:
( 4 , 1 , +1/4 )_𝐅^Ω⊕[ ( 1 , 2 , -1/2 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω ^'] ⊕ ( 1 , 1 , 0 )_𝐅^Ω^''⊂8_F^Ω ,
Ω = ( ω , ω̇) , ω = (3 , VI) , ω̇= (1̇ , 2̇ , V̇İİİ , İẊ ) ,
( 1 , 1 , 0)_𝐅^ IV^''⊂8_F^ IV , ( 1 , 1 , 0)_𝐅^ V^'⊕ ( 1 , 1 , 0)_𝐅^ V^''⊂8_F^ V ,
( 1 , 1 , 0)_𝐅^V̇İİ^'⊕ ( 1 , 1 , 0)_𝐅^V̇İİ^''⊂8_F^V̇İİ ,
( 6 , 1 , -1/2 )_𝐅⊕[ ( 1 , 2 , +1/2 )_𝐅⊕ ( 1 , 1 , 0 )_𝐅⊕( 1 , 2 , +1/2 )_𝐅⊕ ( 1 , 1 , +1 )_𝐅]
⊕ [ ( 4 , 2 , +1/4 )_𝐅⊕ ( 4 , 1 , -1/4 )_𝐅^] ⊕ ( 4 , 1 , -1/4 )_𝐅^''⊂28_F ,
[ ( 1 , 2 , +1/2 )_𝐅^'''⊕ ( 1 , 1 , +1 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅^''] ⊕ ( 4 , 1 , -3/4 )_𝐅
⊕ [ ( 4 , 2 , +1/4 )_𝐅^'⊕ ( 4 , 1 , -1/4 )_𝐅^''⊕ ( 4 , 2 , +1/4 )_𝐅⊕ ( 4 , 1 , +3/4 )_𝐅^]
⊕ [ ( 6 , 2 , 0 )_𝐅⊕ ( 6 , 1 , -1/2 )_𝐅^'⊕ ( 6 , 1 , -1/2)_𝐅^''] ⊂56_F .
We use the slashes and the back slashes to cross out massive fermions at the first and the second stages, respectively.
From the anomaly-free conditions of [ SU(4)_c]^2 · U(1)_X_2=0, SU(2)_W
^2 · U(1)_X_2=0, and U(1)_X_2
^3=0, we find that one of the 8_F^ω and one of the 8_F^ω̇ are integrated out.
Without loss of generality, we choose the massive anti-fundamental fermions to be ω= V and ω̇=V̇İİ at this stage.
§.§ The third stage
The third symmetry breaking stage of _ 421→_ SM can be achieved by Higgs fields of ( 4 , 1 , + 1/4 )_H ,ω⊂8_H_ , ω and ( 4 , 1 , +1/ 4 )_H , ω̇⊕ ( 4 , 1 , + 1/ 4 )_H , ω̇^'
⊂ ( 4 , 4 , 0 )_H , ω̇⊂28_H_ , ω̇, according to the decompositions in Eqs. (<ref>) and (<ref>), as well as their U(1)_T^''' charges in Tab. <ref>.
The Yukawa coupling between 8_F^ω and 28_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω28_F8_H_ ,ω + H.c. ⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 6 , 1 , -1/2 )_𝐅⊕ ( 1 , 4 , -1/4)_𝐅^ω⊗ ( 4 , 4 , 0)_𝐅^ω]⊗ ( 4 , 1 , +1/4 )_𝐇 ,ω+ H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 6 , 1 , -1/2 )_𝐅⊕ ( 1 , 3 , -1/3)_𝐅^ω⊗ ( 4 , 3 , +1/12)_𝐅^ω
⊕ ( 1 , 1 , 0)_𝐅^ω^''⊗ ( 4 , 1 , -1/4)_𝐅^']⊗ ( 4 , 1 , +1/4 )_𝐇 ,ω + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω⊗ ( 6 , 1 , -1/2)_𝐅⊕ ( 1 , 2 , -1/2)_𝐅^ω⊗ ( 4 , 2 , +1/4)_𝐅
⊕ ( 1 , 1 , 0)_𝐅^ω''⊗ ( 4 , 1 , -1/4)_𝐅^']⊗ ( 4 , 1 , +1/4 )_𝐇 ,ω+ H.c.
⇒ 1/√(2) Y_( _L _R^c + _L^''_R^''^c -_L^''_R^''^c + _L^VI ^''_R^'' c) V_4 , VI
+ 1/√(2) Y_( _L b_R^c + τ_L _R^''^c - ν_τ L_R^''^c + _L^3 ^''_R^'' c) V_4 , 3 + H.c. .
Without loss of generality, we choose ω = ( 3 , VI ) at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^ VI≡_R ^c ), and ( 1 , 2 , -1/2 )_𝐅^ VI ≡ ( _L^'' , - _L^'' ).
The Yukawa coupling between 8_F^ω̇ and 56_F and the corresponding vectorlike fermion masses can be expressed as
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 4 , -1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 4 , 6 , +1/4 )_𝐅] ⊗ ( 4 , 4 , 0)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 3 , -1/6)_𝐅⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 4 , 3 , +5/12 )_𝐅
⊕ ( 1 , 1 , 0 )_𝐅^ω̇^''⊗ ( 4 , 3 , +1/12 )_𝐅^'] ⊗ ( 4 , 3 , -1/12)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 1 , -1/2)_𝐅^'⊕ ( 1 , 2 , -1/2 )_𝐅^ω̇⊗ ( 4 , 2 , +1/4 )_𝐅
⊕ ( 1 , 1 , 0 )_𝐅^ω̇^''⊗ ( 4 , 1 , -1/4 )_𝐅^''] ⊗⟨ ( 4 , 1 , +1/4)_𝐇 , ω̇⟩ + H.c.
⇒ 1/√(2) Y_( _L^''''_R^''''^c -_L^'''''_R^'''''^c + _L^'''''_R^'''''^c + _L^V̇İİİ^''_R^''' c)V_4 , V̇İİİ + 1/√(2) Y_( _L^'''' s_R^c -μ_L_R^'''''^c + ν_μ L_R^'''''^c + _L^2̇^''_R^''' c)V_4 , 2̇+ H.c. ,
Y_8_F^ω̇56_F28_H_ ,ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 4 , -1/4)_𝐅⊕ ( 1 , 4 , -1/4 )_𝐅^ω̇⊗ ( 4 , 6 , +1/4 )_𝐅] ⊗ ( 4 , 4 , 0)_𝐇 , ω̇ + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 1 , -1/2)_𝐅^''⊕ ( 1 , 3 , -1/3 )_𝐅^ω̇⊗ ( 4 , 3 , +1/12 )_𝐅^'] ⊗ ( 4 , 1 , +1/4)_𝐇 , ω̇^' + H.c.
⊃ Y_[ ( 4 , 1 , +1/4)_𝐅^ω̇⊗ ( 6 , 1 ,-1/2)_𝐅^''⊕ ( 1 , 2 , -1/2 )_𝐅^ω̇⊗ ( 4 , 2 , +1/4 )_𝐅^'
⊕ ( 1 , 1 ,0 )_𝐅^ω̇^'⊗ ( 4 , 1 , -1/4 )_𝐅^''] ⊗⟨ ( 4 , 1 , +1/4)_𝐇 , ω̇^'⟩ + H.c.
⇒ 1/√(2) Y_( _L^'''''_R^'''''^c -_L^''''_R^''''^c +_L^''''_R^''''^c +_L^İẊ^'_R^''' c)V_4 , İẊ^' + H.c. .
Without loss of generality, we choose ω̇= ( 2̇ , V̇İİİ ) and ω̇= İẊ at this stage.
Thus, we can identify that ( 3 , 1 , +1/3 )_𝐅^V̇İİİ , İẊ≡ ( _R^''''^c , _R^'''''^c), ( 1 , 2 , -1/2 )_𝐅^V̇İİİ≡ ( _L^''''' , - _L^''''' ), and ( 1 , 2 , -1/2 )_𝐅^İẊ≡ ( _L^'''' , - _L^'''' ).
The ( 10_F , 10_F )-pair within the 56_F can obtain the vectorlike masses from the d=5 operator as follows
c_4/M_pl56_F56_F⟨63_H⟩28_H_,ω̇^† + H.c.
⊃ c_4v_U/M_pl[ ( 1 , 4 , +3/4 )_𝐅⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 6 , +1/4 )_𝐅⊗ ( 6 , 4 , -1/4 )_𝐅]⊗ ( 4 , 4 , 0)_𝐇 , ω̇^† + H.c.
⊃ c_4 ζ_0 [ ( 1 , 1 , +1 )_𝐅^''⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 3 , +5/12 )_𝐅⊗ ( 6 , 3 , -1/6 )_𝐅]⊗ ( 4 , 1 , +1/4)_𝐇 , ω̇^' † + H.c.
⊃ c_4 ζ_0 [ ( 1 , 1 , +1 )_𝐅^''⊗ ( 4 , 1 , -3/4 )_𝐅⊕ ( 4 , 2 , +1/4 )_𝐅⊗ ( 6 , 2 , 0 )_𝐅
⊕ ( 4 , 1 , +3/4 )_𝐅⊗ ( 6 , 1 , -1/2)_𝐅^'] ⊗⟨ ( 4 , 1 , +1/4)_𝐇 , ω̇^' †⟩ + H.c.
⇒ c_4/√(2)ζ_0 ( _L _R^c + _L _R^c -_L _R^c +_L _R^c ) V_4,İẊ^' + H.c. .
Similar to Eq. (<ref>), we find that the ( 10_F , 10_F )-pair obtain the vectorlike masses through the Yukawa couplings with the components of ( 4 , 4 , 0)_𝐇 , ω̇⊂28_H_ , ω̇.
According to the VEV assignments in Eq. (<ref>), these components only contribute to the _421-breaking VEVs at the third stage.
Correspondingly, the ( 10_F , 10_F )-pair of ( , , , ) obtain the masses of ∼(v_421).
The remaining massless fermions of the _ SM are listed as follows
[ ( 3 , 1 , +1/3 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω] ⊕[ ( 1 , 2 , -1/2 )_𝐅^Ω⊕ ( 1 , 1 , 0 )_𝐅^Ω^'⊕ ( 1 , 1 , 0 )_𝐅^Ω^''] ⊂8_F^Ω , Ω = ( 1̇ , 2̇ , 3 ) ,
( 1 , 1 , 0)_𝐅^Ω⊂8_F^Ω , Ω = ( VI , V̇İİİ , İẊ ) ,
( 1 , 1 , 0)_𝐅^Ω^'⊂8_F^Ω , Ω = ( V , VI , V̇İİ ,V̇İİİ , İẊ) ,
( 1 , 1 , 0)_𝐅^Ω^''⊂8_F^Ω , Ω = ( IV , V , VI , V̇İİ ,V̇İİİ , İẊ ) ,
[ ( 3 , 1 , -1/3 )_𝐅⊕ ( 3 , 1 , -2/3 )_𝐅] ⊕[ ( 1 , 2 , +1/2 )_𝐅⊕ ( 1 , 1 , 0 )_𝐅⊕( 1 , 2 , +1/2 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅]
⊕[ ( 3 , 2 , +1/6 )_𝐅⊕( 1 , 2 , +1/2 )_𝐅^''⊕ ( 3 , 1 , -1/3 )_𝐅^'⊕ ( 1 , 1 ,0 )_𝐅^'] ⊕[ ( 3 , 1 , -1/3 )_𝐅^''⊕ ( 1 , 1 , 0 )_𝐅^''] ⊂28_F ,
[ ( 1 , 2 , +1/2 )_𝐅^'''⊕ ( 1 , 1 , +1 )_𝐅^'⊕ ( 1 , 1 , +1 )_𝐅^''] ⊕[ ( 3 , 1 , -2/3 )_𝐅^'⊕ ( 1 , 1 , -1 )_𝐅]
⊕[ ( 3 , 2 , +1/6 )_𝐅^'⊕ ( 1 , 2 , +1/2 )_𝐅^''''⊕ ( 3 , 1 , -1/3 )_𝐅^'''⊕ ( 1 , 1 , 0 )_𝐅^'''
⊕( 3 , 2 , +1/6 )_𝐅^''⊕( 1 , 2 , +1/2 )_𝐅^'''''⊕ ( 3 , 1 , +2/3 )_𝐅^⊕ ( 1 , 1 , +1 )_𝐅^''']
⊕[ ( 3 , 2 , +1/6 )_𝐅^'''⊕( 3 , 2 , -1/6 )_𝐅⊕ ( 3 , 1 , -1/3 )_𝐅^''''⊕ ( 3 , 1 , -2/3 )_𝐅^''
⊕ ( 3 , 1 , -1/3 )_𝐅^'''''⊕ ( 3 , 1 , -2/3 )_𝐅^'''] ⊂56_F .
The fermions that become massive at this stage are further crossed out.
After this stage of symmetry breaking, there are three-generational massless SM fermions together with twenty-three left-handed massless sterile neutrinos.
The third-generational SM fermions are from the rank-2 chiral IRAFFS of 8_F^ω⊕28_F, while the first- and second-generational SM fermions are from the rank-3 chiral IRAFFS of 8_F^ω̇⊕56_F.
§.§ A summary of the vectorlike fermion masses
§.§ The d=5 bi-linear fermion operators
We proceed to analyze the d=5 bi-linear fermion operators in Eqs. (<ref>) along the WWS symmetry breaking pattern.
The operator of _^ ( 3 , 2) is decomposed as
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(1 ,4 ,+3/4)_𝐅]
⊗ (4 ,4 ,0)_𝐇,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,3 ,+5/12)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^''⊕(1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,3 ,+2/3)_𝐅^']
⊗ (4 ,1 ,+1/4)_𝐇,κ̇^' †⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,2 ,+1/4)_𝐅⊕ (1 ,2 ,-1/2)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^''⊕(1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,2 ,+1/2)_𝐅^''''']
⊗ ⟨ (4 ,1 ,+1/4)_𝐇,κ̇^' †⟩⊗⟨(4 ,2 ,+1/4)_𝐇^†⟩+H.c.
⇒ c_3/2ζ̇_3^' (_L _R^ω̇^c +_L^ω̇_R^'''^c+_L^ω̇_R^c+_L^ω̇^''_R^'''''^c )v_ EW+H.c. ,
and
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(1 ,4 ,+3/4)_𝐅]
⊗ (4 ,4 ,0)_𝐇,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,3 ,+1/12)_𝐅^'⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(1 ,3 ,+2/3)_𝐅^']
⊗ (4 ,3 ,-1/12)_𝐇 ,κ̇^†⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(4 ,2 ,+1/4)_𝐅^'⊕(1 ,2 ,-1/2)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^'⊕(1 ,1 ,0)_𝐅^ω̇^'⊗(1 ,2 ,+1/2)_𝐅^''']
⊗ ⟨ (4 ,1 ,-1/4)_𝐇 ,κ̇^†⟩⊗⟨(4 ,2 ,+1/4)_𝐇^†⟩+H.c.
⇒ c_3/2ζ̇_̇3̇( d_L _R^ω̇^c +_L^ω̇_R^''''^c-_L^ω̇e_R^c +_L^ω̇^'_R^'''^c ) v_ EW+H.c. ,
and
c_3/M_pl8_F^ω̇56_F·28_H_,κ̇^†·70_H^†+ H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(6 ,4 ,-1/4)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω̇⊗(4 ,6 ,+1/4)_𝐅]
⊗ (1 ,6 ,-1/2)_𝐇 ,κ̇^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ c_3/M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(6 ,3 ,-1/6)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω̇⊗(4 ,3 ,+5/12)_𝐅⊕ (1 ,1 ,0)_𝐅^ω̇^''⊗(4 ,3 ,+1/12)_𝐅]
⊗ ⟨ (1 ,3 ,-1/3)_𝐇,κ̇^†⟩⊗ (4 ,3 ,+5/12)_𝐇^†+H.c.
⊃ c_3w_3,V̇İİ/√(2)M_pl[ (4 ,1 ,+1/4)_𝐅^ω̇⊗(6 ,2 ,0 )_𝐅⊕ (1 ,2 ,-1/2)_𝐅^ω̇⊗(1 ,1 ,+1)_𝐅^'''⊕ (1 ,1 ,0)_𝐅^ω̇^'⊗(4 ,2 ,+1/4)_𝐅^'''''
⊕ (1 ,1 ,0)_𝐅^ω̇^''⊗(1 ,2 ,+1/2)_𝐅^''''] ⊗⟨(4 ,2 ,+1/4)_𝐇^†⟩ +H.c.
⇒ c_3/2ζ̇_2 ( s_L _R^ω̇^c -_L^ω̇μ_R^c+_L^ω̇^'_R^'''''^c+_L^ω̇^''_R^''''^c )v_ EW +H.c. .
By taking the possible flavor indices of ω̇= 1̇ , 2̇ in mass terms from Eqs. (<ref>) and (<ref>), one finds the following set of mass matrices of the (d , s) and (e , μ)
( _d )_2 × 2^ direct = c_3 /2 ( ccζ̇_3 ζ̇_3
ζ̇_2 ζ̇_2
) v_ EW ,
( _e )_2 × 2^ direct = - c_3 /2 ( ccζ̇_3 ζ̇_2
ζ̇_3 ζ̇_2
) v_ EW ,
which leave the down quark and electron massless.
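The masslessness of the down quark and the electron at this order can be seen directly from the rank of these 2×2 matrices; the following short Python/SymPy check (with the dimensionless parameters kept symbolic) confirms that both determinants vanish identically.

import sympy as sp

c3, zd2, zd3, v = sp.symbols('c_3 zetadot_2 zetadot_3 v_EW')
Md = sp.Rational(1, 2) * c3 * sp.Matrix([[zd3, zd3], [zd2, zd2]]) * v    # direct (d, s) block above
Me = -sp.Rational(1, 2) * c3 * sp.Matrix([[zd3, zd2], [zd3, zd2]]) * v   # direct (e, mu) block above

print(Md.det(), Me.det())    # 0 0  -> one vanishing singular value in each matrix
print(Md.rank(), Me.rank())  # 1 1  -> the down quark and the electron stay massless at this level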
For the operator of _^ (4 ,1), it is decomposed as
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (6 ,4 ,-1/4)_𝐅⊗ (6 ,4 ,-1/4)_𝐅] ⊗ (4 ,4 ,0)_𝐇 ,ω̇⊗ (4 ,4 ,+1/2)_𝐇+H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗ (4 ,3 ,+1/12)_𝐅^'] ⊗ (4 ,1 ,+1/4)_𝐇,ω̇^'⊗ (4 ,3 ,+5/12)_𝐇+H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗ (4 ,2 ,+1/4)_𝐅^'] ⊗⟨ (4 ,1 ,+1/4)_𝐇,ω̇^'⟩⊗⟨(4 ,2 ,+1/4)_𝐇⟩+ H.c.
⇒ c_4/2ζ̇_̇3̇^' ( u_L c_R^c + _L _R^''''^c ) v_ EW + H.c. ,
and
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (4 ,1 ,-3/4)_𝐅⊗ (4 ,3 ,+5/12)_𝐅] ⊗ (4 ,3 ,-1/12)_𝐇,ω̇⊗ (4 ,3 ,+5/12)_𝐇+H.c.
⊃ c_4/M_pl[ (4 ,1 ,-1/4)_𝐅⊗ (4 ,2 ,+1/4)_𝐅] ⊗⟨ (4 ,1 ,-1/4)_𝐇,ω̇⟩⊗⟨(4 ,2 ,+1/4)_𝐇⟩+H.c.
⇒ c_4/2ζ̇_̇3̇ ( -_L c_R^c + _L _R^'''''^c )v_ EW + H.c. ,
and
c_4/M_pl56_F56_F·28_H_,ω̇·70_H + H.c.
⊃ c_4/M_pl[ (1 ,4 ,+3/4)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕ (4 ,6 ,+1/4)_𝐅⊗(6 ,4 ,-1/4)_𝐅] ⊗ (1 ,6 ,-1/2)_𝐇,ω̇⊗ (4 ,4 ,+1/2)_𝐇+H.c.
⊃ c_4/M_pl[ (1 ,3 ,+2/3)_𝐅^'⊗ (4 ,1 ,-3/4)_𝐅⊕ (4 ,3 ,+1/12)_𝐅^'⊗ (6 ,3 ,-1/6)_𝐅⊕ (4 ,3 ,+5/12)_𝐅⊗(6 ,1 ,-1/2)_𝐅^'']
⊗ ⟨ (1 ,3 ,-1/3)_𝐇,ω̇⟩⊗ (4 ,3 ,+5/12)_𝐇+H.c.
⊃ c_4 w_3,V̇İİ/√(2)M_pl[ (1 ,2 ,+1/2)_𝐅^'''⊗ (4 ,1 ,-3/4)_𝐅⊕ (4 ,2 ,+1/4)_𝐅^'⊗ (6 ,1 ,-1/2)_𝐅^'
⊕ (4 ,1 ,-1/4)_𝐅^''⊗ (6 ,2 ,0)_𝐅⊕ (4 ,2 ,+1/4)_𝐅^''⊗ (6 ,1 ,-1/2)_𝐅^''] ⊗⟨(4 ,2 ,+1/4)_𝐇⟩+H.c.
⇒ c_4 /2ζ̇_̇2̇ ( _L _R^'''^c + u_L _R^c + _L^'''_R^c - _L u_R^c )v_ EW + H.c. .
For the operator of _^ (5 ,1), it is decomposed as
c_5/M_pl28_F56_F·8_H_,ω̇·70_H+H.c.
⊃ c_5/M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,6 ,+1/2)_𝐅⊗(1 ,4 ,-3/4)_𝐅⊕(4 ,4 ,0)_𝐅⊗(6 ,4 ,-1/4)_𝐅]
⊗ (1 ,4 ,-1/4)_𝐇,ω̇⊗(4 ,4 ,+1/2)_𝐇
⊃ c_5W_4,IV/√(2)M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,3 ,+1/12)_𝐅^'⊕ (1 ,3 ,+1/3)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,3 ,+1/12)_𝐅⊗(6 ,1 ,-1/2)_𝐅^''
⊕ (4 ,1 ,-1/4)_𝐅⊗ (6 ,3 ,-1/6)_𝐅] ⊗(4 ,3 ,+5/12)_𝐇
⊃ c_5W_4,IV/√(2)M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,2 ,+1/4)_𝐅^'⊕ (1 ,2 ,+1/2)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,2 ,+1/4)_𝐅⊗(6 ,1 ,-1/2)_𝐅^''
⊕ (4 ,1 ,-1/4)_𝐅⊗ (6 ,2 ,0)_𝐅] ⊗⟨(4 ,2 ,+1/4)_𝐇⟩
⇒ c_5 /2ζ_1 ( u_L t_R^c + _L _R ^c +t_L u_R^c + _L^''_R^c ) v_ EW + H.c. ,
and
c_5/M_pl28_F56_F·8_H_,ω̇·70_H+H.c.
⊃ c_5/M_pl[ (6 ,1 ,-1/2)_𝐅⊗ (6 ,4 ,-1/4)_𝐅⊕ (4 ,4 ,0)_𝐅⊗ (4 ,1 ,-3/4)_𝐅]⊗ (4 ,1 ,+1/4)_𝐇,ω̇⊗ (4 ,4 ,+1/2)_𝐇
⊃ c_5/M_pl[ (6 ,1 ,-1/2)_𝐅⊗ (6 ,3 ,-1/6)_𝐅⊕ (4 ,3 ,+1/12)_𝐅⊗ (4 ,1 ,-3/4)_𝐅]⊗ (4 ,1 ,+1/4)_𝐇,ω̇⊗ (4 ,3 ,+5/12)_𝐇
⊃ c_5/M_pl[ (6 ,1 ,-1/2)_𝐅⊗ (6 ,2 ,0)_𝐅⊕ (4 ,2 ,+1/4)_𝐅⊗ (4 ,1 ,-3/4)_𝐅]⊗⟨(4 ,1 ,+1/4)_𝐇,ω̇⟩⊗⟨(4 ,2 ,+1/4)_𝐇⟩
⇒ c_5/2ζ_3 ( _L _R^c +c_L t_R^c+t_L c_R^c + _L _R^c ) v_ EW+H.c. ,
and
c_5/M_pl28_F56_F·8_H_,ω̇·70_H+H.c.
⊃ c_5/M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,6 ,+1/4)_𝐅⊕ (1 ,6 ,+1/2)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,4 ,0)_𝐅⊗(6 ,4 ,-1/4)_𝐅]
⊗ (1 ,4 ,-1/4)_𝐇,ω̇⊗(4 ,4 ,+1/2)_𝐇
⊃ c_5/M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,3 ,+5/12)_𝐅⊕ (1 ,3 ,+2/3)_𝐅⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,3 ,+1/12)_𝐅⊗(6 ,3 ,-1/6)_𝐅]
⊗ ⟨(1 ,3 ,-1/3)_𝐇,ω̇⟩⊗(4 ,3 ,+5/12)_𝐇
⊃ c_5w_3,V/√(2)M_pl[(6 ,1 ,-1/2)_𝐅⊗(4 ,2 ,+1/4)_𝐅⊕ (1 ,2 ,+1/2)_𝐅^'⊗(4 ,1 ,-3/4)_𝐅⊕(4 ,2 ,+1/4)_𝐅⊗(6 ,1 ,-1/2)_𝐅^'
⊕ (4 ,1 ,-1/4)_𝐅⊗(6 ,2 ,0)_𝐅]⊗⟨(4 ,2 ,+1/4)_𝐇⟩
⇒ c_5/2ζ_2 ( - _L t_R^c +_L _R^'^c +t_L _R^c +_L^'_R^c ) v_ EW+H.c. .
§.§ The d=5 irreducible Higgs mixing operators
For the Yukawa coupling of 8_F^ω_128_F8_H_ ,ω_1, we find the mass terms of
Y_8_F^ω_128_F8_H_ ,ω_1×d_𝒜/M_plϵ_ω_1 ω_2ω_3ω_4 8_H_ ,ω_1^†8_H_ ,ω_2^†8_H_ ,ω_3^†8_H_ ,ω_4^†70_H^†+H.c.
⊃ Y_[ (4 ,1 ,+1/4)_𝐅^ω_1⊗ ( 4 ,4 ,0)_𝐅⊕ (1 ,4 ,-1/4)_𝐅^ω_1⊗ (1 ,6 ,+1/2)_𝐅] ⊗ (1 ,4 ,-1/4)_𝐇 ,ω_1
× d_𝒜/M_pl (1 ,4 ,-1/4)_𝐇 ,ω_1^†⊗(1 ,4 ,-1/4)_𝐇 ,ω_2^†⊗(1 ,4 ,-1/4)_𝐇 ,ω_3^†⊗ (4 ,1 ,+1/4)_𝐇 ,ω_4^†⊗ (4 ,4 ,+1/2)_𝐇^†+H.c.
⊃ Y_[ (4 ,1 ,+1/4)_𝐅^ω_1⊗ ( 4 ,3 ,+1/12)_𝐅⊕ (1 ,3 ,-1/3)_𝐅^ω_1⊗ (1 ,3 ,+2/3)_𝐅⊕(1 ,1 ,0)_𝐅^ω_1^''⊗ (1 ,3 ,+1/3)_𝐅]
⊗ (1 ,3 ,-1/3)_𝐇 ,ω_1×d_𝒜W_4 IV/√(2)M_pl (1 ,3 ,-1/3)_𝐇 ,ω_1^†⊗(1 ,3 ,-1/3)_𝐇 ,ω_3^†⊗ (4 ,1 ,+1/4)_𝐇 ,ω_4^†⊗ (4 ,3 ,+5/12)_𝐇^† + H.c.
⊃ Y_[ (4 ,1 ,+1/4)_𝐅^ω_1⊗ (4 ,2 ,+1/4)_𝐅⊕(1 ,2 ,-1/2)_𝐅^ω_1⊗ (1 ,1 ,+1)_𝐅^⊕ (1 ,1 ,0)_𝐅^ω_1^'⊗ (1 ,2 ,+1/2)_𝐅^'
⊕ (1 ,1 ,0)_𝐅^ω_1^''⊗ (1 ,2 ,+1/2)_𝐅] ⊗ (1 ,2 ,-1/2)_𝐇 ,ω_1 × d_𝒜W_4 IV w_3,V/2M_pl (1 ,2 ,-1/2)_𝐇 ,ω_1^†⊗(4 ,1 ,+1/4)_𝐇 ,ω_4^†⊗ (4 ,2 ,+1/4)_𝐇^†+H.c.
⊃ Y_[ (3 ,1 ,+1/3)_𝐅^ω_1⊗ (3 ,2 ,+1/6)_𝐅⊕ (1 ,1 ,0)_𝐅^ω_1⊗ (1 ,2 ,+1/2)_𝐅⊕(1 ,2 ,-1/2)_𝐅^ω_1⊗ (1 ,1 ,+1)_𝐅^
⊕ (1 ,1 ,0)_𝐅^ω_1^'⊗ (1 ,2 ,+1/2)_𝐅^'⊕ (1 ,1 ,0)_𝐅^ω_1^''⊗ (1 ,2 ,+1/2)_𝐅] ⊗ (1 ,2 ,-1/2)_𝐇 ,ω_1 ×d_𝒜W_4 , IV w_3 , V V_4 , VI/2√(2)M_pl (1 ,2 ,-1/2)_𝐇 ,ω_1^†⊗⟨ (1 ,2 ,+1/2)_𝐇^'''†⟩+H.c.
⇒ Y_ d_𝒜/ 4 W_4 , IV w_3 , V V_4 , VI/ M_ pl m_( 1 , 4 , - 1/4 )_H ,3^2 ( b_L b_R^c + τ_L τ_R^c + _L^3 _R^''^c - _L^ 3^'_R^'^c +_L^ 3^''_R^c ) v_ EW + H.c. .
If we consider all three possibilities of the propagator masses, each of them leads to the (b ,τ) masses as follows
m_( 1 , 4 , - 1/4 )_H ,3∼(v_441) : m_b = m_τ = Y_ d_𝒜/ 4 ζ_2 ζ_13 v_ EW ,
m_( 1 , 4 , - 1/4 )_H ,3∼(v_431) : m_b = m_τ = Y_ d_𝒜/ 4ζ_1 ζ_23 v_ EW ,
m_( 1 , 4 , - 1/4 )_H ,3∼(v_421) : m_b = m_τ = Y_ d_𝒜/ 4 ζ_1 /ζ_23 v_ EW .
The indirect Yukawa couplings from the operator in Eq. (<ref>) are expected to generate the first- and second-generational down-type quark and charged lepton masses.
The gauge-invariant subset of ( 28_H_ ,1̇^†28_H_ ,V̇İİ) can develop the VEV of ⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ = 1/2 w_3 , 1̇ w_3 , V̇İİ∼ ( v_431^2 ) according to the VEV assignments in Eq. (<ref>).
Similar to the indirect Yukawa couplings in Eq. (<ref>), we should look for the EWSB components from the 28_H_ , 1̇ , 2̇ here.
For the Yukawa coupling of 8_F^ω̇_156_F28_H_ , ω̇_1, we find the mass terms of
Y_8_F^ω̇_156_F28_H_ , ω̇_1 × d_ℬ/ M_ pl28_H_ , ω̇_1 ^†28_H_ , ω̇_2 ^†70_H^†( 28_H_ ,1̇^†28_H_ ,V̇İİ) + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 4 , 6 , + 1 /4 )_F⊕ ( 1 , 4 , - 1 /4 )_F^ω̇_1 ⊗ ( 1 , 4 , + 3 /4 )_F] ⊗ ( 1 , 6 , - 1 /2 )_H , ω̇_1
× d_ℬ/ M_ pl ( 1 , 6 , - 1 /2 )_H , ω̇_1^†⊗ ( 4 , 4 , 0 )_H , ω̇_2 ^†⊗ ( 4 , 4 , + 1 /2 )_H^†⊗⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 4 , 3 , + 1 /12 )_F⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1 ⊗ ( 1 , 3 , + 2 /3 )_F^'] ⊗ ( 1 , 3 , - 1 /3 )_H , ω̇_1
× d_ℬ/ M_ pl ( 1 , 3 , - 1 /3 )_H , ω̇_1^†⊗ ( 4 , 3 , -1/12 )_H , ω̇_2 ^†⊗ ( 4 , 3 , + 5 /12 )_H^†⊗⟨28_H_ ,1̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_ d_ℬ w_3 , 1̇ w_3 , V̇İİ/ 2 M_ pl m_ ( 1 , 6 , - 1 / 2 )_H , ω̇_1 ^2 [ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 4 , 2 , +1/4 )_F^'⊕ ( 1 , 2 , -1/2 )_F^ω̇_1⊗ ( 1 , 1 , + 1 )_F^'
⊕ ( 1 , 1 , 0 )_F^ω̇_1 ^'⊗ ( 1 , 2 , +1 /2 )_F^'''] ⊗⟨ ( 4 , 1 , + 1 /4 )_H , ω̇_2 ^†⟩⊗⟨ ( 4 , 2 , + 1 /4 )_H^†⟩ + H.c.
⇒ Y_ d_ℬ/4 ζ̇_3 [ w_3 , 1̇ w_3 , V̇İİ/ m_ ( 1 , 6 , - 1 / 2 )_H , 1̇^2 ( d_L d_R^c + e_L e_R^c ) + w_3 , 1̇ w_3 , V̇İİ/ m_ ( 1 , 6 , - 1 / 2 )_H , 2̇^2 ( d_L s_R^c + μ_L e_R^c ) ] v_ EW + H.c. ,
where the SM quark/lepton components from the 8_F^ω̇_1= 1̇/8_F^ω̇_1= 2̇ correspond to the (d_R^c , e_L) and (s_R^c , μ_L), respectively.
The other mass terms read
Y_8_F^ω̇_156_F28_H_ , ω̇_1 × d_ℬ/ M_ pl28_H_ , ω̇_1 ^†28_H_ , ω̇_2 ^†70_H^†( 28_H_ ,2̇^†28_H_ ,V̇İİ) + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 6 , 4 , - 1 /4 )_F⊕ ( 1 , 4 , - 1 /4 )_F^ω̇_1 ⊗ ( 4 , 6 , + 1 /4 )_F] ⊗ ( 4 , 4 , 0 )_H , ω̇_1
× d_ℬ/ M_ pl ( 4 , 4 , 0 )_H , ω̇_1^†⊗ ( 1 , 6 , - 1 /2 )_H , ω̇_2 ^†⊗ ( 4 , 4 , + 1 /2 )_H^†⊗⟨28_H_ ,2̇^†28_H_ ,V̇İİ⟩ + H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 6 , 3 , - 1 /6 )_F⊕ ( 1 , 3 , - 1 /3 )_F^ω̇_1 ⊗ ( 4 , 3 , + 5 /12 )_F⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 4 , 3 , + 1 /12 )_F]
⊗ ( 4 , 3 , -1/12 )_H , ω̇_1× d_ℬ/ M_ pl ( 4 , 3 , -1/12 )_H , ω̇_1^†⊗ ( 1 , 3 , -1/3 )_H , ω̇_2^†⊗( 4 , 3 , +5/12 )_H , ω̇_1^†
⊗⟨28_H_ ,2̇^†28_H_ ,V̇İİ⟩+H.c.
⊃ Y_[ ( 4 , 1 , + 1 /4 )_F^ω̇_1⊗ ( 6 , 2 , 0 )_F^⊕ ( 1 , 2 , - 1 /2 )_F^ω̇_1⊗ ( 4 , 1 , + 3 /4 )_F^⊕ ( 1 , 1 , 0 )_F^ω̇_1^'⊗ ( 4 , 2 , + 1 /4 )_F^
⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 4 , 2 , + 1 /4 )_F^'] ⊗ ( 4 , 2 , - 1 / 4 )_H , ω̇_1^× d_ℬω_3 , 2̇ω_3 , V̇İİ^2 / 2√(2) M_ pl ( 4 , 2 , - 1 / 4 )_H , ω̇_1^ †⟨ ( 4 , 2 , + 1 / 4 )_H^ †⟩ + H.c.
⊃ Y_ d_ℬ w_3 , 1̇ w_3 , V̇İİ^2 / 2 √(2) M_ pl m_ (4 , 4 , 0 )_H , ω̇_1^2 [ ( 3 , 1 , + 1 /3 )_F^ω̇_1⊗ ( 3 , 2 , +1 /6 )_F^'''⊕ ( 1 , 2 , - 1 /2 )_F^ω̇_1⊗ ( 1 , 1 , + 1 )_F^'''
⊕ ( 1 , 1 , 0 )_F^ω̇_1^'⊗ ( 1 , 2 , + 1 /2 )_F^'''''⊕ ( 1 , 1 , 0 )_F^ω̇_1^''⊗ ( 1 , 2 , + 1 /2 )_F^''''] ⊗⟨ ( 1 , 2 , + 1 /2 )_H^''' †⟩ + H.c.
⇒ Y_ d_ℬ/4 ζ̇_2 [ w_3 , 1̇ w_3 , V̇İİ/ m_ ( 4 , 4 , 0 )_H , 1̇^2 ( s_L d_R^c + e_L μ_R^c ) + w_3 , 1̇ w_3 , V̇İİ/ m_ ( 4 , 4 , 0 )_H , 2̇^2 ( s_L s_R^c + μ_L μ_R^c ) ] v_ EW + H.c. .
With the Higgs VEV assignments in (<ref>), we expect the following natural propagator masses of
m_ ( 1 , 6 , - 1 /2 )_H , 1̇∼( v_431 ) ,
m_ ( 1 , 6 , - 1 /2 )_H , 2̇∼( v_421 ) ,
m_ ( 4 , 4 , 0)_H , 1̇∼( v_431 ) ,
m_ ( 4 , 4 , 0 )_H , 2̇∼( v_421 ) .
For convenience, we parametrize the following ratios of
Δ_ω̇≡w_3 , 1̇ w_3 , V̇İİ/ m_ ( 4 , 4 , 0 )_H , ω̇^2 , Δ_ω̇^'≡ w_3 , 1̇ w_3 , V̇İİ/ m_ ( 1 , 6 , - 1/ 2 )_H , ω̇^2 .
§.§ The SM quark/lepton masses and the CKM mixing
For all up-type quarks with Q_e=+2/3, we write down the following tree-level masses from both the renormalizable Yukawa couplings and the gravity-induced terms in the basis of ≡ (u ,c ,t)
_u = 1/√(2)( ccc
0 c_4 ζ̇_3^'/√(2) c_5 ζ_1 /√(2)
0 0 c_5 ζ_3/√(2)
c_5 ζ_1/√(2) c_5 ζ_3/√(2) Y_
) v_ EW≈_u^ (0) + _u^ (1 ) + _u^( 2) ,
_u^ (0) = 1/√(2)( ccc
0 0 0
0 0 0
0 0 Y_
) v_ EW ,
_u^ (1) = 1/√(2)( ccc
0 0 c_5 ζ_1/√(2)
0 0 0
c_5 ζ_1 /√(2) 0 0
) v_ EW ,
_u^ (2) = 1/√(2)( ccc
0 c_4 ζ̇_3^'/√(2) 0
0 0 c_5 ζ_3 /√(2)
0 c_5 ζ_3/√(2) 0
) v_ EW ,
where we have neglected the terms of ∼(ζ_3 v_ EW) in the expansion.
One obvious feature is that the gauge eigenstates of up quark and the charm quark do not obtain tree-level masses through the d=5 operators with the SM Higgs doublet.
Instead, there are only off-diagonal mass mixing terms in Eqs. (<ref>) and (<ref>).
Accordingly, we find that
det^'[ _u^ (0)_u^ (0) †] = 1/2 Y_^2 v_ EW^2 ⇒ m_t^2 ≈ 1 / 2 Y_^2 v_ EW^2 .
Here and below, we use the det^' to denote the matrix determinant that is equal to the products of all non-zero eigenvalues.
Next, we find that
det^'[ ( _u^ (0) + _u^ (1)) ·( _u^ (0) † + _u^ (1) †) ] ≈ 1/16 c_5^4 ζ_1^4 v_ EW^4 .
Naturally, we expect the smaller eigenvalue above to be the charm quark mass squared of
m_c^2 = det^'[ ( _u^ (0) + _u^ (1)) ·( _u^ (0) † + _u^ (1) †) ] / det^'[ _u^ (0)_u^ (0) †] ≈ c_5^4 ζ_1^4 /8 Y_^2 v_ EW^2 .
The up quark mass squared can be similarly obtained by
m_u^2 = det[ ( _u^ (0) + _u^ (1) + _u^ (2)) ·( _u^ (0) † + _u^ (1) † + _u^ (2) †) ] / det^'[ ( _u^ (0) + _u^ (1)) ·( _u^ (0) † + _u^ (1) †) ] ≈ c_4^2 ζ_3^2 ζ̇_3^2/4 ζ_1^2 v_ EW^2 .
To summarize, all SM up-type quark masses are expressed as follows
m_u ≈ c_4 ζ_3 ζ̇_3 / 2 ζ_1 v_ EW , m_c ≈ c_5^2 ζ_1^2 /2 √(2) Y_ v_ EW , m_t ≈ Y_/√(2) v_ EW .
For all down-type quarks with Q_e=-1/3, we find the following tree-level SM mass matrix
( _d )_3× 3≈1/4( ccc
( 2 c_3 + Y_ d_ℬ ) ζ̇_3 ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3 0
( 2 c_3 + Y_ d_ℬ )ζ̇_2 ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_2 0
0 0 Y_ d_𝒜ζ_23^-1ζ_1
) v_ EW .
This mass matrix has a similar structure to that in Eq. (<ref>) from the WSW pattern, while the terms in the second row receive the _431-breaking VEVs of w_3, 1̇, V̇İİ.
It is straightforward to find the following SM down-type quark masses of
m_b ≈ 1 / 4 Y_ d_𝒜ζ_ 23 ^-1ζ_1 v_ EW ,
m_s ≈ 1 /4 ( 2c_3+ Y_ d_ℬζ_ 23 ^-2 ) ζ̇_2 v_ EW ,
m_d ≈ c_3 ζ̇_3 v_ EW ,
from Eq. (<ref>).
For all charged leptons with Q_e=-1, their tree-level mass matrix is correlated with the down-type quark mass matrix as
( _ℓ)_ 3× 3 = ( _d^T )_3× 3 ≈ 1/4( ccc
( 2 c_3 + Y_ d_ℬ ) ζ̇_3 ( 2 c_3 + Y_ d_ℬ )ζ̇_2 0
( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3 ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_2 0
0 0 Y_ d_𝒜ζ_23^-1ζ_1
) v_ EW .
Thus, it is straightforward to find the tree-level mass relations of
m_τ = m_b , m_μ = m_s , m_e = m_d .
§.§ The CKM matrix and the benchmark
The bi-unitary transformations of
_L __R^† = _^ diag , __^† = _L^† ( _^ diag )^2 _L , = ( , ) ,
diagonalize the un-hatted flavor eigenstates into their hatted mass eigenstates.
To obtain the CKM matrix of the quark sector, we derive the left-handed mixing matrices of (_L , _L) of
( û_L , ĉ_L , t̂_L )^T = _L ·( u_L , c_L , t_L )^T , ( d̂_L , ŝ_L , b̂_L )^T = _L ·( d_L , s_L , b_L )^T ,
through their perturbative expansions in Eqs. (<ref>) and (<ref>).
Explicitly, we find that
_L = _L^ (12)·_L^ (23)·_L^ (13)·_L^ID ,
_L^ ID = ( ccc
0 1 0
-1 0 0
0 0 1
) ,
_L^ (12) = ( cosϵ_2 -sinϵ_2 0
sinϵ_2 cosϵ_2 0
0 0 1
) ,
sinϵ_2≃m_u/m_cζ_1/ζ_3-ζ_3/ζ_1∼ (ζ_13) ,
_L^(13)≈( ccc
1 0 - c_5 ζ_3/√(2)Y_
0 1 0
c_5 ζ_3/√(2)Y_ 0 1
) ,
_L^ (23)≈( ccc
1 0 0
0 1 c_5 ζ_1/√(2)Y_
0 -c_5 ζ_1/√(2)Y_ 1
) ,
and
_L ≈( ζ_ 23 1- ζ_ 23 ^2/2 0
-1+ζ_ 23 ^2/2 ζ_ 23 0
0 0 1
) .
The CKM matrix can be approximated as the Wolfenstein parametrization
V̂_ CKM|_ SU(8) , WWS = _L _L^†≈( ccc
1-ζ_ 23 ^2/2 ζ_ 23 -c_5 ζ_3/√(2)Y_
- ζ_ 23 1- ζ_ 23 ^2/2 c_5 ζ_1/√(2)Y_
c_5(ζ_ 23 ζ_1+ ζ_3 )/√(2)Y_ -c_5 ζ_1 /√(2)Y_ 1
) ,
where the Cabibbo mixing parameter is interpreted as the ratio of λ = | V_us | =ζ_ 23.
The observed mixing hierarchy of |V_cb| ≫ | V_ub| is due to the hierarchy of ζ_1 ≫ζ_3, while this was due to the hierarchy of ζ_1 ≫ζ_2 in the SWW and the WSW patterns.
To obtain the reasonable CKM mixing parameters, one has to choose a suppressed c_3=0.01 for the reasonable (d ,e) masses, and a relatively enhanced Higgs mixing parameter of d_𝒜=0.2 for the reasonable (b , τ) masses.
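As an illustration of how the exact product _L _L^† reproduces the approximate form quoted above, the Python snippet below multiplies the rotation factors numerically; the input numbers for ζ_23, sinϵ_2, c_5ζ_3/(√2 Y_) and c_5ζ_1/(√2 Y_) are purely illustrative choices and are not the benchmark values of Tab. <ref>.

import numpy as np

z23, eps2 = 0.225, 0.003   # zeta_23 (Cabibbo-like parameter) and sin(epsilon_2), illustrative
a13, a23 = 0.004, 0.04     # c5*zeta_3/(sqrt(2) Y) and c5*zeta_1/(sqrt(2) Y), illustrative

ceps = np.sqrt(1.0 - eps2**2)
U12 = np.array([[ceps, -eps2, 0.0], [eps2, ceps, 0.0], [0.0, 0.0, 1.0]])
U23 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, a23], [0.0, -a23, 1.0]])
U13 = np.array([[1.0, 0.0, -a13], [0.0, 1.0, 0.0], [a13, 0.0, 1.0]])
UID = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
U_L = U12 @ U23 @ U13 @ UID           # ordering as in the text

cl = np.sqrt(1.0 - z23**2)
D_L = np.array([[z23, cl, 0.0], [-cl, z23, 0.0], [0.0, 0.0, 1.0]])

V = U_L @ D_L.T
print(np.round(np.abs(V), 4))   # |V_us| ~ z23, |V_cb| ~ a23, |V_ub| ~ a13, as in the approximate form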
Based on the predicted quark masses in Eqs. (<ref>), (<ref>), and the CKM matrix in Eq. (<ref>), we suggest a benchmark point of the SU(8) input parameters, as well as the predicted SM quantities, in Tab. <ref>.
Three dimensionless parameters of (ζ_1 , ζ_2 , ζ_3) can be translated into three intermediate symmetry breaking scales in Eq. (<ref>) as below
v_441≃ 1.4 × 10^17 GeV , v_431≃ 4.8× 10^15 GeV , v_421≃ 1.1× 10^15 GeV .
Obviously, the intermediate scale at the third symmetry breaking stage is much higher than what we have obtained in the SWW and the WSW symmetry breaking patterns <cit.>.
§ THE GAUGE COUPLING EVOLUTIONS IN THE SU(8) THEORY
§.§ Gauge coupling for the _441 fermions
§.§.§ Covariant derivatives and gauge boson masses
We express the SU(4)_W ⊗ U(1)_X_0 covariant derivatives as follows
i D_μψ_4 ≡ i ∂_μψ_4 + ( g_4W A_μ^I̅ T_ SU(4)^I̅ + g_X_0_0 𝕀_4 X_0 μ ) ·ψ_4 ,
for the SU(4)_W fundamental representation, and
i D_μψ_4 ≡ i ∂_μψ_4 + ( - g_4W A_μ^I̅ ( T_ SU(4)^I̅ )^T + g_X_0_0 𝕀_4 X_0 μ ) ·ψ_4 ,
for the SU(4)_W anti-fundamental representation.
The SU(4)_W generators of T_ SU(4)^a are normalized such that ( T_ SU(4)^I̅ T_ SU(4)^J̅)= δ^I̅J̅.
For the SU(4)_W rank-2 anti-symmetric field of ψ_6, the covariant derivative acts as follows
i D_μψ_6 ≡ i ∂_μψ_6 + g_4W A_μ^I̅ ( T_ SU(4)^I̅·ψ_6 + ψ_6· T_ SU(4)^I̅ ,T ) + g_X_0_0 X_0 μψ_6 .
The corresponding kinematic term for the rank-2 anti-symmetric field of ψ_6 should be Tr( ψ̅_6 i ψ_6 ).
For the singlet of ψ_1, the covariant derivative is simply given by
i D_μψ_1 ≡ ( i ∂_μ + g_X_0_0 X_0 μ ) ψ_1 ,
The explicit form for the gauge fields of g_4W A_μ^I̅ T_ SU(4)^I̅ + g_X_0_0 𝕀_4 X_0 μ can be expressed in terms of a 4× 4 matrix below
g_4W A^I̅_μ T_ SU(4)^I̅ + g_X_0_0 𝕀_4 X_0 μ = g_4W (
cccc
0 A_μ^1 - i A_μ^2 A_μ^4 - i A_μ^5 A_μ^9 - i A_μ^10
A_μ^1 + i A_μ^2 0 A_μ^6 - i A_μ^7 A_μ^11 - i A_μ^12
A_μ^4 + i A_μ^5 A_μ^6 + i A_μ^7 0 A_μ^13 - i A_μ^14
A_μ^9 + i A_μ^10 A_μ^11 + i A_μ^12 A_μ^13 + i A_μ^14 0
) + g_4W/2 diag( A_μ^3 + 1/√(3) A_μ^8 , - A_μ^3 + 1/√(3) A_μ^8 , - 2 /√(3) A_μ^8 , 0 ) + g_4W/2 √(6) diag(
A_μ^15 + 12 t_ϑ_G _0 X_0 μ
𝕀_3× 3 , - 3 A_μ^15 + 12 t_ϑ_G _0 X_0 μ) ,
where we have defined an SU(4)_W mixing angle of
t_ϑ_G ≡ tanϑ_G = g_X_0/√(6) g_4W .
For the flavor-conserving neutral gauge bosons of (A_μ^15 , X_0 μ ), the mass squared matrix reads
3 /16 g_4W^2 v_441^2 ( A^15_μ , X_0 μ ) ·(
cc
1 - t_ϑ_G
- t_ϑ_G t_ϑ_G^2
) ·( c A^15 μ
X_0^μ
) .
Obviously it contains a zero eigenvalue, which corresponds to the massless gauge boson X_1 μ of the unbroken U(1)_X_1 after the SU(4)_W symmetry breaking.
The gauge eigenstates of ( A^15_μ , X_0 μ) are diagonalized as follows
( c Z^''_μ
X_1 μ
) = ( cc
c_ϑ_G - s_ϑ_G
s_ϑ_G c_ϑ_G
) ·( c A^15_μ
X_0 μ
) .
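A short Python/SymPy check of this diagonalization is given below: with t = tanϑ_G, the mass-squared matrix above has a vanishing determinant, and rotating by ϑ_G leaves a single massive state Z^''_μ with mass squared 3/16 g_4W^2 v_441^2 (1 + t_ϑ_G^2), while X_1 μ stays massless.

import sympy as sp

g4, v, t = sp.symbols('g_4W v_441 t', positive=True)   # t = tan(theta_G)
M = sp.Rational(3, 16) * g4**2 * v**2 * sp.Matrix([[1, -t], [-t, t**2]])

print(M.det())                              # 0 -> one massless combination survives
c = 1 / sp.sqrt(1 + t**2)
R = sp.Matrix([[c, -t * c], [t * c, c]])    # (Z'', X_1)^T = R (A^15, X_0)^T, as in the text
print(sp.simplify(R * M * R.T))             # diag(3/16 g_4W^2 v_441^2 (1 + t^2), 0)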
The SU(4)_W ⊗ U(1)_X_0 gauge couplings of (α_4W , α_X_0 ) match with the SU(3 )_W ⊗ U(1)_X_1 gauge couplings as follows
α_3W ^-1 (v_441 ) = α_4W ^-1 (v_441 ) , α_X_1^-1 (v_441 ) = 1/6α_4W ^-1 (v_441 ) + α_X_0^-1 (v_441 ) , 1/6α_4W ^-1 = α_X_1^-1 s_ϑ_G^2 , α_X_0^-1 = α_X_1^-1 c_ϑ_G^2 .
The neutral currents are expressed as follows in the V-A basis
_ SU(4)_W^ NC , F = g_X_1/ s_ϑ_G c_ϑ_G ( g_f^V ''fγ^μ f + g_f^A ''fγ^μγ_5 f ) Z_μ^'' ,
and we tabulate the vectorial and axial couplings of ( g_f^V '' , g_f^A '' ) in Tab. <ref>.
The flavor non-universality of the neutral currents mediated by Z_μ^'' among three generational SM quarks and leptons are displayed explicitly.
The gauge coupling evolutions rely on the RGEs of the SU(8) theory.
The two-loop RGE of a gauge coupling of α_Υ in the MS scheme is given by <cit.>
d α^-1_Υ ( μ) / dlogμ = - b^ (1) _Υ/2π - ∑_Υ^'b^ (2 ) _ΥΥ^'/8π^2α_Υ^' ( μ ) ,
where the one-loop and two-loop β coefficients for the non-Abelian groups are
b^1_Υ= -11/3C_2_Υ+2/3∑_FT^F_Υ+1/3∑_ST^S_Υ ,
∑_Υ^' b^2_ΥΥ^'α_Υ^' = -34/3 C_2( _Υ )^2 α_Υ + ∑_F[ 2∑_Υ^' C_2( ^F_Υ^' ) α_Υ^' + 10/3 C_2( _Υ ) α_Υ] T( ^F_Υ ) + ∑_S[ 4∑_Υ^' C_2( ^S_Υ^' ) α_Υ^' + 2/3 C_2( _Υ ) α_Υ] T( ^S_Υ ) .
Here, C_2( _Υ ) is the quadratic Casimir of the gauge group _Υ, T( ^F_Υ ) and T( ^S_Υ ) are trace invariants of the chiral fermions in the irrep of ^F_Υ and complex scalars in the irrep of ^S_Υ.
For the U(1) Abelian groups with charges denoted as ^F/S, the one-loop β coefficients become
b^1_Υ=2/3∑_F^F_Υ^2+1/3∑_S^S_Υ^2 .
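For a concrete check of these formulas, the following Python snippet evaluates the one-loop coefficients for the final _SM stage (three generations of Weyl fermions plus one complex Higgs doublet, with the non-GUT-normalized hypercharge used in the matching relations of this section) and reproduces the values (-7, -19/6, +41/6) quoted below; the coefficients of the higher stages follow in the same way once the corresponding massless field content listed in the text is inserted.

from fractions import Fraction as F

def b1_nonabelian(C2G, T_F, T_S):
    # b1 = -11/3 C2(G) + 2/3 sum_F T(R_F) + 1/3 sum_S T(R_S), for Weyl fermions and complex scalars
    return -F(11, 3) * C2G + F(2, 3) * T_F + F(1, 3) * T_S

# SU(3)_c: per generation T = 1/2 * (2 + 1 + 1) from Q, u^c, d^c; the Higgs doublet is a color singlet
b3 = b1_nonabelian(3, 3 * F(2), 0)
# SU(2)_W: per generation T = 1/2 * (3 + 1) from Q (three colors) and L; one Higgs doublet gives T = 1/2
b2 = b1_nonabelian(2, 3 * F(2), F(1, 2))
# U(1)_Y: b1 = 2/3 sum_F Y^2 + 1/3 sum_S Y^2, summed over components
YF = 3 * (6 * F(1, 6)**2 + 3 * F(2, 3)**2 + 3 * F(1, 3)**2 + 2 * F(1, 2)**2 + 1)  # Q, u^c, d^c, L, e^c
YS = 2 * F(1, 2)**2                                                               # complex Higgs doublet
bY = F(2, 3) * YF + F(1, 3) * YS
print(b3, b2, bY)   # -7 -19/6 41/6, as quoted for the v_EW <= mu <= v_331 stage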
Below, we first list the minimal set of massless Higgs fields according to the survival hypothesis <cit.> and the massless fermions between different symmetry breaking stages according to the minimal Higgs VEVs in Eqs. (<ref>) and (<ref>).
The two-loop RGEs within the minimal SU(8) setup are therefore derived by using the PyR@TE <cit.>.
§.§ RGEs in the WSW symmetry breaking pattern
Between the v_441≤μ≤ v_U, almost all massless Higgs fields are in the first lines of Eqs. (<ref>), (<ref>), and (<ref>), which are
( 4 , 1 , +1/4)_𝐇 , ω⊕
( 1 , 4 , -1/4)_𝐇 , ω⊂8_H_, ω , ( 6 , 1 , +)_𝐇 ,ω̇⊕
(1 ,6 ,-)_𝐇 ,ω̇⊕
(4 ,4 ,0)_𝐇 ,ω̇⊂28_H_,ω̇ , (4 ,4 ,)_𝐇⊕
(4 ,4 ,)_𝐇⊕
(6 ,6 ,0)_𝐇⊂70_H .
All SU(8) fermions in Eq. (<ref>) remain massless after the decomposition into the _441 irreps.
Correspondingly, we have the _441β coefficients of
(b^(1)_ SU(4)_s ,b^(1)_ SU(4)_W ,b^(1)_ U(1)_X_0)=(+13/3 ,+13/3 ,+55/3) ,
b^(2)__441=
[ 2299/6 405/2 12; 405/2 2299/6 12; 180 180 31 ] .
When the RGEs evolve down to v_441, the gauge couplings should match according to the following relations
α_3W ^-1(v_441 ) = α_4W ^-1 (v_441 ) , α_X_1^-1 (v_441 ) = 1/6α_4W ^-1(v_441 ) + α_X_0^-1 (v_441 ) .
Between the v_431≤μ≤ v_441, the massless Higgs fields are
( 4 , 1 , +1/4)_𝐇 , V⊂8_H_, V , ( 1 , 3 , -1/3)_𝐇 , 3 ,VI⊂ ( 1 , 4 , -1/4)_𝐇 , 3 ,VI⊂8_H_, 3 ,VI , (4 ,3 ,-1/12)_𝐇 ,İẊ⊂ (4 ,4 , 0 )_𝐇 ,İẊ⊂28_H_,İẊ , (4 ,1 ,+1/4)_𝐇 ,1̇ ,V̇İİ⊂ (4 ,4 , 0 )_𝐇 ,1̇ ,V̇İİ⊂28_H_,1̇ ,V̇İİ , (1 ,3 ,-1/3)^'_𝐇 ,2̇ ,V̇İİİ⊂ (1 ,6 ,-1/2)_𝐇 ,2̇ ,V̇İİİ⊂28_H_,2̇ ,V̇İİİ , (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _431 fermions are listed at Eq. (<ref>).
Correspondingly, we have the _431β coefficients of
(b^(1)_ SU(4)_s ,b^(1)_ SU(3)_W ,b^(1)_ U(1)_X_1)=(-23/6 ,+1/3 ,+443/36) , b^(2)__431=
[ 659/6 36 17/4; 135/2 358/3 181/36; 255/4 362/9 2641/216 ] .
When the RGEs evolve down to v_431, the gauge couplings should match according to the following relations
α_3c ^-1(v_431 ) = α_4s ^-1 (v_431 ) , α_X_2^-1 (v_431 ) = 1/6α_4s ^-1(v_431 ) + α_X_1^-1 (v_431 ) .
Between the v_331≤μ≤ v_431, the massless Higgs fields are
( 1 , 3 , -1/3)_𝐇 , 3 ,VI⊂ ( 1 , 4 , -1/4)_𝐇 , 3 ,VI⊂8_H_, 3 ,VI , (1 ,3 ,-1/3)_𝐇 ,İẊ⊂(4 ,3 ,-1/12)_𝐇 ,İẊ⊂ (4 ,4 , 0 )_𝐇 ,İẊ⊂28_H_,İẊ , (1 ,3 ,-1/3)^'_𝐇 ,2̇ ,V̇İİİ⊂ (1 ,6 ,-1/2)_𝐇 ,2̇ ,V̇İİİ⊂28_H_,2̇ ,V̇İİİ , (1 ,3 ,+2/3)_𝐇^'''⊂ (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _331 fermions are listed at Eq. (<ref>).
Correspondingly, we have the _331β coefficients of
(b^(1)_ SU(3)_c ,b^(1)_ SU(3)_W ,b^(1)_ U(1)_X_2)=(-5 ,-4 ,+9) , b^(2)__331=
[ 12 12 2; 12 34 4; 16 32 100/9 ] .
When the RGEs evolve down to v_331, the gauge couplings should match according to the following relations
α_2W ^-1 (v_331 ) = α_3W ^-1 (v_331 ) , α_Y^-1 (v_331 ) = 1/3α_3W ^-1 (v_331 ) + α_X_2^-1 ( v_331 ) .
Between the v_ EW≤μ≤ v_331, the massless Higgs fields are
(1 ,2 ,+1/2)^'''_𝐇⊂ (1 ,3 ,+2/3)_𝐇^'''⊂ (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _ SM fermions are listed at Eq. (<ref>).
Correspondingly, we have the _ SMβ coefficients of
(b^(1)_ SU(3)_c ,b^(1)_ SU(2)_W ,b^(1)_ U(1)_Y)=(-7 ,-19/6 ,+41/6) , b^(2)__SM=
[ -26 9/2 11/6; 12 35/6 3/2; 44/3 9/2 199/18 ] .
With the one- and two-loop β coefficients, we plot the RGEs of the minimal SU(8) setup according to the WSW symmetry breaking pattern in Fig. <ref>.
Three intermediate symmetry breaking scales follow from the benchmark point in Eq. (<ref>).
Obviously, the evolutions of the three gauge couplings cannot achieve unification.
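The qualitative behavior of this figure can be reproduced with a one-loop sketch such as the Python script below, which runs the inverse couplings upward stage by stage with the b^(1) coefficients and matching conditions listed above and the benchmark scales of Eq. (<ref>); the electroweak-scale inputs (α_s, α_em, sin^2θ_W at M_Z) are standard values assumed here rather than taken from the text, and two-loop as well as threshold effects are neglected, so the numbers are only indicative.

import numpy as np

def run_up(ainv, b, mu_lo, mu_hi):
    # one-loop running of the inverse couplings: d(alpha^-1)/d(log mu) = -b^(1)/(2 pi)
    return np.array(ainv) - np.array(b) / (2 * np.pi) * np.log(mu_hi / mu_lo)

# assumed MS-bar inputs at M_Z (not quoted in the text)
alpha_s, alpha_em_inv, s2w = 0.1179, 127.9, 0.2312
a3c, a2W, aY = 1 / alpha_s, s2w * alpha_em_inv, (1 - s2w) * alpha_em_inv
MZ, v331, v431, v441, mu_uv = 91.19, 4.8e13, 4.8e15, 1.4e17, 3.0e17

a3c, a2W, aY = run_up((a3c, a2W, aY), (-7, -19/6, 41/6), MZ, v331)           # SM stage
a3W, aX2 = a2W, aY - a2W / 3                    # alpha_Y^-1 = 1/3 alpha_3W^-1 + alpha_X2^-1
a3c, a3W, aX2 = run_up((a3c, a3W, aX2), (-5, -4, 9), v331, v431)             # G_331 stage
a4s, aX1 = a3c, aX2 - a3c / 6                   # alpha_X2^-1 = 1/6 alpha_4s^-1 + alpha_X1^-1
a4s, a3W, aX1 = run_up((a4s, a3W, aX1), (-23/6, 1/3, 443/36), v431, v441)    # G_431 stage
a4W, aX0 = a3W, aX1 - a3W / 6                   # alpha_X1^-1 = 1/6 alpha_4W^-1 + alpha_X0^-1
a4s, a4W, aX0 = run_up((a4s, a4W, aX0), (13/3, 13/3, 55/3), v441, mu_uv)     # G_441 stage

print(a4s, a4W, aX0)   # the three inverse couplings remain far apart at ~3e17 GeV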
§.§ RGEs in the WWS symmetry breaking pattern
Between the v_441≤μ≤ v_U, the _441β coefficients are identical to Eq. (<ref>), while the matching conditions at v_441 are identical to Eq. (<ref>).
Between the v_431≤μ≤ v_441, the massless Higgs fields are
( 4 , 1 , +1/4)_𝐇 , 3 , VI⊂8_H_, 3 ,VI , ( 1 , 3 , -1/3)_𝐇 ,V⊂ ( 1 , 4 , -1/4)_𝐇 ,V⊂8_H_, V , (4 ,3 ,-1/12)_𝐇 ,İẊ⊂ (4 ,4 , 0 )_𝐇 ,İẊ⊂28_H_,İẊ , (4 ,1 ,+1/4)^'_𝐇 ,2̇ ,V̇İİİ⊂ (4 ,4 , 0 )_𝐇 ,2̇ ,V̇İİİ⊂28_H_,1̇ ,V̇İİ , (1 ,3 ,-1/3)^'_𝐇 ,1̇ ,V̇İİ⊂ (1 ,6 ,-1/2)_𝐇 ,1̇ ,V̇İİ⊂28_H_,2̇ ,V̇İİİ , (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _431 fermions are listed at Eq. (<ref>).
Correspondingly, we have the _431β coefficients of
(b^(1)_ SU(4)_s ,b^(1)_ SU(3)_W ,b^(1)_ U(1)_X_1)=(-10/3 ,+5/6 ,+110/9) , b^(2)__431=
[ 1501/12 44 103/24; 165/2 391/3 175/36; 515/8 350/9 5219/432 ] .
When the RGEs evolve down to v_431, the gauge couplings should match according to the following relations
α_2W ^-1 (v_431 ) = α_3W ^-1 (v_431 ) , α_X_2^-1 (v_431 ) = 1/3α_3W ^-1 (v_431 ) + α_X_1^-1 ( v_431 ) .
Between the v_421≤μ≤ v_431, the massless Higgs fields are
( 4 , 1 , +1/4)_𝐇 , 3 ,VI⊂8_H_, 3 ,VI , (4 ,1 ,+1/4)_𝐇 ,İẊ⊂ (4 ,3 ,-1/12)_𝐇 ,İẊ⊂ (4 ,4 , 0 )_𝐇 ,İẊ⊂28_H_,İẊ , (4 ,1 ,+1/4)^'_𝐇 ,2̇ ,V̇İİİ⊂ (4 ,4 , 0 )_𝐇 ,2̇ ,V̇İİİ⊂28_H_,1̇ ,V̇İİ , (4 ,2 ,+1/ 4)_𝐇⊂ (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _421 fermions are listed at Eq. (<ref>).
Correspondingly, we have the _421β coefficients of
(b^(1)_ SU(4)_s ,b^(1)_ SU(2)_W ,b^(1)_ U(1)_X_2)=(-11/2 ,+4/3 ,+151/12) , b^(2)__421=
[ 131/2 21/2 17/4; 105/2 184/3 11/4; 255/4 33/4 125/8 ] .
When the RGEs evolve down to v_421, the gauge couplings should match according to the following relations
α_3c ^-1(v_421 ) = α_4s ^-1 (v_421 ) , α_Y^-1 (v_421 ) = 1/6α_4s ^-1(v_421 ) + α_X_2^-1 (v_421 ) .
Between the v_ EW≤μ≤ v_421, the massless Higgs fields are
(1 ,2 ,+1/2)^'''_𝐇⊂ (4 ,2 ,+1/ 4)_𝐇⊂ (4 ,3 ,+5/12)_𝐇⊂ (4 ,4 ,+1/ 2)_𝐇⊂70_H .
The massless _ SM fermions are listed at Eq. (<ref>).
Correspondingly, we have the _ SMβ coefficients of
(b^(1)_ SU(3)_c ,b^(1)_ SU(2)_W ,b^(1)_ U(1)_Y)=(-7 ,-19/6 ,+41/6) , b^(2)__SM=
[ -26 9/2 11/6; 12 35/6 3/2; 44/3 9/2 199/18 ] .
With the one- and two-loop β coefficients, we plot the RGEs of the minimal SU(8) setup according to the WWS symmetry breaking pattern in Fig. <ref>.
Three intermediate symmetry breaking scales follow from the benchmark point in Eq. (<ref>).
The evolutions of the three gauge couplings cannot achieve unification along the WWS sequence either.
§ SUMMARY AND OUTLOOK
The WSW and the WWS symmetry breaking patterns following the maximally broken SU(8) theory have been analyzed in both the fermion spectra and the gauge coupling evolutions.
Throughout our analyses of the both patterns, we compare our results from what we have obtained previously in Refs. <cit.>.
* Given the SU(8) chiral fermions in Eq. (<ref>), one finds six vectorlike (5_F , 5_F)-pairs and one (10_F , 10_F)-pair in addition to three-generational SM fermions through Georgi's decomposition rule <cit.>.
Both symmetry breaking patterns are found to give rise to massive vectorlike fermions in the spectrum, consistent with what we have found along the SWW pattern <cit.>.
With the central motivation to address the SM quark/lepton masses through the non-universal Yukawa couplings to the unique SM Higgs boson in the spectrum, the SM fermion flavor identifications are elaborated along both the WSW and the WWS patterns.
In particular, we found that the flavor identifications between the first- and second-generational SM fermions are different from those in the SWW pattern, as one can compare the SM flavors in Tabs. <ref> and <ref> versus the results in Tab. <ref>.
The flavor identifications are determined by requiring the similar up-type quark mass matrices in all three symmetry breaking patterns, as one can find in Eqs. (<ref>), (<ref>), and (<ref>).
These flavor identifications, together with the down-type quarks, lead to both reasonable SM quark mass hierarchies as well as the observed CKM mixing pattern.
* There are also observed differences in the SM quark/lepton mass spectra between the (SWW, WSW) and the WWS patterns.
The up-type quark mass matrix along the WWS pattern contains the dimensionless parameters of (ζ_1 , ζ_3 , ζ̇_3^') in Eq. (<ref>), rather than dimensionless parameters of (ζ_1 , ζ_2 , ζ̇_2).
For the down-type quark and charged lepton sectors along the WWS pattern, the mass matrices contain the dimensionless parameters of (ζ̇_2 , ζ̇_3).
Correspondingly, the Cabibbo angle is then interpreted as the ratio between two different symmetry breaking scales such that λ=|V_us|=ζ_23.
In the (SWW, WSW) patterns, the Cabibbo angle is interpreted as the relative ratio between two _331-breaking VEVs in Eq. (<ref>).
These differences originate from the Higgs VEVs at the third symmetry breaking stage in the rank-3 chiral IRAFFS sector since (i) along the (SWW, WSW) patterns, the Higgs VEVs at the third symmetry breaking stage come from two different Higgs components of ( 1 , 3 , -1/3 )_𝐇 , ω̇^'⊂ ( 1 , 6 , -1/2 )_𝐇 , ω̇ and ( 1 , 3 , -1/3 )_𝐇 , ω̇⊂ ( 4 , 4 , 0 )_𝐇 , ω̇ in Eq. (<ref>), and (ii) along the WWS pattern, the Higgs VEVs at the third symmetry breaking stage come from one single Higgs component of ( 4 , 1 , +1/4 )_𝐇 , ω̇⊕ ( 4 , 1 , +1/4 )_𝐇 , ω̇^'⊂ ( 4 , 4 , 0 )_𝐇 , ω̇ in Eq. (<ref>).
Another consequence of the above differences can be found in the suggested benchmark points in Tabs. <ref> and <ref>.
* The gauge coupling evolutions in both patterns are estimated according to the two-loop RGEs based on the suggested intermediate symmetry breaking scales in Eqs. (<ref>) and (<ref>).
Their behaviors largely match what we have found in the SWW sequence <cit.>, where large discrepancies of | α_X_0^-1 - ( α_4S^-1 + α_ 4W^-1 )/2 | ∼ 20 above μ∼ 10^17 GeV can be observed in both Figs. <ref> and <ref>.
Within the field theory context, such a large discrepancy cannot be reconciled with one-loop threshold effects <cit.>.
Altogether, the gauge coupling unification should better be interpreted in the context of the Kač-Moody Lie algebra such that
k_i g_i^2 = g_ string^2 = 8 π G_N /α^' ,
where k_i represents the Kač-Moody level of each gauge coupling, and α^' represents the Regge slope.
To achieve the string unification, it is necessary to consider the constraints of unitarity and conformal invariance on the SU(8) algebra.
§ ACKNOWLEDGEMENTS
We would like to thank Tianjun Li, Kaiwen Sun, Yuan Sun, Yinan Wang, Zhong-Zhi Xianyu, and Wenbin Yan for very enlightening discussions and communications.
N.C. would like to thank Nanjing University, Central China Normal University, and South China Normal University for hospitality when preparing this work.
N.C. is partially supported by the National Natural Science Foundation of China (under Grants No. 12035008 and No. 12275140) and Nankai University.
§ DECOMPOSITIONS OF THE SU(8) HIGGS
§.§ Type-WSW
§.§ Type-WWS
This section includes the Higgs part of the Type-WWS model.
§ MAIN RESULTS IN THE SU(8) SWW SYMMETRY BREAKING PATTERN
By following the symmetry breaking pattern in Eq. (<ref>) and the U(1) charges defined in Eqs. (<ref>), we tabulate the SU(8) fermion representations in Tabs. <ref>, <ref>, <ref>.
For the right-handed down-type quarks of _R^Ω^c, they are named as follows
_R^1̇^c ≡d_R^c , _R^2̇^c ≡s_R^c , _R^V̇İİ^c ≡_R^'''''^c , _R^V̇İİİ^c ≡_R^'''^c , _R^İẊ^c ≡_R^''''^c , _R^3^c ≡b_R^c , _R^ IV ^c ≡_R^c , _R^ V ^c ≡_R^''^c , _R^ VI ^c ≡_R^'^c .
For the left-handed SU(2)_W lepton doublets of (_L^Ω , - _L^Ω ), they are named as follows
( _L^1̇ , - _L^1̇) ≡ (e_L , - ν_e L ) , ( _L^2̇ , - _L^2̇) ≡( μ_L , - ν_μ L ) , ( _L^V̇İİ , - _L^V̇İİ) ≡ ( _L^'''' , - _L^'''' ) , ( _L^V̇İİİ , - _L^V̇İİİ ) ≡ ( _L^''' , - _L^''' ) , ( _L^İẊ , - _L^İẊ ) ≡ ( _L^''''' , - _L^''''' ) , ( _L^ 3 , - _L^3) ≡ ( τ_L , - ν_τ L) , ( _L^ IV , - _L^ IV ) ≡ ( _L^'' , - _L^'' ) , ( _L^ V , - _L^ V ) ≡ ( _L , - _L ) , ( _L^ VI , - _L^ VI ) ≡ ( _L^' , - _L^' ) .
According to the flavor identifications in Tabs. <ref>, <ref>, <ref>, we find the following up-type quark mass matrix
_u = 1/√(2)( ccc
0 0 c_5 ζ_1 /√(2)
c_4 ζ̇_2 /√(2) 0 c_5 ζ_2 / √(2)
c_5 ζ_1 /√(2) c_5 ζ_2/√(2) Y_
) v_ EW ,
from the renormalizable Yukawa coupling term of Y_28_F28_F70_H+H.c. and the d=5 direct Yukawa coupling terms in Eqs. (<ref>) and (<ref>).
The down-type quark and the charged lepton mass matrices are found to be
( _d )_3× 3 = ( _ℓ^T )_3× 3 ≈ 1/4( ccc
( 2 c_3 + Y_ d_ℬ )ζ̇_3^' ( 2 c_3 + Y_ d_ℬΔ_2̇ ) ζ̇_3^' 0
( 2 c_3 + Y_ d_ℬΔ_1̇^' ) ζ̇_3 ( 2 c_3 + Y_ d_ℬζ_23 ^-2 ) ζ̇_3 0
0 0 Y_ d_𝒜ζ_23^-1ζ_1
) v_ EW ,
from the d=5 indirect Yukawa coupling terms in Eqs. (<ref>).
§ THE CONJECTURE ON THE TOP QUARK MASS IN THE FLAVOR-UNIFIED SU(N) THEORIES
Here, we provide a conjecture on the top quark mass, which states
a rank-2 chiral IRAFFS is always necessary in the flavor-unified SU(N) theories so that only the top quark obtains mass with the natural (1) Yukawa coupling at the EW scale.
According to our current convention, the EW symmetry breaking must occur in the (N-4)-th and also the last stage of SU(N ≥ 5) → ... →_ SM→ SU(3)_c ⊗ U(1)_ EM.
In a rank-2 chiral IRAFFS of
( N- 4) × [ N , 1 ] _F⊕ [ N , 2 ]_F
one can always write down a Yukawa coupling of
Y_ [ N , 2 ]_F⊗ [ N , 2 ]_F⊗ [ N , N-4 ]_H + H.c. .
According to Ref. <cit.>, a rank-k anti-symmetric Higgs field in the SU(N) theory can only achieve the symmetry breaking pattern of SU(N) → SU(N-k).
Obviously, the Higgs field of [ N , N-4 ]_H in the rank-2 chiral IRAFFS only contains a component that can be responsible for the (N-4)-th symmetry breaking stage (the EWSB stage), no matter how the previous symmetry breaking stages were achieved.
This specific component is the SM Higgs doublet.
Since the [ N , 2 ]_F is only left with massless 10_F of the SM [One can justify this by looking at the SM fermions denoted by underlines in Tab. <ref>. ], with all other vectorial components integrated out before the (N-4)-th symmetry breaking stage, the corresponding Yukawa coupling in Eq. (<ref>) must give rise to the top quark mass.
In other words, one unique SM Higgs doublet can only originate from a rank-2 chiral IRAFFS that is guaranteed to couple with the top quark naturally at the tree level.
Chen:2023qxi
N. Chen, Y.-n. Mao, and Z. Teng, “The global B L symmetry in
the flavor-unified SU(N) theories,”http://dx.doi.org/10.1007/JHEP04(2024)046 JHEP 04
(2024) 046, http://arxiv.org/abs/2307.07921
arXiv:2307.07921 [hep-ph].
Chen:2024cht
N. Chen, Y.-n. Mao, and Z. Teng, “The Standard Model quark/lepton masses and
the Cabibbo-Kobayashi-Maskawa mixing in an SU(8) theory,”http://arxiv.org/abs/2402.10471 arXiv:2402.10471
[hep-ph].
Georgi:1974sy
H. Georgi and S. L. Glashow, “Unity of All Elementary Particle Forces,”http://dx.doi.org/10.1103/PhysRevLett.32.438 Phys. Rev. Lett. 32 (1974) 438–441.
Fritzsch:1974nn
H. Fritzsch and P. Minkowski, “Unified Interactions of Leptons and
Hadrons,”http://dx.doi.org/10.1016/0003-4916(75)90211-0 Annals
Phys. 93 (1975) 193–266.
Dimopoulos:1981zb
S. Dimopoulos and H. Georgi, “Softly Broken Supersymmetry and SU(5),”http://dx.doi.org/10.1016/0550-3213(81)90522-8 Nucl. Phys. B 193 (1981) 150–162.
Georgi:1979ga
H. Georgi and D. V. Nanopoulos, “Masses and Mixing in Unified Theories,”http://dx.doi.org/10.1016/0550-3213(79)90323-7 Nucl. Phys. B 159 (1979) 16–28.
Ellis:1979fg
J. R. Ellis and M. K. Gaillard, “Fermion Masses and Higgs Representations in
SU(5),”http://dx.doi.org/10.1016/0370-2693(79)90476-3 Phys.
Lett. B 88 (1979) 315–319.
Chen:2021zty
P. Chen, G.-J. Ding, and S. F. King, “SU(5) GUTs with A_4 modular
symmetry,”http://dx.doi.org/10.1007/JHEP04(2021)239 JHEP 04 (2021) 239, http://arxiv.org/abs/2101.12724
arXiv:2101.12724 [hep-ph].
King:2021fhl
S. F. King and Y.-L. Zhou, “Twin modular S_4 with SU(5) GUT,”http://dx.doi.org/10.1007/JHEP04(2021)291 JHEP 04
(2021) 291, http://arxiv.org/abs/2103.02633
arXiv:2103.02633 [hep-ph].
Ding:2021zbg
G.-J. Ding, S. F. King, and C.-Y. Yao, “Modular S_4× SU(5) GUT,”http://dx.doi.org/10.1103/PhysRevD.104.055034 Phys. Rev. D 104 no. 5, (2021) 055034,
http://arxiv.org/abs/2103.16311 arXiv:2103.16311
[hep-ph].
Ding:2021eva
G.-J. Ding, S. F. King, and J.-N. Lu, “SO(10) models with A_4 modular
symmetry,”http://dx.doi.org/10.1007/JHEP11(2021)007 JHEP 11 (2021) 007, http://arxiv.org/abs/2108.09655
arXiv:2108.09655 [hep-ph].
deMedeirosVarzielas:2023ujt
I. de Medeiros Varzielas, S. F. King, and M. Levy, “A modular SU (5) littlest
seesaw,”http://dx.doi.org/10.1007/JHEP05(2024)203 JHEP 05 (2024) 203, http://arxiv.org/abs/2309.15901
arXiv:2309.15901 [hep-ph].
deAnda:2023spb
F. J. de Anda and S. F. King, “Orbifold modular GUT of flavor,”http://dx.doi.org/10.1103/PhysRevD.109.095046 Phys. Rev. D 109 no. 9, (2024) 095046,
http://arxiv.org/abs/2312.09010 arXiv:2312.09010
[hep-ph].
ATLAS:2012yve ATLAS Collaboration, G. Aad et al., “Observation of a new
particle in the search for the Standard Model Higgs boson with the ATLAS
detector at the LHC,”http://dx.doi.org/10.1016/j.physletb.2012.08.020 Phys. Lett. B 716 (2012) 1–29,
http://arxiv.org/abs/1207.7214 arXiv:1207.7214 [hep-ex].
CMS:2012qbp CMS Collaboration, S. Chatrchyan et al., “Observation of a
New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC,”http://dx.doi.org/10.1016/j.physletb.2012.08.021 Phys. Lett. B 716 (2012) 30–61,
http://arxiv.org/abs/1207.7235 arXiv:1207.7235 [hep-ex].
CMS:2022dwd CMS Collaboration, A. Tumasyan et al., “A portrait of the
Higgs boson by the CMS experiment ten years after the discovery.,”http://dx.doi.org/10.1038/s41586-022-04892-x Nature
607 no. 7917, (2022) 60–68,
http://arxiv.org/abs/2207.00043 arXiv:2207.00043
[hep-ex].
Coxeter groups and Billey-Postnikov decompositions
Suho Oh and Edward Richmond
§ ABSTRACT
In this chapter, we give an overview of Billey-Postnikov (BP) decompositions which have become an important tool for understanding the geometry and combinatorics of Schubert varieties. BP decompositions are factorizations of Coxeter group elements with many nice properties in relation to Bruhat partial order. They have played an important role in the classification and enumeration of smooth Schubert varieties. They have also been used in the study of inversion hyperplane arrangements and permutation pattern avoidance. We survey many of these applications.
§ INTRODUCTION
Coxeter groups and their combinatorics play a vital role in the study of Lie groups, flag varieties, and Schubert varieties. Of particular importance are the length function and Bruhat partial order (also called Bruhat-Chevalley order) on a Coxeter group. Geometrically, the length of an element equals the dimension of the corresponding Schubert variety, and the Bruhat order gives closure relations of Schubert cells in a flag variety. Let W be a Coxeter group, and for any w∈ W, we define the Poincaré polynomial
P_w(q):=∑_u≤ w q^ℓ(u)
where ℓ:W→_≥ 0 denotes the length function and ≤ denotes Bruhat order. If W is the Weyl group of a complex reductive Lie group, then P_w(q^2) is the topological Poincaré polynomial with respect to singular cohomology of the corresponding Schubert variety X(w). The central problem we address is to characterize when the polynomial P_w(q) factors in some nice or natural way.
When W=_n is the permutation group, Gasharov proved in <cit.> that the polynomial P_w(q) is a product of factors of the form (1+q+⋯ +q^r) if and only if w avoids the permutation patterns 3412 and 4231.
This is a generalization of the well known result that the Poincaré series on the full permutation group is
∑_w∈_n q^ℓ(w)=∏_k=0^n-1(1+q+⋯+q^k).
Chevalley <cit.> and Solomon <cit.> prove that the Poincaré series of any finite Coxeter group has a similar factorization as in Equation (<ref>), so we can ask if there is an analogue of Gasharov's factorization theorem for elements of other groups. In <cit.>, Lakshmibai and Sandhya prove that a permutation w∈_n avoids the patterns 3412 and 4231 if and only if the Schubert variety X(w) is smooth, and hence Gasharov's factorization theorem is a property of “smooth" permutations. As it turns out, nice Poincaré polynomial factorizations hold not only for smooth permutations, but for “smooth" elements of any Coxeter group of finite Lie-type. We summarize these properties of (rationally) smooth elements in Section <ref>. Of critical importance to these results is the notion of what is now called a Billey-Postnikov (BP) decomposition. BP decompositions are certain factorizations of group elements with many nice properties in relation to intervals in Bruhat order. The name “BP" comes from the paper <cit.> where Billey and Postnikov use these decompositions to give a root-theoretic criterion for the rational smoothness of Schubert varieties. However, the idea of these decompositions dates back to earlier works such as <cit.> and <cit.>. BP decompositions have had numerous applications and have appeared in the study of fiber bundle structures of Schubert varieties <cit.>, inversion hyperplane arrangements <cit.> and permutation pattern avoidance <cit.>. The purpose of this chapter is to provide an overview of BP decompositions and their applications.
We structure this chapter as follows. In Section <ref>, we review the basic properties of Coxeter groups and define BP decompositions. In Section <ref> we give an overview of how BP decompositions are used to study rationally smooth elements of Coxeter groups. In Section <ref>, we discuss the applications of BP decompositions to inversion hyperplane arrangements. In Section <ref> we state and prove how BP decompositions correspond to fiber bundle structures on Schubert varieties. In Section <ref>, we look at iterated BP decompositions and how they are modeled by staircase diagrams. One application of staircase diagrams is that they can be used to enumerate smooth and rationally smooth Schubert varieties. In Section <ref>, we focus on permutation groups and discuss how BP decompositions are connected to permutation pattern avoidance. Finally, in Section <ref> we state some open questions and possible future directions for the study of BP decompositions.
Our hope is that this chapter will provide readers with insight into the nature of BP decompositions and how they are applied. Since this is a survey article, many statements will be given without proof. If a proof is not provided, then we either give a brief outline of the argument or provide references. Many results and concepts are accompanied by illustrative examples.
§ BACKGROUND ON COXETER GROUPS
We review several foundational properties of Coxeter groups. For more details, we refer readers to <cit.>. Let W be a Coxeter group with simple generating set S. In other words, S is a finite set and W is the group generated by S where for any s,t∈ S, we have a relation
(st)^m_st=e
for some m_st∈_>0∪{∞} where m_st=1 if and only if s=t. We say an expression of w∈ W in the simple generators
w=s_1⋯ s_k
is reduced if w cannot be expressed in fewer generators. Any two reduced expressions of w have the same length, so we define the function ℓ:W→ℤ_≥ 0 which maps w∈ W to the length of any reduced expression of w. We call the value ℓ(w) the length of w.
Let w=s_1⋯ s_k be a reduced expression. We say that u≤ w in the Bruhat order if there exists a subsequence (i_1,…, i_j)⊆(1,…, k) such that u=s_i_1⋯ s_i_j is a reduced expression for u. We remark that this definition is known as the sub-word property as Bruhat order has other equivalent definitions (see <cit.>).
One particularly important family of Coxeter groups are the permutation groups on integers {1,…,n}. We will use the notation W=_n when focusing on permutations. As a Coxeter group, _n has a simple generating set S={s_1,…,s_n-1} where s_i corresponds the simple transposition that swaps i and (i+1). These generators satisfy the Coxeter relations
s_i^2=(s_is_j)^2=(s_is_i+1)^3=e for all |i-j|>1.
The permutation group _n is referred to as the Coxeter group of type A_n-1.
The permutation group _3 is generated by S={s_1, s_2} and the Bruhat order is given by the following Hassé diagram:
[scale=0.5]
(max) at (0,4) s_1s_2s_1;
(a) at (-2,2) s_2s_1;
(b) at (2,2) s_1s_2;
(c) at (-2,0) s_1;
(d) at (2,0) s_2;
(min) at (0,-2) e;
(d) – (min) – (c) – (a) – (max) – (b) – (d) – (a);
[preaction=draw=white, -,line width=6pt] (c) – (b);
Irreducible Coxeter groups of finite type are classified into four infinite families and six additional types. This classification is commonly given in terms of Coxeter diagrams (or Dynkin-Coxeter diagrams). The Coxeter diagram of a Coxeter group is a labeled graph with vertex set S and edges (s,t) labeled by the value m_st under the conventions that we draw no edge if m_st=2 and an unlabelled edge if m_st=3. See Figure <ref> for the complete classification of irreducible finite Coxeter groups and note that Coxeter groups of types H_2 and G_2 are the dihedral groups I_5 and I_6 respectively.
If W is the Weyl group of a finite dimensional Lie group, then we will refer to these Coxeter groups as “finite Lie type" (these groups are also called finite crystallographic Coxeter groups). In the classification found in Figure <ref>, these are Coxeter groups of types A_n, B_n, C_n, D_n, E_6,7,8, F_4 and G_2=I_6.
§ INTRODUCTION TO BP-DECOMPOSITION
In this section we introduce the Billey-Postnikov decomposition (BP-decomposition) while also reviewing the basic material needed to understand the decomposition and its applications. Throughout the chapter, W always stands for a Coxeter group, and S always stands for its set of simple generators.
(Subword Property)
Let w = s_1s_2… s_q be a reduced expression. Then we have
u ≤ w if and only if there exists a reduced expression u = s_i_1s_i_2… s_i_k,1 ≤ i_1 < … < i_k ≤ q.
In this section, unless otherwise specified, W is an arbitrary Coxeter group with generating set S. For any u≤ w∈ W, we use the notation
[u,w]:={z∈ W | u≤ z≤ w}
to denote intervals in Bruhat order. For any w∈ W, define the Poincaré polynomial as the rank generating function on the lower order ideal [e,w]:
P_w(q):=∑_z∈ [e,w] q^ℓ(z).
If X(w) denotes the Schubert variety corresponding to w∈ W, then the Poincaré polynomial recovers the Hilbert-Poincaré series on singular cohomology:
P_w(q^2)=∑_k dim(H^k(X(w))) q^k.
For example, if we take w=s_1s_2s_1 as in Example <ref>, then
P_w(q) = 1+2q+2q^2+q^3.
In this case, X(s_1s_2s_1) is the full flag variety of type A_2. See Section <ref> for a more detailed description of Schubert varieties.
Let W=⟨ s_1,s_2,s_3 | s_i^2=e⟩ be the free Coxeter group on three generators and let w=s_1s_2s_3s_1.
Then the Bruhat interval [e,w] is:
[scale=0.5]
(max) at (0,4) s_1s_2s_3s_1;
(a1) at (-2,2) s_1s_2s_3;
(a2) at (-6,2) s_1s_2s_1;
(a3) at (6,2) s_1s_3s_1;
(a4) at (2,2) s_2s_3s_1;
(b1) at (-8,0) s_1s_2;
(b2) at (4,0) s_1s_3;
(b3) at (-4,0) s_2s_1;
(b4) at (0,0) s_2s_3;
(b5) at (8,0) s_3s_1;
(c1) at (-0,-2) s_1;
(c2) at (-4,-2) s_2;
(c3) at (4,-2) s_3;
(min) at (0,-4) e;
(a3)–(b2)–(a1)–(b4)–(a4)–(b3)–(c1)–(b2) –(c3) –(b5) – (a3) – (max) –(a2) –(b3) – (c2) – (min) – (c1) – (b1) – (a1) – (max) – (a4) – (b5) – (c3) – (min);
(c3)–(b4)–(c2)–(b1)–(a2);
(c1) – (b5);
and the Poincaré polynomial is
P_w(q)=1+3q+5q^2+4q^3+q^4.
Observe that the group inverse map w → w^-1 is an automorphism of Bruhat order in the sense that
u ≤ w ↔ u^-1≤ w^-1.
One consequence is
P_w(q) = P_w^-1(q)
for any w∈ W.
Taking w = s_1s_2 from Example <ref>, we see
P_s_1s_2(q)=P_s_2s_1(q)= 1+2q+q^2.
§.§ Quotients and parabolic decompositions
In this section, we discuss parabolic quotients of Coxeter groups.
First, we say that a product w=xy is a reduced factorization if ℓ(w)=ℓ(x)+ℓ(y). Let J ⊆ S and let W_J denote the subgroup of W generated by the set J. Subgroups of form W_J are called parabolic subgroups of W. Each left coset wW_J has a unique representative of minimal length. The set of minimal coset representatives can be defined as
W^J:={w ∈ W | ws > w for all s ∈ J}.
The next proposition is from <cit.>.
Let J ⊆ S. Then the following hold:
* Every w ∈ W has a unique factorization w = vu such that v ∈ W^J and u ∈ W_J.
* The decomposition w=vu is a reduced factorization. In other words,
ℓ(w)=ℓ(v)+ℓ(u).
We call the decomposition w=vu in Proposition <ref> the parabolic decomposition with respect to J. We remark that each w∈ W also has a “left-sided" parabolic decomposition w=uv where v denotes a minimal length representative of the right coset W_Jw. If needed, we denote this set of minimal length representative by ^JW. However, the convention we take is that parabolic decompositions will be “right-sided" decompositions w=vu where v is the minimal element of wW_J and u∈ W_J.
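For permutations, the parabolic decomposition of the proposition above is easy to compute in one-line notation: the minimal coset representative v of wW_J is obtained by sorting the entries of w increasingly within each block of consecutive positions joined by the generators in J, and then u = v^-1w lies in W_J. The sketch below is our own illustration (permutations are composed as functions, and the helper names are ours).

def parabolic_decomposition(w, J):
    # w: permutation in one-line notation (a tuple); J: set of indices i with s_i in J
    # returns (v, u) with w = v o u, v in W^J and u in W_J
    n = len(w)
    # positions i and i+1 lie in the same block exactly when i is in J
    blocks, start = [], 0
    for i in range(1, n):
        if i not in J:
            blocks.append((start, i))
            start = i
    blocks.append((start, n))
    v = list(w)
    for a, b in blocks:
        v[a:b] = sorted(v[a:b])          # v is increasing on each block, so v lies in W^J
    pos_in_v = {val: idx + 1 for idx, val in enumerate(v)}
    u = tuple(pos_in_v[w[i]] for i in range(n))   # u = v^{-1} w lies in W_J
    return tuple(v), u

# for instance, w = s_1 s_2 s_3 s_2 s_1 = 4231 with J = {s_1, s_3}:
print(parabolic_decomposition((4, 2, 3, 1), {1, 3}))   # ((2, 4, 1, 3), (2, 1, 4, 3))

Here 2413 = s_1s_3s_2 and 2143 = s_1s_3, which is the decomposition appearing in the examples below.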
One consequence of Proposition <ref> is that the coset decomposition of the group
W=⨆_v∈ W^J vW_J
respects length in the sense that
∑_w∈ W q^ℓ(w)=(∑_v∈ W^J q^ℓ(v))·(∑_u∈ W_J q^ℓ(u)).
It is natural to ask if the analogous coset decomposition of the interval
[e,w]=⨆_v∈ W^J([e,w]∩ vW_J)
gives a similar factorization of the Poincaré polynomial P_w(q). To make this question precise, we define relative Bruhat intervals and relative Poincaré polynomials. For any J⊂ S and Bruhat interval [u,v], define the Bruhat interval relative to J as
[u,v]^J:=[u,v]∩ W^J.
For any v∈ W^J, we define the Poincaré polynomial relative to J as
P^J_v(q):=∑_z∈ [e,v]^J q^ℓ(z).
Let J⊆ S. We say the parabolic decomposition w=vu such that v∈ W^J and u∈ W_J is a Billey-Postnikov (BP) decomposition with respect to J if the Poincaré polynomial factors as
P_w(q) = P_v^J(q)· P_u(q).
Proposition <ref> implies that w=vu is a BP decomposition with respect to J if and only if there is a graded poset isomorphism:
[e,v]^J× [e,u]≃ [e,w]
where (v',u')↦ v'u'.
Let w=s_1s_2s_3s_2s_1∈_4 and let J={s_1,s_3}. Then
w=vu=(s_1s_3s_2)(s_1s_3)
is a BP decomposition with respect to J. Here we have
[e,v]^J={e, s_2, s_1s_2, s_3s_2, s_1s_3s_2} and [e,u]={e,s_1,s_3,s_1s_3}
with
P_v^J(q)=1+q+2q^2+q^3 and P_u(q)=1+2q+q^2.
The Poincaré polynomial
P_w(q)=(1+q+2q^2+q^3)(1+2q+q^2)=1+3q+5q^2+6q^3+4q^4+q^5.
In Figure <ref>, we assign each coset a different color and see that the interval
[e,v]^J× [e,u]≃ [e,w].
We remark that not all parabolic decompositions are BP decompositions. In fact, BP decompositions are rather special and should not be expected in general.
Let w=s_1s_2s_3s_2s_1∈_4 and let J={s_1,s_2}. Then the parabolic decomposition
w=vu=(s_1s_2s_3)(s_2s_1)
is not a BP decomposition. Here we have
[e,v]^J={e, s_3, s_2s_3, s_1s_2s_3} and [e,u]={e,s_1,s_2,s_2s_1}
with
P_v^J(q)=1+q+q^2+q^3 and P_u(q)=1+2q+q^2.
The Poincaré polynomial
P_w(q)≠ (1+q+q^2+q^3)(1+2q+q^2).
In Figure <ref>, we assign each coset in [e,w] a different color and observe that [e,v]^J× [e,u] and [e,w] are not poset isomorphic.
One important observation contrasting Examples <ref> and <ref> is that the interval cosets [e,w]∩ vW_J all have the same shape as [e,u] in Example <ref> (cosets are distinguished by different colors in Figure <ref>), while they do not in Example <ref>. In particular, when comparing the identity coset [e,w]∩ W_J with the “top" coset [e,w]∩ vW_J, if w=vu is a BP decomposition, then u must be the maximal element of [e,w]∩ W_J. In <cit.>, it is shown that this maximality condition is also sufficient for the existence of a BP decomposition. We remark that van den Hombergh proved in <cit.> that the set [e,w]∩ W_J always has a unique maximal element. This fact was proved separately by Billey, Fan, and Losonczy in <cit.>.
§.§ Characterizing BP decompositions
Our next goal is to list several combinatorial characterizations of a BP decomposition and prove they are equivalent. For any w∈ W, define the support of w as the set
S(w):={s∈ S | s≤ w}.
The support of w can be viewed as the set of generators needed to make any reduced expression of w. We also define the left and right descent sets of w as
D_L(w):={s∈ S | ℓ(sw)≤ℓ(w)}
and
D_R(w):={s∈ S | ℓ(ws)≤ℓ(w)}.
These descent sets can be thought of as the set of generators appearing on the left (respectively right) of some reduced expression of w. Observe that D_L(w)=D_R(w^-1). For example, if w=s_1s_2s_1s_3∈_4,
then
S(w)={s_1,s_2,s_3}, D_L(w)={s_1,s_2}, and D_R(w)={s_1, s_3}.
The following characterization theorem appears in <cit.>.
Let J⊂ S and let w=vu be a parabolic decomposition with respect to J. Then the following are equivalent:
* w=vu is a BP decomposition.
* The map [e,v]^J×[e,u]→ [e,w] given by (v',u')↦ v'u' is bijective.
* u is maximal in [e,w]∩ W_J.
* S(v)∩ J ⊆ D_L(u).
We prove the theorem by showing the equivalencies: (1)↔ (2), (2)↔ (3), and (3)↔ (4).
Proof of (1)↔ (2): By Proposition <ref>, the multiplication map given in part (2) is length preserving and injective. Hence, part (2) says the interval [e,w] decomposes as a product of posets [e,v]^J×[e,u]. This implies that part (1) is equivalent to part (2).
Proof of (2)↔ (3): We first assume the multiplication map is surjective (and hence bijective). Then [e,u]=[e,w]∩ W_J and hence u must be the maximal element in [e,w]∩ W_J. Conversely, assume u is maximal in [e,w]∩ W_J and let x∈ [e,w]. Let x=v'u' denote the parabolic decomposition of x with respect to J. By <cit.>, since x≤ w, we have v'≤ v. But u is maximal in [e,w]∩ W_J and hence u'≤ u. Thus the multiplication map is surjective.
Proof of (3)↔ (4): First suppose that u is maximal in [e,w]∩ W_J and s∈ S(v)∩ J. Then su∈[e,w] ∩ W_J and by the maximality of u, we must have su≤ u. Hence s∈ D_L(u). Conversely, suppose that S(v)∩ J⊆ D_L(u) and hence we can write a reduced factorization u=u_0u' where u_0 is the maximal element in W_S(v)∩ J. Let x∈ [e,w]∩ W_J. Since x≤ w=vu=(vu_0)u', we can write a reduced factorization x=u_1u_2 where u_1,u_2∈ W_J with u_1≤ vu_0 and u_2≤ u'. In particular, we have u_1∈ W_S(v)∩ J and hence x=u_1u_2≤ u_0u'=u. Thus u is maximal in [e,w]∩ W_J.
For example, we start with w = s_1s_2s_3s_2s_1 in type A, with J = {s_1,s_2}, as in Figure <ref>. The parabolic decomposition is v = s_1s_2s_3 and u = s_2s_1. We have P_w(q) = 1+3q+5q^2+6q^3+4q^4+q^5, which is not equal to P_v^J(q)· P_u(q)=(1+q+q^2+q^3)(1+2q+q^2). The map {e,s_3,s_2s_3,s_1s_2s_3}× [e,s_2s_1] → [e,s_1s_2s_3s_2s_1] is not a bijection: in the figure, the cosets are colored differently, and the red and blue cosets have six elements each. The element u = s_2s_1 is clearly not the maximal element of [e,w]∩ W_J=W_J, which is the red coset in the figure. Instead, the maximal element of W_J is s_1s_2s_1. Lastly, the set S(v) ∩ J = {s_1,s_2} is not a subset of D_L(u) = {s_2}.
This time we again look at w = s_1s_2s_3s_2s_1 in type A, but with J = {s_1,s_3}, as in Figure <ref>. The parabolic decomposition is v = s_1s_3s_2 and u = s_1s_3. We have P_w(q) = 1+3q+5q^2+6q^3+4q^4+q^5, which equals (1+q+2q^2+q^3)(1+2q+q^2). The map {e,s_2,s_1s_2,s_3s_2,s_1s_3s_2}×{e,s_1,s_3,s_1s_3}→ [e,w] is a bijection: in the figure, every colored coset has the same poset structure. The element u=s_1s_3 is clearly the maximal element of W_J, which is the red coset in the figure. Lastly, the set S(v) ∩ J = {s_1,s_3} is a subset of D_L(u) = {s_1,s_3}.
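Using part (4) of the characterization theorem above, testing whether a parabolic decomposition of a permutation is a BP decomposition is a short computation. The sketch below is our own illustration; it reuses parabolic_decomposition from the earlier sketch, and the remaining helper names are ours.

def support(v):
    # s_i lies in S(v) iff v does not stabilize the set {1, ..., i}
    return {i for i in range(1, len(v)) if set(v[:i]) != set(range(1, i + 1))}

def left_descents(u):
    # s_i lies in D_L(u) iff the value i+1 occurs to the left of the value i in u
    pos = {val: idx for idx, val in enumerate(u)}
    return {i for i in range(1, len(u)) if pos[i + 1] < pos[i]}

def is_bp_decomposition(w, J):
    # part (4) of the characterization: w = vu is BP iff S(v) & J is contained in D_L(u)
    v, u = parabolic_decomposition(w, J)
    return support(v) & set(J) <= left_descents(u)

print(is_bp_decomposition((4, 2, 3, 1), {1, 3}))   # True,  as in the example with J = {s_1, s_3}
print(is_bp_decomposition((4, 2, 3, 1), {1, 2}))   # False, as in the example with J = {s_1, s_2}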
While parts (3) and (4) of Theorem <ref> seem like less conventional ways to describe a BP decomposition, we will see in the subsequent sections these characterizations are very useful when working with BP decompositions.
§ RATIONALLY SMOOTH ELEMENTS OF COXETER GROUPS
In this section, we discuss BP-decompositions and “rationally smooth" elements of Coxeter groups.
We say w∈ W is rationally smooth if the coefficients of Poincaré polynomial
P_w(q)=∑_i=0^ℓ(w) a_i q^i
satisfy a_i=a_ℓ(w)-i for all 0≤ i≤ℓ(w). In other words, P_w(q) is a palindromic polynomial.
Similarly, we say v∈ W^J is rationally smooth with respect to J if P_v^J(q) is a palindromic polynomial.
For example, w=s_1s_2s_1∈_3 as in Example <ref> is rationally smooth, but the element w=s_1s_2s_3s_1 in Example <ref> is not rationally smooth. Note that rational smoothness and rational smoothness with respect to J do not imply each other.
Let w=s_2s_1s_3s_2∈_4, then w is rationally smooth with respect to J={s_1,s_3}, but is not rationally smooth (See Figure <ref>).
Here we have
P_w(q)=1+3q+5q^2+4q^3+q^4 and P_w^J(q)=1+q+2q^2+q^3+q^4.
We also see that if w=s_1s_3s_2∈_4, then w is rationally smooth, but not rationally smooth with respect to J={s_1,s_3}. The polynomials
P_w(q)=1+3q+3q^2+q^3 and P_w^J(q)=1+q+2q^2+q^3.
The term “rationally smooth" is derived from the corresponding Schubert variety being rationally smooth in the geometric sense. The geometric notion of rational smoothness was developed by Kazhdan and Lusztig <cit.> where they show that a Schubert variety is rationally smooth if and only if certain Kazhdan-Lusztig polynomials are trivial. It was proved by Carrell and Peterson in <cit.> that a Schubert variety X(w) is rationally smooth if and only if P_w(q) is a palindromic polynomial (this result also holds in the relative case with X^J(w) and P^J_w(q)). Note that Definition <ref> is well-defined for elements in any Coxeter group, even if there is no corresponding Schubert variety. Any smooth variety is rationally smooth for topological reasons, however the converse is not true. If W is a simply-laced Coxeter group of finite type (i.e. type A, D or E in Figure <ref>), then X(w) is smooth if and only if it is rationally smooth. This fact was proved by Deodhar in type A <cit.> and then later in all simply-laced types by Carrell and Kuttler using ideas by Peterson in <cit.>.
§.§ BP decompositions of rationally smooth elements
The next theorem connects BP decompositions to rationally smooth permutations and is a rephrasing of results due to Gasharov in <cit.> and, independently, due to Lascoux in <cit.>.
Let w∈_n. Then w is smooth if and only if either w or w^-1 has a BP decomposition vu with respect to J=S∖{s_n-1} where
P_w(q)=(1+q+⋯ + q^ℓ(v))· P_u(q)
and u∈ W_J≃_n-1 is smooth.
In Theorem <ref>, the relative interval [e,v]^J is a chain and hence relative Poincaré polynomial
P^J_v(q)=1+q+⋯ + q^ℓ(v).
Hence v is rationally smooth with respect to J. Polynomials of this form will come up frequently, so we use q-integer notation
[r]_q:=1+q+⋯ +q^r-1
for r∈ℤ_>0. Since P_w(q)=P_w^-1(q), the reverse implication of Theorem <ref> follows, by induction, from the fact that products of q-integers are palindromic polynomials. In Section <ref>, we provide a new proof of the forward direction of Theorem <ref> using "split-pattern" avoidance, which was developed by Alland and Richmond in <cit.> to describe BP decompositions in _n in one-line notation.
The following is a generalization of Theorem <ref> to Coxeter groups of finite Lie-type.
Let W be a Coxeter group of finite Lie-type and let w∈ W such that |S(w)|≥ 2. Then w is rationally smooth if and only if there is a leaf s ∈ S(w) of the Coxeter diagram of W_S(w) such that either w or w^-1 has a BP decomposition vu with respect to J= S(w) ∖{ s} where
* v is rationally smooth with respect to J and
* u is rationally smooth.
Furthermore, s∈ S(w) can be chosen so that v is either the maximal length element in W_S(v)∩ W^J, or one of the following holds:
* W_S(v) is of type B_n or C_n, with either
* J = S(w) ∖{s_1}, and v = s_ks_k+2… s_ns_n-1… s_1, for some 1 < k ≤ n.
* J = S(w) ∖{s_n} with n ≥ 2 and v = s_1 … s_n
* W_S(v) is of type F_4, with either
* J = S(w) ∖{s_1} and v = s_4s_3s_2s_1
* J = S(w) ∖{s_4} and v = s_1s_2s_3s_4
* W_S(v) is of type G_2, and v is one of the elements
s_2s_1, s_1s_2s_1, s_2s_1s_2s_1, s_1s_2, s_2s_1s_2, s_1s_2s_1s_2.
We use the standard conventions of Bourbaki <cit.> for the vertex labelling of Coxeter-Dynkin diagrams.
Note that the latter part of Theorem <ref> only concerns Coxeter groups that are not simply laced. For the classical types B, C and D, Theorem <ref> was proved by Billey in <cit.>. The exceptional types were later proved in <cit.> by Billey and Postnikov and in <cit.> by Oh and Yoo. We remark that if w is rationally smooth of type B/C, then P_w(q) factors into a product of q-integers as in the type A case <cit.>. This is not necessarily true for Poincaré polynomials of rationally smooth elements of other types. We also remark that, while the notion of rational smoothness is equivalent in types B and C, the notion of smoothness is not since the set of smooth Schubert varieties differs in these types. We refer the reader to <cit.> and <cit.> for the distinctions between smoothness in types B and C.
We say a BP decomposition w=vu with respect to J is a Grassmannian BP decomposition if J is a maximal proper subset of S(w). Grassmannian BP decompositions are “optimal" in the sense that they minimize the degree of the factor P_v^J(q). Note that all the BP decompositions in Theorem <ref> are Grassmannian. Moreover, since J=S∖{s} where s is leaf in the Coxeter diagram, the poset structure of the relative interval [e,v]^J is less complex compared to when s is not a leaf. For example, in type A, the interval [e,v]^J is always a chain of length ℓ(v) when J=S∖{s} and s is a leaf. Grassmannian BP decompositions are discussed in more detail in Sections <ref> and <ref>.
One issue with Theorem <ref> is the condition that “either w or w^-1 has a BP decomposition". In Section <ref>, we discuss how BP decompositions correspond to fiber bundle structures on Schubert varieties. Since X(w) is not isomorphic to X(w^-1) in general, we would like an analogue of Theorem <ref> without the “w or w^-1" condition. In <cit.>, Richmond and Slofstra prove the following.
Let W be a Coxeter group of finite Lie-type. If w∈ W is rationally smooth, then w has a Grassmannian BP-decomposition with respect to J=S(w)∖{s} for some s∈ S(w).
The sacrifice in Theorem <ref> is that we may not necessarily choose s∈ S(w) to be a leaf. Theorem <ref> relies on Theorem <ref> and we give a brief outline of the proof in Section <ref>.
Let w = s_2s_1s_3∈_4. The support set S(w)={s_1,s_2,s_3} with Coxeter diagram
[scale=.4]
[thick] (0,0)–(2,0)–(4,0);
[thick,fill=white] (0,0) circle (.3cm) node[label=[label distance=.1cm]-90:s_1] ;
[thick,fill=white] (2,0) circle (.3cm) node[label=[label distance=.1cm]-90:s_2];
[thick,fill=white] (4,0) circle (.3cm) node[label=[label distance=.1cm]-90:s_3];
The Poincaré polynomial is
P_w(q) = 1+3q+3q^2+q^3
is palindromic and hence w is rationally smooth.
If J=S(w)∖{s_3}={s_1,s_2}, then the parabolic decomposition with respect to J
w=vu=(s_2s_3)(s_1)
is not a BP decomposition.
[scale=0.5]
[black!20!green] (a1) at (0,2) s_2s_1s_3;
[black!20!green] (b2) at (4,0) s_2s_3;
[black!20!red] (b3) at (-4,0) s_2s_1;
[black!20!blue] (b4) at (0,0) s_1s_3;
[black!20!red] (c1) at (-0,-2) s_2;
[black!20!red] (c2) at (-4,-2) s_1;
[black!20!blue] (c3) at (4,-2) s_3;
[black!20!red] (min) at (0,-4) e;
(min)–(c2)–(b3)–(a1)–(b4)–(c2);
(min)–(c1)–(b3);
(min)–(c3)–(b4)–(a1)–(b2)–(c3);
(c1)–(b2);
Likewise, the parabolic decomposition w=vu=(s_2s_1)(s_3) with respect to J={s_2,s_3} is not a BP decomposition, and hence w has no “leaf removed" BP decomposition.
If we take J =S(w)∖{s_2}= {s_1,s_3}, then
w =vu= (s_2)(s_1s_3)
is a Grassmannian BP decomposition. Here we have
P_w(q) = P^J_s_2(q) · P_s_1s_3(q)=(1+q)· (1+2q+q^2).
[scale=0.5]
[black!20!blue] (a1) at (0,2) s_2s_1s_3;
[black!20!blue] (b2) at (4,0) s_2s_3;
[black!20!blue] (b3) at (-4,0) s_2s_1;
[black!20!red] (b4) at (0,0) s_1s_3;
[black!20!blue] (c1) at (-0,-2) s_2;
[black!20!red] (c2) at (-4,-2) s_1;
[black!20!red] (c3) at (4,-2) s_3;
[black!20!red] (min) at (0,-4) e;
(min)–(c2)–(b3)–(a1)–(b4)–(c2);
(min)–(c1)–(b3);
(min)–(c3)–(b4)–(a1)–(b2)–(c3);
(c1)–(b2);
Observe that the inverse w^-1=s_1s_3s_2 does have a leaf removed BP decomposition with respect to both J={s_1,s_2} and J={s_2,s_3}.
§.§ Background on Permutations
In this section, we focus on the permutation group _n. For any n∈ℤ_>0, let [n]:={1,2,…, n}. Each permutation w∈_n corresponds to a bijection w:[n]→ [n] and has a unique presentation in one-line notation w=w(1)w(2)⋯ w(n). Under the Coxeter presentation of _n, the generators s_i correspond to the simple transpositions swapping i and (i+1). The right action of s_i on the one-line notation of w swaps the entries w(i) and w(i+1), while the left action swaps the positions of the values i and (i+1). The length of a permutation can be calculated by counting inversions:
ℓ(w)=#{(i,j) | i>j and w(i)<w(j)}.
The Bruhat partial order is generated by the relations w≤ w' where w' is w with two entries swapped and ℓ(w)<ℓ(w').
For example, the Bruhat order on _4 is given in Figure <ref>.
It will be common in this chapter to state results for general Coxeter groups and then give more details in the case of permutations. Sometimes it will be more convenient to use one-line notation over Coxeter theoretic reduced words to represent permutations. In Section <ref>, we give a detailed overview of how BP-decompositions on permutations can be described using pattern avoidance. Pattern avoidance has been a remarkable tool used to describe many properties of both permutations and Schubert varieties. A survey of many of these results can be found in <cit.>.
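In small rank, the pattern-avoidance and palindromicity characterizations of (rational) smoothness can be cross-checked by brute force. The following sketch is our own illustration; it reuses poincare_poly from the earlier sketch and the helper names are ours.

from itertools import combinations, permutations

def contains_pattern(w, p):
    # does w contain a subsequence whose entries appear in the same relative order as p?
    k = len(p)
    for idx in combinations(range(len(w)), k):
        vals = [w[i] for i in idx]
        if all((vals[a] < vals[b]) == (p[a] < p[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def is_smooth_permutation(w):
    # Lakshmibai-Sandhya criterion: X(w) is smooth iff w avoids 3412 and 4231
    return not contains_pattern(w, (3, 4, 1, 2)) and not contains_pattern(w, (4, 2, 3, 1))

def is_palindromic(coeffs):
    return coeffs == coeffs[::-1]

# sanity check in S_4 (poincare_poly is from the earlier sketch):
# P_w(q) is palindromic exactly when w avoids 3412 and 4231
for w in permutations(range(1, 5)):
    assert is_palindromic(poincare_poly(w)) == is_smooth_permutation(w)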
We conclude this section with a one-line notation version of Theorem <ref> that will be important in Section <ref> on hyperplane arrangements. Given w ∈_n, we define the operation (w,i) as the permutation in _n-1 obtained by removing the entry w(i) from w and then relabeling the remaining entries with [n-1] while maintaining the relative order. For example, (635214,3)=53214.
Let w ∈_n be a smooth permutation and assume w(d) = n and w(n) = e. Then at least one of the following two statements is true:
* w(d) > w(d+1) > … > w(n), or
* w^-1(e) > w^-1(e+1) > … > w^-1(n).
In both cases, the Poincaré polynomial factors as
P_w(q) = [m+1]_q · P_u(q),
where
* u = (w,d) and m = n-d in the first case and
* u = (w,n) and m = n-e in the second case.
Let w=2431 = s_1s_2s_3s_2∈_4. We have d=2 and e=1. Observe that w(2) = 4 > w(3) = 3 > w(4) = 1, so the first statement of Corollary <ref> holds. We have u = (2431,2) = 231 = s_1s_2 and m = 4-2 = 2. The Poincaré polynomial factors
P_2431(q) = (1+q+q^2)· P_231(q)=(1+q+q^2)(1+2q+q^2).
This factorization corresponds to the BP decomposition of
w^-1=(s_2s_3)(s_2s_1)
with respect to J={s_1,s_2}. In Figure <ref>, we highlight this decomposition in the Bruhat interval [1234,2431].
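The recursion of the corollary above is also easy to implement. The following sketch (our own illustration; helper names are ours) returns the degrees of the q-integer factors of P_w(q) for a smooth permutation w, applying case (2) of the corollary whenever case (1) fails.

def remove_entry(w, i):
    # the operation (w, i): delete the entry w(i) (1-based position i) and relabel
    removed = w[i - 1]
    rest = [x for j, x in enumerate(w, start=1) if j != i]
    return tuple(x - 1 if x > removed else x for x in rest)

def q_integer_exponents(w):
    # for a smooth permutation w, returns [m_1 + 1, m_2 + 1, ...] such that
    # P_w(q) = [m_1 + 1]_q [m_2 + 1]_q ..., following the recursion in the corollary
    n = len(w)
    if n == 1:
        return []
    d = w.index(n) + 1     # position of the value n
    e = w[-1]              # value in the last position
    if all(w[i] > w[i + 1] for i in range(d - 1, n - 1)):
        return [n - d + 1] + q_integer_exponents(remove_entry(w, d))
    return [n - e + 1] + q_integer_exponents(remove_entry(w, n))

print(q_integer_exponents((2, 4, 3, 1)))   # [3, 2, 2], giving P_2431(q) = [3]_q [2]_q [2]_q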
§ HYPERPLANE ARRANGEMENTS
In this section, we will give an overview of one of the major applications of the Billey-Postnikov decomposition, focusing on hyperplane arrangements. Let W be a Coxeter group of finite Lie type. For each w ∈ W, we will compare the Poincaré polynomial P_w(q) with another polynomial, which comes from an associated hyperplane arrangement.
The poset structure we assign to the chambers of the hyperplane arrangements we study will be motivated by the weak Bruhat order on W.
Let (W,S) be a Coxeter system and let u,w ∈ W. The right and left weak Bruhat orders ≤_R and ≤_L are generated by the following cover relations.
* We have u ≤_R w if w = us, for some s∉ D_R(u).
* We have u ≤_L w if w = su, for some s∉ D_L(u).
An example of left weak Bruhat order of _3 is drawn in Figure <ref>. On the right side of the figure is the usual (strong) Bruhat order of _3. Notice that the set of elements is the same, and the rank of each element is the same between the two posets. This is true in general for any Coxeter group <cit.>.
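In one-line notation the weak order covers are straightforward to list: s_i is a right descent of w exactly when w(i) > w(i+1), and the covers of w in right weak order are the products ws_i over the remaining generators. A minimal sketch (our own illustration) is given below.

def right_descents(w):
    # s_i is a right descent of w iff w(i) > w(i+1)
    return {i for i in range(1, len(w)) if w[i - 1] > w[i]}

def right_weak_covers(w):
    # covers of w in right weak order: w s_i for each s_i outside D_R(w)
    covers = []
    for i in range(1, len(w)):
        if i not in right_descents(w):
            u = list(w)
            u[i - 1], u[i] = u[i], u[i - 1]   # right multiplication by s_i swaps positions i, i+1
            covers.append(tuple(u))
    return covers

print(right_weak_covers((1, 3, 2)))   # [(3, 1, 2)]; s_2 is already a descent of 132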
§.§ Hyperplane arrangement of a Coxeter group
Each finite Coxeter group W is naturally associated with a hyperplane arrangement through a root system. Let Φ be a finite collection of non-zero vectors in some Euclidean space ^n. For each α∈Φ, we define the reflection s_α:^n→^n by
s_α(x):=x-2(α,x)/(α,α)α.
We say Φ is a root system of W if the following hold:
* Φ∩ℝα={α,-α} for each α∈Φ,
* s_α(Φ)⊆Φ for all α∈Φ, and
* W≃⟨ s_α | α∈Φ⟩.
The vectors α∈Φ are called roots. Let S denote the set of simple generators of W. For each s∈ S, there is a unique root (up to sign) α_s which satisfies the condition that s(α_s)=-α_s. We define a set of simple roots
Δ:={α_s | s∈ S}
by selecting ±α_s that all lie on one side of a suitably generic, but fixed hyperplane in ^n. It can be shown that each α∈Φ is either a totally positive or totally negative linear combination of simple roots, so we can decompose
Φ=Φ^+⊔Φ^-
into collections of positive and negative roots with respect to Δ. For more on root systems of Coxeter groups, see <cit.> or <cit.>.
To each α∈Φ, there is the corresponding hyperplane given by
H_α:={x∈^n | (α,x)=0}.
Note that H_α=H_-α and that if x∈ H_α, then s_α(x)=x. The collection of hyperplanes
_W:={H_α | α∈Φ^+}
is called the Coxeter arrangement of W. Since each H_α contains the origin in ^n, _W is an example of a central hyperplane arrangement.
For each element w ∈ W, we can take a subset of hyperplanes in _W corresponding to the inversions of w.
We define the inversion set of w as the set of positive roots
Φ_w := {α∈Φ^+ | w(α)∈Φ^- }.
and the inversion hyperplane arrangement
_w:={H_α | α∈Φ_w}.
If w_0 denotes the longest element in W, then _w_0=_W. For permutations, the root-theoretic inversion set Φ_w corresponds to the usual inversion set of pairs:
{(i,j)∈ [n]^2 | i<j and w(i) > w(j)}.
Consider the permutation group _n and let ℝ^n be a vector space with coordinate basis {x_1,…,x_n}. The group _n acts on ℝ^n by permuting the coordinate basis elements. The set of vectors
Φ={x_i-x_j | i≠ j}
is a root system of _n with positive roots Φ^+={x_i-x_j | i<j} and simple roots Δ={x_i-x_i+1 | 1≤ i<n}. If α=x_i-x_j, then the hyperplane H_α is defined by the equation x_i=x_j. The Coxeter arrangement of _n is
__n={x_i=x_j | i≠ j}.
For any w∈_n, we have the inversion arrangement
_w={x_i=x_j | i<j and w(i) > w(j) }.
Note that the hyperplane ∑_i x_i=0 is invariant under the action of _n. Hence we can realize __n as a hyperplane arrangement in
ℝ^n-1≃{x∈ℝ^n | ∑_i x_i=0}.
This reduction will help us visualize hyperplane arrangements in examples.
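The reflection formula above is easy to check numerically. The short sketch below (our own illustration) implements s_α and verifies that, for the root α = x_1 - x_2, the reflection simply swaps the first two coordinates.

def reflect(alpha, x):
    # s_alpha(x) = x - 2 (alpha, x) / (alpha, alpha) * alpha
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    c = 2 * dot(alpha, x) / dot(alpha, alpha)
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

alpha = (1, -1, 0, 0)                          # the root x_1 - x_2 in R^4
print(reflect(alpha, (5.0, 7.0, 2.0, 3.0)))    # (7.0, 5.0, 2.0, 3.0): coordinates 1 and 2 swapped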
For any inversion arrangement _w, let r_0 denote the fundamental chamber, defined as the set of points x∈ℝ^n such that (α,x)>0 for all α∈Φ_w. We define the distance enumerating polynomial on _w as
R_w(q) :=∑_r q^d(r_0,r)
where the sum
is over all chambers of the arrangement _w and d(r_0,r) is the minimum number of hyperplanes separating
r_0 and r.
In Figure <ref>, we have the inversion arrangements _321 and _312. Since 321 is the longest element of _3, we have _321 = __3 which consists of the three hyperplanes
__3={x_1=x_2, x_1=x_3, x_2=x_3}
Moreover, we can label each region with permutation in _3. The fundamental chamber is labelled with the identity permutation 123. Starting from this, we measure the distance between each chamber and the fundamental chamber by counting the minimal number of hyperplanes needed to cross to reach the fundamental chamber. We highlight this distance in blue. We can see that
R_321(q) = 1+2q+2q^2 + q^3.
For _312, we remove the hyperplane x_2 = x_3 from _321. The fundamental chamber is the unique chamber that contains the identity label 123. Counting in a similar way, we obtain
R_312(q) = 1+2q+q^2.
Our main goal of this section is to prove the following result found in <cit.> and <cit.>:
Let W be a Coxeter group of finite Lie-type. Then w∈ W is rationally smooth if and only if P_w(q) = R_w(q).
Let w=4321 denote the longest permutation in _4. Then
_4321={x_1=x_2, x_1=x_3, x_1=x_4, x_2=x_3, x_2=x_4, x_3=x_4 }.
In Figure <ref>, we label the chambers by the values d(r_0,r). By the symmetry of the picture, we can see that
R_4321=1+3q+5q^2+6q^3+5q^4+3q^5+q^6.
The rank generating function of [1234,4321] in Figure <ref> is the same polynomial, so this verifies the fact that P_4321(q) = R_4321(q).
If w=2431 (See Figure <ref>), then inversion arrangement
_2431={x_1 = x_4, x_2 = x_3, x_2 = x_4, x_3 = x_4}
and R_2431(q)=1+3q+4q^2+3q^3+q^4. From Figure <ref>, we see that w is smooth and P_2431(q) = R_2431(q).
If w=4231, then _4231=_4321∖{x_2=x_3}.
In Figure <ref>, we see
R_4231(q)=1+4q+4q^2+4q^3+4q^4+q^5.
In this case w is not smooth (see Example <ref>) and
P_4231(q)= 1 + 3q + 5q^2 + 6q^3 + 4q^4 + q^5.
§.§ Inversion arrangements for permutations
In this section, we provide a proof of Theorem <ref> for permutations. The main strategy is to show that R_w(q) follows the exact same decomposition as P_w(q) in Corollary <ref>.
For an undirected graph G on vertex set {1,…,n}, the graphical arrangement, _G is the hyperplane arrangement in ^n with hyperplanes x_i=x_j for all edges (i,j) of G. In the case G is a complete graph on n vertices, we get the Coxeter arrangement __n.
Given a permutation w ∈_n, we define its inversion graph G_w as an undirected graph on vertices [n]={1,…,n} and edges (i,j) whenever we have i < j and w(i) > w(j). Note that the inversion arrangement _w is the graphical arrangement _G_w.
An acyclic orientation on G is an assignment of directions to the edges of G so that no directed cycles are formed. It is easy to see that the regions of _G are in bijection with acyclic orientations of G. Indeed, if 𝒪 is an acyclic orientation of G, then we interpret each directed edge i → j of 𝒪 as x_i < x_j. This corresponds to choosing a side of each hyperplane of _G, hence uniquely defining a region in _G.
From this observation, the distance enumerating polynomial R_w(q) can be described in terms of acyclic orientations of the graph G_w. For an acyclic orientation 𝒪, let des(𝒪) be the number of edges oriented as i → j in 𝒪 with i>j (each such edge corresponds to a descent of w). We define
R_G(q) := ∑_𝒪 q^des(𝒪),
where the sum runs over all acyclic orientations 𝒪 of G.
It can be shown that R_G_w(q)=R_w(q). A clique of G is a subgraph of G such that it is isomorphic to a complete graph. Given a graph G and a vertex k of G, let G∖ k denote the graph obtained by deleting k and its adjacent edges in G. The following lemma is from <cit.>.
Suppose that a graph G on vertex set [n] has a vertex k that satisfies the following two conditions:
* The neighbors of k form a clique in G.
* Either all neighbors of k are less than k or all neighbors of k are greater than k.
Then R_G(q) = [m+1]_q· R_G ∖ k(q) where m is the degree of the vertex k.
Consider the inversion graph of w=2431
[scale=0.25]
(0,0) circle (7pt) node[label=[label distance=.1cm]180:4];
(0,4) circle (7pt) node[label=[label distance=.1cm]180:1];
(4,4) circle (7pt) node[label=[label distance=.1cm]0:2];
(4,0) circle (7pt) node[label=[label distance=.1cm]0:3];
[thick] (0,0)–(4,4)–(4,0)–(0,0)–(0,4);
corresponding to the inversions (1,4),(2,3),(2,4),(3,4). The neighbors of the vertex 2 are {3,4}, which form a clique since (3,4) is an edge. Moreover, all vertices in this clique are bigger than 2. By Lemma <ref>, we have
R_2431(q) = (1+q+q^2)· R_231(q).
Comparing this to P_2431(q) in Example <ref>, we see these polynomials decompose in the exact same manner.
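The polynomial R_G(q) can be computed directly from this description by enumerating orientations. The following sketch is our own illustration (the helper names are ours); for larger permutations one would want something less naive than checking all orientations.

from itertools import product

def inversion_graph(w):
    # edges (i, j) with i < j and w(i) > w(j)
    n = len(w)
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1) if w[i - 1] > w[j - 1]]

def is_acyclic(nodes, arcs):
    # depth-first search for a directed cycle
    color = {v: 0 for v in nodes}            # 0 = unvisited, 1 = on the stack, 2 = finished
    def dfs(v):
        color[v] = 1
        for u in arcs.get(v, []):
            if color[u] == 1 or (color[u] == 0 and not dfs(u)):
                return False
        color[v] = 2
        return True
    return all(color[v] == 2 or dfs(v) for v in nodes)

def R_poly(w):
    # coefficients of R_{G_w}(q): sum of q^{des(O)} over acyclic orientations O of G_w,
    # where des(O) counts edges oriented from the larger endpoint to the smaller one
    edges = inversion_graph(w)
    nodes = range(1, len(w) + 1)
    coeffs = [0] * (len(edges) + 1)
    for choice in product([0, 1], repeat=len(edges)):
        arcs = {}
        for (i, j), c in zip(edges, choice):
            src, tgt = (i, j) if c == 0 else (j, i)
            arcs.setdefault(src, []).append(tgt)
        if is_acyclic(nodes, arcs):
            coeffs[sum(choice)] += 1
    return coeffs

print(R_poly((2, 4, 3, 1)))   # [1, 3, 4, 3, 1], agreeing with P_2431(q)

Comparing with poincare_poly from the earlier sketch, one can check over all of _4 that R_w(q) = P_w(q) exactly when w is smooth, as predicted by the theorem above.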
Let w∈_n. We need to check that the recursion in Lemma <ref> behaves exactly the same way as the recursion in Corollary <ref>. Using d and e as in Corollary <ref>, the condition w(d) > w(d+1) > ⋯ > w(n) holds if and only if the neighbors of the vertex d in G_w form a clique (and these neighbors are all greater than d), while the condition w^-1(e) > ⋯ > w^-1(n) holds if and only if the neighbors of the vertex n in G_w form a clique (and these neighbors are all less than n). In either case, Lemma <ref> applies to the vertex that is deleted in Corollary <ref>, so the two recurrences are the same whenever w is a smooth permutation.
§.§ The general case
We now consider the case when W is a Coxeter group of finite Lie type.
As with permutations, the main idea is to use BP-decompositions to show that the polynomials P_w(q) and R_w(q) follow the same recursion. In type A, the analysis was simpler since we could always choose v so it is the maximal length element of W_S(v)∩ W^J, in the context of Theorem <ref>.
For other types there are more cases of v to consider, so the proof is more technical. Although the type A case has already been covered, the examples we use in this section to illustrate the ideas will also come from type A for simplicity. We will utilize Theorem <ref> as our main tool for decomposing the polynomials. The results in this section are due to McAlmon, Oh, and Yoo in <cit.> and Oh and Yoo in <cit.>.
Let 𝒜 be a central hyperplane arrangement with a fixed fundamental chamber r_0 and let Q_𝒜 denote the set of chambers of 𝒜. We define a poset structure on Q_𝒜 generated by the covering relations r_1<r_2 if the chamber r_1 is adjacent to the chamber r_2 and d(r_1,r_0)=d(r_2,r_0)-1.
If 𝒜' is a subarrangement of 𝒜 and r∈ Q_𝒜', we define the induced subposet Q_𝒜,𝒜',r to be the subposet of Q_𝒜 obtained by restricting to the chambers of 𝒜 contained in r. We say that 𝒜 is uniform with respect to 𝒜' if for all chambers r of 𝒜', the induced subposets Q_𝒜,𝒜',r are all isomorphic. In this case, we use Q_𝒜,𝒜' to denote the poset.
We consider the inversion arrangement of w=4132:
_4132={x_1 = x_2, x_1 = x_3, x_1 = x_4, x_3 = x_4}
Now consider the hyperplane arrangement _3124 which is a subarrangement of _4132 by removing the hyperplanes x_1 = x_4 and x_3 = x_4. In Figure <ref>, we highlight _3124 in yellow.
Let r_0' denote fundamental chamber of _3124. We see that r_0' contains three chambers from _4132 and the poset Q__4132,_3124,r_0' is a chain of length 3. The same is true for all other chambers of _4132 and hence _4132 is uniform with respect to _3124.
Recall that if w_0 is the longest element of W, the arrangement _w_0 is the Coxeter arrangement of W. Here each chamber is indexed with a permutation w ∈ W and two chambers u,w are adjacent if and only if w=su for some s∈ S. Hence, the poset Q__w_0 where w_0 is the longest element of W is exactly the (left) weak Bruhat order of W. Recall that the weak Bruhat order of W and the strong Bruhat order of W are different poset structures on the same set of elements with the same rank <cit.>. From this we get the following lemma.
Let w_0 be the longest element of W. Then P_w_0(q) = R_w_0(q). Furthermore, for any J⊆ S, if u_0 is the longest element of W_J for some J ⊂ S, then _w_0 is uniform with respect to _u_0.
Each chamber of _w_0 is indexed by an element w ∈ W and each w ∈ W has a parabolic decomposition vu where u ∈ W_J and v ∈ W^J. The chambers indexed by vu with common u ∈ W_J are contained in the same chamber indexed by u in _u_0. For each chamber u in _u_J, the chambers of _w_0 contained in u are only separated by hyperplanes in _w_0∖_u_0. The poset Q__w_0,_u_0 is the left weak Bruhat order of W^J.
We use BP decompositions to develop some tools needed for the recursion on R_w(q) when w is rationally smooth. Let J⊆ S and suppose we have a BP decomposition w=vu with respect to J. By Theorem <ref> part (4), we have that S(v)∩ J⊆ D_L(u). In particular, we can write a reduced factorization
u=u_S(v)∩ J· u'
where u_S(v)∩ J is the longest element of W_S(v)∩ J. Theorem <ref> implies that if w∈ W is rationally smooth, then either w or w^-1 has Grassmannian BP decomposition with respect to some J of the form
v· (u_S(v)∩ J· u')
For notational simplicity, let I:=S(v). Given such a decomposition, we decompose
_w=_0⊔_1⊔_2
where
_2:=_w∖_u, _1:=_u∖_0, and _0:= (u')^-1_u_I∩ J
The first step is the following lemma:
Let w∈ W be a rationally smooth element and w=uv be a BP-decomposition. Then every simple reflection in J appearing in the reduced word of v
is a right descent of u.
If we remove every simple reflection appearing in v but one in J, then the resulting element is in W_J and is below w. Hence by maximality of u, it is below u.
Actually, we can state much more about u in terms of simple reflections of J appearing in v.
Let w=uv be a BP-decomposition with respect to J. Let I be the subset of S that appears in the reduced word of v. Then every reflection formed by simple reflections in I∩ J is a right inversion reflection of u. In fact, there is a minimal length decomposition u=u'u_I ∩ J where u_I ∩ J is the longest element of W_I∩ J.
Take the parabolic decomposition of u under the right quotient by W_I∩ J. Say, u=u'u”. Then u' is the minimal length representative of u in W/W_I∩ J. For any simple reflection s∈ I∩ J, the minimal length representative of us in W/W_I∩ J is still u', hence the parabolic decomposition
of us is us=u'(u”s). Since s is a right descent of u by Lemma <ref>, s is a right descent of u”. Therefore, u” is the longest element in W_I∩ J. The rest follows from this.
The above lemma tells us that for each rationally smooth w ∈ W, we can decompose w or w^-1 to u' u_I ∩ J v where uv is the BP-decomposition with respect to J, with u = u'u_I ∩ J and u_I ∩ J is the longest element of W_I ∩ J. Given such decomposition, we decompose _w into _0 _u_I ∩ J, _1 _u ∖_0 and _2 _w ∖_u.
Let r be some chamber inside _1 ⊔_0. Let r' be the chamber of _0 that contains r. Then the poset Q__w, _1 ⊔_0, r is isomorphic to Q__0 ⊔_2, _0, r'.
Once a chamber r' of _0 is fixed, we will show that any chamber of _0 ⊔_2 contained in r' intersects every chamber of _1 ⊔_0 contained in r'. In order to show this, we can freely add more hyperplanes to _0, _1 and _2. So we may assume that u = u_I∩ J u' is the longest element of W_J and v is the longest element of W^J.
From Lemma <ref>, each chamber of _0 is now indexed with a permutation of W_I ∩ J. Fix a chamber r_x labeled with a permutation x ∈ W_I ∩ J. Each chamber of _0 ⊔_2 contained in r_x is labeled with a permutation zx where z ∈ W^J. Each chamber of _1 ⊔_0 contained in r_x is labeled with a permutation xy where y^-1∈ W^I ∩ J∩ W_J. For any such chamber of _0 ⊔_2 and _1 ⊔_0, their intersection will be the chamber of that is labeled by zxy∈ W.
Let r_1 and r_2 be two different chambers of contained in r. They are separated by a hyperplane in _2. For i=1,2, let r_i' be the chamber of _0 ⊔_2 that contains r_i. Then r_1' and r_2' are different chambers, since they are separated by the hyperplane that separates r_1 and r_2. If r_1 and r_2 are adjacent, then r_1' and r_2' are adjacent. If r_1' and r_2' are adjacent but r_1 and r_2 are not, it means there is a hyperplane of _1 that separates r_1 and r_2. But that contradicts the fact that r_1 and r_2 are both contained in the same chamber of _1 ⊔_0. We conclude that r_1 and r_2 are adjacent if and only if r_1' and r_2' are adjacent.
Let w=4132 and consider the arrangement _4132 from Figure <ref>. The BP-decomposition with respect to J = {s_1,s_2} is
w=(s_2s_3)(s_2s_1)
where v = s_2s_3 = 1342 and u = s_2s_1= 3124. The set I=S(v)={s_2,s_3} and u_I ∩ J = s_2 = 1324.
Looking at the inversions of w, we get:
_w={x_1 = x_2, x_1 = x_3, x_1 = x_4, x_3 = x_4}
with
_u = {x_1 = x_2, x_1 = x_3}, _2= {x_1 = x_4, x_3 = x_4}, and _I ∩ J={x_2 = x_3}.
The arrangement _0 = (s_1)^-1_I ∩ J={x_1 = x_3} and hence _1 = {x_1=x_2}. From Proposition <ref>, we have that Q(_4132,_3124) is isomorphic to Q(_1432,_1324) where s_2s_3s_2=1432 and s_2=1324. This poset is a chain of length 3 by Lemma <ref>.
From the above property we immediately get the following tool:
Suppose we have a decomposition w=v(u_I∩ Ju') as in Proposition <ref> and assume _vu_I ∩ J is uniform with respect to _u_I ∩ J.
If R_vu_I ∩ J(q) = P_vu_I ∩ J(q) and R_u(q) = P_u(q), then R_w(q) = P_w(q).
If _vu_I ∩ J is uniform with respect to _u_I ∩ J, then Proposition <ref> tells us that _w is uniform with respect to _u. Hence R_w(q) is divisible by R_u(q). Moreover,
R_w(q)/R_u(q) = R_vu_I ∩ J(q)/R_u_I ∩ J(q).
From Lemma <ref>, we have R_u_I ∩ J(q) = P_u_I ∩ J(q). Hence R_vu_I ∩ J(q) = P_vu_I ∩ J(q) and R_u(q) = P_u(q) implies R_w(q) = P_w(q).
Corollary <ref> allows us to consider only the case where u is the longest element of some W_I.
Let I be the set of simple generators that appear in a reduced word of v. We say that v is a locally-maximal element in W^J if it is the maximal element of W_I^I ∩ J:=W_I∩ W^I ∩ J and I forms a connected subgraph of the Coxeter diagram. Similarly, we say that v is in a local chain if W_I^I ∩ J is a chain poset. Notice that in Theorem <ref>, the only cases when v is neither locally-maximal nor in a local chain arise in Coxeter groups of types F_4 and B_n.
Suppose we have a decomposition w=v(u_I∩ Ju') as in Proposition <ref>.
If v is a locally-maximal element or a local chain, then P_u(q)=R_u(q) implies P_w(q) = R_w(q).
From corollary <ref>, it is enough to show _vu_I ∩ J is uniform with respect to _u_I ∩ J and R_vu_I ∩ J(q) = P_vu_I ∩ J(q).
If v is the longest element of W^J, then vu_I ∩ J is the longest element of W_I. In this case, the proposition follows from Lemma <ref>.
When W_I^I ∩ J is a chain, let v' denote the longest element of W_I^I ∩ J. Then w' := v'u_I ∩ J is the longest element of W_I. From Lemma <ref>, we have that R_u_I ∩ J(q) = P_u_I ∩ J(q) and R_v'u_I ∩ J(q) = P_v'u_I ∩ J(q). For each chamber r of _u_I ∩ J, the poset Q(_w',_u_I ∩ J,r) is a chain of length ℓ(v'). In particular, every hyperplane of _w'∖_u_I ∩ J intersects the interior of the chamber r.
When we go from _w' (where w'=v'u_I ∩ J) to _vu_I ∩ J, we remove some hyperplanes from _w'∖_u_I ∩ J. For each chamber r of _u_I ∩ J, the poset Q(_vu_I ∩ J,_u_I ∩ J,r) is a chain of length ℓ(v') minus the number of hyperplanes removed, that is, of length ℓ(v). Hence _vu_I ∩ J is uniform with respect to _u_I ∩ J. Moreover, we have
R_vu_I ∩ J(q) = (1+⋯+q^ℓ(v))· R_u_I ∩ J(q).
The proposition now follows from Lemma <ref>.
Lastly we analyze two special examples each coming from Coxeter groups of type F_4 and B_n which will be needed for our main result. These examples correspond to parts (1b) and (2b) in Theorem <ref>. We start with type F_4:
Let W be a Coxeter group of type F_4. Let w=vu where u is the longest element of W_{s_1,s_2,s_3} and v = s_1s_2s_3s_4. Then w=vu is a BP decomposition and
P_w(q) =(1+q+q^2+q^3+q^4)· P_u(q).
The root system of type F_4 lies in ^4 and the hyperplane arrangement _w is the union of
_u={x_1=0, x_2=0, x_3=0, x_2-x_1=0, x_3-x_2=0,
x_3-x_1=0, x_1+x_2=0, x_1+x_3=0, x_2+x_3=0}
and the hyperplanes
{x_1+x_2+x_3=x_4, -x_1-x_2+x_3=x_4,
-x_1+x_2-x_3=x_4, x_1-x_2-x_3=x_4}.
Pick any chamber c of _u and an arbitrary interior point z = (z_1,z_2,z_3,z_4)∈ c. Consider the line l_z obtained from z by letting the value of z_4 vary from -∞ to +∞; since no hyperplane of _u involves x_4, this line is still contained in the chamber c. Imagine moving along the line l_z as z_4 increases from -∞ to +∞. The difference between any pair of defining equations of hyperplanes in _w ∖_u has the form 2(x_i ± x_j) = 0 with i ≠ j ≤ 3. Whether x_i ± x_j is positive or negative on c is determined by the choice of c, since both x_i - x_j=0 and x_i+x_j=0 are hyperplanes of _u. Therefore, the order in which we cross the hyperplanes of _w ∖_u is completely determined by c.
From this we can conclude that _w is uniform with respect to _u. Moreover, the poset Q_w is obtained from Q_u by a poset product with a chain of length 4. We get
R_w(q) = (1+q+q^2+q^3 + q^4)· R_u(q).
Since R_u(q) = P_u(q) from Lemma <ref> and P_v^W^J(q) = (1+ ⋯ + q^ℓ(v)), we obtain the desired result.
Now we consider the case of B_n with the leaf s_n in its Coxeter diagram.
Let W be a Coxeter group of type B_n with simple generating set S={s_1,…, s_n} and let J = S ∖{s_n}. Let w = v u where u is the longest element of W_J and v = s_1 ⋯ s_n-1 s_n. Then _w is uniform with respect to _u and P_w(q) = R_w(q).
The root system of type B_n lies in ^n and the hyperplane arrangement _u consists of hyperplanes defined by the following equations:
* x_i = 0 for 1 ≤ i ≤ n-1,
* x_i - x_j = 0 for 1 ≤ i < j ≤ n-1, and
* x_i + x_j = 0 for 1 ≤ i < j ≤ n-1.
The hyperplane arrangement _w is obtained from _u by adding the hyperplanes x_n=0 and x_n + x_i = 0 for 1 ≤ i ≤ n-1.
Pick any chamber c of _u and an arbitrary interior point z = (z_1,…,z_n)∈ c. The chamber c determines a total order on z_1,…,z_n-1 and 0 that does not depend on the choice of z. Consider the line l_z obtained from z by letting the value of z_n vary from -∞ to +∞; note that l_z is contained in the chamber c. As the value of z_n moves from -∞ to +∞ along l_z, the order in which we cross the hyperplanes of _w ∖_u is determined by this total order.
Hence _w is uniform with respect to _u. Moreover, the poset Q_w is obtained from Q_u by a poset product with a chain of length n. We get R_w(q) = (1+ ⋯ + q^ℓ(v))· R_u(q). Since R_u(q) = P_u(q) from Lemma <ref> and P_v^W^J(q) = (1+ ⋯ + q^ℓ(v)), we obtain the desired result.
We are now ready to prove Theorem <ref>. First note that R_w(q) is always palindromic by definition. So if w∈ W is not rationally smooth, P_w(q)≠ R_w(q). From the above Proposition <ref> and Theorem <ref> we can obtain the following result which completes the proof.
Let W be a Coxeter group of finite Lie-type and let w∈ W be rationally smooth. Then R_w(q) = P_w(q).
We use induction on |S(w)|. First, if w=s∈ S, then R_w(q)=P_w(q)=1+q. By Theorem <ref>, either w or w^-1 has a Grassmannian BP decomposition vu. Furthermore, v is a locally-maximal element or is in a local chain or is in special cases of types F_4 or B_n. In the first two cases, that is, when v is a locally-maximal element or is in a local chain, then Proposition <ref> allows us to replace w with rationally smooth u where |S(u)|<|S(w)|. If we are in the special cases, using Example <ref> and Lemma <ref> combined with Corollary <ref> allows us the same replacement.
When w∈ W is rationally smooth, it is common for the polynomials P_w(q) = R_w(q) to factor as a product of q-integers. If P_w(q) factors into q-integers along Grassmannian BP decompositions, we say that w has a chain BP decomposition (this name comes from the fact that each poset [e,v]^J is a chain). By Corollary <ref> (<cit.>), all smooth permutations have chain BP decompositions. If w∈ W has a chain BP decomposition, then the degrees of the q-integer factors of P_w(q)=R_w(q) are strongly related to the structure of the corresponding inversion arrangement and are called exponents of w. In <cit.>, Slofstra gives an explicit description of these exponents. For other interesting results in inversion arrangements, we recommend that the reader take a look at <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
§ CONNECTIONS WITH THE GEOMETRY OF SCHUBERT VARIETIES
In this section, we present results from <cit.> which connect BP decompositions to the geometry of Schubert varieties. Let G be a connected simple Lie group over ℂ and fix a Borel subgroup B. Let W denote the Weyl group of G with generating set S. Since G is a finite dimensional Lie group, the Weyl group W is a Coxeter group of finite Lie type (see Figure <ref>). For any subset J⊆ S, let W_J denote the parabolic subgroup of W generated by J and let P_J:=BW_JB denote the corresponding parabolic subgroup of G. The coset space G/P_J is called a partial flag variety of G and it is a smooth complex projective homogeneous space. If J=∅, then P_∅= B and G/B is called the complete or full flag variety of G.
Consider the natural projection between flag varieties
π:G/B→ G/P_J
given by π(gB)=gP_J. It is not hard to see that the fibers of this map are
π^-1(gP_J)=gP_J/B
and that the map π gives a (P_J/B)-fiber bundle structure on the flag variety G/B with base G/P_J. For any w∈ W^J, we define the Schubert variety as the closure of the B-orbit (i.e. Schubert cell)
X^J(w):=BwP_J/P_J.
If J=∅, then P_∅=B and we will denote X(w):=X^∅(w).
It is well known that if w∈ W^J, then
dim_ℂ(X^J(w))=ℓ(w)
and Bruhat order describes the closure relations on Schubert cells
X^J(w)=⨆_w'∈[e,w]^J Bw'P_J/P_J.
In this section, we consider all Schubert varieties and do not restrict to the cases of smooth or rationally smooth. If w=vu is a parabolic decomposition with respect to J, then u∈ P_J and hence π restricts to a projection between Schubert varieties
π:X(w)→ X^J(v).
The question we address is when does the map π induce a fiber bundle structure on X(w)? As we will see, the generic fibers of this map are isomorphic to the Schubert variety X(u), however, unlike for G/B, the map π restricted to a Schubert variety may fail to be a fiber bundle. The following theorem is a geometric realization of BP-decompositions and is proved by Richmond and Slofstra in <cit.>.
Let w=vu be a parabolic decomposition with respect to J. Then the following are equivalent:
* The decomposition w=vu is a BP decomposition with respect to J.
* The projection π:X(w)→ X^J(v) is Zariski-locally trivial with fiber X(u).
Our goal is to give a detailed proof of Theorem <ref> following <cit.>. First, we need several important lemmas about Schubert varieties. One key property needed in the proof of these results is the following well-known relation for double B-orbits for BN-pairs (or Tits systems).
Given s∈ S and u∈ W, we have
BsB· BuB=
BsuB if s∉ D_L(u)
BuB∪ BsuB if s ∈ D_L(u)
If xP_J∈ X^J(v), then we can write xP_J=b_0v_0P_J for some b_0∈ B and v_0∈[e,v]^J. The next lemma (<cit.>) describes the fibers of the map π.
Let w=vu be the parabolic decomposition with respect to J and π:X(w)→ X^J(v). Let xP_J∈ X^J(v) and write x=b_0v_0 for some b_0∈ B and v_0∈[e,v]^J. Then
π^-1(xP_J)=x⋃ Bu'B/B
where the union is over all u'∈ W_J such that v_0u'≤ w.
We first look at the fiber over xP_J of the map π:G/B→ G/P_J. Note that
P_J=B W_J B=⋃_u'∈ W_J Bu'B
and hence the fiber of xP_J in the full flag variety G/B is
π^-1(xP_J)=b_0v_0⋃_u'∈ W_J Bu'B/B.
Restricting the map π to the Schubert variety X(w) gives
π^-1(xP_J)=(b_0v_0⋃_u'∈ W_J Bu'B/B) ∩(⋃_w'≤ wBw'B/B).
Since v_0∈ W^J and u'∈ W_J, Lemma <ref> implies b_0v_0Bu'B⊆ Bv_0u'B. Hence
π^-1(xP_J)=b_0v_0⋃ Bu'B/B
where the union is over all u'∈ W_J such that v_0u'≤ w.
The next lemma is from <cit.>.
Let w=vu be a parabolic decomposition with respect to J. Then the following are equivalent:
* The decomposition w=vu is a BP decomposition.
* The fibers of the map π:X(w)→ X^J(v) are isomorphic to X(u).
* The fibers of the map π:X(w)→ X^J(v) are equidimensional.
Clearly part (2) implies part (3), so we focus on showing part (1) implies part (2) and part (3) implies part (1). Let xP_J∈ X^J(v) and write x=b_0v_0 for some b_0∈ B and v_0∈[e,v]^J. If w=vu is a BP decomposition, then Theorem <ref> part (3) implies that v_0u'≤ w if and only if u'≤ u. Lemma <ref> implies that the fiber
π^-1(xP_J)=b_0v_0⋃_u'≤ uBu'B/B=b_0v_0X(u)
and hence all fibers of π are isomorphic to X(u).
Now suppose all the fibers of π are the same dimension. Then Lemma <ref> implies the fiber over the identity is π^-1(eP_J)=X(u') where u' denotes the maximal element of [e,w]∩ W_J. Similarly, we have the fiber over vP_J is π^-1(vP_J)=vX(u). Since the fibers are equidimensional, we have ℓ(u')=ℓ(u). But u≤ u' and hence u=u'. Thus w=vu is a BP decomposition.
What remains to be proved is that when w=vu is a BP-decomposition, then the map π:X(w)→ X^J(v) is locally trivial and hence an X(u)-fiber bundle. We first need the following lemma, which is proved in <cit.>.
Let v∈ W^J and let I=S(v) denote the support set of v. Let G_I⊆ G denote the Levi subgroup of P_I. Let P_I,J:=G_I∩ P_J and B_I:=G_I∩ B denote the corresponding Borel and parabolic subgroups of G_I.
Then the inclusion i:G_I/P_I,J↪ G/P_J induces an isomorphism
i:X_I^I∩ J(v)→ X^J(v)
where the Schubert variety
X_I^I∩ J(v):=B_IvP_I,J/P_I,J⊆ G_I/P_I,J.
We can now prove the main theorem of the section.
First observe that if π:X(w)→ X^J(v) is a locally-trivial fiber bundle, then the fibers are equidimensional and hence w=vu is a BP-decomposition by Lemma <ref>.
Now suppose that w=vu is a BP-decomposition and let I=S(v). Lemma <ref> implies the fibers of the map π are all isomorphic to X(u) and hence we only need to show local triviality. Lemma <ref> states that the inclusion i:G_I/P_I,J↪ G/P_J restricts to an isomorphism i:X_I^I∩ J(v)→ X^J(v). The map G_I→ G_I/P_I,J is locally trivial and thus has local sections. Hence for any x∈ X^J(v), there exists a Zariski open neighborhood U_x⊆ X^J(v) with a local section s:U_x→ G_I⊆ G. Define the multiplication map
m:U_x× X(u)→ G/B
by m(x',y):=s(x')· y. We claim that the image of m lies in the Schubert variety X(w). Let x'∈ U_x⊆ X^J(v) and hence x'∈ Bv_0P_J for some v_0≤ v. Thus we can write s(x')=b_0v_0p_0 for some b_0∈ B_I:=G_I∩ B and p_0∈ P_I,J. Since w=vu is a BP-decomposition, Theorem <ref> implies that I∩ J⊆ D_L(u). Since P_I,J⊆ P_I∩ J=BW_I∩ JB, Lemma <ref> implies p_0X(u)=X(u). Hence
m(x',X(u))=b_0v_0p_0X(u)=b_0v_0X(u)⊆ X(v_0u)⊆ X(w).
Consider the commuting square whose top map is m:U_x× X(u)→ X(w), whose bottom map is the inclusion U_x↪ X^J(v), and whose vertical maps are the projection onto the first factor and π:X(w)→ X^J(v),
and note that the map m identifies {x'}× X(u) with the fiber π^-1(x'). For any z∈π^-1(U_x), let g_z:=s(π(z))∈ G_I. Then z↦ (π(z), g_z^-1z) maps π^-1(U_x) to U_x× X(u) and is, in fact, the inverse of m. This implies the map m is an algebraic isomorphism and hence π is locally trivial.
One consequence of Theorem <ref> is the following cohomological interpretation of BP decompositions. For any variety X, let H^*(X) denote its singular cohomology with complex coefficients.
The decomposition w=vu is a BP decomposition with respect to J if and only if
H^*(X(w))≃ H^*(X^J(v))⊗ H^*(X(u))
as H^*(X^J(v))-modules.
If w=vu is a BP decomposition, then Equation (<ref>) follows from Theorem <ref> and the Leray-Hirsch theorem. Conversely, recall that
P_w(q^2)=∑_i=0^2ℓ(w)dim(H^i(X(w))) q^i and
P^J_v(q^2)=∑_i=0^2ℓ(v)dim(H^i(X^J(v))) q^i.
If Equation (<ref>) holds, then w=vu is a BP decomposition since P_w(q)=P^J_v(q)· P_u(q).
We give some remarks about the fibers of π when w=vu is not necessarily a BP decomposition. The union describing general fibers in Lemma <ref> is taken over all u' such that v_0u'∈ [e,w]∩ v_0W_J. It is not difficult to see that this collection forms a lower order ideal in W_J. In <cit.>, it is independently shown that these lower order ideals have unique maximal elements and hence are intervals in W_J. This leads to the following corollary.
Let w=vu be a parabolic decomposition with respect to J and π:X(w)→ X^J(v). Let xP_J∈ X^J(v) and write x=b_0v_0 for some b_0∈ B and v_0∈[e,v]^J. Then
π^-1(xP_J)=b_0v_0X(u_0)
where u_0 is the unique maximal element of the set v_0^-1([e,w]∩ v_0W_J).
Moreover, if u' denotes the maximal element of the set [e,w]∩ W_J, then u≤ u_0≤ u'. If w=vu is a BP decomposition, then u=u_0=u'.
Let G=SL_4(ℂ). Geometrically, we have
G/B={V_∙=(V_1⊂ V_2⊂ V_3⊂ℂ^4) | dim V_i=i}.
Let E_∙ denote the flag corresponding to eB and w=s_1s_2s_3s_2s_1. Then
X(w) = {V_∙ | dim(V_2∩ E_2)≥ 1}.
We consider the geometric analogues of Examples <ref> and <ref>.
First, if J={s_1,s_3}, then π(V_∙)=V_2 and
w=vu=(s_1s_3s_2)(s_3s_1)
is a BP decomposition with respect to J as in Example <ref>. In particular, the Schubert variety
X^J(v)= {V_2 | dim(V_2∩ E_2)≥ 1}
and the fibre over V_2 in the projection π:X^∅(w)→ X^J(v) is
π^-1(V_2)={(V_1,V_3) | V_1⊂ V_2⊂ V_3}≅ X(u)≅ℂℙ^1×ℂℙ^1.
If J={s_1,s_2}, then π(V_∙)=V_3 and
w=vu=(s_1s_2s_3)(s_2s_1)
is not a BP decomposition as in Example <ref>. The fiber over V_3 is given by
π^-1(V_3) ={(V_1,V_2) | V_1⊂ V_2⊂ V_3 and dim(V_2∩ E_2)≥ 1}
≅ X(s_2s_1) if dim(V_3∩ E_2)=1,
≅ X(s_1s_2s_1) if E_2 ⊂ V_3.
Note that the fibres are not equidimensional.
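The factorization in this example, and its failure for the second choice of J, can be checked by brute force. The following Python sketch is only an illustration and is not part of the original argument; it encodes the elements above in one-line notation (with the convention (uv)(i)=u(v(i)), so w=s_1s_2s_3s_2s_1=4231, and the factors are v=2413, u=2143 for J={s_1,s_3} and v=2341, u=3124 for J={s_1,s_2}), and tests Bruhat order with the standard sorted-prefix (Ehresmann) criterion.

```python
from itertools import permutations

def length(w):
    # Coxeter length = number of inversions
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def bruhat_leq(u, w):
    # Ehresmann's sorted-prefix (tableau) criterion for Bruhat order in type A
    for k in range(1, len(w)):
        if any(a > b for a, b in zip(sorted(u[:k]), sorted(w[:k]))):
            return False
    return True

def poincare(x, group, J=()):
    # coefficients of sum_{y <= x, y in W^J} q^{l(y)}, where J lists indices i
    # (meaning s_i) and W^J = {y : y(i) < y(i+1) for all s_i in J}
    coeffs = [0] * (length(x) + 1)
    for y in group:
        if all(y[i - 1] < y[i] for i in J) and bruhat_leq(y, x):
            coeffs[length(y)] += 1
    return coeffs

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

S4 = list(permutations((1, 2, 3, 4)))
Pw = poincare((4, 2, 3, 1), S4)                       # w = s1 s2 s3 s2 s1

# J = {s1, s3}: w = (s1 s3 s2)(s3 s1), a BP decomposition
print(Pw == mul(poincare((2, 4, 1, 3), S4, J=(1, 3)), poincare((2, 1, 4, 3), S4)))   # True

# J = {s1, s2}: w = (s1 s2 s3)(s2 s1), not a BP decomposition
print(Pw == mul(poincare((2, 3, 4, 1), S4, J=(1, 2)), poincare((3, 1, 2, 4), S4)))   # False
```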
Combinatorially, Corollary <ref> says that if w=vu is a parabolic decomposition with respect to J and u' denotes the maximal element of [e,w]∩ W_J, then for every v_0∈ [e,v]^J, the coset interval
[e,w]∩ v_0W_J≃ [e,u_0]
for some u≤ u_0≤ u'. At the extremes, we have
[e,w]∩ W_J≃ [e,u'] and [e,w]∩ vW_J≃ [e,u].
§.§ Relative BP decompositions
We finish this section by stating a relative version of Theorem <ref>. In this case, we have two parabolic subgroups P_J⊆ P_K⊆ G corresponding to subsets J⊆ K⊆ S. Consider the projection
π:G/P_J→ G/P_K
and we can ask the question: when does the map π induce a fiber bundle structure when restricted to the Schubert variety X^J(w)? To answer this question, we define the relative version of a BP decomposition.
Let J⊆ K⊆ S and w∈ W^J. Let w=vu denote the parabolic decomposition with respect to K. We say w=vu is a BP decomposition with respect to (J,K) if the Poincaré polynomial factors
P_w^J(q)=P_v^K(q)· P_u^J(q).
Note that if J=∅, then this is the usual BP decomposition of w with respect to K. We remark that relative BP decompositions are characterized by a similar list of conditions to those given in Theorem <ref>. See <cit.> for a precise statement.
Let w∈ W^J and let w=vu be a parabolic decomposition with respect to (J,K). Then the following are equivalent:
* The decomposition w=vu is a BP decomposition with respect to (J,K).
* The projection π:X^J(w)→ X^K(v) is Zariski-locally trivial with fiber X^J(u).
Theorem <ref> is proved in <cit.> and the proof is very similar to that of Theorem <ref>.
Theorems <ref> and <ref> hold for the much larger class of Kac-Moody Schubert varieties. Kac-Moody groups are infinite dimensional generalizations of Lie groups and include the family of affine Lie groups. Their Weyl groups are (not necessarily finite) crystallographic Coxeter groups. While the flag varieties of Kac-Moody groups are also infinite dimensional, their Schubert varieties are finite dimensional. For more on Kac-Moody flag varieties and their Schubert varieties see <cit.>.
§ ITERATED BP DECOMPOSITIONS AND STAIRCASE DIAGRAMS
In this section, we discuss iterations of BP decompositions for Coxeter groups of finite type. In particular, if (W,S) is a Coxeter system and J⊆ S, then each subgroup W_J has a unique longest element we denote by u_J. We begin with the following definition.
We say a factorization
w=v_nv_n-1⋯ v_1
is an iterated BP decomposition if (v_i+1)(v_i⋯ v_1) is a BP decomposition for each 1≤ i<n.
By Theorem <ref>, iterated BP decompositions correspond to iterated fiber bundle structures on Schubert varieties.
§.§ Staircase diagrams
In this section we combinatorially characterize iterated BP decompositions by objects called labelled staircase diagrams. Staircase diagrams are certain partially ordered sets over a given graph and were introduced by Richmond and Slofstra in <cit.> with the goal of developing a combinatorial framework to study iterated BP decompositions. We focus on staircase diagrams over the Coxeter graph of a Coxeter group. The Coxeter graph is simply the Coxeter diagram of W without the edge labels and we denote this graph by Γ_W (See Figure <ref>). In other words, Γ_W is a graph with vertex set S and edge set {(s,t)∈ S^2 | m_st≥ 3}. Note that the Coxeter groups of types A_n and B_n/C_n all have the same underlying Coxeter graph.
Before stating the definition of a staircase diagram, we need some terminology. Given s,t∈ S, we say s is adjacent to t if (s,t) is an edge in Γ_W. We say a subset B⊂ S is connected if the induced subgraph of B in Γ_W is connected. If is a collection of subsets of S and s∈ S, we define
_s:={B∈ | s∈ B}.
In other words, _s are the elements in that contain s∈ S.
Let (W,S) denote a Coxeter system and let be a collection of subsets of S. We say a partially ordered set (,≺) is a staircase diagram if the following hold:
* Every B∈ is connected, and if B covers B', then B∪ B' is connected.
* The subset _s is a chain for every s∈ S.
* If s is adjacent to t, then _s∪_t is a chain, and _s and _t are saturated subchains of _s∪_t.
* For every B∈, there exists s∈ S (resp. s'∈ S) such that B is the minimum in _s (resp. maximum in _s').
If the generating set S={s_1,…, s_n}, then we use interval notation
[s_i,s_j]:={s_i,s_i+1,…,s_j}
for i≤ j. In type A_n, we have the Coxeter graph
s_1 - s_2 - ⋯ - s_n
An example of a staircase diagram of this type is
={[s_1,s_3]≺ [s_2,s_4]≺ [s_3,s_5]≻ [s_6]≻ [s_7,s_9]≺ [s_9,s_10]≺[s_10,s_11]}.
In this example, the set _s_3={[s_1,s_3], [s_2,s_4], [s_3,s_5]}. In Figure <ref>, we represent this staircase diagram with a picture of uneven steps where “higher steps" are greater in the partial order.
Since elements of a staircase diagram are connected, we will refer to them as “blocks". Note the blocks may not necessarily be ordered intervals. In type D_5, we have Coxeter graph
s_2 - s_3 - s_4 - s_5, together with the vertex s_1 joined to s_3,
with examples of staircase diagrams in Figure <ref>.
In Figure <ref>, we give some non-examples of staircase diagrams. The first diagram is of type A_6 and violates parts (3) and (4) of Definition <ref>. The second diagram is of type D_5 and violates part (2) of Definition <ref>.
It is not hard to check that Definition <ref> is symmetric with respect to the partial order. Given a staircase diagram , we can define the dual staircase diagram () to be the set with the reverse partial order. Pictorially this corresponds to “flipping" the staircase from top to bottom.
If ' is a saturated subset of , then the induced partial order on ' makes it a staircase diagram. In this case, we say ' is a subdiagram of .
For any J⊆ S, define
_J:={B∈ | J⊆ B}.
The following lemma describes some combinatorial properties of staircase diagrams.
Let be a staircase diagram of a Coxeter system (W,S). Then:
* For any J⊆ S, the set _J is a chain in .
* If B,B'∈, then B⊈ B'.
* If B,B'∈ and B∪ B' is connected, then B and B' are comparable.
Part (1) follows from the fact that _J is the intersection of _s where s∈ J and each _s is a chain.
For part (2), select s, s'∈ S such that B is the maximal and minimal block of _s and _s' respectively. Then _{s,s'} consists only of B. If B⊆ B', then B'∈_{s,s'} and hence B=B'.
For part (3), if B∪ B' is connected, then there exist s∈ B and t∈ B' such that s is adjacent to t. Thus B,B' belong to the chain _s∪_t and hence B,B' are comparable.
§.§ Labellings of staircase diagrams
Staircase diagrams provide the framework for building iterated BP decompositions. Let be a staircase diagram. For any B∈, define the sets
J_R(B):=B∩(⋃_B'≺ B B') and J_L(B):=B∩(⋃_B'≻ B B').
Pictorially, we can think of the set J_R(B) as the elements of B that are “covered below" by other blocks in and J_L(B) as the elements of B that are “covered above". For example, if
={[s_1,s_3]≺ [s_2,s_6]≻ [s_6,s_7]}
then
J_R([s_2,s_6])={s_2,s_3,s_6}
which we highlight in Figure <ref>.
We define a labelling of a staircase diagram which assigns a Coxeter group element to each block in . For any J⊆ S, we let u_J denote the longest element of W_J.
Let be a staircase diagram on a Coxeter system (W,S). We say a function
λ:→ W
is a labelling of if for every B∈, we have
* J_R(B)⊆ D_R(λ(B)),
* J_L(B)⊆ D_L(λ(B)), and
* S(λ(B)u_J_R(B))=B=S(u_J_L(B)λ(B)).
We denote a labeled staircase diagram by the pair (,λ).
The function λ:→ W given by λ(B)=u_B is a labelling of . This labelling is called the maximal labelling of .
Note that while staircase diagrams of type A_n and B_n/C_n are the same, labelled staircase diagrams are different since they depend on the group W and not just the underlying graph Γ_W.
The definition of a labelling is compatible with the dual of staircase diagram. For any labeled staircase diagram (,λ), define the inverse labelling
λ^-1:()→ W
by λ^-1(B):=λ(B)^-1. It is easy to check that ((),λ^-1) is also a labelled staircase diagram. The condition J_R(B)⊆ D_R(λ(B)) implies that λ(B)u_J_R(B) is the minimal right coset representative of λ(B) in W^J_R(B). Similarly, we have that u_J_L(B)λ(B) is a minimal left coset representative of λ(B). These coset representatives play an important role in the next definition, so for any labelled staircase diagram (,Λ) and B∈, we define
λ(B):=λ(B)u_J_R(B).
Given a labeled staircase diagram (,λ) define
Λ(,λ):=λ(B_n)λ(B_n-1)⋯λ(B_1)
where B_1,…,B_n is some linear extension of the poset . If the labelling λ is clear from the context, then we will denote Λ()=Λ(,λ).
By part (3) of Lemma <ref>, if B_i and B_j are not comparable, then λ(B_i) and λ(B_j) have commuting supports and hence commute as elements of W. This implies that Λ() is independent of choice of linear extension and is well defined.
Let ={[s_1,s_3],[s_5,s_6],[s_2,s_5]} in type A_6.
Then
λ([s_1,s_3])=s_1s_2s_3s_1s_2s_1, λ([s_5,s_6])=s_5s_6s_5,
λ([s_2,s_5])=(s_3s_2s_4s_3s_5s_4s_5s_2s_3s_2)(s_2s_3s_2s_5)=s_3s_2s_4s_3s_5s_4
and
Λ()=(s_3s_2s_4s_3s_5s_4)(s_5s_6s_5)(s_1s_2s_3s_1s_2s_1).
Highlighted in red is the element u_{s_2,s_3,s_5} since J_R([s_2,s_5])={s_2,s_3,s_5}.
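The arithmetic in this example can be checked mechanically. The Python sketch below is an illustration only (it is not taken from the text); it realizes s_1,…,s_6 as adjacent transpositions of S_7 with the convention (uv)(i)=u(v(i)), and verifies that the ten-letter word above is the longest element of W_[s_2,s_5], that s_2s_3s_2s_5 is the longest element of W_{s_2,s_3,s_5}, and that their product is the length-six element s_3s_2s_4s_3s_5s_4 displayed above.

```python
def s(i, n=7):
    # adjacent transposition s_i in S_n, one-line notation
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    # (uv)(i) = u(v(i))
    return tuple(u[v[i] - 1] for i in range(len(v)))

def word(letters, n=7):
    # product of simple transpositions, read left to right
    p = tuple(range(1, n + 1))
    for i in letters:
        p = compose(p, s(i, n))
    return p

def length(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

uB  = word([3, 2, 4, 3, 5, 4, 5, 2, 3, 2])   # the ten-letter word above
uJR = word([2, 3, 2, 5])                     # longest element of W_{s2,s3,s5}
lam = word([3, 2, 4, 3, 5, 4])               # claimed coset representative

print(uB == (1, 6, 5, 4, 3, 2, 7))           # True: uB reverses {2,...,6}
print(uJR == (1, 4, 3, 2, 6, 5, 7))          # True
print(compose(uB, uJR) == lam, length(lam))  # True 6
```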
Note that if λ:→ W is a maximal labelling, then λ(B) is the maximal element of W_B∩ W^J_R(B).
We define the support of to be the set
S():=⋃_B∈ B.
Note that if λ:→ W is a labeling, then S()=S(Λ()).
Furthermore, since the support set
S(λ(B_i-1)⋯λ(B_1))=B_1∪⋯∪ B_i-1
is disjoint from B_i∖ J_R(B_i), the product
λ(B_i)·(λ(B_i-1)⋯λ(B_1))
is a parabolic decomposition with respect to B_1∪⋯∪ B_i-1.
We will show that this decomposition is in fact a BP decomposition and thus the factorization of Λ() in Definition <ref> corresponds to an iterated BP decomposition. The next lemma gives several properties on how the Coxeter theoretic data of the element Λ() is extracted from the combinatorial data of the staircase diagram .
Let (,λ) be a labeled staircase diagram. Then the following are true:
* Λ()^-1=Λ((), λ^-1).
* The right descents of Λ() consist of all s∈ S() that satisfy:
* min(_s)≼min(_t) for all t adjacent to s and
* s is a right descent of λ(min(_s)).
* The left descents of Λ() consists of all s∈ S() that satisfy:
* max(_s)≽max(_t) for all t adjacent to s and
* s is a left descent of λ(max(_s)).
* Let ' be a lower order ideal in and let ”:=∖'. Then
Λ()=(Λ(”)u_K)·Λ(')
is a parabolic decomposition with respect to S(') where
K={s∈ S | min(”_s)≠min(_s)}.
The proof of Lemma <ref> is technical, so we refer the reader to <cit.> for more details. Observe that part (3) follows from parts (1) and (2).
Let be a staircase diagram with a linear extension B_1,…,B_n. For i≥ 2, let ^i denote the subdiagram
^i:={B_1,…,B_i-1}.
If λ is a labelling of , then
Λ(^i+1)=λ(B_i)·Λ(^i)
is a BP decomposition with respect to S(^i).
Lemma <ref> part (4) implies λ(B_i)·Λ(^i) is a parabolic decomposition, so it suffices to show that the decomposition satisfies the BP condition in Theorem <ref> part (4). First observe that
S(λ(B_i))∩ S(^i)=B_i∩ S(^i)=J_R(B_i).
Thus λ(B_i)·Λ(^i) is a BP decomposition if and only if J_R(B_i)⊆ D_L(Λ(^i)). Let s∈ J_R(B_i). We use the characterization given in Lemma <ref> part (3) to show that s∈ D_L(Λ(^i)). Suppose that t∈ S(^i) is adjacent to s. Observe that if B_j is the predecessor of B_i in the chain _s, then B_j=max(^i_s). By the definition of a staircase diagram, _s is a saturated subchain of the chain _s∪_t. Since B≼ B_i for all B∈^i, it follows that max(^i_t)≼ B_j. By Lemma <ref>, it remains to show that s∈ D_L(λ(B_j)). Since s∈ B_i∩ B_j, we have s∈ J_L(B_j) and, by the definition of a labelling, J_L(B_j)⊆ D_L(λ(B_j)). Thus s∈ D_L(Λ(^i)), which completes the proof.
§.§ Complete BP decompositions
In this section we discuss a special class of decompositions called complete BP decompositions. We start with the following definition which was introduced in Section <ref>.
A BP decomposition w=vu with respect to J is a Grassmannian BP decomposition if |J|=|S(w)|-1. In other words, J is a maximal proper subset of S(w).
Geometrically, Grassmannian BP decompositions correspond to projections π: G/B→ G/P where P is taken to be a maximal parabolic. In the classical type A setting this partial flag variety G/P corresponds to a Grassmannian variety. If w=vu is a Grassmannian BP decomposition, then Theorem <ref> implies the Schubert variety X(w) is an X(u)-fiber bundle over the Grassmannian Schubert variety X^J(v). Note that the decompositions that arise in Theorems <ref> and <ref> are Grassmannian BP decompositions.
Let n=|S(w)|. We say
w=v_nv_n-1⋯ v_1
is a complete BP decomposition if (v_i+1)(v_i⋯ v_1) is a Grassmannian BP decomposition for each 1≤ i<n.
Complete BP decompositions are iterated BP decompositions where the number of non-trivial factors is maximized in the sense that each iteration adds exactly one additional generator to the support set of w. For example, in type A_3, we have that
w=(s_1s_2s_3)(s_1s_2)(s_1)
is a complete BP decomposition of the longest element. Note that these decompositions are not unique. For w above, the decomposition
w=(s_2s_1s_3s_2)(s_1)(s_3)
is also a complete BP decomposition. The goal of this section is to classify which elements w∈ W have complete BP decompositions. The key to this classification is the notion of nearly-maximal elements.
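As a quick sanity check (not part of the original text), both words in the example above can be multiplied out and confirmed to be reduced expressions for the longest element 4321 of type A_3. A minimal Python sketch, again using the convention (uv)(i)=u(v(i)):

```python
def s(i, n=4):
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def word(letters, n=4):
    # product of simple transpositions, leftmost letter applied last
    p = tuple(range(1, n + 1))
    for i in letters:
        p = tuple(p[s(i, n)[k] - 1] for k in range(n))
    return p

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

w1 = word([1, 2, 3, 1, 2, 1])   # (s1 s2 s3)(s1 s2)(s1)
w2 = word([2, 1, 3, 2, 1, 3])   # (s2 s1 s3 s2)(s1)(s3)
print(w1, w2, inversions(w1))   # (4, 3, 2, 1) (4, 3, 2, 1) 6, so both words are reduced
```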
We say an element w∈ W is nearly-maximal if there is a Grassmannian BP decomposition w=vu such that S(u)⊂ S(v).
Furthermore, we say a labelled staircase diagram (,λ) is nearly-maximal if for each B∈, the label λ(B) is nearly-maximal.
If w=vu is nearly-maximal, then
S(u)⊆ S(v)∩ J⊆ D_L(u)
and hence S(u)=D_L(u). This implies that u is the maximal element of W_J. Geometrically, this corresponds to the fiber X(u) being isomorphic to the flag variety P_J/B. Not all Grassmannian BP decompositions satisfy the nearly-maximal condition. For example, in type A_4,
w=(s_1s_2)(s_1s_3s_4)
is a Grassmannian BP decomposition with respect to J={s_1,s_3,s_4}, but w is not nearly maximal. Note that the maximal labelling of a staircase diagram is nearly-maximal. The importance of nearly-maximal labellings is that they can be used to construct complete BP decompositions. In fact, this construction will yield the following bijection:
Let W be a Coxeter group. Then the map (,λ)↦Λ() defines a bijection between staircase diagrams over W with a nearly-maximal labelling λ, and elements of W with a complete BP decomposition.
First note by Theorem <ref> and Definition <ref>, if λ is a nearly maximal labelling of , then Λ() has a complete BP-decomposition and thus the map (,λ)↦Λ() is well defined.
To show that the map is injective, suppose we have two nearly-maximal labelled staircase diagrams (_1,λ_1) and (_2,λ_2) such that Λ(_1)=Λ(_2). Choose s∈ S such that Λ(_i)=vu is a BP decomposition with respect to J=S∖{s}. It can be shown that B:=S(v) is a maximal block of _i and hence, by induction on the number of blocks, _1=_2. To show that λ_1=λ_2, note that, by Lemma <ref> part (1), the parabolic decomposition of Λ(_i)^-1 with respect to B is given by
Λ(_i)^-1=v'·λ_i(B)^-1
for some v' and thus λ_1(B)=λ_2(B). We also have
J_R(B,_i)=B∩ S(_i∖{ B})=B∩ S(λ_i(B)·Λ(_i))
and hence J_R(B,_1)=J_R(B,_2).
This implies λ_1(B)=λ_2(B). Since Λ(_1)=Λ(_2) and λ_1(B)=λ_2(B), we have that the induced labelling on lower order ideals satisfies Λ(_1∖{B})=Λ(_2∖{B}). By induction on |_i|, we have that the labellings λ_1=λ_2.
To show that the map is surjective, suppose x∈ W has a complete BP decomposition x=v_n⋯ v_1. By induction, suppose that (,λ) is a nearly-maximal labelled staircase diagram such that Λ()=v_n-1⋯ v_1. Define the staircase diagram
:=^0∪{S(v_n)} where ^0:={B∈ | B⊈ S(v_n)}
with the added covering relations max(^0_s)≺ S(v_n) for every s∈ S(^0) contained in, or adjacent to S(v_n). It can be shown that satisfies Definition <ref> of staircase diagram. Finally, define the labelling λ̃:→ W by
λ̃(B):=λ(B) if B∈^0
v_n· u_S(v_n)∩ S() if B=S(v_n).
Again, it can be shown that λ̃ is a nearly maximal labelling of such that Λ(,λ̃)=x. This completes the proof.
Next we apply Theorem <ref> to rationally smooth elements of Coxeter groups of finite Lie type. The following is a rephrasing of Theorem <ref>.
Let w∈ W be rationally smooth with |S(w)|≥ 2. Then either w or w^-1 has a Grassmannian BP decomposition vu with respect to J=S(w)∖{s} such that s is a leaf in the Coxeter diagram of W_S(w) and vu_S(v)∩ J is nearly maximal.
The proof of Theorem <ref> follows from checking that the list of elements given in Theorem <ref> all satisfy the definition of nearly maximal given in Definition <ref>. We remark that there exist nearly maximal elements that are not rationally smooth. Hence Theorem <ref> is a slightly weaker statement than Theorem <ref>. In <cit.>, Richmond and Slofstra define the stronger condition of “almost-maximal" to make these theorems equivalent. Our next goal is to give an outline of a proof of Theorem <ref> which states that rationally smooth elements always have Grassmannian BP decompositions.
Let w∈ W be (rationally) smooth. Then there exists a Grassmannian BP decomposition w=vu with respect to some maximal proper subset J=S(w)∖{s}.
Moreover, u is (rationally) smooth and v is (rationally) smooth with respect to J.
First note that if w=vu is a BP decomposition, then Theorem <ref> implies that if X(w) is (rationally) smooth, then both X(u) and X^J(v) are also (rationally) smooth.
Recall that Theorem <ref> states that if w is rationally smooth, then either w or w^-1 has a Grassmannian BP decomposition with respect to J=S(w)∖{s} for some leaf s∈ S(w) in the Coxeter diagram of W_S(w). If w has such a BP decomposition, then the theorem is proved. Now suppose w^-1 has such a BP decomposition and hence we can write w=uv where u∈ W_J and v^-1∈ W^J and w^-1=v^-1u^-1 is a Grassmannian BP decomposition with respect to J=S(w)∖{s}. Since w is (rationally) smooth, we have that w^-1 is (rationally) smooth and hence u, u^-1 are also (rationally) smooth. Since |S(u)|<|S(w)|, we can inductively assume that there exists a Grassmannian BP decomposition u=v'u' with respect to some maximal proper set J'=J∖{s'}. It can be shown that s'∈ J can be selected appropriately so that
w=v'(u'v)
is a Grassmannian BP decomposition with respect to S(w)∖{s'}.
If w∈ W is (rationally) smooth, then w has a complete BP-decomposition. In particular, there exists a staircase diagram over W and nearly-maximal labelling λ such that Λ()=w.
We say a nearly maximal labelling λ:→ W is (rationally) smooth if Λ() is (rationally) smooth. In fact, if λ is rationally smooth, then for each B∈, the element λ(B) must correspond to one of the elements in the list found in Theorem <ref>. In particular, if W is simply laced, then λ must be the maximal labelling. This implies the following corollary.
Let W be a simply-laced Coxeter group of finite type. Then there is a bijection between staircase diagrams over Γ_W and smooth elements of W.
Let be a staircase diagram and let λ:→ W denote the maximal labeling. Then the Schubert variety X(Λ()) is an iterated fiber bundle of smooth Schubert varieties and hence smooth. Conversely, if X(w) is smooth, then Theorem <ref> and Corollary <ref> imply there is a unique smoothly labelled staircase diagram (,λ) such that Λ()=w. Since W is simply laced, Theorem <ref> implies λ is the maximal labelling.
§.§ Enumerating smooth Schubert varieties
An application of Theorem <ref> and Corollary <ref> is that we can enumerate smooth Schubert varieties by counting staircase diagrams. We give an overview of this enumeration in type A. Recall that the Coxeter graph of type A_n is a path on n vertices:
s_1 - s_2 - ⋯ - s_n
We will denote this graph by Γ_n.
Let a_n denote the number of staircase diagrams over Γ_n (equivalently, the number of smooth permutations in _n+1) and define the generating function
A(t):=∑_n=0^∞ a_n t^n.
Then
A(t)=(1-5t+4t^2+t√(1-4t))/(1-6t+8t^2-4t^3).
A proof of Theorem <ref> first appeared in an unpublished paper by Haiman <cit.>. The first published proof of Theorem <ref> is due to Bousquet-Mélou and Butler in <cit.>. In this section, we provide an alternate proof using staircase diagrams from <cit.> and <cit.>.
We first focus on diagrams that are chains.
We say a staircase diagram is increasing over Γ_n if is fully supported (i.e. S()={s_1,…,s_n}) and if for every B,B'∈ such that s_i∈ B and s_j∈ B' with i<j, we have B≼ B'. Pictorially, increasing staircase diagrams are represented by a sequence of blocks that are “going up" from left to right with no gaps. For example, ={[s_1,s_2]≺ [s_2,s_5]≺ [s_4,s_6]} is increasing over Γ_6 as in Figure <ref>. We say that is decreasing over Γ_n if () is increasing over Γ_n.
The number of increasing staircase diagrams over Γ_n is the n-th Catalan number.
We show that increasing staircase diagrams over Γ_n are in bijection with Dyck paths. Let
={B_1≺ B_2≺⋯≺ B_m}
be such a diagram. For each B_i∈ define
r(B_i):=#{s∈ B_i∖ B_i-1} and
u(B_i):=#{s∈ B_i∖ B_i+1}
where we set B_0=B_m+1=∅. Let P() denote the lattice path in ^2 from (0,0) to (n,n) which takes r(B_1) steps to the right, then u(B_1) steps going up, followed by r(B_2) steps to the right, then u(B_2) steps going up and so forth (See Example <ref>). It is easy to check that P() is a Dyck path that stays below the diagonal in ^2. One can also check that the map P is invertible and hence a bijection.
Consider the staircase diagram =(s_1≺[s_2,s_5]≺[s_4,s_6]) on Γ_6. The sequence of pairs (r(B_i),u(B_i)) is ((1,1),(4,2),(1,3)) and corresponding Dyck path P() is given in Figure <ref>.
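Lemma <ref> can also be verified by direct enumeration for small n. The Python sketch below is an illustration only; the encoding of a fully supported increasing staircase diagram over Γ_n as a chain of intervals [a_1,b_1]≺⋯≺[a_m,b_m] with a_1=1, b_m=n, strictly increasing endpoints, and a_i+1≤ b_i+1 is an assumption extracted from the definitions above rather than a construction appearing in the text.

```python
from math import comb
from functools import lru_cache

def count_increasing(n):
    # fully supported increasing staircase diagrams over the path Gamma_n,
    # encoded as interval chains [a_1,b_1] < ... < [a_m,b_m] with a_1 = 1,
    # b_m = n, a_i < a_{i+1} <= b_i + 1 (connected unions) and b_i < b_{i+1}
    @lru_cache(maxsize=None)
    def extend(a, b):
        total = 1 if b == n else 0        # a chain may stop only once it reaches s_n
        for a2 in range(a + 1, b + 2):
            for b2 in range(b + 1, n + 1):
                total += extend(a2, b2)
        return total
    return sum(extend(1, b1) for b1 in range(1, n + 1))

catalan = lambda n: comb(2 * n, n) // (n + 1)
print([count_increasing(n) for n in range(1, 8)])   # [1, 2, 5, 14, 42, 132, 429]
print([catalan(n) for n in range(1, 8)])            # the same Catalan numbers
```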
We use the enumeration of increasing staircase diagrams as the starting point to enumerate general staircase diagrams of type A. The next step is to decompose staircase diagram with connected support into a smaller staircase diagram and an increasing/decreasing “diagram" as follows:
Note that the second part of the decomposition in Figure <ref> may not be a valid staircase diagram, which leads to the following definition. First, we set Γ_n⊆Γ_n+1 as a subgraph by removing the vertex s_1 (See Figure <ref>).
We say a is an increasing (decreasing) broken staircase diagram over Γ_n if we can write
={B∩ [s_2,s_n] | B∈'}
where ' is some increasing (decreasing) staircase diagram over Γ_n+1.
Let b_n denote the number of increasing (equivalently decreasing) broken staircase diagrams over Γ_n. Then b_n=c_n+1-c_n where c_n denotes the n-th Catalan number.
Let ={B_1≺ B_2≺⋯≺ B_k} be an increasing staircase diagram over Γ_n+1 and let
():={B∩[s_2,s_n] | B∈}
denote the corresponding broken staircase diagram over Γ_n. By Lemma <ref>, the number of increasing staircase diagrams over Γ_n+1 is c_n+1. We prove the lemma by determining the pre-images of the map . First note that since is increasing, we have that s_1∈ B_i if and only if i=1. Hence B_i∩[s_2,s_n]=B_i unless i=1 and the pre-image of B is determined by the changes on B_1. Now if B_1∩[s_2,s_n]⊂ B_2∩ [s_2,s_n], then () is uniquely determined by as in Figure <ref>.
Otherwise, () has two pre-images as in Figure <ref>.
Broken staircase diagrams over Γ_n with two pre-images under the map can be identified with increasing staircase diagrams over Γ_n via the second pre-image in Figure <ref>. The Lemma now follows from Lemma <ref>.
We first note that the generating function for Catalan numbers is given by
(t):=∑_n=0^∞ c_n t^n=1-√(1-4t)/2t
and by Lemma <ref>, the Catalan number c_n denotes the number of increasing staircase diagrams over Γ_n. Recall that b_n denotes the number of increasing broken staircase diagrams over Γ_n and let
(t):=∑_n=0^∞ b_n t^n.
Lemma <ref> implies that
t+t(t)=(t)-t(t).
Now suppose is a fully supported staircase diagram over Γ_n. Then either is increasing on Γ_n or decomposes, as in Figure <ref>, into a smaller fully supported staircase diagram and a broken staircase diagram (note that this second case includes decreasing diagrams).
Let a̅_n denote the number of fully supported staircase diagrams over Γ_n and define
A(t):=∑_n=0^∞a̅_n t^n.
We now have
A(t)=(t)+A(t)·(t).
Finally, any staircase diagram is a disjoint union of fully supported staircase diagrams and hence
A(t)=1+A(t)/1-t-tA(t).
The theorem follows from combining Equations (<ref>), (<ref>), (<ref>), and (<ref>).
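The first several coefficients of A(t) can be compared with a brute-force count. The Python sketch below is illustrative only; it expands the closed form of Theorem <ref> as a power series using exact rational arithmetic and checks the coefficients a_0,…,a_5 against the number of permutations of S_n+1 avoiding 3412 and 4231.

```python
from fractions import Fraction as F
from itertools import permutations, combinations

N = 6                                   # compute a_0, ..., a_{N-1}

def div(p, q):                          # truncated series quotient, assuming q[0] != 0
    r = [F(0)] * N
    for k in range(N):
        r[k] = (p[k] - sum(r[i] * q[k - i] for i in range(k))) / q[0]
    return r

sq = [F(1)] + [F(0)] * (N - 1)          # binomial series of sqrt(1 - 4t)
for k in range(1, N):
    sq[k] = sq[k - 1] * (F(1, 2) - (k - 1)) * (-4) / k

num = [F(1), F(-5), F(4)] + [F(0)] * (N - 3)
num = [num[k] + (sq[k - 1] if k >= 1 else 0) for k in range(N)]   # 1 - 5t + 4t^2 + t*sqrt(1-4t)
den = [F(1), F(-6), F(8), F(-4)] + [F(0)] * (N - 4)               # 1 - 6t + 8t^2 - 4t^3
series = div(num, den)

def contains(w, pat):
    k = len(pat)
    return any(all((w[pos[i]] < w[pos[j]]) == (pat[i] < pat[j])
                   for i in range(k) for j in range(i + 1, k))
               for pos in combinations(range(len(w)), k))

def smooth_count(m):                    # permutations of S_m avoiding 3412 and 4231
    return sum(1 for w in permutations(range(1, m + 1))
               if not contains(w, (3, 4, 1, 2)) and not contains(w, (4, 2, 3, 1)))

print([int(c) for c in series])                    # [1, 2, 6, 22, 88, 366]
print([smooth_count(n + 1) for n in range(N)])     # the same numbers, counted directly
```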
One advantage of using staircase diagrams to enumerate smooth elements is that the techniques can be extended to calculate generating functions for smooth and rationally smooth elements of other families of Coxeter groups. Define the generating series
B(t):=∑_n=0 b_n t^n, C(t):=∑_n=0 c_n t^n, D(t):=∑_n=3 d_n t^n, and BC(t):=∑_n=0 bc_n t^n
where b_n,c_n,d_n denote number of smooth elements of type B_n, C_n, D_n respectively and bc_n denotes the number of rationally smooth elements of type B_n/C_n. The following theorem is one of the main results of <cit.>.
Let W(t) := ∑_n w_n t^n denote one of the above generating series,
where W = A, B, C, D or BC. Then
W(t)=(P_W(t)+Q_W(t)√(1-4t))/((1-t)^2(1-6t+8t^2-4t^3))
where P_W(t) and Q_W(t) are polynomials given in Table <ref>.
The proof of Theorem <ref> involves enumerating staircase diagrams in a fashion similar to the proof of Theorem <ref>. For type D, we can apply Corollary <ref>. Since types B and C are not simply-laced, we need to consider (rationally) smooth labellings of staircase diagrams that are not the maximal labelling. These additional labellings are characterized by Theorem <ref> parts (1a) and (1b).
§ BP DECOMPOSITIONS AND PATTERN AVOIDANCE
In this section we give an overview of how permutation pattern avoidance is related to BP decompositions. Here we will only consider permutation groups (type A). Recall that _n is the permutation group on [n]={1,…, n}. The permutation group _n is generated by the set of simple transpositions S={s_1,…, s_n-1}, where s_i denotes the transposition swapping i and (i+1), subject to the relations
s_i^2=e, (s_is_j)^2=e for |i-j|>1, and (s_is_i+1)^3=e.
Any w∈_n has a unique expression in one-line notation w=w(1)⋯ w(n). We use matrices to represent permutations with nodes marking the points (w(i),i) using the convention that (1,1) marks the upper left corner. For example, w=3241 corresponds to the matrix:
[The 4×4 permutation matrix of w=3241, with nodes at the points (3,1), (2,2), (4,3), (1,4).]
Let u∈_k and w∈_n. We say w contains the pattern u if there exists a subsequence (i_1<⋯<i_k) such that w(i_1)⋯ w(i_k) has the same relative order as u(1)⋯ u(k). If no such sequence exists, we say that w avoids the pattern u. For example, in Figure <ref>, we see that w=416253 contains the pattern 3412, but avoids the pattern 1234.
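Pattern containment is straightforward to test by machine. The following Python sketch is an illustration, not part of the survey; it checks every subsequence of the appropriate length for order-isomorphism with the pattern and reproduces the statements about w=416253.

```python
from itertools import combinations

def contains(w, pattern):
    # w contains the pattern iff some subsequence of w is order-isomorphic to it
    k = len(pattern)
    for pos in combinations(range(len(w)), k):
        if all((w[pos[i]] < w[pos[j]]) == (pattern[i] < pattern[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

w = (4, 1, 6, 2, 5, 3)
print(contains(w, (3, 4, 1, 2)))   # True:  positions 1,3,4,6 give the subsequence 4,6,2,3
print(contains(w, (1, 2, 3, 4)))   # False: the longest increasing subsequence has length 3
print(contains(w, (4, 2, 3, 1)))   # False: w avoids 4231 (though w is not smooth, since it contains 3412)
```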
Permutation pattern avoidance has been used to characterize many geometric properties of Schubert varieties of type A. A survey of these results can be found in <cit.>. Most notably, Lakshmibai and Sandhya prove that a Schubert variety X(w) is smooth if and only if w avoids the patterns 3412 and 4231 in <cit.>. Combining this result with Corollary <ref>, we have the following theorem:
If the permutation w avoids the patterns 3412 and 4231, then w has a complete BP decomposition.
[The permutation matrices of the patterns 3412 and 4231.]
The geometric version of Theorem <ref> states that smooth Schubert varieties of type A are iterated fiber bundles of Grassmannian varieties. This geometric result was proved by Ryan in <cit.>. Wolper gives an analogous result for Schubert varieties over algebraically closed fields in characteristic zero in <cit.>. Note that it is not necessary for w to avoid 3412 and 4231 for w to have a complete BP decomposition. In fact, if w=4231, then
w=(s_1s_3s_2)(s_3)(s_1)
is a complete BP decomposition of w. The following theorem from <cit.> is a precise pattern avoidance characterization of permutations that have complete BP decompositions.
The permutation w avoids the patterns 3412, 52341, and 635241 if and only if w has a complete BP decomposition.
[The permutation matrices of the patterns 3412, 52341, and 635241.]
§.§ Split pattern avoidance
The proof of Theorem <ref> relies on the idea of split pattern avoidance which is used to characterize Grassmannian BP decompositions of permutations with respect to J=S∖{s_r} for any s_r∈ S.
A split pattern w=w_1|w_2∈_n is a divided permutation with
w_1=w(1)⋯ w(j) and w_2=w(j+1)⋯ w(n)
for some 1≤ j<n.
Let k≤ n and r<n. Let w∈_n and let
u=u(1)⋯ u(j)|u(j+1)⋯ u(k)
denote a split pattern. We say w contains the split pattern u with respect to position r if there exists a sequence (i_1<⋯<i_k) such that
* w(i_1)⋯ w(i_k) has the same relative order as u.
* i_j≤ r<i_j+1.
Otherwise, we say the permutation w avoids the split pattern u with respect to position r.
In other words, w contains u=u_1|u_2 if it contains u in the usual sense of pattern containment, but with the extra condition that u_1 appears weakly to the left of the r-th position and u_2 strictly to the right of the r-th position in the one-line notation of w. For example, w=416253 contains the split pattern 3|412 with respect to positions r=1,2 but avoids 3|412 with respect to r=3,4,5 (See Figure <ref>).
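Split pattern containment can be tested the same way, with the extra positional constraint i_j≤ r<i_j+1. The Python sketch below is illustrative only; it recovers the positions r=1,2 for the split pattern 3|412 in w=416253, and also shows that this w contains 3|12 or 23|1 with respect to every position, so that, by the next theorem, it admits no Grassmannian BP decomposition.

```python
from itertools import combinations

def contains_split(w, u1, u2, r):
    # u = u1|u2: ordinary containment plus the constraint that the u1-part sits in
    # positions <= r and the u2-part in positions > r (i.e. i_j <= r < i_{j+1})
    pat, j, k = u1 + u2, len(u1), len(u1) + len(u2)
    for pos in combinations(range(len(w)), k):             # 0-indexed positions
        if pos[j - 1] + 1 <= r <= pos[j]:                   # the 1-indexed condition i_j <= r < i_{j+1}
            if all((w[pos[a]] < w[pos[b]]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(a + 1, k)):
                return True
    return False

w = (4, 1, 6, 2, 5, 3)
print([r for r in range(1, 6) if contains_split(w, (3,), (4, 1, 2), r)])
# [1, 2]: w contains 3|412 exactly with respect to r = 1, 2, as claimed above
print([r for r in range(1, 6)
       if not contains_split(w, (3,), (1, 2), r) and not contains_split(w, (2, 3), (1,), r)])
# []: every position contains 3|12 or 23|1
```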
The next theorem is from <cit.> and completely characterizes Grassmannian BP decompositions in terms of split pattern avoidance.
Let r<n and w∈_n. Then w has a Grassmannian BP decomposition with respect to J=S∖{s_r} if and only if w avoids the split patterns 3|12 and 23|1 with respect to position r.
[The permutation matrices of the split patterns 3|12 and 23|1, with a dashed line marking the split position.]
We use Theorem <ref> part (4) which states that a parabolic decomposition w=vu with respect to J is a BP decomposition if and only if S(v)∩ J⊆ D_L(u). The next lemma gives an explicit description of these ideas in terms of the one-line notation of permutations. We leave the proof as an exercise.
Let w∈_n and r<n and write
w=w_1|w_2=w(1)⋯ w(r)|w(r+1)⋯ w(n).
Let w=vu denote the parabolic decomposition with respect to J=S∖{s_r}. Then the following are true:
* v=v_1|v_2 where v_1 and v_2 respectively consist of the entries in w_1 and w_2 arranged in increasing order and
S(v)={s_k∈ S | v(r+1)≤ k<v(r)}.
* u=u_1|u_2 where u_1 and u_2 are respectively the unique permutations on {1,…,r} and {r+1,…,n} with relative orders of w_1 and w_2 and
D_L(u)={s_k∈ S | u^-1(k+1)<u^-1(k)}.
The description of the descent set in part (2) of Lemma <ref> is equivalent to saying that s_k is a left descent of u if and only if the node in the k-th row is to the right of the node in the (k+1)-th row in the permutation matrix of u. The proof of Theorem <ref> follows from showing that avoiding the split patterns 3|12 and 23|1 with respect to position r is equivalent to S(v)∖{s_r}⊆ D_L(u) using Lemma <ref>. We illustrate this connection with the following examples:
Let w=17264|5938 and note that w avoids 3|12 and 23|1 with respect to position r=5. If w=vu is the parabolic decomposition with respect to J=S∖{s_5}, then v=12467|3589 and u=15243|7968 as seen in Figure <ref>. Lemma <ref> says that
S(v)∖{s_5}={s_3,s_4,s_6} and D_L(u)={s_3,s_4,s_6,s_8}
and hence S(v)∖{s_5}⊆ D_L(u).
If we take the parabolic decomposition of w=1726|45938 with respect to J=S∖{s_4}, then w contains 3|12 with respect to r=4. In this case v=1267|34589 and u=1423|67958 (See Figure <ref>). Lemma <ref> says that
S(v)∖{s_4}={s_3,s_5,s_6} and D_L(u)={s_3,s_5,s_8}
and hence S(v)∖{s_4}⊈ D_L(u).
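The recipe in Lemma <ref> is easy to implement, which gives an independent check of the two examples above. The Python sketch below is an illustration only; positions and generators are 1-indexed, and the left descent set is computed exactly as in part (2) of the lemma.

```python
def parabolic_factor(w, r):
    # parabolic decomposition w = v u with respect to J = S \ {s_r}, following the lemma
    n = len(w)
    w1, w2 = w[:r], w[r:]
    v = tuple(sorted(w1)) + tuple(sorted(w2))
    rank1 = {x: i + 1 for i, x in enumerate(sorted(w1))}
    rank2 = {x: i + 1 + r for i, x in enumerate(sorted(w2))}
    u = tuple(rank1[x] for x in w1) + tuple(rank2[x] for x in w2)
    Sv = set(range(v[r], v[r - 1]))                        # S(v) = {k : v(r+1) <= k < v(r)}
    pos = {x: i for i, x in enumerate(u)}
    DLu = {k for k in range(1, n) if pos[k + 1] < pos[k]}  # left descents of u
    return v, u, Sv, DLu

w = (1, 7, 2, 6, 4, 5, 9, 3, 8)
for r in (5, 4):
    v, u, Sv, DLu = parabolic_factor(w, r)
    # prints v, u, S(v) \ {s_r}, D_L(u), and whether the BP condition holds
    print(r, v, u, sorted(Sv - {r}), sorted(DLu), (Sv - {r}) <= DLu)
```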
We remark that an explicit formula for the number of permutations w∈_n which avoid 3|12 and 23|1 with respect to a given position r is calculated by Grigsby and Richmond in <cit.>. The connection between Theorems <ref> and <ref> is the following proposition.
If w∈_n avoids the patterns 3412, 52341, and 635241, then there exists s_r∈ S(w) where w avoids the split patterns 3|12 and 23|1 with respect to position r.
One can prove Proposition <ref> by contradiction and we refer the reader to <cit.> for more details.
Proposition <ref> implies that if w∈_n avoids the patterns 3412, 52341, and 635241, then w has a Grassmannian BP decomposition w=vu with respect to some r<n. It can be shown that u also avoids these patterns, and hence we can iterate this process, yielding a complete BP decomposition of w. For more details, see <cit.>.
If w avoids 3412, 52341, and 635241, then we can construct complete BP decompositions of w by finding positions such that w avoids 3|12 and 23|1 and then iterating the process on the factor u.
Observe that w=513462 avoids 3412, 52341, and 635241 and hence w has a complete BP decomposition. Complete BP decompositions of w correspond to sequences of “splittings" along lines that avoid 3|12 and 23|1. For example, we can split w along the sequence of positions (3,2,4,1,5) as in Figure <ref>.
Next we use Theorem <ref> to give a new proof of Gasharov's Theorem that Poincaré polynomials of smooth permutations are products of q-integers. The following proposition is the “forward" direction of Theorem <ref>.
Let w∈_n. If w avoids 3412 and 4231, then either w or w^-1 has a BP decomposition vu with respect to J=S∖{s_n-1} where
P_w(q)=[ℓ(v)+1]_q· P_u(q)
and u∈ W_J≃_n-1 also avoids 3412 and 4231.
We prove the proposition by contradiction. Let w∈_n and assume w avoids 3412 and 4231. For the sake of contradiction, suppose that both w and w^-1 do not have BP decompositions with respect to J=S∖{s_n-1}. Theorem <ref> implies that both w and w^-1 contain the split pattern 23|1 with respect to position r=n-1.
Since w^-1 corresponds to the transpose of w, we consider a “horizontal" analogue of split pattern containment in Figure <ref>. Note that it is not possible for either w or w^-1 to contain the other split pattern, 3|12, with respect to position r=n-1. Let w(d)=n and w(n)=e and consider the matrix diagram of w where we mark the nodes (d,n) and (n,e) as in Figure <ref>. These nodes divide the matrix into four regions of which we label three of them A,B, and C.
We have two cases to consider when containing the split pattern 23|1 both vertically and horizontally with respect to position r=n-1. First, if either region A or B contains no nodes, then region C must contain 2 increasing nodes, which implies that w contains 4231. Otherwise, each of regions A and B must contain at least one node which implies w contains 3412 (See Figure <ref>).
In either case, we have a contradiction and hence at least one of w or w^-1 has a BP decomposition vu with respect to J. The fact that u is smooth follows from Lemma <ref> part (2). Since s_n-1 is a leaf in the Coxeter diagram of _n, the interval [e,v]^J is a chain and hence P_v^J(q)=[ℓ(v)+1]_q. This completes the proof.
§.§ Related results on pattern avoidance
In this section we state two analogues of the following theorem, which summarizes various characterizations of smooth permutations.
Let w∈_n. Then the following are equivalent.
* w avoids 3412 and 4231.
* X(w) is an iterated fiber-bundle of Grassmannian varieties.
* The interval [e,w] is rank symmetric.
Theorem <ref> follows from the combined works of Lakshmibai-Sandhya <cit.>, Ryan <cit.>, and Carrell <cit.>. Note that Theorem <ref> can be viewed as an analogue of the equivalence of parts (1) and (2) in Theorem <ref> where we replace part (2) with an iterated fiber-bundle of Grassmannian Schubert varieties. Each Grassmannian has a co-dimension one Schubert variety which is unique in the sense that, as a Weil divisor, it generates the Picard group of the Grassmannian. We call this variety a Grassmannian Schubert divisor. The following theorem is another analogue of the equivalence of parts (1) and (2) and is proved by Azam in <cit.>.
Let w∈_n. Then the following are equivalent.
* w avoids 3412, 52341, 52431, and 53241.
* X(w) is an iterated fiber bundle of Grassmannian varieties or Grassmannian Schubert divisors.
The class of permutations in Theorem <ref> is larger than the class of smooth permutations, but is contained in the class of permutations that have complete BP decompositions. Note that Grassmannian Schubert divisors are almost always singular varieties. One consequence of Theorem <ref> is that the generating function for permutations that avoid 3412, 52341, 52431, and 53241 can be calculated using labelled staircase diagrams. This calculation uses “Catalan type" objects similar to those used to prove Theorem <ref>. For more details see <cit.>.
The next theorem is an analogue of the equivalence of parts (1) and (3) in Theorem <ref>. Given a poset P, the dual poset P^* is obtained by reversing the partial order. We say P is self-dual if P≃ P^* as posets. It is easy to check that any graded self-dual poset is rank symmetric. However the converse may not be true. The next theorem is proved by Gaetz and Gao in <cit.>.
Let w∈_n. Then the following are equivalent.
* w avoids 3412, 4231, 34521, 45321, 54123, and 54312.
* The interval [e,w] is self-dual (as a poset).
The authors refer to permutations characterized in Theorem <ref> as “polished" permutations since the condition of self-duality on the interval [e,w] is sufficient for smoothness, but not necessary.
§.§ Affine permutations
In this section we discuss applications of BP decompositions to the group of affine permutations denoted _n. An affine permutation is a bijection w:ℤ→ℤ such that
* w(i+n)=w(i)+n for all i∈ℤ and
* ∑_i=1^n w(i)=n(n+1)/2.
Note that a regular permutation extends to an affine permutation by applying part (1) above to the one-line notation sequence w(1)⋯ w(n).
Similarly, any affine permutation is uniquely determined by the “window" of values
⋯ w(-1), w(0),[w(1),w(2),⋯, w(n)],w(n+1),w(n+2),⋯
by the same extension. For example [4,2,3,1], [8,1,-2,3], and [-7, 7,6,4] are all examples of affine permutations in _4.
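Both defining conditions can be tested directly from the window. The Python sketch below is illustrative only; it checks that the window values are distinct modulo n (equivalent to the periodic extension being a bijection) and sum to n(n+1)/2, and evaluates the extension of [8,1,-2,3] at a few integers.

```python
def is_affine_window(window):
    # an affine permutation is determined by a window whose values are
    # distinct mod n and sum to n(n+1)/2
    n = len(window)
    return len({x % n for x in window}) == n and sum(window) == n * (n + 1) // 2

def w_of(window, i):
    # value of the periodic extension w(i + n) = w(i) + n at any integer i
    n = len(window)
    q, rem = divmod(i - 1, n)
    return window[rem] + q * n

print([is_affine_window(w) for w in ([4, 2, 3, 1], [8, 1, -2, 3], [-7, 7, 6, 4])])   # [True, True, True]
print([w_of([8, 1, -2, 3], i) for i in range(1, 9)])                                  # [8, 1, -2, 3, 12, 5, 2, 7]
```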
The group of affine permutations is an infinite Coxeter group with generating set S={s_0,s_1,…,s_n-1} and Coxeter graph:
s_1 - s_2 - ⋯ - s_n-1, together with the vertex s_0 joined to both s_1 and s_n-1, forming an n-cycle.
The group of affine permutations is referred to as the Coxeter group of affine type A. Note that all maximal parabolic subgroups of _n are isomorphic to the finite permutation group _n.
As with finite permutations, (rational) smoothness is closely tied to pattern avoidance and was studied by Billey and Crites in <cit.>. We say an affine permutation w contains the (finite) pattern u∈_k if there is a sequence (i_1<⋯ <i_k) such that w(i_1)⋯ w(i_k) has the same relative order as u. Note that the sequence (i_1<⋯ <i_k) does not necessarily have to be contained in the integers [n]={1,…,n}. If w does not contain u, we say it avoids the pattern u. The following theorem is proved by Billey and Crites in <cit.>.
The affine permutation w∈_n is rationally smooth if and only if one of the following hold:
* w avoids the patterns 3412 and 4231 or
* w is a twisted spiral permutation (see <cit.>).
It is shown by Mitchell in <cit.> that if w is a twisted spiral permutation, then the Schubert variety X(w) is not smooth. Hence smoothness is not equivalent to rational smoothness for affine permutations. One technical result used to prove Theorem <ref> is the following analogue of Theorem <ref>.
If w ∈_n avoids 3412 and 4231, then either w or w^-1 has a Grassmannian BP decomposition vu where both v and u belong to proper parabolic subgroups of _n.
Billey and Crites show that, for the BP decomposition vu found in Proposition <ref>, the Poincaré polynomial P_v^J(q) is a q-binomial and hence palindromic. They also show that u is a smooth (finite) permutation. So Theorem <ref> implies P_w(q) is palindromic.
The next theorem was partially conjectured in <cit.> and proved by Richmond and Slofstra in <cit.>.
Let w∈_n. Then the following are equivalent:
* X(w) is smooth.
* w avoids the patterns 3412 and 4231.
* w has a Grassmannian BP decomposition vu with respect to some J where both v and u belong to proper parabolic subgroups of _n.
Furthermore, v is the maximal element of W_S(v)∩ W^J and u is a smooth permutation in W_S(u).
The proof of Theorem <ref> is similar to the proof of Theorem <ref>. We remark that part (3) of Theorem <ref> implies that for an affine permutation w∈_n, the affine Schubert variety X(w) is smooth if and only if it is an iterated fiber bundle of Grassmannian varieties.
Theorem <ref> also implies an analogue of Corollary <ref> on staircase diagrams of affine type A which we state below. Since _n is an infinite Coxeter group, we say a staircase diagram is spherical if for each B∈, the parabolic subgroup W_B is a finite Coxeter group. The next statement is from <cit.>.
The maximal labelling gives a bijection between spherical staircase diagrams over the Coxeter graph of _n and smooth affine permutations in _n.
One immediate consequence of Corollary <ref> is that the number of affine permutations that avoid 3412 and 4231 in _n is finite. This fact also follows directly from results in <cit.>. For single patterns, Crites proved in <cit.> that the number of affine permutations in _n avoiding a pattern u is finite if and only if u contains 321.
Staircase diagrams of affine type A can be thought of as staircase diagrams of finite type A that “loop" back on themselves since the Coxeter graph is a cycle. Figure <ref> gives an example of an affine staircase diagram. For more details see <cit.>.
As with Theorems <ref> and <ref> we can use staircase diagrams to enumerate smooth affine permutations. The following is proved in <cit.>.
Let A(t):=∑ã_n t^n where ã_n denotes the number of smooth affine permutations in _n. Then
A(t) = (P(t) - Q(t) √(1-4t))/((1-t)(1-4t)(1-6t+8t^2-4t^3))
where
P(t) = (1-4t)(2-11t+18t^2-16t^3+10t^4-4t^5)
and
Q(t) = (1-t)(2-t)(1-6t+6t^2).
§ FUTURE DIRECTIONS
We state some open questions and possible future directions for the study of Coxeter groups in relation to BP-decompositions.
While rational smoothness for Coxeter groups of finite Lie type has been extensively studied, characterizations of rationally smooth elements for arbitrary Coxeter groups are relatively unknown. For example, if w is rationally smooth, does w have a Grassmannian BP decomposition? Does Theorem <ref> hold for inversion hyperplane arrangements of rationally smooth elements in arbitrary Coxeter groups? We remark that Richmond and Slofstra study rationally smooth elements in Coxeter groups that avoid certain rank 3 parabolic subgroups in <cit.>.
Let W be a Coxeter group and for u≤ v∈ W, define the Poincaré polynomial of the interval
P_u,w(q):=∑_z∈[u,w] q^ℓ(z)-ℓ(u).
If u=e, then this is the usual Poincaré polynomial P_w(q). For example, if u=s_2 and v=s_2s_1s_3s_2, then Figure <ref> shows the interval [u,v] and
P_u,v(q)=1+4q+4q^2+q^3.
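For small examples such interval polynomials are easy to compute directly. The Python sketch below is illustrative only; it recovers P_u,v(q)=1+4q+4q^2+q^3 for u=s_2 and v=s_2s_1s_3s_2 by enumerating the interval [u,v] in S_4 with the sorted-prefix criterion for Bruhat order.

```python
from itertools import permutations

def length(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def leq(u, w):
    # Ehresmann's sorted-prefix criterion for Bruhat order in type A
    return all(a <= b for k in range(1, len(w))
               for a, b in zip(sorted(u[:k]), sorted(w[:k])))

u, v = (1, 3, 2, 4), (3, 4, 1, 2)          # u = s2 and v = s2 s1 s3 s2 in one-line notation
coeffs = [0] * (length(v) - length(u) + 1)
for z in permutations((1, 2, 3, 4)):
    if leq(u, z) and leq(z, v):
        coeffs[length(z) - length(u)] += 1
print(coeffs)                              # [1, 4, 4, 1]
```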
We ask: under what conditions does the polynomial P_u,v(q) factor nicely? If so, does the interval [u,v] also decompose as a poset? Is there a generalization of the characterization in Theorem <ref> to arbitrary intervals [u,v]? We remark that the poset structure of the interval [u,v] has connections to Kazhdan-Lusztig theory and Richardson varieties <cit.>.
Let w=vu be a parabolic decomposition with respect to J. In Remark <ref> we describe the coset intervals [e,w]∩ v_0W_J for v_0 ∈ [e,v]^J and show they are poset isomorphic to [e,u_0] for some u≤ u_0≤ u' where u' is the maximal element of [e,w]∩ W_J.
We ask if there is a nice description of the set of all u_0∈[u,u'] that appear for some v_0∈[e,v]^J. If u=u', then w=vu is a BP decomposition by Theorem <ref>. Note that not every element of [u,u'] may appear in this set.
In Sections <ref> and <ref>, we see several cases of BP-decompositions and staircase diagrams used to enumerate classes of (rationally) smooth elements. To what extent can these structures help with enumerating other classes of Coxeter group elements? For example, can we use BP decompositions to calculate the generating series for the number of permutations that avoid 3412, 52341, and 635241 from Theorem <ref>?
|
http://arxiv.org/abs/2409.03321v1 | 20240905075146 | Willmore-type inequality in unbounded convex sets | [
"Xiaohan Jia",
"Guofang Wang",
"Chao Xia",
"Xuwen Zhang"
] | math.DG | [
"math.DG",
"53C42, 53C20"
] |
Willmore inequality]Willmore-type inequality in unbounded convex sets
Jia]Xiaohan Jia
[X.J]School of Mathematics
Southeast University
211189, Nanjing, P.R. China
[email protected]
XJ is supported by the NSFC (Grant No. 12401249) and Natural Science Foundation of Jiangsu Province, China (Grant No. BK20241258)
Wang]Guofang Wang
[G.W]Mathematisches Institut
Universität Freiburg
Ernst-Zermelo-Str.1
79104
Freiburg
Germany
[email protected]
Xia]Chao Xia
[C.X]School of Mathematical Sciences
Xiamen University
361005
Xiamen
P.R. China
[email protected]
CX is supported by the NSFC (Grant No. 12271449, 12126102)
Zhang]Xuwen Zhang
[X.Z]Mathematisches Institut
Universität Freiburg
Ernst-Zermelo-Str.1
79104
Freiburg
Germany
[email protected]
§ ABSTRACT
In this paper we prove the following Willmore-type inequality: On an unbounded closed convex set K⊂^n+1 ( n≥ 2), for any embedded hypersurface ⊂ K with boundary ⊂ K satisfying certain contact angle condition, there holds
1/n+1∫_H^n A≥AVR(K)^n+1.
Moreover, equality holds if and only if is a part of a sphere and K∖ is a part of the solid cone determined by . Here is the bounded domain enclosed by and K, H is the normalized
mean curvature of , and AVR(K) is the asymptotic volume ratio of K.
We also prove an anisotropic version of this Willmore-type inequality.
As a special case, we obtain a Willmore-type inequality for anisotropic capillary hypersurfaces in a half-space.
MSC 2020: 53C42, 53C20 .
Keywords: Willmore inequality, free boundary hypersurface, capillary hypersurface, anisotropic mean curvature,
asymptotic volume ratio
§ INTRODUCTION
The classical Willmore inequality Willmore68,Chen71 states that for a bounded domain ⊂^n+1 with smooth boundary, it holds that
1/n+1∫_H^n A≥^n+1
,
where H is the normalized mean curvature of and ^n+1 is the volume of unit ball ^n+1.
Moreover, equality in (<ref>) holds if and only if is a round ball.
Recently, Agostiniani-Fogagnolo-Mazzieri <cit.> proved the following Willmore-type inequality in Riemannian manifolds with nonnegative Ricci curvature.
[<cit.>*Theorem 1.1]
Let (M^n+1,g) (n≥2) be a complete Riemannian manifold with nonnegative Ricci curvature and ⊂ M a bounded open set with smooth boundary.
Then
1/n+1∫_
H^n A≥AVR(g)^n+1,
where AVR(g) is the asymptotic volume ratio of M. Moreover, if AVR(g)>0, equality holds if and only if M∖ is isometric to ([r_0,∞)×, r^2+(r/r_0g_)^2) with
r_0
=(/AVR(g)^n)^1/n.
Soon after Wang <cit.> gave a short proof of Theorem <ref>, which is based on Heintze-Karcher's comparison theorem in Riemannian geometry.
In this paper, we study a similar problem in Euclidean unbounded closed convex sets. Let us first introduce some terminologies and properties of unbounded closed convex sets, which are needed to state our main result.
Let K⊂^n+1 ( n≥2) be an unbounded closed convex set.
We denote by Reg( K) the regular part of K, that is, the set of points near which K can be locally written as a C^1-hypersurface, and Sing( K)= K∖ Reg( K) the singular part of K.
For x∈ Reg( K), we denote by N̅(x) the outward unit normal to K at x. Let B_R(x) denote the open ball of radius R centered at x.
We define
AVR(K)
=lim_R→∞ |B_R(0)∩K|/^n+1R^n+1.
For the well-definedness of AVR(K) and its generalization see Appendix <ref> below. Moreover,
It is not difficult to check that
AVR(K)
=lim_R→∞ |B_R(p)∩K|/^n+1R^n+1,
for any p∈^n+1.
Our main result in this paper is the following Willmore-type inequality in unbounded convex sets.
Let K⊂^n+1 be an unbounded closed convex set.
Let ⊂K be a compact, embedded C^2-hypersurface with boundary ⊂ Reg( K) intersecting K transversally such that
ν(x),N̅(x)≥0, for any x∈.
Here ν denotes the outward unit normal to with respect to , the bounded domain enclosed by and K.
Then there holds
1/n+1∫_H^n A≥AVR(K)^n+1.
Moreover, equality in (<ref>) holds if and only if is a part of a sphere and
K∖ is a part of the solid cone determined by .
Our theorem can be viewed as an extrinsic counterpart of Theorem <ref>.
The approach to this theorem is inspired by Wang's short proof <cit.> of Theorem <ref> based on Heintze-Karcher's comparison <cit.>, and also inspired by our recent works on Heintze-Karcher-type inequalities in various circumstances, see JWXZ22,JWXZ23,JWXZ23b.
We refer the interested reader to <cit.>*Theorem 1.2 for the Heintze-Karcher-type inequality in arbitrary convex sets, which could be viewed as a "dual" version of the above Willmore inequality in unbounded convex sets.
We would like to call attention to a result by Choe-Ghomi-Ritoré <cit.>, which says that for a compact free boundary hypersurface Σ⊂ℝ^n+1 outside a convex set, it holds that
1/n+1∫_Σ H^n dA≥1/2|𝔹^n+1|, with equality if and only if Σ is a hemisphere lying in a half-space. This inequality leads to an optimal relative isoperimetric inequality in <cit.>. See also <cit.>.
In contrast, all our results in this paper hold for compact hypersurfaces in a convex set.
In view of our previous work <cit.>, it is not surprising that we could in fact establish the following Willmore-type inequality in unbounded convex sets, with anisotropy taken into account. Here anisotropy means a smooth positive function F: 𝕊^n→ℝ_+ on the unit sphere (𝕊^n,σ) such that
(∇^2 F+F σ) is positive definite. In literature F is usually called a Minkowski norm. (F induces a norm if F is even, i.e, F(-x)=F(x). In the paper we do not require the evenness.)
With respect to a Minkowski norm F,
one can define
(unit) Wulff ball ^F, asymptotic volume ratio AVR_F for unbounded convex sets, the anisotropic unit normal ν_F and the normalized anisotropic mean curvature H^F for a hypersurface. For more definitions and notation see
Section <ref> and also Appendix <ref> below.
Let K⊂ℝ^n+1 be an unbounded closed convex set.
Let Σ⊂K be a compact, embedded C^2-hypersurface with boundary ∂Σ⊂ Reg(∂ K) intersecting ∂ K transversally such that
⟨ν_F(x),N̅(x)⟩≥0, for any x∈∂Σ.
Then there holds
1/n+1∫_Σ F(ν)|H^F|^n dA≥AVR_F(K)|𝒲^F|.
Moreover, equality in (<ref>) holds if and only if Σ is a part of a Wulff shape and ∂ K∖Ω is a part of the solid cone determined by Σ.
Note that Theorem <ref> is a special case of Theorem <ref> with F(ξ)=ξ.
The Willmore-type inequality (<ref>) may be applied to prove relative isoperimetric inequality in unbounded convex sets, as Choe-Ghomi-Ritore <cit.> did outside convex sets. We remark that the relative isoperimetric-type inequality in unbounded convex sets in the isotropic case, i.e. F(ξ)=|ξ|, has been proved by Leonardi-Ritoré-Vernadakis <cit.>.
When K=ℝ^n+1_+, the upper half-space, by choosing F(ξ)=|ξ|-cosθ_0⟨ξ, E_n+1⟩ for some θ_0∈ (0, π), we get from Theorem <ref> the following Willmore-type inequality for capillary hypersurfaces.
Given θ_0∈(0,π).
Let Σ⊂ℝ_+^n+1 be a compact, embedded C^2-hypersurface with boundary ∂Σ⊂∂ℝ_+^n+1 intersecting ∂ℝ^n+1_+ transversally such that
⟨ν(x), -E_n+1⟩≥-cosθ_0, for any x∈∂Σ.
Then there holds
1/n+1∫_Σ(1-cosθ_0⟨ν, E_n+1⟩)|H|^n dA≥|B_1,θ_0|,
where
B_1,θ_0=B_1(-cosθ_0 E_n+1)∩ℝ_+^n+1.
Moreover, equality in (<ref>) holds if and only if Σ is a θ_0-capillary spherical cap in ℝ_+^n+1.
More generally, by choosing F(ξ) in Theorem <ref> to be F(ξ)+ω_0⟨ξ,E^F_n+1⟩ for some ω_0, we get the following Willmore-type inequality for anisotropic capillary hypersurfaces.
Given ω_0∈(-F(E_n+1),F(-E_n+1)).
Let Σ⊂ℝ^n+1_+ be a compact, embedded C^2-hypersurface with boundary ∂Σ⊂∂ℝ_+^n+1 intersecting ∂ℝ^n+1_+ transversally such that
⟨ν_F(x), -E_n+1⟩=ω(x)≥ω_0, for any x∈∂Σ.
Then there holds
1/n+1∫_Σ(F(ν)+ω_0⟨ν,E^F_n+1⟩)|H^F|^n dA≥|𝒲^F_1,ω_0|,
where 𝒲^F_1,ω_0=𝒲^F_1(ω_0 E^F_n+1)∩ℝ_+^n+1.
Moreover, equality in (<ref>) holds if and only if Σ is an anisotropic ω_0-capillary Wulff shape in ℝ_+^n+1.
In fact, as already observed in De Phillipis-Maggi's work <cit.>, anisotropic capillary problems with respect to F can be regarded as anisotropic free boundary problems with respect to another Minkowski norm F̃ in the half-space.
Hypersurfaces with the aforementioned special boundary conditions naturally arise from Calculus of Variations and are nowadays of particular interest.
We refer the readers to <cit.> and the references therein for a short historical introduction.
For the specialty of the half-space, we could in fact prove Theorem <ref> in the half-space case in an alternative way.
This is done by a geometric observation on the Gauss image of , stated in Proposition <ref>.
Once this is established, the rest of the proof of the Willmore inequality in the half-space then follows from the classical argument based on an area formula.
The rest of the paper is organized as follows.
In Section <ref>, we collect basic knowledges on Minkowski norm, anisotropic geometry, and anisotropic capillary hypersurfaces.
In Section <ref>, we prove Theorem <ref>.
In Section <ref>, we give an alternative proof for Theorem <ref> in the half-space case and then apply it to anisotropic capillary hypersurfaces.
§ PRELIMINARIES
§.§ Notations
The Euclidean metric, scalar product, and Levi-Civita connection of the Euclidean space ^n+1 are denoted respectively by g_ euc,<·,·>, and D.
When considering the topology of ^n+1, we adopt the following notations:
for a set E⊂^n+1,
we denote
by E the topological closure of E, by int(E) the topological interior of E, and by E the topological boundary of E in ^n+1.
Regarding the use of the symbol ·,
if we plug in a vector e∈^n+1, then e denotes the Euclidean length of e.
If we plug in a k-dimensional submanifold M⊂^n+1, then we write
M
ℋ^k(M),
where ℋ^k is the k-dimensional Hausdorff measure in ^n+1.
In particular, if we plug in an open set of ^n+1, then we mean
ℒ^n+1().
To avoid ambiguity, we also use Vol(·) to denote the ^n+1-measure of certain sets in ^n+1, see for example (<ref>).
§.§ Minkowski norm and anisotropic geometry
Let F: 𝕊^n→ℝ_+ be a smooth positive function on the standard sphere (𝕊^n,σ) such that
A_F := ∇^2 F+F σ>0,
where ∇ denotes the Levi-Civita connection corresponding to σ.
A Minkowski norm is the one homogeneous extension of any such F to the whole ^n+1, namely,
F(ξ)=|ξ|F(ξ/|ξ|) for ξ≠ 0 and F(0)=0.
Note that condition (<ref>) is equivalent to saying that 1/2F^2 is uniformly convex, in the sense that
D^2(1/2F^2)(ξ)>0, ∀ξ∈^n+1.
Let Φ be the Cahn-Hoffman map associated to F, which is given by
Φ(z)=DF(z)=∇F(z)+F(z)z, ∀z ∈𝕊^n.
The image Φ(𝕊^n) of Φ is called (unit) Wulff shape.
The dual Minkowski norm of F, denoted by F^o, is given by
F^o(x)=sup{<x,z>/F(z)| z∈^n}.
We collect some well-known facts on F, F^o, and Φ, see e.g., <cit.>*Proposition 2.1.
For any z∈^n+1∖{0}
the following statements hold for a Minkowski norm.
(i) F^o(tz)= t F^o(z) , for any t>0.
(ii) F^o(x+y)≤ F^o(x)+F^o(y), for x, y ∈^n+1.
(iii) <Φ(z),z>=F(z).
(iv) F^o(Φ(z))=1.
(v) The following Cauchy-Schwarz inequality holds:
z, ξ≤ F^o(z)F(ξ), ∀ξ∈^n+1.
(vi) The Wulff shape Φ(𝕊^n)={x∈^n+1|F^o(x)=1}.
We denote the Wulff ball of radius r and centered at x_0∈^n+1 by
_r^F(x_0)
={x∈^n+1|F^o(x-x_0)<r},
and the corresponding Wulff shape _r^F(x_0) is given by
_r^F(x_0)
={x∈^n+1|F^o(x-x_0)=r}.
We also use ^F and ^F_R to abbreviate ^F_1(0) and ^F_R(0).
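As a concrete illustration (this example is our own and is only meant to make the above objects tangible), take the ellipsoidal norm F(ξ)=√(⟨ Aξ,ξ⟩) for a symmetric positive definite matrix A. Then 1/2F^2(ξ)=1/2⟨ Aξ,ξ⟩, so D^2(1/2F^2)=A>0 and the convexity condition (<ref>) holds. The Cahn-Hoffman map is Φ(z)=DF(z)=Az/F(z), the dual norm is F^o(x)=√(⟨ A^-1x,x⟩), and the unit Wulff shape is the ellipsoid {x∈ℝ^n+1: ⟨ A^-1x,x⟩=1}; indeed F^o(Φ(z))=√(⟨ A^-1Az,Az⟩)/F(z)=F(z)/F(z)=1, in accordance with Proposition <ref>(iv). When A is the identity one recovers the Euclidean norm, Φ(z)=z, and the Wulff shape is the unit sphere 𝕊^n.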
Let ⊂^n+1 be a C^2-hypersurface and ν a unit normal field of .
The anisotropic normal of with respect to ν and F is given by
ν_F=Φ(ν)=∇F(ν)+F(ν)ν,
and the anisotropic principal curvatures {κ_i^F}_i=1^n of Σ with respect to ν and F are given by the eigenvalues of the anisotropic Weingarten map
dν_F=A_F(ν)∘ dν: T_pΣ→ T_pΣ.
The eigenvalues are real since A_F is positive definite and symmetric.
Let
H^F=1/n∑_i=1^nκ_i^F and H^F_n=∏_i=1^nκ_i^F
denote the normalized anisotropic mean curvature and the anisotropic Gauss-Kronecker curvature of Σ respectively.
It is easy to check that the anisotropic principal curvatures of ^F_r(x_0) are 1/r, since
ν_F(x)=x-x_0/r, on ^F_r(x_0).
We record
the following very useful anisotropic angle comparison principle.
Let x,z∈^n be two distinct points and y∈^n lie in a length-minimizing geodesic joining x and z in ^n,
then we have
<Φ(x),z>≤<Φ(y),z>.
Equality holds if and only if x=y.
§.§ Anisotropic capillary hypersurfaces in a half-space
Let us first recall the definition of anisotropic capillary hypersurface in the half-space.
Given _0∈(-F(E_n+1),F(-E_n+1)), any hypersurface ⊂^n+1_+ is said to be anisotropic _0-capillary in ^n+1_+ if it intersects ^n+1_+ transversally with
<ν_F(x),-E_n+1>≡_0,∀x∈.
In particular, is said to be anisotropic free boundary if it is anisotropic _0-capillary with _0=0.
Here we record some facts concerning anisotropic capillary hypersurfaces in the half-space.
Let Σ be a hypersurface in ℝ^n+1_+ which meets ∂ℝ^n+1_+ transversally,
and define a function on ∂Σ by ω(x):=⟨ν_F(x),-E_n+1⟩. Then for any x∈∂Σ,
ω(x)∈(-F(E_n+1),F(-E_n+1)).
Define a constant vector E^F_n+1∈ℝ^n+1 as in <cit.>,
E^F_n+1
={
[ -Φ(-E_n+1)/F(-E_n+1), if _0>0,; Φ(E_n+1)/F(E_n+1), if _0<0,; E_n+1, if _0=0,; ]
.
whose definition is strongly related to the Cauchy-Schwarz inequality.
When _0=0, one can also define E_n+1^F as Φ(E_n+1)/F(E_n+1) or -Φ(-E_n+1)/F(-E_n+1).
Note that E^F_n+1, E_n+1=1, and when F is the Euclidean norm, E^F_n+1 is indeed E_n+1.
For ω_0∈(-F(E_n+1),F(-E_n+1)), there holds
F(z)+ω_0⟨z,E^F_n+1⟩>0, for any z∈𝕊^n.
We have the following integral formula.
Let be a compact, embedded C^2-hypersurface in ^n+1_+ which intersects ^n+1_+ transversally.
Denote by the bounded domain enclosed by and ^n+1_+.
Then
there holds
∫_ν,E_n+1^F A
=∫_∩^n+1_+ A.
Notice that div(E_n+1^F)=0.
Integrating this over , using integration by parts, then invoking the fact that <E_n+1,E^F_n+1>=1, we obtain the assertion.
As mentioned in the introduction, we shall interpret the anisotropic capillary problem as an anisotropic free boundary problem.
This is done by introducing the following Minkowski norm.
Given _0∈(-F(E_n+1),F(-E_n+1)).
Let be a compact, embedded C^2 _0-capillary hypersurface in ^n+1_+.
Then is an anisotropic free boundary hypersurface in ^n+1_+ with respect to the Minkowski norm F̃, defined by
F̃(ξ)
F(ξ)+_0ξ,E_n+1^F, ∀ξ∈^n+1.
Moreover,
* A_F=A_F̃ on Σ, and hence the anisotropic curvatures of Σ w.r.t. to F and F̃ are the same.
* The unit Wulff shapes of associated to F and F̅ are the same up to a translation, precisely,
^F̃=_1^F(_0E_n+1^F).
By direct computation, we see that
DF̃(ξ)
=D F(ξ)+_0E_n+1^F.
It follows that for every p∈⊂^n+1_+, there holds
<ν_F̃(p),-E_n+1>
=<ν_F(p)+_0E_n+1^F,-E_n+1>
=_0-_0=0,
where we have used the fact that <E_n+1,E^F_n+1>=1.
This proves the first part of the assertion.
It is direct to see that A_F(p)=A_F̃(p) for any p∈, since we have
D^2F̃(ξ)
=D^2F(ξ).
The last part of the assertion follows from (<ref>), and the fact that the unit Wulff shape with origin as its center could be characterized by ^F̃=DF̃(^n).
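As a quick sanity check (our own remark, not part of the original argument), consider the isotropic case F(ξ)=|ξ| with ω_0=-cosθ_0∈(-1,1). Then E^F_n+1=E_n+1 and F̃(ξ)=|ξ|-cosθ_0⟨ξ,E_n+1⟩, so DF̃(ξ)=ξ/|ξ|-cosθ_0E_n+1 and the unit Wulff shape associated to F̃ is DF̃(𝕊^n)=𝕊^n-cosθ_0E_n+1, the unit sphere centered at -cosθ_0E_n+1; this is exactly 𝒲^F_1(ω_0E^F_n+1)=∂B_1(-cosθ_0E_n+1), as asserted above. Likewise, a θ_0-capillary hypersurface, i.e. one with ⟨ν,-E_n+1⟩=-cosθ_0 on ∂Σ, satisfies ⟨ν_F̃,-E_n+1⟩=-⟨ν,E_n+1⟩+cosθ_0=0, so it is free boundary with respect to F̃.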
Since F is in general not an even function on ^n, for later purpose, we need the following Minkowski norm. Define F_∗ by
F_∗(z)=F(-z), z∈^n.
Geometrically speaking, F_∗ is induced by the convex body which is centrally symmetric to ^F. It is clear that F_∗ also induces a smooth Minkowski norm, still denote by F_∗, and we
denote by Φ_∗ and F_∗^o the Cahn-Hoffman map and the dual Minkowski norm associated to F_∗.
There hold that
*
F_∗^o(x) =F^o(-x), ∀x∈^n+1.
*
Φ_∗(z) =-Φ(-z), ∀z∈^n.
* For any x∈,
let {κ_i^F(x)}_i=1^n denote the anisotropic principal curvatures of at x with respect to ν and F.
Then {-κ_i^F(x)}_i=1^n
are the anisotropic principal curvatures of at x with respect to the unit inner normal -ν and F_∗, which we denote by {κ_i^F_∗(x)}_i=1^n.
To prove (<ref>), we verify by definition.
Precisely, thanks to (<ref>), we have
F^o_∗(x)
=sup{<x,z>/F_∗(z)| z∈^n}
= sup{<x,z>/F(-z)| z∈^n}
= sup{<-x,-z>/F(-z)| -z∈^n}
=F^o(-x).
(<ref>) follows directly by
differentiating both sides of the equality F_∗(z)=F(-z).
To prove (3), we fix any x∈ and let {e_i(x)}_i=1^n denote the anisotropic principal vector at x∈ corresponding to κ_i^F, that is,
D_e_i(x)Φ(ν(x))
=κ^F_i(x)e_i(x).
Letting z=-ν(x) in (<ref>) then
differentiating with respect to e_i(x), we obtain
D_e_i(x)Φ_∗(-ν(x))
=-D_e_i(x)Φ(ν(x))
=-κ^F_i(x)e_i(x),
thereby showing that {e_i(x)}_i=1^n are also the anisotropic principal vectors at x∈, corresponding to {κ_i^F_∗(x)}_i=1^n, with
κ_i^F_∗(x)
=-κ_i^F(x), ∀i=1,…,n.
This completes the proof.
We close this subsection by stating the following geometric result, which follows
simply by using integration by parts for the divergence of the position vector field.
We have
∫_^F∩^n+1_+F(ν) A
=(n+1)^F∩^n+1_+
and
∫_^F_1,_0∩^n+1_+(F(ν)+_0<ν,E_n+1^F>)A
=(n+1)_1,_0^F,
where ^F_1,_0=^F_1(_0 E^F_n+1)∩_+^n+1.
(<ref>) follows from integration by parts of the divergence of the position vector field. (<ref>)
follows from plugging F̃ given by (<ref>) into (<ref>).
§ WILLMORE INEQUALITY IN UNBOUNDED CONVEX SETS
As in <cit.>, we introduce the following flow which generates parallel hypersurfaces in the anisotropic free boundary sense, defined by
ζ_F(x,t)=x+tΦ(ν(x)), (x,t)∈Σ×ℝ_+=:Z.
In the rest of the section,
we write ^F_r,K=^F_r(0)∩K for any r>0.
Given an unbounded, closed convex sets K in ^n+1.
Let ⊂ K be a compact, embedded C^2-hypersurface with boundary ⊂ Reg( K) such that (<ref>) holds.
Denote by the bounded domain delimited by and K.
Assume that 0∈ int(∩ K).
Define a distance function on K by
d_F_∗(y,Ω)=inf_r>0{r|^F_∗_r(y)∩Ω≠∅}.
Then we have following statements.
* For any R>0, there holds
{y∈K∖Ω:d_F_∗(y,Ω) ≤R}
⊂ {ζ_F(x,t)|x∈Σ,t∈(0,min(R,τ(x))]},
where
τ is a function defined on by
τ(x)=
+∞, if κ_i^F_∗(x)≤0 for any i=1,⋯,n,
1/max_i κ_i^F_∗(x), otherwise.
* For any R>0, there holds
{ y∈K:d_F_∗( y,^F_r,K) ≤R}
=^F_r+R,K.
*
lim_R→+∞Vol({ y∈K:d_F_∗( y,) ≤R})/^FR^n+1
=AVR_F(K),
where AVR_F(K) is defined and discussed in Appendix <ref> below.
(1)
For any y∈K∖Ω satisfying d_F_∗(y,Ω)≤ R, we use a family of closed Wulff balls {^F_∗_r(y)}_r>0 to touch Ω.
Clearly there must exist x∈∂Ω and 0<r_0≤ R,
such that _r_0^F_∗(y) touches Ω for the first time at a point x.
Let ν^∗(x) be the outer unit normal of _r_0^F_∗(y) at x, and Φ_∗(ν^∗(x)) the anisotropic normal at x satisfying
Φ_∗(ν^∗(x))
=x-y/r_0.
By convexity of K and strictly convexity of _r_0^F_∗(y), x cannot be obtained at ∂Ω∖Σ, thus only the following two cases are possible:
Case 1. x∈∂Σ.
Since ^F_∗_r_0(y) touches Ω from outside at x, ν^∗(x), -ν(x) and -N̅(x) lie on the same 2-plane and moreover, ν^∗ lies in a length-minimizing geodesic on ^1 joining -ν and -N̅(x).
Thanks to Proposition <ref>, we find
Φ_∗(ν^∗(x)),-N̅(x)
≥Φ_∗(-ν(x)),-N̅(x)
=-Φ(ν(x)),-N̅(x)
=ν_F(x),N̅(x)
≥0.
On the other hand, since K is convex and y∈ K, we must have
0
≥y-x,N̅(x)=-r_0Φ_∗(ν^∗(x)), N̅(x)
= r_0Φ_∗(ν^∗(x)),-N̅(x).
Hence, the above two inequalities must hold as equalities simultaneously; that is to say, x must belong to {x∈∂Σ|ν_F(x),N̅(x)=0}, while ν^∗(x)=-ν(x), then there hold that
y=x-r_0Φ_∗(-ν)
=x+r_0Φ(ν)
=ζ_F(x,r_0),
and that Ω and ^F_∗_r_0(y) are mutually tangent at x.
Recall that κ_i^F_∗(x) denotes the anisotropic principal curvatures of at x, with respect to -ν and F_∗, we therefore find max_iκ_i^F_∗(x)≤1/r_0.
Taking the definition of τ(x) into account, this readily implies r_0≤τ(x).
Case 2. x∈ int(Σ).
In this case, it is easy to see that ν^∗(x)=-ν(x) from the first touching property. We may conduct a similar
argument as above to find that y can be written as y=ζ_F(x,r_0), with r_0≤τ(x).
Therefore, for any y∈K∖Ω, if d_F_∗(y,Ω)≤ R, we can find x∈Σ and r_0∈(0,min(R,τ(x))], such that y=ζ_F(x,r_0), which implies (<ref>).
(2) Now we show (<ref>).
Recall that we have set ^F_r,K=^F_r(0)∩ K.
Our first observation is that,
if y∈^F_r,K, it is easy to see that d_F_∗( y,^F_r,K)=0.
Since it is trivial to see that ^F_r,K⊂^F_r+R,K, to prove (<ref>), it suffices to show that
{ y∈K∖^F_r,K:d_F_∗(y,^F_r,K) ≤R}=^F_r+R,K∖^F_r,K, ∀R>0.
Claim.
For any y∈K∖^F_r,K,
let R̃=d_F_∗(y,^F_r,K),
then F^o(y)=r+R̃.
To prove the claim,
we consider the family of closed Wulff balls {^F_∗_r(y)}_r>0, as r increases, by the definition of d_F_∗, we have that
_R̃^F_∗(y) touches ^F_r,K for the first time.
Suppose that _R̃^F_∗(y) touches ^F_r,K at x, then
x∈∂^F_∗_R̃(y)∩∂^F_r,K.
Since
_R̃^F_∗(y) is strictly convex,
and K is convex,
we deduce that
x∈∂^F_∗_R̃(y)∩∂^F_r(0)∩K.
Let ν(x), ν̃(x) denote respectively the unit outer normals of ^F_∗_R̃(y) and ^F_r(0) at x.
It follows that
x-y=R̃ Φ_∗(ν(x)), x=rΦ(ν̃(x)).
If x∈∂^F_∗_R̃(y)∩∂^F_r(0)∩∂ K, we have -ν(x) lies in a length-minimizing geodesic joining ν̃(x) and N̅(x) in ^n.
By Proposition <ref>, it holds that
Φ(ν̃(x)),N̅(x)≤Φ(-ν(x)), N̅(x)=-Φ_∗(ν(x)),N̅(x).
On the other hand, it follows from (<ref>) and convexity of K that
Φ(ν̃(x)),N̅(x)
=x/r,N̅(x)≥0,
-Φ_∗(ν(x)),N̅(x)
=-x-y/R̃,N̅(x)
≤0.
Combining all above,
we thus obtain ν(x)=-ν̃(x) and
x,N̅(x)=<y,N̅(x)>=0.
If x∈∂^F_∗_R̃(y)∩∂^F_r(0)∩ int(K), it is easy to see that ν(x)=-ν̃(x).
Summarizing, in either case, we always have ν(x)=-ν̃(x), thus
Φ_∗(ν)=-Φ(-ν)=-Φ(ν̃).
Using (<ref>) again, we get
F^o(y)
= F^o(y-x +x)
= F^o(-R̃Φ_∗(ν(x))+rΦ(ν̃(x)))
= F^o((r+R̃)Φ(ν̃(x)))
=r+R̃,
which proves the claim.
Now we prove (<ref>).
"⊂": For any y∈ K∖^F_r,K satisfying R̃=d_F_∗( y,^F_r,K)≤ R, we deduce immediately from the above estimate that
F^o(y)
= r+R̃
≤r+R,
thus y∈^F_r+R,K∖^F_r,K.
"⊃": For any y∈^F_r+R,K∖^F_r,K, suppose that R̃=d_F_∗(y,^F_r,K)> R, then it holds that
F^o(y)
=r+R̃
>r+R,
which contradicts to y∈^F_r+R,K. Hence d_F_∗(y,^F_r,K)≤ R, and the proof is completed.
(3)
Since Ω is bounded, we could find r_1 and r_2 such that ^F_r_1,K⊂Ω⊂^F_r_2,K. From the definition of d_F_∗ (<ref>), we find
d_F_∗(y,^F_r_2,K)≤d_F_∗(y,)≤d_F_∗(y, ^F_r_1,K),
∀y∈K,
and it follows that
{ y∈K:d_F_∗( y,^F_r_1,K)≤R}
⊂ { y∈K:d_F_∗( y,Ω) ≤R}
⊂ { y∈K:d_F_∗( y,^F_r_2,K) ≤R}.
Dividing (<ref>) by ^FR^n+1, we get (<ref>) by using (<ref>) and Proposition <ref>.
We have now all the requisites to prove the Willmore inequality.
We first prove the Willmore inequality.
Our starting point is, thanks to (<ref>), we may use the area formula to estimate the volume as follows: for any R>0,
Vol({ y∈K:d_F_∗( y,Ω)
≤R})
≤|Ω|+∫_Σ∫_0^min( R,τ( x))J^Zζ_F(x,t)tA.
By a simple computation, we see, the tangential Jacobian of ζ_F along Z=×_+ at (x,t) is just
J^Zζ_F(x,t)
= F(ν(x))∏_i=1^n(1+tκ_i^F(x)).
Recall the definition of τ (<ref>), and taking also Proposition <ref>(3) into account,
we may rewrite τ as
τ(x)=
+∞, if κ_i^F(x)≥0 for any i=1,⋯,n,
-1/min_i κ_i^F(x), otherwise.
It is clear that for any x∈, there hold 1+tκ_i^F(x)>0, for each i=1,⋯,n, and for any t∈(0,τ(x)).
By the
AM-GM inequality,
we have
J^Zζ_F(x,t)
≤F(ν)(1+H^F(x)t)^n.
To have a closer look at (<ref>), we divide into two parts _+={x∈Σ:H^F(x)>0} and ∖Σ_+.
On ∖_+, we have
0≤(1+H^Ft)^n≤1, ∀t∈[0,τ(x)],
which, in conjunction with (<ref>), gives
∫_∖_+∫_0^min(R,τ(x))J^Zζ_F(x,t)tA
≤O(R).
Thus (<ref>) can be further estimated as follows:
Vol({y∈K:d_F_∗( y,)≤R})
≤
+∫__+∫_0^min( R,τ(x))J^Zζ_FtA
+∫_∖_+∫_0^min(R,τ(x))J^Zζ_FtA
≤ ∫__+∫_0^RF(ν)(1+H^F(x)t)^ntA+O(R)
= R^n+1/n+1∫__+F(ν)(H^F(x))^nA
+O(R^n)
≤ R^n+1/n+1∫_F(ν)H^F^nA
+O(R^n),
where the third inequality holds due to (<ref>).
Dividing both sides of (<ref>) by R^n+1 and
letting R→ +∞, we deduce from (<ref>) that
AVR_F(K)^F
≤1/n+1∫__+F(ν)(H^F)^nA ≤1/n+1∫_F(ν)H^F^nA.
which finishes the proof of the inequality (<ref>).
Now we start to prove the rigidity. First we claim
Claim 1.
If the equality in (<ref>) holds,
then H^F≥0 on ,
τ(x)=+∞ on _+, and equality in (<ref>) holds on _+.
To prove the first assertion of the claim,
we deduce from (<ref>) and the equality case of (<ref>) that
1/n+1∫_F(ν)H^F^nA
= 1/n+1∫__+F(ν)(H^F)^nA,
which implies H^F≥ 0 on .
To prove τ(x)=+∞ on _+,
we argue by contradiction and
assume that there exists a point x_0∈_+ satisfying τ(x_0)<+∞. Since ∈ C^2, we can find a neighborhood of x_0 in _+, denoted by U(x_0), such that τ(x)≤ 2τ(x_0)<+∞ on U(x_0). For any R>2τ(x_0), we obtain from (<ref>) that
Vol({y∈K:d_F_∗( y,)≤R})
≤ ∫__+∫_0^min(R,τ(x))F(ν)(1+H^F(x)t)^ntA +O(R)
≤ ∫__+∖U(x_0)∫_0^RF(ν)(1+H^F(x)t)^ntA
+∫_ U(x_0)∫_0^τ(x)F(ν)(1+H^F(x)t)^ntA +O(R)
≤ ∫__+∖U(x_0)∫_0^RF(ν)(1+H^F(x)t)^ntA +O(R)
= R^n+1/n+1∫_Σ_+∖U(x_0) F(ν)(H^F)^nA +O(R^n).
Dividing both sides by R^n+1 and
letting R→ +∞, it follows that
AVR_F(K)^F
≤1/n+1∫__+∖U(x_0)F(ν)(H^F)^nA,
which is a contradiction to the assumption that equality in (<ref>) holds.
By a similar contradiction argument, we also get that
equality in (<ref>) holds on _+, and we omit the proof here for brevity.
In particular, this proves Claim 1.
In virtue of this claim, we see that =_+ must be an anisotropic umbilical hypersurface, and hence a part of a Wulff shape.
Write =_R_0^F(x_0)∩ K for some R_0>0 and x_0∈^n+1.
Clearly, determines an solid cone with vertex at x_0, given by
_x_0
={x_0+tx-x_0/R_0:x∈int(), t∈(0,+∞)}.
Consider now the modified convex set
K=(K∖)∪(_x_0∩_R_0^F(x_0)).
Due to convexity of K and the boundary condition (<ref>), for any x∈∂Σ, there exists at least one supporting hyperplane passing through x, hence K is also convex.
Claim 2. ^F_R(x_0)∩K/R^n+1 is non-increasing on R∈ [0,+∞).
This can be proved similarly as Proposition <ref> below.
Indeed,
by using the co-area formula, we have
/R|^F_R(x_0)∩K|/R^n+1
=∫_^F_R(x_0)∩K1/|DF^o(x-x_0)| A- (n+1)|^F_R(x_0)∩K|/R/R^n+1.
On the other hand, by the divergence theorem, we have for R≥ R_0,
(n+1)|^F_R(x_0)∩K|= ∫_^F_R(x_0)∩Kdiv(x-x_0)x
= ∫_^F_R(x_0)∩KR/DF^o(x-x_0)A
+∫_^F_R(x_0)∩K∖Ωx-x_0,N̅(x)A
+∫__x_0∩^F_R_0(x_0)x-x_0,N̅(x)A.
Note that x-x_0,N̅(x)=0 on _x_0∩^F_R_0(x_0) since it is part of the boundary of a cone, and x-x_0,N̅(x)≥0 on ^F_R(x_0)∩ K∖Ω since K is convex.
It follows that
(n+1)|^F_R(x_0)∩K|≥ R∫_^F_R(x_0)∩K1/DF^o(x-x_0)A,
and in turn
/ R|^F_R(x_0)∩K|/R^n+1≤ 0,
which proves the claim.
From the second claim, we know that
AVR_F(K)^F
= lim_R→+∞|^F_R(x_0)∩K|/R^n+1
≤ |^F_R_0(x_0)∩K|/R_0^n+1
=|^F_R_0(x_0)∩_x_0|/R_0^n+1.
On the other hand,
using equality in (<ref>) and the fact that is anisotropic umbilical, we get
AVR_F(K)^F= 1/n+11/R_0^n∫__R_0^F(x_0)∩KF(ν) A
= 1/n+11/R_0^n∫__R_0^F(x_0)∩_x_0F(ν) A
= |^F_R_0(x_0)∩_x_0|/R_0^n+1,
which
implies that |^F_R(x_0)∩K|/R^n+1 is a constant for R∈[R_0,+∞).
See Fig. <ref> for an illustration when F is the Euclidean norm, where the red part denotes the boundary of the modified set K and the blue part denotes the boundary of the cone _x_0.
Because of the boundary condition (<ref>), we point out that Fig. <ref> is what we could expect so far.
Next, we are going to show by virtue of the claim that K is in fact _x_0, namely, the blue portion and the red portion in Fig. <ref> actually coincide.
In fact,
from the proof of the claim, we see that
x-x_0,N̅(x)=0 on K∖Ω,
which implies that K∖Ω is a part of boundary of the cone centered at x_0. Finally from the information that
is a part of Wulff shape centered at x_0 and K∖Ω is part of boundary of the cone centered at x_0, we see that is indeed an anisotropic free boundary Wulff shape in K, i.e., ν_F,N̅=0 along .
This completes the proof.
§ WILLMORE INEQUALITIES IN A HALF-SPACE
§.§ An alternative approach in the half-space case
In this section, we use an alternative approach to prove Theorem <ref> in the half-space case. We restate it here.
Let ⊂^n+1_+ be a compact, embedded, C^2-hypersurface with boundary ⊂_+^n+1 such that
ν_F(x), -E_n+1≥0, for any x∈∂Σ.
Then we have
1/n+1∫_ F(ν)H^F^n A≥|^F∩_+^n+1|.
Equality in (<ref>) holds if and only if is an anisotropic free boundary Wulff shape.
The alternative approach is based on the following geometric observation for the Gauss image of .
Let ⊂^n+1_+ be a compact, embedded, C^2-hypersurface with boundary ⊂_+^n+1 such that
ν_F(x), -E_n+1≥0, for any x∈∂Σ.
Then the anisotropic Gauss map ν_F:^F satisfies
(^F∩_+^n+1)
⊂ν_F(_+).
where _+ is the subset of where the anisotropic Weingarten map ν_F is positive-semi definite.
For any y∈^F∩^n+1_+, denote by N(y) the outer unit normal to ^F at y. Then Φ(N(y))=y.
Let _y be a closed half-space whose inward unit normal is given by N(y), such that ∩_y=∅, and let Π_y be the boundary of _y, which is a hyperplane in ^n+1.
Parallel translating Π_y towards and denote by Π_y the hyperplane that touches for the first time at some p∈.
If p∈⊂^n+1_+, then by the first touching property, we have
ν(p), -E_n+1≤N(y), -E_n+1.
Thanks to the first touching property, ν(p),N(y), and -E_n+1 are on the same 2-plane and from the above relation we know, N(y) lies in the length-minimizing geodesic on ^n joining ν(p) and -E_n+1.
Therefore using the anisotropic angle comparison principle (Proposition <ref>), we obtain
<ν_F(p),-E_n+1>
≤<Φ(N(y)),-E_n+1>= <y,-E_n+1>.
On the other hand, by assumption,
ν_F(p), -E_n+1
≥0
>y, -E_n+1,
which gives a contradiction.
Hence the first touching point p∈ int() and is again tangent to Π_y at p. Thus
ν(p)=N(y) and ν_F(p)=Φ(ν(p))=y.
Note that the touching of with Π_y is from the interior, therefore at the touching point p, ν(p)≥ 0 and in turn, ν_F(p)≥ 0.
Thus we have p∈_+ and ν_F(p)=y, which completes the proof.
Our starting point is Proposition <ref>.
To proceed, note that the Jacobian of the Gauss map with respect to is just J^ν_F= H_n^F, where H^F_n is the anisotropic Gauss-Kronecker curvature. By using (<ref>), the area formula and AM-GM inequality, we have
(n+1)^F∩^n+1_+
=∫_^F∩^n+1_+F(ν(y))A(y)
≤∫__+F(ν(p))H_n^F(p) A(p)
≤∫__+F(ν)(H^F)^n A
≤∫_F(ν)H^F^n A,
which is (<ref>).
If equality in (<ref>) holds, then all the inequalities in the above argument are actually equalities, so we readily infer that agrees with _+
and is in fact an anisotropic umbilical hypersurface, thanks to the AM-GM inequality. It is not hard to deduce that <ν_F(x),-E_n+1>=0. Hence is an anisotropic free boundary Wulff shape in ^n+1_+.
§.§ Capillary hypersurfaces in a half-space
As corollaries of Theorem <ref>, we may deduce various Willmore-type inequalities for anisotropic capillary hypersurfaces in a half-space.
First, through the new Minkowski norm F̃ in (<ref>) (see Proposition <ref>), the anisotropic capillary problem can be rephrased as an anisotropic free boundary problem.
Second, a special Minkowski norm to our interest is defined as
F(ξ)=|ξ|-cosθ_0⟨ξ,E_n+1⟩,
for some angle constant θ_0∈(0,π), by virtue of which the capillary problem in a half-space could be interpreted in terms of anisotropic terminology (see e.g., DeMasi22,LXX23).
Plugging these Minkowski norms into Theorem <ref>, we thereby obtain the following Willmore inequalities as claimed in the introduction, which
we state for readers' convenience.
Given ω_0∈(-F(E_n+1),F(-E_n+1)).
Let Σ⊂ℝ^n+1_+ be a compact, embedded, C^2-hypersurface with boundary ∂Σ⊂∂ℝ_+^n+1 intersecting ∂ℝ^n+1_+ transversally such that
⟨ν_F(x), -E_n+1⟩=ω(x)≥ω_0, for any x∈∂Σ.
Then there holds
1/n+1∫_Σ(F(ν)+ω_0⟨ν,E^F_n+1⟩)|H^F|^n dA≥|𝒲^F_1,ω_0|,
where 𝒲^F_1,ω_0=𝒲^F_1(ω_0 E^F_n+1)∩ℝ_+^n+1.
Moreover, equality in (<ref>) holds if and only if Σ is an anisotropic ω_0-capillary Wulff shape in ℝ_+^n+1.
Given θ_0∈(0,π).
Let Σ⊂ℝ_+^n+1 be a compact, embedded C^2-hypersurface with boundary ∂Σ⊂∂ℝ_+^n+1 intersecting ∂ℝ^n+1_+ transversally, such that
⟨ν(x), -E_n+1⟩≥-cosθ_0, for any x∈∂Σ.
Then there holds
1/n+1∫_Σ(1-cosθ_0⟨ν, E_n+1⟩)|H|^n dA≥|B_1,θ_0|,
where
B_1,θ_0=B_1(-cosθ_0 E_n+1)∩ℝ_+^n+1.
Moreover, equality in (<ref>) holds if and only if Σ is a θ_0-capillary spherical cap in ℝ_+^n+1.
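For the reader's convenience, here is how the last corollary follows from Theorem <ref> (a routine verification, recorded only as an illustration). With F(ξ)=|ξ|-cosθ_0⟨ξ,E_n+1⟩ one has Φ(ν)=DF(ν)=ν-cosθ_0E_n+1, hence F(ν)=1-cosθ_0⟨ν,E_n+1⟩ on Σ and ⟨ν_F,-E_n+1⟩=-⟨ν,E_n+1⟩+cosθ_0, so the boundary condition ⟨ν,-E_n+1⟩≥-cosθ_0 on ∂Σ is precisely ⟨ν_F,-E_n+1⟩≥0. Moreover dν_F=dν, so H^F=H, and the unit Wulff ball is B_1(-cosθ_0E_n+1), so that the right-hand side becomes |B_1(-cosθ_0E_n+1)∩ℝ^n+1_+|=|B_1,θ_0|. Substituting these identities into (<ref>) yields (<ref>).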
On the other hand, by revisiting the geometric property concerning Gauss map in the (anisotropic) capillary settings, we could prove a variant of Theorem <ref> as follows.
Given _0∈(-F(E_n+1),F(-E_n+1)).
Let ⊂^n+1_+ be a compact, embedded, C^2-hypersurface with boundary ⊂_+^n+1 intersecting ^n+1_+ transversally such that
ν_F(x), -E_n+1=(x)≥_0, for any x∈∂Σ.
Then there holds
∫_F(ν)H^F^n A≥∫_^F_1,_0∩^n+1_+F(ν)A.
Moreover, equality in (<ref>) holds if and only if is an anisotropic _0-capillary Wulff shape in ^n+1_+.
The revisited variant of Proposition <ref> reads as follows.
Given _0∈(-F(E_n+1),F(-E_n+1)).
Let ⊂^n+1_+ be a compact, embedded, C^2-hypersurface with boundary ⊂_+^n+1 such that
ν_F(x), -E_n+1=(x)≥_0, for any x∈∂Σ.
Then the anisotropic Gauss map ν_F:^F satisfies
{y∈^F:y, -E_n+1≤_0}
⊂ν_F(_+).
Proposition <ref> follows from Proposition <ref> by introducing F̃ in (<ref>) and using Proposition <ref>.
We note that
{y∈^F:<y,-E_n+1>≤_0}+_0E_n+1^F
=^F_1(_0 E^F_n+1)∩_+^n+1,
and hence
∫_{y∈^F:<y,-E_n+1>≤_0}F(ν(y)) A(y)=∫__1,_0^F∩^n+1_+F(ν(y)) A(y).
By virtue of Proposition <ref>,
we use the area formula in the following way,
∫__1,_0^F∩^n+1_+F(ν(y))A(y)
≤∫__+F(ν(p))H_n^F(p) A(p)
≤∫__+F(ν)(H^F)^n A
≤∫_F(ν)H^F^n A,
which is (<ref>).
The proof of rigidity follows similarly as that in the proof of Theorems <ref>.
In view of (<ref>),
it is clear that the two Willmore-type inequalities (<ref>) and (<ref>) are different unless _0=0.
§ PROPERTIES OF UNBOUNDED CONVEX SETS
The following two Propositions may be familiar to experts, especially in the case F(ξ)=ξ.
Let K be an unbounded, closed convex set with boundary K
which contains the origin.
Fix 0≤ r<+∞,
then
|^F_r+R∩ K|/R^n+1 is non-increasing. In particular, |^F_R∩ K|/R^n+1 is a constant if and only if K is a cone with vertex at the origin.
By using the co-area formula, for any 0≤ r, we have
/R|^F_r+R∩K|/R^n+1
=R^-(n+1)(∫_^F_r+R∩K1/|DF^o|A- (n+1)|^F_r+R∩K|/R).
On the other hand, by the divergence theorem, we have
(n+1)|^F_r+R∩K|= ∫_^F_r+R∩Kdiv(x)x
= ∫_^F_r+R∩Kx,DF^o(x)/DF^o(x)A
+∫_^F_r+R∩Kx,N̅(x)A
≥ ∫_^F_r+R∩KR/DF^oA.
In the last inequality, we have used x, N̅(x)≥ 0 thanks to the convexity of K.
It is then direct to see that
/ R|^F_r+R∩ K|/R^n+1≤ 0.
If
/ R|^F_R∩ K|/R^n+1= 0, then
x,N̅(x)=0 for ^n-a.e. x∈ K, it follows that K is a cone with vertex at the origin, see e.g., <cit.>*Proposition 28.8.
A direct consequence is that one can define the asymptotic volume ratio with respect to F for K as
AVR_F(K)
=lim_R→∞ |^F_R∩K|/^FR^n+1.
Similarly, we show the following asymptotic volume ratio for any F.
Let K be an unbounded, closed convex set with boundary K
which contains the origin.
There is a unique tangent cone at infinity of K, say K_∞.
Moreover, one has
AVR_F(K)
=|^F ∩K_∞|/^F.
Denote by K_R=1/RK. From the compactness result, see e.g., <cit.>*Corollary 12.27, we know that, up to a subsequence, say {R_h}_h∈,
K_R_hK_∞
for some sets of locally finite perimeter , with convergence in the sense that, for every compact set E⊂^n+1,
lim_h∞E∩(K_∞K_R_h)=0.
By the monotonicity, one can prove that K_∞ must be a cone. In fact,
for each s>0, we have
lim_h→∞|^F_s∩K_R_h|=lim_h→∞|^F_sR_h∩K|/R_h^n+1
(<ref>)=
s^n+1AVR_F(K)^F.
On the other hand, from (<ref>) we deduce
lim_h→∞|^F_s∩K_R_h|=|^F_s∩K_∞|.
It follows that, regardless of the choice of convergence subsequence, there always holds
|^F_s∩K_∞|/s^n+1
=AVR_F(K)^F.
As the LHS is a constant about s, it follows from Proposition <ref> that
is a cone,
revealing the fact that
AVR_F(K)
=|^F∩K_∞|/^F.
In other words, we have proved that for any such non-compact convex set K, there exists a unique tangent cone at infinity, denoted by K_∞.
This completes the proof.
Penalized Subgrouping of Heterogeneous Time Series
Christopher M. Crawford, Jonathan J. Park, Sy-Miin Chow, Anja F. Ernst, Vladas Pipiras, Zachary F. Fisher
Recent technological advances have decreased the burden associated with collecting intensive longitudinal data (ILD) in the social, behavioral, and health sciences. The availability of such data has catalyzed interest in the study of constructs defined by the complex interplay between dynamic biopsychosocial processes. Despite these increases in both access to ILD and interest in the study of dynamic processes Hamaker2017, how best to model multivariate time series data arising from multiple individuals is still an open question. Central to this question is how researchers should best accommodate the persistent heterogeneity observed in many aspects of human behavior.
Current methods for analyzing dynamic processes vary in the degree to which this heterogeneity is addressed [e.g.,][]Liu2023. Multilevel modeling approaches, for example, allow individuals to differ quantitatively on a limited set of dynamic features through the inclusion of random effects [e.g.,][]Bringmann2013. However, because standard multilevel models assume no qualitative differences in the pattern of relations among the dynamic processes, these approaches may be overly restrictive. For example, data generating processes characterized by individual differences in the patterning of zero and nonzero dynamics are poorly represented by such approaches. Moreover, violation of assumptions regarding the structure and distribution of random effects can adversely impact estimation and inference McNeish2017. Idiographic methods, conversely, allow for a great deal of flexibility through the specification of person-specific models (<cit.>; <cit.>). Indeed, recent work has shown that the dynamics of commonly studied constructs—such as depression and anxiety Fisher2017, externalizing behavior Wright2015, and personality <cit.>—are characterized by marked heterogeneity in both the magnitude of the features and the patterning of dynamics, suggesting that person-specific methods may be needed to fully capture the complexity of the data generating processes. The increased flexibility afforded by idiographic approaches, however, may limit generalizability and inhibit the identification of shared dynamics critical for areas such as intervention development.
Recently, a number of approaches have emerged that offer alternatives for characterizing heterogeneity in dynamic processes by seeking to bridge the divide between nomothetic and idiographic approaches to modeling multiple-subject, multivariate time series. One of these approaches, the multi-VAR framework (<cit.>; <cit.>), is built upon the vector autoregressive (VAR) model and simultaneously estimates group- and individual-level models. Importantly, the multi-VAR approach accommodates both quantitative and qualitative heterogeneity in dynamics across individuals—that is, heterogeneity in both the magnitude and pattern of zero and nonzero dynamics—and is compatible with a number of penalization methods for structuring how information is shared across individuals. Currently, multi-VAR is only capable of estimating a single group-level model, presumably by all individuals. It may be the case, however, that for many processes shared patterns of dynamics also exist among subgroups, or clusters, of individuals.
To address this limitation, the current project extends the multi-VAR framework to allow for data-driven identification of subgroups and penalized estimation of subgroup-level dynamics. The approach detailed herein is characterized by a number of advantageous features. First, both quantitative and qualitative heterogeneity are accommodated in the estimation of group-, subgroup-, and individual-level models. In contrast to most existing subgrouping frameworks, this allows for individual differences in the magnitude of dynamics both within and between subgroups. Moreover, these effects are not assumed to follow any specific distributional form, thereby capitalizing on the flexibility inherent in fully idiographic approaches while also prioritizing generalizability through the group- and subgroup-level models. Second, sparsity in the group-, subgroup-, and individual-level dynamics is induced through a penalized estimation procedure, addressing challenges associated with overparameterization in both single-subject [e.g.,][]Sims1980 and multilevel approaches. Indeed, recent work on the development of a subgrouping approach incorporating finite mixture modeling into the multilevel VAR framework notes that estimation can be burdensome when specifying a large number of random effects Ernst2024. Finally, the simultaneous estimation procedure for group- and individual-level dynamics Fisher2022 extends to subgroup-level effects. Conversely, iterative approaches to the identification and estimation of subgroup dynamics may encounter issues associated with similarly iterative variable selection methods, including overfitting and suboptimal solutions McNeish2015. These features are described in detail in the following sections.
The remainder of the article is organized as follows. First, we review existing methods for subgrouping multiple-subject, multivariate time series. We then introduce the multi-VAR framework and the penalized subgrouping extension. We conclude with a presentation of results from both a simulation study and empirical example conducted to examine the efficacy and utility of the multi-VAR subgrouping extension.
§ SUBGROUPING METHODS FOR TIME SERIES
The identification of subgroups of individuals for whom shared patterns of dynamics exist, and the estimation of parameters that provide meaningful information about said subgroups, are of substantive interest in a variety of domains. Recent work, for example, has noted that current taxometric procedures for the diagnosis of psychopathological syndromes fail to account for the observed heterogeneity within putatively homogeneous diagnostic categories Kotov2017. Appropriately parsing such heterogeneity can accelerate the development of more effective treatment paradigms [e.g.,][]Fisher2019, echoing calls for more personalized approaches to diagnosis and treatment in both clinical science Wright2020 and medicine Hamburg2010. Importantly, however, there is not a clear consensus regarding best practices for the identification of subgroups in multiple-subject, multivariate time series. Liao (), for example, notes that time series clustering methods can be organized according to which aspects of the data are used in the subgrouping procedure. For the purposes of the current work, we focus our attention on methods that derive clusters based on parameter estimates in the VAR framework.
One popular VAR-based clustering method is the alternating least squares (ALS) approach Bulteel2016, an iterative algorithm consisting of three steps. First, individuals are assigned to initial clusters, which can be done randomly or through the use of alternative clustering approaches, such as hierarchical clustering using Ward's criterion Ward1963. Next, a VAR model is fit to each cluster to obtain subgroup-specific parameter estimates. Finally, subgroup membership is updated by reassigning individuals to the cluster for which the sum of their squared prediction errors are minimized. These steps are repeated until there are no changes in subgroup membership. A notable limitation of this framework is that only between-cluster quantitative heterogeneity is modeled—that is, the ALS approach assumes a priori that individuals within a cluster are governed by the same data generating process, and that there are no qualitative differences between clusters. Thus, different clusters are assumed to share the same pattern of zero and nonzero dynamics.
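To make the ALS procedure concrete, a minimal sketch is given below. This is our own illustration: the use of Python and NumPy, ordinary least squares as the cluster-level estimator, and all function and variable names are assumptions rather than details taken from the original proposal.

import numpy as np

def fit_var1(series_list):
    # Pool lagged (Z) and leading (Y) observations across a cluster and return the OLS VAR(1) transition matrix.
    Z = np.hstack([X[:, :-1] for X in series_list])
    Y = np.hstack([X[:, 1:] for X in series_list])
    return Y @ Z.T @ np.linalg.pinv(Z @ Z.T)

def sse(X, Phi):
    # Sum of squared one-step-ahead prediction errors for a single individual.
    return np.sum((X[:, 1:] - Phi @ X[:, :-1]) ** 2)

def als_cluster(series_list, n_clusters, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(series_list))           # step 1: initial assignment
    for _ in range(max_iter):
        Phis = []
        for c in range(n_clusters):                                     # step 2: cluster-wise VAR fits
            members = [X for X, g in zip(series_list, labels) if g == c]
            Phis.append(fit_var1(members if members else series_list))  # fall back to pooled fit if a cluster empties
        new_labels = np.array([np.argmin([sse(X, P) for P in Phis]) for X in series_list])  # step 3: reassign
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, Phis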
Work by Ernst and colleagues () relaxes the within-group homogeneity assumption with a VAR-based clustering approach that incorporates the Gaussian finite mixture model (GMM; <cit.>; <cit.>). Proceeding in a two-step fashion, person-specific VAR models are first fit to data for each individual, with the restriction that there are no between-person differences in lag order. A GMM is then applied to the person-specific parameter estimates, thereby allowing for quantitative differences in within-cluster estimates. With this added flexibility, however, comes increased complexity in both the estimation of the parameters that define the multivariate Gaussian distribution and selection of the model that best represents the structure of the data. Indeed, the use of relative fit statistics to compare models in the GMM framework often requires bootstrap-based methods Mclachlan2000. Simulation studies comparing these two VAR-based clustering methods have found that the GMM approach outperforms the ALS framework when within-group quantitative differences are present Ernst2021, whereas the opposite is true when the homogeneity assumption is met Takano2021.
Another approach is the subgrouping extension within the group iterative multiple model estimation (S-GIMME) algorithm Gates2017. The S-GIMME algorithm can be described in three sequential stages. In the group stage, the S-GIMME framework estimates a structural VAR model Lutkepohl2005 for each individual. Modification indices Sorbom1989 are used to identify paths to be added to the model if they significantly improve model fit for a majority of individuals, thereby returning estimates that are shared across individuals. Next, the subgroup stage uses these group-level results to construct a similarity matrix such that each element of said matrix represents the number of parameter estimates shared between two individuals in terms of sign and significance. The community detection algorithm Walktrap Pons2006 is then applied to the similarity matrix to identify clusters of individuals with shared dynamics. Once subgroups have been identified, estimates for individuals within each subgroup are obtained in the same manner as in the group stage. Finally, the third stage estimates additional dynamics for each individual by iteratively adding paths until the model is deemed acceptable via commonly used fit indices and associated cutoff values Lane2019. The S-GIMME approach therefore returns estimates for each individual and whether each estimate is unique, subgroup-, or group-specific, thereby accommodating both quantitative and qualitative heterogeneity. Moreover, challenges associated with the estimation of VAR processes comprised of many variables are addressed through the use of stopping criteria at each stage, which promotes sparsity and parsimony. However, forward-selection procedures, such as S-GIMME, can be limited by their sequential nature, and thresholds for the fit indices used in the individual-stage search procedure have been shown to be inadequate in many settings Mcneish2023.
Finally, a recently developed subgrouping approach, the subgrouped chain graphical VAR (scGVAR), extends the graphical VAR framework epskamp2018 by estimating group-, subgroup-, and individual-level dynamics in a three-stage process Park2024. First, individual-level results are obtained by estimating a graphical VAR model for each individual. Next, in a procedure similar to that employed by the S-GIMME framework, results from the first stage are used to create an adjacency matrix quantifying structural similarities in estimated dynamics across individuals. However, whereas S-GIMME defines each element of said matrix as the number of shared nonzero paths between each pair of individuals, scGVAR includes shared zero paths in this definition. Following the construction of this similarity matrix, the Walktrap Pons2006 algorithm is used to identify subgroups of individuals. These subgroup assignments are then used in the estimation of subgroup-level dynamics, wherein a graphical VAR model is fit to the aggregated time series of individuals in each subgroup. Group-level dynamics are then obtained by fitting a single graphical VAR model to the aggregated time series of all individuals. Similar to S-GIMME, scGVAR is therefore able to accommodate both quantitative and qualitative heterogeneity through the estimation of group-, subgroup-, and individual-level dynamics. However, whereas the S-GIMME procedure allows group- and subgroup-level dynamics to vary in magnitude between individuals, these effects are fixed in the scGVAR approach—that is, all individuals in the same subgroup are characterized by identical parameter estimates. Moreover, the two frameworks differ with respect to the challenges associated with estimation in high-dimensional settings. Indeed, in contrast to the use of fit indices in S-GIMME, scGVAR relies on a penalized estimation procedure to promote sparsity and parsimony.
§ MULTI-VAR
The motivation for the development of the multi-VAR framework is twofold. First, ILD collected in the social and behavioral sciences are generally composed of data from multiple individuals. From the single-subject VAR perspective, this now involves the estimation of transition matrices for each individual. It is unlikely, however, that the dynamic processes of interest are entirely heterogeneous across all individuals in a given sample. Indeed, a key feature of common ILD modeling approaches, such as multilevel modeling, is that information is shared across individuals in the parameter estimation procedure. A purely idiographic approach, wherein each of the individual transition matrices are estimated independently, disregards this shared information. Thus, the single-subject VAR, though capable of modeling both qualitative and quantitative heterogeneity, is unable to account for shared dynamics and may be suboptimal in many commonly encountered data settings where sharing information across individuals is beneficial (e.g., when time series length is small).
Second, as noted previously, the single-subject VAR is plagued by profligate parameterization Sims1980, such that the number of parameters grows quadratically with each additional component series. Given typical sample sizes (i.e., number of time points) in the social, behavioral, and health sciences [e.g.,][]Rot2012, this could result in the estimation of a large number of parameters given the available data, with such estimates lacking precision Bruggemann2012. Solutions to this dimensionality issue in the single-subject VAR have focused on reducing the parameter space using methods ranging from sequential search procedures to regularization via the least absolute shrinkage and selection operator (Lasso; <cit.>). Central to these remedies is the assumption that the true model is sparse in nature. The feasibility of this assumption is open to interpretation; however, the bet on sparsity principle Hastie2001 argues that methods assuming sparsity are preferable because if this assumption is false—that is, the true model is dense—then existing approaches are unable to recover the true model without a large amount of data. The multi-VAR framework aims to address the challenges associated with common modeling approaches, such as the single-subject VAR and multilevel modeling, while retaining their desirable features.
To motivate the construction of the multi-VAR framework, we first consider a multivariate (d-variate) time series for a single individual, {𝐗_t}_t ∈ℤ = {(X_j,t)_j=1,…,d}_t ∈ℤ, where 𝐗_t follows the canonical VAR model of order p, VAR(p), if
𝐗_t =
Φ_1𝐗_t-1 +
… +
Φ_p𝐗_t-p +
𝐄_t, t ∈ℤ,
for d × d transition matrices Φ_1, …,Φ_p containing autoregressive and cross-regressive parameters and a white noise series {𝐄_t}_t ∈ℤ∼WN(0, Σ_𝐄) characterized by 𝔼(𝐄_t)=0 and 𝔼(𝐄_t𝐄^'_s)=0 for s ≠ t. We assume here that 𝐗_t is of zero mean for simplicity, though developments that follow can easily accommodate time series with nonzero means. Generally, a unique causal stationary solution to (<ref>) can be ensured when the roots of det( Φ(z)), where Φ(z)=𝐈_d -Φ_1z - … - Φ_p z^p, all have moduli greater than unity. With observations 𝐗_1, …, 𝐗_T, we can more concisely express (<ref>) in the familiar regression format:
𝐘=Φ𝐙 + 𝐔,
where 𝐘 = (𝐗_p+1, …, 𝐗_T) is a d × (T-p) outcome matrix, Φ = (Φ_1, …, Φ_p) is a d × (dp) transition matrix, 𝐙 is a (dp) × (T-p) design matrix, and 𝐔 is a d × (T-p) matrix of process noise. In the remainder of this work we only consider first-order VAR models—that is, VAR(1). However, all arguments can be extended to accommodate models with arbitrary lag orders without any loss of generality.
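As a minimal illustration of this regression format (a sketch under our own choices of dimension, sample size, and noise scale, not code taken from the article), the following simulates a stationary VAR(1) and recovers the transition matrix by least squares.

import numpy as np

rng = np.random.default_rng(1)
d, T = 4, 200

# Draw a sparse transition matrix and rescale it until the VAR(1) is stable (spectral radius below one).
Phi = np.where(rng.random((d, d)) < 0.2, rng.uniform(0.1, 0.5, size=(d, d)), 0.0)
np.fill_diagonal(Phi, 0.4)
Phi *= 0.9 / max(np.abs(np.linalg.eigvals(Phi)).max(), 0.9)

X = np.zeros((d, T))
for t in range(1, T):                          # X_t = Phi X_{t-1} + E_t
    X[:, t] = Phi @ X[:, t - 1] + rng.normal(scale=0.5, size=d)

Y, Z = X[:, 1:], X[:, :-1]                     # outcome and design matrices for p = 1
Phi_ols = Y @ Z.T @ np.linalg.inv(Z @ Z.T)     # least-squares estimate of the transition matrix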
The multi-VAR framework extends the above single-subject VAR representation to readily accommodate multiple-subject, multivariate time series through the estimation of Φ^1, …,Φ^K sparse transition matrices for K individuals, each composed of common and unique effects. To do so, the multi-VAR approach relies on the following decomposition of Φ^k,
Φ^k = Γ + Υ^k,
k = 1, … , K,
where Γ∈ℝ^d × d corresponds to the common effects across K individuals and Υ^k∈ℝ^d × d represents the unique effects for individual k. This decomposition allows for heterogeneity in the structure of the dynamics through the inclusion of unique, person-specific effects while allowing for some degree of homogeneity in the common effects. Further, as no distributional assumptions are imposed on (<ref>), individual transition matrices are free to vary both quantitatively and qualitatively across individuals. Notably, shared paths in the common effects matrix are allowed to vary in magnitude—that is, the effects are freely estimated while preserving the structure of Γ for all K individuals.
One approach for sparse estimation of (<ref>) was proposed by Fisher and colleagues () using the Lasso penalization paradigm,
argmin_Γ, Υ^1,…,Υ^K 1/N∑_k=1^K ‖𝐘^k - (Γ + Υ^k) 𝐙^k‖_2^2 + P_standard,
P_standard = λ_1‖Γ‖_1 + ∑_k = 1^K λ_2,k‖Υ^k‖_1,
where P_standard is the standard Lasso penalty, ‖𝐀‖_1 denotes the ℓ_1 norm of vec(𝐀), and N = (T - p). Sparsity and heterogeneity in the multi-VAR solution are therefore determined and governed by the two penalty parameters, λ_1 and λ_2,k, which are chosen using cross-validation. The inclusion of penalty parameters on both the common and unique effects allows for flexibility in the approximation of potentially heterogeneous data generating processes. If, for example, individuals share little in common, it would be expected that the solution would be equivalent to estimating independent, single-subject VAR models. This would correspond to large values of λ_1, such that Γ̂ = 0 and the solution returns Φ̂^k = Υ̂^k. Conversely, if individuals are highly homogeneous, large values of λ_2,k would result in Υ̂^k = 0 and Φ̂^k = Γ̂, paralleling approaches that pool the time series and estimate a single transition matrix. It is likely that the reality falls between these two extremes, thereby resulting in a solution in which the common and unique effects reflect the degree of heterogeneity observed.
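The multi-VAR software relies on specialized optimization routines to minimize this criterion; purely for illustration, the function below (our own sketch, with a single λ_2 shared across individuals) evaluates the penalized objective for candidate common and unique matrices.

import numpy as np

def multivar_objective(Gamma, Upsilons, Ys, Zs, lam1, lam2):
    # Penalized least-squares criterion for the decomposition Phi^k = Gamma + Upsilon^k.
    N = Ys[0].shape[1]
    fit = sum(np.sum((Y - (Gamma + U) @ Z) ** 2) for Y, U, Z in zip(Ys, Upsilons, Zs)) / N
    penalty = lam1 * np.abs(Gamma).sum() + lam2 * sum(np.abs(U).sum() for U in Upsilons)
    return fit + penalty

Minimizing this quantity over Γ and the Υ^k for a grid of (λ_1, λ_2) values, and retaining the pair with the best cross-validated prediction error, mirrors the logic described above.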
Despite widespread use of the standard Lasso penalization framework, it is characterized by several important limitations [for a review, see][]Freijeiro‐González2022. Indeed, it is well known that the Lasso exhibits drawbacks with respect to consistent path selection, excessive false positives, and bias in many situations, such as when the variables are strongly dependent. The adaptive Lasso—developed, in part, to address these limitations—replaces the standard ℓ_1 penalty term with a re-weighted version, where the weights are determined by consistent initial estimates of the model parameters Zou2006. This weighting enables differential penalization across parameters of interest, such that larger initial estimates correspond to smaller weights (and vice versa for small initial estimates). Using this idea, the adaptive multi-VAR was also proposed by Fisher and colleagues (), wherein the weights are constructed using initial estimates Φ^k as in
P_adaptive = λ_1 ∑_i,j=1^d 1/|ϕ_i,j,median|^α
|Γ_i,j| + ∑_k=1^K λ_2,k∑_i,j=1^d
1/|ϕ_i,j^k - ϕ_i,j,median|^α|Υ_i,j^k|,
where Γ_i,j corresponds to the {i,j}^th element of Γ, Υ_i,j^k corresponds to the {i,j}^th element of Υ^k, and similarly with ϕ_i,j^k for Φ^k and ϕ_i,j,median for Φ_median, with Φ_median representing the matrix of median coefficient estimates for all K individuals, and α≥ 1. Substituting the standard Lasso penalty, P_standard, in (<ref>) for the adaptive Lasso, P_adaptive, in (<ref>) results in the adaptive multi-VAR objective function. Prior work has shown that multi-VAR with adaptive Lasso performs well across a range of factors Fisher2024.
§ SUBGROUPING MULTI-VAR
The subgrouping multi-VAR procedure consists of two steps. First, subgroup enumeration and classification occur. That is, prior to estimation of subgroup-specific effects, the number of relevant subgroups is determined, and each individual is assigned to a subgroup. Next, this information is used in both the construction of the design matrix, Z, and the decomposition and penalized estimation of the individual transition matrices, Φ^k. These two steps are described in detail in the following subsections.
§.§ Identifying Subgroups
To identify the number and membership of clusters, subgrouping multi-VAR first estimates Φ^k for K individuals using the standard multi-VAR framework. The individual-level effects from these matrices, Υ^k, are then used to construct a K × K similarity matrix, where the off-diagonal elements represent the number of shared dynamics between each pair of individuals in terms of presence and sign. If, for example, a specific path is both nonzero and of the same sign (i.e., positive or negative) for individuals i and j, then the {i,j}^th element of the similarity matrix increments by one. The use of individual-level effects in the construction of the similarity matrix ensures that deviations from common effects contribute to subgroup identification. The construction of the similarity matrix in subgrouping multi-VAR is similar to the procedures implemented in S-GIMME and scGVAR (<cit.>; <cit.>).
The community detection algorithm Walktrap Pons2006 is then applied to this similarity matrix. The Walktrap algorithm uses the information in the similarity matrix to compute a transition matrix, such that each element corresponds to the probability of transitioning from one individual to another for a random walk of a given length. Intuitively, this allows the Walktrap algorithm to identify densely connected areas (i.e., communities). Ward's () hierarchical clustering procedure is then used to determine the optimal number of clusters by iteratively merging communities until all individuals are in a single cluster. Subgroup enumeration then proceeds by identifying the configuration with the maximum modularity Newman2004, which indicates the degree to which individuals within a cluster are similar relative to those from different clusters. The Walktrap algorithm has been found to perform well when applied to count matrices Gates2016, and has been successfully implemented in similar VAR-based methodological frameworks (e.g., <cit.>; <cit.>).
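The two steps just described can be sketched as follows. This is an illustration only; the choice of python-igraph for the Walktrap step, and all function and variable names, are our assumptions rather than the implementation used by the authors.

import numpy as np
import igraph as ig

def similarity_matrix(Upsilons):
    # Count, for each pair of individuals, the paths that are nonzero with the same sign in both Upsilon matrices.
    signs = [np.sign(U) for U in Upsilons]
    K = len(Upsilons)
    S = np.zeros((K, K))
    for i in range(K):
        for j in range(i + 1, K):
            S[i, j] = S[j, i] = np.sum((signs[i] == signs[j]) & (signs[i] != 0))
    return S

def walktrap_subgroups(S):
    # Build a weighted undirected graph from the similarity matrix and cut the Walktrap dendrogram at maximum modularity.
    K = S.shape[0]
    edges, weights = [], []
    for i in range(K):
        for j in range(i + 1, K):
            if S[i, j] > 0:
                edges.append((i, j))
                weights.append(float(S[i, j]))
    g = ig.Graph(n=K, edges=edges)
    clustering = g.community_walktrap(weights=weights).as_clustering()
    return np.array(clustering.membership)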
§.§ Estimating Subgrouping Effects
To extend the standard multi-VAR framework to allow for estimation of subgroup-specific effects, we consider the further decomposition of Φ^k,
Φ^k = Γ + Π^s +
Υ^k,
s = 1, … , S,
k = 1, … , K,
where Γ and Υ^k continue to correspond to the common and unique effects, respectively, and Π^s∈ℝ^d × d represents the subgroup effects for subgroup s. Paralleling the initial decomposition in (<ref>), individual transition matrices in (<ref>) are free to vary both in structure and magnitude across individuals. This decomposition can be further incorporated into the objective function detailed in (<ref>) within the standard Lasso penalization framework
argmin_Γ, Π^1, …,Π^S, Υ^1, …, Υ^K 1/N∑_k=1^K ‖𝐘^k - (Γ + Π^s + Υ^k)𝐙^k‖_2^2 + P_standard,
P_standard = λ_1‖Γ‖_1 + ∑_s = 1^S α_s‖Π^s‖_1 + ∑_k = 1^K λ_2,k‖Υ^k‖_1,
where the addition of α_s in P_standard indicates that the sparsity and heterogeneity of the solution is now determined by three penalty parameters, λ_1, α_s, and λ_2,k. Importantly, this suggests that the competition of the three penalty terms affords even greater flexibility in approximating the underlying data generating processes. In addition to the scenarios noted previously, for example, the subgrouping multi-VAR can accommodate data characterized by within-subgroup homogeneity and a high degree of between-subgroup heterogeneity, resulting in large values of λ_1 and λ_2,k, such that Φ̂^k = Π̂^s.
This decomposition can also be incorporated into the penalty function in (<ref>) for the adaptive multi-VAR
P_adaptive = λ_1 ∑_i,j=1^d 1/|ϕ_i,j,median|^α
|Γ_i,j| +
∑_s=1^S α_s∑_i,j=1^d
1/|ϕ_i,j,median^s|^α|Π_i,j^s| +
∑_k=1^K λ_2,k∑_i,j=1^d
1/|ϕ_i,j^k - ϕ_i,j,median|^α|Υ_i,j^k|,
where Π_i,j^s corresponds to the {i,j}^th element of Π^s, and similarly for ϕ^s_i,j,median and Φ^s_median. In the current project—as with the standard multi-VAR—the weights in the adaptive Lasso are constructed using initial estimates Φ^k, Φ_median, and Φ^s_median, with Φ^s_median representing the matrix of median coefficient estimates across all individuals in a given subgroup. Note that for the oracle properties of the adaptive Lasso to hold—that is, identification of the correct subset model and optimal estimation rate—consistent estimates for the weights must be chosen Zou2006. In practice, however, selecting an appropriate estimator can be nontrivial in many situations, such as high dimensional settings or when covariates are strongly dependent. In such cases, estimation of adaptive weights via ℓ_1 or ℓ_2 penalization, respectively, may convey benefits over standard procedures, such as ordinary least squares or maximum likelihood. For more information regarding the performance and availability of various adaptive weights estimators in the multi-VAR framework, see Fisher et al. ().
Selection of the unknown penalty parameters—λ_1, α_s, and λ_2,k—in the subgrouping multi-VAR framework is done using a blocked cross-validation approach [BCV;][]Bulteel2018, which proceeds as follows. First, each of the K multivariate time series is divided into F equally sized folds. Next, one of the F folds is removed from each time series to serve as the testing block. Using the remaining folds comprising the training block, estimates of Γ, Π^s, and Υ^k are obtained, which are subsequently used to predict the testing block and obtain the mean squared error [MSE; though see][for an alternative to the MSE]Revol2024. This procedure continues such that each of the F folds serves as a testing block on which the MSE is computed. The performance is then aggregated across the K individuals and F folds for each combination of λ_1, α_s, and λ_2,k, wherein total error is calculated as
MSE_λ_1,α_s,λ_2,k = (1/K) ∑_k=1^K (1/F) ∑_f=1^F ‖Ŷ_f^k - Y_f^k‖_2^2,
and the values of the penalty parameters corresponding to the smallest total MSE are selected for the final model. Note that though the subscripts associated with penalty parameters α_s and λ_2,k suggest the possibility of subgroup- and person-specific penalization, the current project only examines a single parameter for each—that is, α_s = α and λ_2,k = λ_2. This more general notation is nonetheless included to emphasize the flexibility of the subgrouping multi-VAR framework.
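The search over penalty parameters can be summarized as in the following sketch. The helper functions fit_subgrouping_multivar and predict are placeholders standing in for the estimation and prediction routines described above, the MSE is computed up to normalization, and concatenating the remaining folds into a single training block is a simplification of how blocked folds would be handled in practice.

import itertools
import numpy as np

def blocked_cv(Y_list, n_folds, lam1_grid, alpha_grid, lam2_grid,
               fit_subgrouping_multivar, predict):
    # Grid search over (lambda_1, alpha_s, lambda_2k) using blocked folds.
    best, best_mse = None, np.inf
    for lam1, alpha, lam2 in itertools.product(lam1_grid, alpha_grid, lam2_grid):
        fold_mse = []
        for f in range(n_folds):
            train, test = [], []
            for Y in Y_list:  # each Y is a d x T array for one individual
                blocks = np.array_split(Y, n_folds, axis=1)
                test.append(blocks[f])
                train.append(np.concatenate(blocks[:f] + blocks[f + 1:], axis=1))
            model = fit_subgrouping_multivar(train, lam1, alpha, lam2)
            errs = [np.mean((predict(model, k, te) - te) ** 2)
                    for k, te in enumerate(test)]
            fold_mse.append(np.mean(errs))
        mse = np.mean(fold_mse)
        if mse < best_mse:
            best, best_mse = (lam1, alpha, lam2), mse
    return best, best_mse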
§ SIMULATION STUDY
To evaluate the performance of the subgrouping multi-VAR framework, we conducted a Monte Carlo simulation study, wherein factors of interest were selected to represent scenarios typically encountered in ILD applications. Factors examined in the current study included number of individuals, K = (50, 100); length of individual time series, T = (50, 100); number of subgroups, S = (2, 3); and composition of the subgroups, balanced or unbalanced. For each condition, 50 replications were generated, yielding a fully factorial design with 16 conditions (2 × 2 × 2 × 2) and 800 (16 × 50) unique data sets. In the K = 50 conditions, for example, Monte Carlo estimates were computed using 2500 (50 × 50) multivariate time series. Number of variables was held constant at d = 10 for each condition. The performance of the subgrouping extension was assessed in comparison to alternative methods for modeling multi-subject, multivariate time series, namely, S-GIMME and scGVAR (<cit.>; <cit.>). As a benchmark, performance was also compared to the standard multi-VAR framework and multi-VAR with confirmatory subgrouping—that is, subgrouping multi-VAR when subgroup membership is known a priori.
The motivation for the selection of the comparison methods, S-GIMME and scGVAR, assessed in the current simulation study was twofold. First, like subgrouping multi-VAR, both frameworks seek to accommodate quantitative and qualitative heterogeneity through the estimation of group-, subgroup-, and individual-level dynamics. Second, both methods address the challenges associated with modeling VAR processes comprised of many variables by inducing sparsity and parsimony in the estimation procedure. As noted previously, however, the manner in which these features are incorporated varies considerably between approaches. For example, whereas subgrouping multi-VAR estimates group-, subgroup-, and individual-level dynamics simultaneously, S-GIMME and scGVAR are characterized by iterative and stage-based estimation procedures. In addition to assessing the performance of subgrouping multi-VAR across a range of factors, the current simulation study therefore represents a rigorous evaluation of competing approaches for the analysis of multiple-subject, multivariate time series.
§.§ Data Generation and Model Estimation
Across all conditions, 10 × 10 transition matrices were generated for each individual in the following manner. First, subgroup membership was specified. For the balanced conditions, subgroups were constructed such that each contained the same number of individuals. For the unbalanced conditions with two subgroups, 30% of individuals were placed into the first subgroup, and the remaining 70% were specified as members of the second subgroup. For the unbalanced conditions with three subgroups, the first and second subgroups consisted of 20% of individuals, and the third subgroup contained the remaining 60%. Next, common effects, Γ, were specified as the 10 diagonal elements of each transition matrix—that is, the autoregressive effects. The location of subgroup, Π^s, and unique, Υ^k, effects were then chosen at random, wherein the number of effects for each represented 5% of possible paths. Thus, each transition matrix consisted of 10 common effects, five subgroup effects, and five unique effects, yielding a sparse matrix with 20% nonzero entries. All effects were drawn from a 𝒰(0,1) distribution until a stationary solution was obtained, as determined by the stationarity condition noted previously. The data generation procedure therefore incorporated both qualitative and quantitative heterogeneity through variation in the location of the subgroup and unique effects and variation in the magnitude of group, subgroup, and unique effects, respectively.
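A condensed sketch of this generation scheme for a single transition matrix is given below. For brevity it draws the subgroup and unique effects jointly and omits the sharing of Γ and Π^s across individuals, so it should be read as an illustration of the sparsity pattern and the stationarity check rather than as the full generation procedure.

import numpy as np

def generate_transition_matrix(d=10, n_sub=5, n_unique=5, rng=None):
    # Common effects on the diagonal, subgroup and unique effects at random
    # off-diagonal positions, all drawn from U(0, 1); redraw until stationary.
    rng = np.random.default_rng() if rng is None else rng
    while True:
        Phi = np.zeros((d, d))
        Phi[np.diag_indices(d)] = rng.uniform(0, 1, size=d)
        off = [(i, j) for i in range(d) for j in range(d) if i != j]
        picks = rng.choice(len(off), size=n_sub + n_unique, replace=False)
        for p in picks:
            i, j = off[p]
            Phi[i, j] = rng.uniform(0, 1)
        if np.max(np.abs(np.linalg.eigvals(Phi))) < 1:  # VAR(1) stationarity condition
            return Phi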
All models analyzed within the multi-VAR framework—that is, subgrouping multi-VAR, multi-VAR with confirmatory subgrouping, and standard multi-VAR—were estimated using the adaptive Lasso procedure detailed above, with adaptive weights obtained via ℓ_1 penalization. Paralleling the group-level stage in the S-GIMME framework, however, all autoregressive paths were assumed to be nonzero, and were therefore not subjected to penalization in either the subgroup enumeration or estimation stages. All multi-VAR and S-GIMME analyses were conducted using the multivar and gimme R packages, respectively.
§.§ Outcome Measures
The performance of the subgrouping multi-VAR framework and comparison methods was evaluated in three ways: model recovery, quality of estimated effects, and accuracy of subgroup identification. A number of metrics were used to assess model recovery, including sensitivity, specificity, and Matthews correlation coefficient (MCC; Matthews1975). Sensitivity and specificity can be interpreted as the true positive and true negative rates, respectively, and were computed as follows:
Mean sensitivity = 1/K∑_k = 1^K( ∑_i,j (ϕ̂_i,j^k ≠ 0 and ϕ_i,j^k ≠ 0 ) /∑_i,j (ϕ_i,j^k ≠ 0) ),
Mean specificity = 1/K∑_k = 1^K( ∑_i,j (ϕ̂_i,j^k = 0 and ϕ_i,j^k = 0 ) /∑_i,j (ϕ_i,j^k = 0) ),
where ϕ̂_i,j^k and ϕ_i,j^k correspond to the {i,j}^th element of the estimated and true transition matrices for the k^th individual, respectively. Mean sensitivity and specificity were then averaged across all replications, resulting in sensitivity and specificity values for each condition. MCC similarly provides a metric for assessing the recovery of the data generating model by incorporating both sensitivity and specificity in its calculation:
Mean MCC = (1/K) ∑_k=1^K ( (TP_k × TN_k - FP_k × FN_k) / √((TP_k + FP_k)(TP_k + FN_k)(TN_k + FP_k)(TN_k + FN_k)) ),
where TP is the number of parameters correctly estimated as nonzero, TN is the number of parameters correctly estimated as zero, FP is the number of parameters incorrectly estimated as nonzero, and FN is the number of parameters incorrectly estimated as zero. Notably, MCC ranges from perfect disagreement between estimated and true models (MCC = -1) to perfect agreement between the two (MCC = 1), and can therefore be interpreted as a discretization of the standard correlation coefficient in the context of binary classification Boughorbel2017. As such, the MCC can be interpreted with respect to commonly employed benchmarks (e.g., Cohen1988).
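These three recovery metrics can be computed directly from the estimated and true transition matrices for each individual, as in the following sketch. Treating exact zeros as absent effects and returning an MCC of zero when the denominator vanishes are conventions assumed for this illustration.

import numpy as np

def recovery_metrics(Phi_hat, Phi_true):
    # Sensitivity, specificity, and MCC for one individual's estimated matrix.
    est_nz, true_nz = (Phi_hat != 0), (Phi_true != 0)
    tp = float(np.sum(est_nz & true_nz))
    tn = float(np.sum(~est_nz & ~true_nz))
    fp = float(np.sum(est_nz & ~true_nz))
    fn = float(np.sum(~est_nz & true_nz))
    sensitivity = tp / max(tp + fn, 1.0)
    specificity = tn / max(tn + fp, 1.0)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return sensitivity, specificity, mcc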
To evaluate the quality and variability of the estimated parameters, we computed the mean absolute bias and root mean squared error (RMSE):
Mean Absolute Bias = (1/K) ∑_k=1^K (1/d^2) ∑_i,j=1^d |ϕ̂_i,j^k - ϕ_i,j^k|,
RMSE = (1/K) ∑_k=1^K √( (1/d^2) ∑_i,j=1^d (ϕ̂_i,j^k - ϕ_i,j^k)^2 ),
where ϕ̂_i,j^k and ϕ_i,j^k again correspond to the {i,j}^th element of the estimated and true transition matrices for the k^th individual, respectively, and d represents the number of variables (i.e., 10). These values were then averaged over all replications to provide mean absolute bias and RMSE metrics for each condition.
To assess the accuracy of subgroup identification, we computed the Hubert-Arabie adjusted Rand index (ARI; Hubert1985):
ARI = ( \binom{K}{2}(a+d) - [(a+b)(a+c) + (c+d)(b+d)] ) / ( \binom{K}{2}^2 - [(a+b)(a+c) + (c+d)(b+d)] ),
where a represents the number of pairs of individuals correctly placed in the same cluster, b is the number of pairs incorrectly placed in different clusters, c indicates the number of pairs incorrectly placed in the same cluster, and d is the number of pairs correctly placed in different clusters. The ARI therefore incorporates information about the number of true positive, false negative, false positive, and true negative classifications. The ARI has an upper bound of 1, which corresponds to perfect subgroup identification. ARI values of 0, conversely, indicate that subgroup assignments were equal to chance. ARI values greater than or equal to 0.90 were considered excellent, values between 0.80 and 0.89 were considered good, values between 0.65 and 0.79 indicated moderate recovery, and values below 0.64 suggested poor subgroup identification Steinley2004. Monte Carlo errors (MCE), defined as the standard deviation of the Monte Carlo estimates across all replications, were computed to quantify the uncertainty in all outcome measure estimates Koehler2009.
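In practice the ARI is more conveniently computed from the vectors of true and estimated subgroup labels than from the pair counts themselves; a minimal sketch using scikit-learn is shown below, with hypothetical labels for six individuals.

from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 0, 1, 1, 1]  # hypothetical true subgroup membership
est_labels = [0, 0, 1, 1, 1, 1]   # hypothetical recovered membership
ari = adjusted_rand_score(true_labels, est_labels)  # 1 = perfect recovery, ~0 = chance
print(round(ari, 2))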
§.§ Simulation Results
Model Recovery
Sensitivity, specificity, and MCC values across all conditions for each estimator can be seen in Tables <ref> and <ref>, and are visualized in Figure <ref>. Given that MCC incorporates both sensitivity and specificity, assessment of model recovery focused on this metric. Across all conditions examined in the current study, subgrouping multi-VAR demonstrated more accurate model recovery than S-GIMME, scGVAR, or standard multi-VAR, and displayed only a slight decrement in performance when compared to the confirmatory subgrouping framework, where the true subgroup membership is assumed to be known. When K = 50 and S = 2, mean MCC for subgrouping multi-VAR increased as a function of time series length, with values of 0.81 and 0.88 when T = 50 and T = 100, respectively, for balanced subgroups, and values of 0.79 and 0.84 when subgroup membership was unbalanced. Subgrouping multi-VAR therefore demonstrated good model recovery for these conditions, with some MCC values reflecting near perfect agreement between true and estimated models. Differences in model recovery between multi-VAR with data-driven subgrouping and confirmatory subgrouping were minimal across these conditions (less than 0.02). Model recovery for S-GIMME, scGVAR, and standard multi-VAR similarly increased as a function of time series length. Mean MCC values for standard multi-VAR were 0.68 and 0.77 when T = 50 and T = 100, respectively, when subgroups were balanced, and 0.69 and 0.78 when subgroups were unbalanced. Mean MCC values for S-GIMME were 0.66 and 0.80 for both balanced and unbalanced subgroups, whereas values for scGVAR were 0.56 and 0.64 for balanced subgroups and 0.55 and 0.63 for unbalanced subgroups.
Similar results were observed when K = 50 and S = 3. Indeed, mean MCC values for subgrouping multi-VAR increased from 0.80 to 0.88 as T increased from 50 to 100 for subgroups with balanced membership, and from 0.78 to 0.84 for unbalanced subgroups. Observed results therefore suggest that model recovery for subgrouping multi-VAR was largely unaffected by the addition of a third subgroup when K = 50. Discrepancies between data-driven and confirmatory subgrouping approaches were again minimal across these conditions. Mean MCC values for standard multi-VAR were 0.68 and 0.78 when subgroups were balanced, and 0.68 and 0.76 when membership was unbalanced. Comparable metrics were observed for both S-GIMME and scGVAR. Thus, model recovery for subgrouping multi-VAR and comparison approaches was largely unaffected by the inclusion of an additional subgroup when K = 50.
When K = 100 and S = 2, mean MCC values for subgrouping multi-VAR similarly increased as a function of time series length, with values of 0.82 when T = 50 and 0.88 for T = 100 when subgroups were balanced, and values of 0.79 and 0.82 when unbalanced. The addition of a third balanced subgroup resulted in similar model recovery metrics for subgrouping multi-VAR, with mean MCC values of 0.81 and 0.87. Notably, model recovery was better for three unbalanced subgroups than two unbalanced subgroups, with mean MCC values of 0.81 when T = 50 and 0.86 when T = 100. In general, observed results suggest that increasing K from 50 to 100 resulted in a slight improvement in model recovery. Discrepancies between data-driven and confirmatory approaches remained minimal. Model recovery for S-GIMME mirrored results observed in previously reported conditions, with mean MCC values of 0.66 and 0.80 when T = 50 and T = 100, respectively, for each combination of number of subgroups and membership composition. Conversely, model recovery for scGVAR improved when K = 100, with mean MCC values ranging from 0.58 to 0.59 when T = 50 and from 0.66 to 0.67 when T = 100. Standard multi-VAR displayed a slight decrease in model recovery when K = 100, with mean MCC values of approximately 0.67 for T = 50 and 0.75 for T = 100 across all subgroup number and membership composition conditions.
Parameter Estimates
Mean absolute bias and RMSE values can be found in Table <ref> and are visualized in Figure <ref>. Mean absolute bias for subgrouping multi-VAR was largely consistent across all conditions examined in the current study, with values ranging from 0.018 to 0.023. In general, bias was at a minimum when K = 50 and T = 100, mirroring results observed for model recovery. Conversely, bias was largest when K = 100 and T = 50, suggesting that an increase in sample size without a corresponding increase in time series length results in reduced performance. Thus, large sample sizes may result in a high degree of heterogeneity, thereby diminishing model performance when the number of time points is small (e.g., Fisher2024). It is worth noting, however, that these differences were small in magnitude, and may be a function of sampling variability, consistent with both observed MCE estimates and visualization of estimates in Figure <ref>. There were no meaningful differences observed between data-driven and confirmatory approaches with respect to mean absolute bias. Differences were observed, however, between subgrouping and standard multi-VAR, such that failing to account for the presence of subgroups increased mean absolute bias across all conditions, with values ranging from 0.021 to 0.032. Notably, compared to subgrouping multi-VAR, mean absolute bias for S-GIMME estimates was both smaller and less variable across all conditions, with values ranging from 0.014 to 0.018. Mean absolute bias for S-GIMME was smallest when K = 100 and T = 100, and largest when K = 50 and T = 100. Conversely, mean absolute bias for scGVAR estimates was larger than that observed for subgrouping multi-VAR across all conditions, with values ranging from 0.030 to 0.037.
A similar pattern of results was observed for the RMSE of subgrouping multi-VAR estimates. Indeed, RMSE was at a minimum when K = 50 and T = 100, and at a maximum when K = 100 and T = 50, with values ranging from 0.087 to 0.109 across all conditions. Differences in RMSE values between data-driven and confirmatory subgrouping approaches were less than 0.01 across all conditions. RMSE values for standard multi-VAR were larger across all conditions, with values ranging from 0.091 to 0.117. RMSE values for S-GIMME estimates were larger than those observed for subgrouping multi-VAR when both K and T were at a minimum, and smaller for all other conditions. Thus, the subgrouping multi-VAR estimates were less variable than S-GIMME estimates when both sample size and time series length were small. In general, subgrouping multi-VAR estimates were less variable than scGVAR estimates. However, RMSE values for scGVAR estimates were smaller than values for subgrouping multi-VAR estimates when both K and T were at a maximum.
Subgroup Recovery
ARI values for subgrouping multi-VAR, S-GIMME, and scGVAR are shown in Table <ref>. In general, subgroup recovery was considered good or excellent for subgrouping multi-VAR across all conditions Steinley2004. Subgroup recovery was at a maximum (ARI = 0.98) when K = 50, T = 100, and subgroup composition was balanced, indicating near perfect subgroup recovery. Subgroup recovery was at a minimum when K = 100, T = 50, with three unbalanced subgroups (ARI = 0.88). Conversely, subgroup recovery for S-GIMME was considered poor across all conditions, with ARI values ranging from 0.01 to 0.64. The minimum ARI value for S-GIMME was observed when K = 50, T = 50, with two unbalanced subgroups, and the maximum value occurred when K = 100, T = 100, with two balanced subgroups. Subgroup recovery for scGVAR ranged from moderate to excellent across all conditions, such that ARI values were at a minimum of 0.73 when K = 50, T = 50, with three unbalanced subgroups, and a maximum of 0.98 when K = 100, T = 100, with two subgroups.
Mirroring observed ARI values, subgrouping multi-VAR generally recovered the correct number of subgroups. Observed differences in ARI values between subgrouping multi-VAR and S-GIMME were similarly consistent with discrepancies in the number of subgroups identified by both frameworks. When K = 100 and T = 50, for example, the number of subgroups recovered ranged from 12 to 17 for S-GIMME and three to six for subgrouping multi-VAR. Thus, though both approaches overestimated the number of subgroups in these conditions, S-GIMME was adversely impacted to a greater degree. This is consistent with prior work showing that recovery of nonzero dynamics in GIMME is reduced when the number of time points is small Nestler2021. Notably, scGVAR subgroup enumeration was not affected by time series length, suggesting that it may be particularly well-suited to settings in which number of time points is small.
§ EMPIRICAL EXAMPLE
To demonstrate the utility of the subgrouping multi-VAR framework, we present an empirical example using data from Fisher et al. (). Data consisted of 40 individuals with a primary diagnosis of either major depressive disorder (MDD) or generalized anxiety disorder (GAD) who were assessed four times per day for 30 days. For the purposes of the current application, we restricted our analyses to the 10 variables related to MDD (e.g., down and depressed) and GAD (e.g., worried) symptomatology, thereby ensuring that the number of variables examined mirrored those assessed in the simulation study. All variables were measured using a visual analogue scale ranging from 0-100, such that 0 indicated that the participant did not experience the symptom at all during the preceding hours and 100 indicated that the symptom was experienced as much as possible. As the multi-VAR framework does not currently accommodate missing data, linear imputation via the package imputeTS was employed. For more information regarding data characteristics and study procedures, see Fisher et al. ().
The motivation for the selection of the current empirical dataset was influenced by several factors. First, both the number of individuals and length of time series were consistent with data analyzed in the simulation study. The quality of the group-, subgroup-, and individual-level estimates was therefore unlikely to be impacted by characteristics of the dataset, such as sample size. Second, the symptomatology of many psychopathological syndromes, including MDD and GAD, is characterized by persistent heterogeneity (e.g., Kotov2017). Indeed, prior work identified 1030 unique symptom profiles in a sample of individuals diagnosed with MDD Fried2015. Thus, the current example represents an evaluation of the degree to which subgrouping multi-VAR accommodates quantitative and qualitative heterogeneity in an empirical setting. Finally, all participants had a primary diagnosis of either MDD or GAD. Comparisons between diagnostic status obtained via structured clinical interview and subgroup membership derived by subgrouping multi-VAR were therefore possible.
Estimation of group-, subgroup-, and individual-level effects proceeded in the manner detailed above. First, standard multi-VAR without subgrouping was fit to the data. Estimates of the individual-level effects, Υ̂^k, were then used to derive subgroup membership. Using these subgroup labels, subgrouping multi-VAR was then fit to the data. Paralleling the modeling procedure for the simulated data, the autoregressive effects were not subjected to penalization via the adaptive Lasso in either the subgroup enumeration or estimation stages. In addition to analysis of the estimated group-, subgroup-, and individual-level effect, results were examined with respect to several between-person variables. That is, associations between subgroup membership and constructs of interest, such as diagnostic status, were assessed. All analyses were conducted using the R package multivar.
§.§ Empirical Results
Estimated effects are depicted in Figure <ref>. Estimates of common effects consisted of all autoregressive paths, with magnitudes ranging from 0.23 to 0.31. This autoregressive behavior is consistent with findings from the original study Fisher2017. With respect to subgroup enumeration, subgrouping multi-VAR identified four subgroups. The first three subgroups consisted of 25, 10, and four individuals, respectively, and the fourth subgroup was defined by a single individual. Given its singleton status, the fourth subgroup was excluded from subsequent analyses. Estimated effects for the first subgroup consisted of all autoregressive paths, with magnitudes ranging from 0.17 to 0.26, and a bidirectional relationship between anhedonia and down and depressed. The second subgroup was also characterized by estimated autoregressive effects, though most of these were negative, indicating that participants in this subgroup had autoregressive dynamics that were smaller in magnitude than those estimated at the group level. A relation between down and depressed and anhedonia was also observed for participants in this subgroup, though it was unidirectional in nature. Thus, the first two subgroups were distinguished by differences in both the sign of the estimated autoregressive effects and the nature of the dynamic relation between down and depressed and anhedonia.
The third subgroup also featured estimates of all autoregressive effects, with magnitudes ranging from -0.01 to 0.08, suggesting that, in general, participants in this subgroup were characterized by slightly stronger autoregressive dynamics than those derived at the group level. Notably, differences between this subgroup and the first two were observed with respect to the number of estimated cross-lagged effects. Indeed, out of 90 possible cross-lagged paths, 19 were estimated as nonzero, suggesting that the network of dynamic relations between MDD and GAD symptoms was much denser for individuals in this subgroup. Despite the observed qualitative and quantitative heterogeneity in estimated subgroup effects, subgroup membership was not associated with diagnostic status or between-person measures of depression and anxiety. However, given the degree of comorbidity in the present sample—75% of individuals had at least one comorbid diagnosis—identifying clear distinctions between participants with respect to diagnostic status and symptom level is a challenging endeavour.
Estimates of individual-level effects varied both qualitatively and quantitatively between individuals. For some individuals, these effects corresponded to the autoregressive paths, thereby serving to increase or decrease the magnitude of these dynamics, depending on the sign, compared to group- and subgroup-level estimates. Estimated effects for other individuals epitomized the persistent qualitative heterogeneity inherent in many dynamic processes of interest, such that nonzero cross-lagged paths represented an increase in both the complexity and density of symptom relations. Notably, estimates of individual-level effects for one individual were all zero, suggesting that their pattern of dynamics was fully captured by the group- and subgroup-level estimates.
§ DISCUSSION
Many dynamic processes are characterized by persistent heterogeneity in both the magnitude of the effects of interest and the functional form of the process itself. Failing to appropriately account for such heterogeneity when present limits the degree to which observed results can be generalized across various levels of analysis Molenaar2004. The current paper sought to address this challenge through the introduction of a subgrouping extension to the multi-VAR framework. Built on the VAR model, subgrouping multi-VAR is a method for analyzing multiple-subject, multivariate time series that incorporates both data-driven subgroup identification and penalized estimation of group-, subgroup-, and individual-level effects. In contrast to similar approaches, the incorporation of penalty parameters at each level of analysis allows for both sparsity in the estimated transition matrices and flexible approximation of potentially heterogeneous data generating processes. In doing so, it helps to bridge the divide between nomothetic and idiographic approaches to the analysis of ILD. The efficacy and utility of the subgrouping multi-VAR approach was demonstrated in a simulation study and empirical example.
The simulation study assessed the performance of the subgrouping multi-VAR framework across a range of design factors. Moreover, the current approach was compared to two alternative multiple-subject, multivariate time series methods, as well as benchmark comparisons with the standard multi-VAR framework and multi-VAR with confirmatory subgrouping (i.e., when true subgroup membership was known). With respect to model recovery, subgrouping multi-VAR outperformed S-GIMME, scGVAR, and the standard multi-VAR approach across all conditions, and demonstrated only a slight decrease in performance compared to multi-VAR with confirmatory subgrouping. Model recovery was strongest for the condition corresponding to 50 individuals, 100 time points, and three balanced subgroups, and weakest for 50 individuals, 50 time points, and three unbalanced subgroups. Even in this weakest condition, however, subgrouping multi-VAR demonstrated good recovery of the true data generating process. Observed differences in model recovery between subgrouping multi-VAR and comparison methods are likely a function of both the unique manner in which the current approach models heterogeneity in dynamic processes and the quality of the estimates used in the subgroup enumeration stage. Indeed, the structured nature of the penalization procedure and simultaneous estimation of effects of interest ensures that the heterogeneity of the solution is determined by the competition of the three penalty parameters corresponding to the group-, subgroup-, and individual-level effects. Conversely, estimation of heterogeneous dynamics in the S-GIMME framework proceeds iteratively and is a function of user-specified thresholds, modification indices, and fit statistics. The scGVAR approach is similarly iterative in nature, such that effects at different levels of analysis are estimated separately. Though prior work has shown that both S-GIMME and scGVAR demonstrate good model recovery (e.g., <cit.>; <cit.>), the subgrouping multi-VAR estimation procedure described herein may better approximate data generating processes characterized by a high degree of quantitative and qualitative heterogeneity, consistent with observed results.
Notably, this was also observed in the subgroup enumeration stage, such that subgrouping multi-VAR exhibited excellent identification of subgroup membership across nearly all conditions. Thus, standard multi-VAR estimates used in the construction of the Walktrap adjacency matrix Pons2006 can effectively inform the derivation of shared patterns of dynamics. These results contrast those observed for S-GIMME, wherein subgroup recovery was considered poor across all conditions. Indeed, S-GIMME often overestimated the number of subgroups, whereas subgrouping multi-VAR and scGVAR more consistently recovered the correct subgroup structure. These results therefore suggest that subgroup enumeration in the S-GIMME framework, wherein group-stage estimates are used to derive subgroup membership, is less effective than approaches that employ individual-level estimates, such as subgrouping multi-VAR and scGVAR. It is worth noting, however, that observed differences in subgroup identification between subgrouping multi-VAR and comparison methods may not generalize to situations in which subgroup-level effects are driven by contemporaneous dynamics, as the multi-VAR framework does not accommodate such dynamics. Prior S-GIMME work, for example, observed subgroup-level effects that consisted entirely of directed contemporaneous relations Lane2019. Future work should examine the behavior of subgrouping multi-VAR under such conditions.
Mirroring model recovery results, absolute bias and RMSE for subgrouping multi-VAR estimates were lowest for conditions corresponding to 50 individuals, 100 time points, and three subgroups. Largest values of absolute bias and RMSE were observed for conditions with 100 individuals and 50 time points, suggesting that analyses characterized by a large number of individuals may require a correspondingly large number of time points. In general, however, the quality and variability of estimated parameters—as indicated by absolute bias and RMSE—remained relatively stable across conditions. More work is needed to determine the behavior of these metrics as the discrepancy between number of individuals and number of time points increases. In contrast to model recovery results, absolute bias and RMSE values for subgrouping multi-VAR estimates were not consistently preferable when compared to those obtained for S-GIMME estimates. Across all conditions, for example, subgrouping multi-VAR parameter estimates exhibited a greater degree of bias than S-GIMME estimates, though these differences were small in magnitude. Similar results were observed for RMSE values when data consisted of either 100 individuals or 100 time points. However, when both number of time points and number of individuals were small, multi-VAR parameter estimates were less variable than those obtained from S-GIMME. Observed differences in the quality and variability of parameter estimates between subgrouping multi-VAR and S-GIMME are generally consistent with prior work showing that GIMME estimates exhibit lower absolute bias and RMSE values than standard multi-VAR when the data are highly heterogeneous Fisher2024. Differences in absolute bias and RMSE values between subgrouping multi-VAR and scGVAR were more consistent with model recovery results. That is, subgrouping multi-VAR estimates were less biased and variable than scGVAR estimates across most conditions.
Results from an empirical example using data from Fisher et al. () demonstrated the utility of the subgrouping multi-VAR framework. At the group level, symptom relations were characterized entirely by autoregressive dynamics, and were consistent with results observed in other clinical samples (e.g., <cit.>; <cit.>). Symptoms of MDD and GAD therefore exhibited some degree of inertia (e.g., Kuppens2010) for all individuals. In addition to group-level effects, three primary subgroups were identified, each distinguished by differences in both the magnitude and structure of symptom dynamics. Though subgroup membership was not associated with between-person variables of interest, such as diagnostic status, identification of shared patterns of symptom relations is not without utility. Indeed, observed findings are consistent with prior work showing that meaningful patterns of subgroup-specific dynamics are not necessarily associated with mean levels of symptoms Lane2019. Such dynamics could, for example, inform future research and treatment efforts, consistent with the recent shift in focus toward the development of personalized models of psychopathology Wright2020. The degree of quantitative and qualitative heterogeneity observed with respect to estimated individual-level effects similarly indicates a need for person-specific approaches to prevention and intervention (e.g., Fisher2019).
Simulation and empirical results observed in the current study should be interpreted in the context of several limiting factors, each of which represents an area for future research. First, as noted above, the multi-VAR framework cannot estimate contemporaneous dynamics. As such, subgrouping multi-VAR may not be an appropriate methodological approach for processes characterized primarily by said dynamics. Next, despite work showing that Walktrap is a reliable community detection algorithm Gates2016, the efficacy of alternative clustering approaches was not assessed. Recent work, for example, suggests that Walktrap could be improved by replacing Ward's hierarchical clustering algorithm Ward1963 with methods for K-means clustering Brusco2024. Moreover, approaches relying on modularity optimization may fail to identify subgroups consisting of a small number of individuals Fortunato2007. More work is needed to determine if alternative approaches improve the performance of subgrouping multi-VAR. Finally, observed results may not generalize to conditions not assessed in the current simulation and empirical studies. Indeed, prior work has demonstrated that model recovery for standard multi-VAR is impacted by the number of variables comprising the multivariate time series, such that performance improves as number of variables increases Fisher2022.
The current study introduced a novel methodological framework for analyzing multiple-subject, multivariate time series characterized by persistent quantitative and qualitative heterogeneity. Results from both an extensive simulation study and empirical example suggest that subgrouping multi-VAR is an effective approach for estimating group-, subgroup-, and individual-level dynamics. Notably, the current approach demonstrated good model recovery under commonly encountered conditions, such as when both sample size and time series length are small. Moreover, the advantage of subgrouping multi-VAR over popular alternatives with respect to model recovery was greatest when time series length was small, suggesting that the current approach may be particularly well suited for the types of data frequently collected in the social, behavioral, and health sciences.
|
http://arxiv.org/abs/2409.02384v1 | 20240904022059 | STAB: Speech Tokenizer Assessment Benchmark | [
"Shikhar Vashishth",
"Harman Singh",
"Shikhar Bharadwaj",
"Sriram Ganapathy",
"Chulayuth Asawaroengchai",
"Kartik Audhkhasi",
"Andrew Rosenberg",
"Ankur Bapna",
"Bhuvana Ramabhadran"
] | cs.CL | [
"cs.CL",
"cs.SD",
"eess.AS"
] |
STAB: Speech Tokenizer Assessment Benchmark
Shikhar Vashishth^*, Harman Singh^*, Shikhar Bharadwaj^*^*Equal Contribution., Sriram Ganapathy, Chulayuth Asawaroengchai,
Kartik Audhkhasi, Andrew Rosenberg, Ankur Bapna, Bhuvana Ramabhadran
Google
September 9, 2024
====================================================================================================================================================================================================================
§ ABSTRACT
Representing speech as discrete tokens provides a framework for transforming speech into a format that closely resembles text, thus enabling the use of speech as an input to the widely successful large language models (LLMs). Currently, while several speech tokenizers have been proposed, there is ambiguity regarding the properties that are desired from a tokenizer for specific downstream tasks and its overall generalizability. Evaluating the performance of tokenizers across different downstream tasks is a computationally intensive effort that poses challenges for scalability. To circumvent this requirement, we present STAB (Speech Tokenizer Assessment Benchmark), a systematic evaluation framework designed to assess speech tokenizers comprehensively and shed light on their inherent characteristics. This framework provides a deeper understanding of the underlying mechanisms of speech tokenization, thereby offering a valuable resource for expediting the advancement of future tokenizer models and enabling comparative analysis using a standardized benchmark.
We evaluate the STAB metrics and correlate them with downstream task performance across a range of speech tasks and tokenizer choices.
speech tokenization, evaluation benchmark, multimodal representation learning
§ INTRODUCTION
Speech representation learning, the task of developing models that extract succinct feature representations of speech for downstream tasks, has been an area of active interest in recent years.
Motivated by zero-resource speech processing, which aims to learn sub-word or word units directly from unlabeled raw speech <cit.>, several unsupervised methods have been proposed for learning continuous representations <cit.> and discrete acoustic units <cit.>.
Techniques based on predictive coding <cit.> and self-supervised learning, such as the class of wav2vec models <cit.>, have been developed to derive quantized representations of audio. More recently, iterative learning of discrete units and acoustic representations such as HuBERT <cit.> and joint learning of denoising and self-supervision in wavLM <cit.> have shown promising results.
Discrete representations are a natural fit for speech and language given their ability to be represented as a sequence of symbolic, phonetic, graphemic or sub-word/word units. The approach of representing speech in the form of discrete tokens offers a significant advantage by converting speech into a format that mirrors text, thereby leveraging the application of speech as an input for various large language models (LLMs) <cit.>. Furthermore, speech tokens have the ability to capture non-verbal cues such as emotion and rhythm, which contain additional information compared to their textual counterparts <cit.>.
Utilizing discrete speech tokens has proven advantageous in tasks such as automatic speech translation and speech-to-speech translation, while demonstrating comparable performance on automatic speech recognition <cit.>. This also contributes to the advancement of multimodal LMs <cit.>.
Speech tokenizers optimized for specific downstream task(s) exist <cit.>; however, measuring their generalization ability remains a challenging problem.
Assessing the performance of all tokenizers across various downstream tasks is a computationally expensive endeavor that presents challenges for scalability. Additionally, speech tokenizers are often utilized as a black box, with limited examination <cit.> of the nature of the tokens they generate or their adherence to specific properties.
Therefore, it is timely to create a low-compute evaluation benchmark for assessing tokenizers across multiple dimensions.
Our contributions can be summarized as follows:
* We propose STAB, a speech tokenizer assessment benchmark which evaluates capabilities of a given speech tokenizer.
* STAB presents a cost-effective evaluation approach and holds potential for expediting research on speech tokenization.
* Through extensive experiments, we demonstrate that STAB provides a reliable indication of the speech tokenizer's performance on a range of downstream tasks.
§ RELATED WORK
Speech Tokenization:
Self-supervised learning for speech historically relied on contrastive loss on audio embeddings, as exemplified by wav2vec <cit.>.
Vq-wav2vec <cit.> and DiscreteBERT <cit.> introduced tokenization based objectives for learning better speech representations.
Following this, HuBERT <cit.> introduced iterative refinement of speech tokens within a masked language modeling (MLM) framework.
W2v-BERT <cit.> combined the benefits of the contrastive approaches and MLM with speech tokens in a single model.
Interestingly, BEST-RQ <cit.> utilizes random projection to generate target tokens.
Hence, speech tokens have become central to self-supervised pre-training models and are typically obtained through methods such as K-means or vector quantization <cit.>.
AudioLM <cit.> and AudioPaLM <cit.> auto-regressively model the speech token sequences derived from clustering representations generated by an audio encoding model. In this study, we assess various speech tokenizers employed in existing methods.
Speech Benchmarks
With the development of various representation learning frameworks, there have also been efforts to evaluate and benchmark speech representations. In the latest edition of the Zero Resource Speech Challenge, evaluations focused on exploring text-less speech language modeling tasks <cit.>. The speech processing universal performance benchmark (SUPERB) considers a multitude of downstream evaluation tasks that included semantic and para-linguistic tasks <cit.>. An extension to multi-lingual tasks is benchmarked in ML-SUPERB <cit.>. For multitask evaluation in a zero-shot setting, Dynamic-SUPERB <cit.> has been introduced recently. A non-semantic evaluation benchmark, NOSS has also been proposed for audio representations <cit.>.
§ STAB DETAILS
§.§ Invariance
For tasks such as ASR, extracting semantic meaning from speech is crucial. Previous studies have introduced the concept of semantic and acoustic speech tokenization <cit.>. Semantic tokens focus solely on extracting semantic information from the speech signal, while acoustic tokens capture other properties such as speaker information, language, and emotion.
Here, we assess the ability of a speech tokenizer to accurately capture semantics by evaluating it along the following dimensions.
* Speaker invariance: Examines the variance in tokenization of identical sentences uttered by two different speakers, for example comparing the same sentence spoken by a female and a male speaker.
* Context invariance: Analyses how tokens are altered when a part of the speech context is masked. This measurement reflects the influence of the surrounding neighborhood on a token. We compare the tokens extracted from a segment (the initial 4 seconds) of the utterance in isolation against the tokens of the same segment within its original context.
* Language invariance: Measures the variation in tokenization of the same concept spoken in two different languages. For example, comparing the tokens of "Cat is drinking the milk" in English and "Eine Katze trinkt Milch" in German.
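As a purely illustrative example of how the change in tokenization between two renditions of the same content could be quantified, the sketch below uses a normalized Levenshtein distance between token sequences; this choice of distance is an assumption of the sketch rather than the exact measure used by STAB.

def normalized_edit_distance(tokens_a, tokens_b):
    # Levenshtein distance between two token sequences, normalized to [0, 1].
    la, lb = len(tokens_a), len(tokens_b)
    dp = list(range(lb + 1))
    for i in range(1, la + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, lb + 1):
            cur = dp[j]
            cost = 0 if tokens_a[i - 1] == tokens_b[j - 1] else 1
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
            prev = cur
    return dp[lb] / max(la, lb, 1)

# An invariance score can then be reported as 1 minus this distance, for example
# between token sequences of the same sentence spoken by two different speakers.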
§.§ Robustness
We evaluate the resilience of a speech tokenizer to different types of noise and acoustic variations in speech signals. This is crucial for effectively handling real-world data, which may include recordings from a variety of microphones and speakers.
* Pitch Change: Pitch change is a common phenomenon in speech, often resulting from factors such as equipment imperfections or signal processing <cit.>. We investigate how tokenization varies when the pitch of a speech signal is modified, while ensuring that the audio remains intelligible.
* Playback speed: We examine how tokenization changes when the playback speed, i.e., the rate at which the audio is played, is altered.
* Background Noise: Here, we introduce background noise (𝒩(0, v), with the variance v chosen such that the SNR is 10 dB) into the original speech signal and assess the behavior of a speech tokenizer in response to the added noise; a sketch of this operation follows the list.
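A sketch of the noise-addition step is shown below, assuming a mono waveform stored as a NumPy array; the variance of the Gaussian noise is chosen from the signal power so that the resulting SNR equals the 10 dB target.

import numpy as np

def add_noise_at_snr(speech, snr_db=10.0, rng=None):
    # Add zero-mean Gaussian noise so that the signal-to-noise ratio is snr_db.
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(speech ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))  # SNR(dB) = 10 log10(Ps / Pn)
    noise = rng.normal(0.0, np.sqrt(noise_power), size=speech.shape)
    return speech + noise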
§.§ Compressibility
In natural language processing (NLP), models based on words or subwords have been shown to outperform character-based models <cit.>. However, most speech tokenizers tokenize at a level lower than phonemes. Previous studies <cit.> have shown that training a sentence piece tokenizer on speech sequences yields subword-level tokens, resulting in improvements in downstream tasks. Nevertheless, the degree of compressibility varies among different tokenizers. Hence, we propose the following dimensions to measure this property.
* Huffman Encoding Efficiency: Huffman coding algorithm <cit.> is widely used for lossless data compression. We use the Huffman coding algorithm to compress a corpus of speech sequences of a particular language, following which we calculate the compression efficiency.
* Byte-pair Encoding Efficiency: Byte-pair encoding (BPE) <cit.> is a tokenization technique that involves iteratively merging the most frequent pair of consecutive tokens to create new tokens. This merging process is repeated until a predefined vocabulary size is reached. Using BPE, it is possible to learn subword-level tokens by merging repeated patterns found in speech token sequences.
* De-duplication Efficiency: We assess the compressibility of speech sequences by merging adjacent repeating tokens, as illustrated in the sketch after this list.
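As a concrete illustration of the simplest of these measures, the sketch below computes de-duplication efficiency as the relative reduction in sequence length after adjacent repeats are merged; this exact normalization is an assumption made for the example.

from itertools import groupby

def dedup_efficiency(token_seq):
    # Fraction of tokens removed by merging adjacent repeated tokens.
    deduped = [tok for tok, _ in groupby(token_seq)]
    return 1.0 - len(deduped) / max(len(token_seq), 1)

print(dedup_efficiency([5, 5, 5, 9, 9, 3]))  # 0.5: six tokens compress to three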
§.§ Vocabulary
Here, we evaluate how a speech tokenizer utilizes its vocabulary and how this utilization varies across languages. A larger vocabulary size in a speech tokenizer increases the number of parameters in the Speech Language Models (SLMs). Therefore, it is crucial to analyze how the vocabulary is being used and to ensure that there are no mode collapse issues. To achieve this, we analyze tokenizers along the following axes,
* Per-language Utilization: We examine the proportion of the total vocabulary utilized for each language, considering a fixed number (500k) of observed tokens.
* Overall Utilization and Entropy: We explore the vocabulary utilization across all languages and compute entropy of the vocabulary distribution to evaluate any bias towards a subset of tokens.
* Vocabulary Distribution Comparison Across Languages: We investigate whether the tokenizer captures relationships among languages, with the hypothesis that a tokenizer designed to consider language similarity should exhibit similar vocabulary distributions for related languages.
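The utilization and entropy measures can be computed from raw token counts as in the sketch below; reporting entropy in bits over the empirical token distribution is an assumption of this illustration. For per-language utilization, token_seq would be the fixed-size sample (e.g., 500k tokens) observed for that language.

import numpy as np
from collections import Counter

def vocab_stats(token_seq, vocab_size):
    # Fraction of the vocabulary used and entropy (in bits) of the token distribution.
    counts = Counter(token_seq)
    utilization = len(counts) / vocab_size
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p))
    return utilization, entropy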
§ EXPERIMENTAL SETUP
§.§ Datasets
Datasets:
In our proposed benchmark, we employ the FLEURS dataset <cit.>, which is the speech counterpart of the FLoRes-101 machine translation dataset <cit.>. FLEURS comprises 2,000 n-way parallel sentences spoken in 102 languages, enabling evaluation on metrics such as language awareness.
Additionally, we employ the TIMIT dataset <cit.>, which includes recordings of 630 speakers reciting 10 sentences each, accompanied by transcripts for each spoken sentence. This enables us to assess speaker-awareness.
Pre-training Datasets:
In our experiments, we employ the AudioPaLM model <cit.> which involves initializing with a pre-trained text decoder (PaLM-2 <cit.>) and subsequently making it multimodal by expanding its vocabulary and training it on a speech-text data mixture. Our data mixture consists of a blend of 75% original text data <cit.> and 25% automatic speech recognition (ASR) data sourced from the Babel <cit.>, VoxPopuli ASR, Multilingual Librispeech <cit.>, FLEURS, and YouTube ASR datasets <cit.>. In total, the speech dataset comprises 221k hours of ASR data spanning across ∼100 languages.
Evaluation Datasets:
Along with the benchmark, we evaluate our models on several downstream tasks such as ASR, emotion recognition, speaker identification, and intent classification. For ASR, we utilize the transcribed VoxPopuli dataset, which spans 14 languages, and the CoVoST-2 <cit.> dataset for AST. We use the IEMOCAP dataset <cit.> for emotion recognition and VoxCeleb <cit.> for speaker identification. Since AudioPaLM is a decoder-only model, we approach the classification tasks as seq2seq tasks. For all these datasets, we fine-tune our model on the training split followed by evaluation on the corresponding dev/test split.
§.§ Baseline systems
In our experimental analysis, we compare several speech tokenizers commonly utilized within the research community.
* w2v2: Similar to Rubenstein et al. <cit.>, we employ wav2vec 2.0 <cit.>, which is trained on multilingual data, for encoding speech. Subsequently, a k-means model (with k = 8k) is trained on the embeddings generated by the model, and the centroid indices are extracted as semantic tokens.
* w2v-BERT: Same as w2v2 with speech encoder replaced by a pre-trained w2v-BERT encoder <cit.> trained using Masked Language Modeling (MLM) objective.
* BEST-RQ: Here, we employ MLM-based BEST-RQ model <cit.> as the speech encoder.
* USM-v1: Following Rubenstein et al. <cit.>, we employ Google Universal Speech model (USM) <cit.>, which is trained using MLM objective for encoding speech. For USM-v1 and subsequent tokenizers, the vocabulary size is 32k.
* USM-v2 <cit.>: Similar to USM-v1, this involves USM but with the inclusion of an auxiliary ASR loss during training. Moreover, instead of K-means, vector quantization <cit.> is used for discretizing representations.
* USM-v3: This is identical to USM-v2 tokenizer. However, it utilizes USM trained with spectrogram reconstruction <cit.> loss in addition to ASR.
Implementation details:
Most of the hyper-parameters are directly adopted from AudioPaLM <cit.>. We report results with models of size 1B, initialized with a PaLM-2 checkpoint and pre-trained for 30k steps on our speech-text mixture. For each downstream evaluation task, we fine-tune the pre-trained model on its corresponding training split before evaluation. Please note that no fine-tuning is necessary for STAB, as its metrics can be directly computed over the raw tokens.
§ RESULTS
§.§ STAB Performance Comparison
In this section, we evaluate different tokenizers, as outlined in Section <ref>, on various STAB dimensions. The summary of the results is presented in Table <ref>. As previously described, w2v2 is trained using contrastive loss whereas w2v-BERT, BEST-RQ and USM-v1 are trained using Masked Language Modeling (MLM) loss. Further, USM-v2 incorporates both MLM and Automatic Speech Recognition (ASR) losses and USM-v3 additionally includes reconstruction loss. Please note that the vocabulary size of w2v2, w2v-BERT, and BEST-RQ tokenizers is 8k, whereas USM-based tokenizers utilize a vocabulary of 32k.
Thus, the majority of our conclusions are drawn from comparisons within the group of tokenizers having the same vocabulary size.
Invariance: The results demonstrate that the inclusion of ASR loss (such as in USM-v2) makes tokenizers more invariant to speaker information. Moreover, it boosts contextual dependence, as it necessitates a semantic understanding of all frames collectively. This is evident from the fall in the context invariance metric for USM-v2 among the 32k-tokenizers. Further, the contrastive loss of w2v2 drastically increases the dependence on context compared to the MLM-based losses in w2v-BERT and BEST-RQ. Regarding language invariance, most tokenizers generate distinct token sequences for different languages. However, ASR loss appears to reduce language invariance, as it necessitates generating text in the correct script based on the specific language used in the speech.
Robustness: We observe that USM-based tokenizers exhibit greater robustness to noise compared to other tokenizers, likely due to the more extensive data used during pre-training. Additionally, incorporating ASR loss during training enhances the tokenizers' resilience to noisy speech signals. In contrast, training with a spectrogram reconstruction loss appears to increase the model's susceptibility to noise. Among the 8k-tokenizers, the w2v-BERT tokenizer demonstrates superior noise robustness relative to its counterparts.
Compressibility: The results indicate that 8k-tokenizers demonstrate higher compressibility compared to 32k-tokenizers, which can be attributed to their smaller vocabulary size. Among 8k-tokenizers, w2v-BERT exhibits higher overall compressibility. Additionally, similar to previous findings, incorporating ASR loss enhances tokenizer compressibility.
Vocabulary:
Among all 32k-tokenizers, USM-v1 exhibits the lowest per-language and overall vocabulary utilization. This is attributed to its use of K-means quantization, in contrast to the vector quantization employed by USM-v2 and USM-v3. This indicates that simple K-means representation is ineffective in fully utilizing the entire vocabulary, potentially resulting in the wastage of model parameters. The vocabulary utilization among 8k-tokenizers is higher given their smaller vocabulary.
Language Relationships: The ASR loss enhances tokenizer's awareness of language relationships. As shown in Figure <ref>, USM-v2 exhibits a higher similarity in vocabulary distribution across closely related languages, a characteristic not elicited by tokens from the USM-v1 tokenizer. This demonstrates the potential of ASR-trained tokenizers to exhibit higher levels of cross-lingual knowledge transfer.
§.§ Correlation with Downstream tasks
We evaluate various tokenizers on multiple downstream tasks: Automatic Speech Recognition (ASR), Automatic Speech Translation (AST), Emotion Classification (EC), Speaker Identification (SID), and Language Identification (LID). For each task, we fine-tune our already pre-trained models on the training split of corresponding dataset before evaluation.
The results on downstream tasks are summarized in Table <ref>. Overall, we find that STAB metrics correlate well with the performance on downstream tasks. On ASR and AST tasks, w2v-BERT and USM-v2, which are more speaker invariant and robust to noise, perform best in their categories. Conversely, the tokenizers with lower speaker invariance perform better on speaker identification tasks, as expected.
Previous studies <cit.> on emotion classification using IEMOCAP dataset have shown that utilizing the output of an ASR system yields better results compared to models that directly use the speech modality. Our findings support this observation, as w2v-BERT and USM-v2 outperform other tokenizers in our experiments. USM-v2 also captures language similarity better, as shown in Figure <ref>, which reflects in its improved language identification performance.
To identify the coupled relationship between the STAB metrics (Table <ref>) and the downstream tasks (Table <ref>), we consider pairs of tokenizers (e.g., USM-v1 and USM-v2). For each pair, we compute the correlation between the binarized relative improvements in a STAB metric and the relative improvements in downstream task performance. In this manner, the correlation plot is generated (Figure <ref>) using the average correlation over all 32k-tokenizer pairs for different choices of STAB metrics and downstream tasks. As seen here, the ASR and AST tasks follow an identical trend, with vocabulary utilization metrics showing the maximal correlation while language/context invariance is seen to have the maximal negative correlation. The LID task also shows a similar trend. The EC task shows the highest correlation for speaker and noise invariance, which essentially allows the model to focus on emotion-related cues in the tokenized audio signal. The SID task shows somewhat of an opposite trend to most of the other tasks considered, where the language and context invariance are positively correlated while the overall vocabulary utilization is negatively correlated with the SID performance.
These findings illustrate that STAB metrics correlate with downstream tasks and offer insights into a tokenizer's performance on downstream applications.
Cost-Effectiveness of STAB:
For any tokenizer, each STAB metric requires less than 15 minutes of CPU compute on our Apache Beam based implementation. In contrast, evaluating each tokenizer for a downstream task involves approximately 16 hours of pre-training on 256 accelerated hardware chips across multiple datasets, followed by 22 hours of fine-tuning on 128 accelerated hardware chips. Hence, STAB is at least 100x more efficient in terms of compute and data resources compared to downstream evaluation. Consequently, the proposed benchmark has the potential to be a valuable tool in advancing the design of speech tokenizers.
§ CONCLUSION
In this paper, we introduced STAB (Speech Tokenizer Assessment Benchmark), a comprehensive benchmark for evaluating speech tokenizers and illuminating their inherent characteristics. The benchmark offers a deeper understanding of the inner workings of a speech tokenizer, and STAB metrics correlate with the performance on several downstream tasks. STAB is 100x more efficient in terms of compute and data than using downstream tasks to compare speech tokenizers, making it a potential catalyst for the development of speech tokenizers.
|
http://arxiv.org/abs/2409.02871v1 | 20240904165431 | Hybrid Imitation-Learning Motion Planner for Urban Driving | [
"Cristian Gariboldi",
"Matteo Corno",
"Beng Jin"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG"
] |
§ ABSTRACT
With the release of open-source datasets such as nuPlan and Argoverse, research on learning-based planners has grown considerably in recent years. Existing systems have shown excellent capabilities in imitating human driving behaviour, but they struggle to guarantee safe closed-loop driving. Conversely, optimization-based planners offer greater safety in short-term planning scenarios. To confront this challenge, in this paper we propose a novel hybrid motion planner that integrates both learning-based and optimization-based techniques. Initially, a multilayer perceptron (MLP) generates a human-like trajectory, which is then refined by an optimization-based component. This component not only minimizes tracking errors but also computes a trajectory that is kinematically feasible and free of collisions with obstacles and road boundaries. Our model effectively balances safety and human-likeness, mitigating the trade-off inherent in these objectives. We validate our approach through simulation experiments and further demonstrate its efficacy by deploying it in real-world self-driving vehicles.
§ INTRODUCTION
Autonomous cars are expected to play a crucial role in future mobility due to their potential for increased safety and road utilization. To ensure these benefits, their planning components must provide safe, comfortable, and collision-free trajectories that account for both static and dynamic traffic elements. Traditional trajectory planning approaches include rule-based, sample-based, and optimization-based methods, which rely on manually defined costs and objective functions optimized using classical techniques like A*, RRT, dynamic programming, and Model Predictive Trajectory algorithms. These methods are reliable and interpretable but struggle to scale in complex urban scenarios and do not improve with data, requiring extensive engineering effort for tuning.
The availability of open-source datasets such as nuPlan and Argoverse has advanced research in learning-based planners, which are very good at generating human-like trajectories. However, these models trained in open-loop settings do not guarantee safety in closed-loop applications, especially in novel scenarios, due to their dependence on training data. To address these limitations, perturbations can be introduced into training datasets to help vehicles recover from dangerous situations and mitigate covariate shift problems. Alternatively, a differentiable simulator can be used for closed-loop training. Despite these improvements, learning-based models still struggle to generalize well in unseen domains, making them unsafe for real-world traffic.
The paper proposes two key contributions:
1) Integration of learning-based and optimization-based techniques to create a hybrid imitation-learning model. This combination aims to generate safe, human-like trajectories, balancing the trade-offs between these objectives. This approach is the first of its kind.
2) Validation of the hybrid model on a real vehicle in urban environments, demonstrating its practical effectiveness and robustness beyond simulation.
Most research in this field is confined to simulations, which may not translate to real-world performance. The goal is to improve the short-term planning capabilities of learning-based models, ensuring their safety and reliability in real urban settings. The research focuses on planning, assuming that localization, perception, mapping, and control modules are already in place.
§ RELATED WORK
Generating a comfortable, feasible and collision-free trajectory is a complex task for autonomous driving that has attracted considerable academic interest with several approaches proposed.
§.§ Optimization-based planners
Rule-based and sample-based approaches have been valuable for global and local trajectory planning [1, 2]. However, their complexity makes them unsuitable for real-world autonomous driving in complex scenarios. Consequently, optimization-based planners [3-9] have been proposed, which find optimal trajectories by minimizing predefined cost functions and apply the best control actions for tracking.
Despite their advantages, optimization-based planners face significant challenges:
1) They often struggle to find the global optimum in complex scenarios, as real-time solutions to these optimization problems are difficult, frequently resulting in convergence to local minima in non-convex problems.
2) Even when these planners generate safe, collision-free trajectories, the paths differ significantly from those a human would choose. This discrepancy can confuse and destabilize other agents around the self-driving vehicle, who are not used to predicting the behavior of autonomous cars, potentially leading to unsafe situations.
To address these issues, researchers have turned to machine learning approaches, which have shown promise due to recent advancements in the field.
§.§ Reinforcement Learning
Reinforcement Learning for autonomous driving [10-13] removes some human engineering complexity, since it uses machine learning techniques to learn an optimal policy by maximizing a reward (cost) function while exploring and exploiting the environment. Even though it is possible to obtain good performance in simulation or in laboratory experiments, this performance does not easily translate to real-world, complex scenarios and cannot guarantee safety in every driving condition, mainly because of the difficulty of converging to a stable policy.
§.§ Imitation-Learning
Imitation-learning models learn driving policies from expert demonstrations, mapping states to actions. Recently, large datasets of human driving behavior have been released by companies and open-source projects such as Argoverse <cit.>, Lyft <cit.>, Waymo <cit.>, and nuPlan <cit.>, enhancing the development of imitation-learning in autonomous driving. This approach has led to state-of-the-art solutions for motion forecasting <cit.> and robust path planning capabilities.
Our work focuses on leveraging imitation-learning for motion planning by analyzing several methods in this category. ChaufferNet <cit.> uses a convolutional neural network to encode a top-down representation of the environment, training it to imitate human driving. The Urban Driver model <cit.> optimizes trajectories using a policy gradient method and a differentiable simulator for closed-loop training. In contrast, the Neural Motion Planner system <cit.> uses sensor and HD map data to generate 3D detections, future trajectories, and a cost volume, selecting the trajectory with the minimum learned cost.
A multimodal prediction strategy combines a transformer with a Mixture of Experts approach <cit.> to model probability distributions over multiple future trajectories, selecting the one minimizing a predefined cost function. Hybrid models, like SafetyNet <cit.>, integrate a machine learning planner with a rule-based fallback layer to ensure trajectory feasibility and safety, executing either the ML or fallback trajectory based on dynamic feasibility checks.
Another hybrid model, PDM-Hybrid <cit.>, uses trajectory fusion between sample-based and learning-based planners to achieve high scores in the nuPlan simulator. However, this model presents several issues:
1) The fusion of the two trajectories involves linear interpolation based on a correction horizon, denoted as C. Up to C, the trajectory is guided by the sample-based approach, transitioning to the learning-based trajectory beyond C. However, this method may introduce discontinuities in the final trajectory due to inconsistencies at the fusion point C;
2) While this strategy aims to produce a prediction trajectory resembling human behavior (after C), the actual path taken by the ego-vehicle aligns with the output of the sample-based approach. Consequently, the final trajectory may lack human-like characteristics, deviating from expected human behavior.
§ SYSTEM ARCHITECTURE
This section describes our hybrid imitation-learning model, combining a learning-based planner with an optimization-based component for kinematically feasible, collision-free trajectories. As outlined in Fig. 1, the system inputs the ego vehicle states, perception observations, and a goal destination to generate a sample-based trajectory with the Planner block. A Multilayer Perceptron (MLP) refines this trajectory to mimic human-like behavior, and the Model Predictive Trajectory (MPT) block optimizes it to avoid collisions with obstacles and road boundaries.
§.§ Planner
The planner block, together with the multilayer perceptron, was inspired by the PDM-Open model [24, 25]. Taking as inputs the poses, velocities and accelerations of the ego vehicle, the observations (used for agent forecasting) and the goal, it is responsible for finding a centerline from the starting position to the end point, leveraging Dijkstra's algorithm <cit.>, and for computing a collision-free path, relying on a sample-based approach.
The planner computes 15 different paths in the following way:
1) Starting from the centerline, it employs 5 different Intelligent Driver Model (IDM) <cit.> policies with specific target speeds, specifically 20%, 40%, 60%, 80% and 100% of the speed limit. When there is a leading vehicle in front of the ego, the speed limit is defined as the velocity of the leading vehicle;
2) Secondly, in order to have lateral variance, we also apply 3 different offsets from the centerline, respectively +1m, -1m and 0m.
This way, we have 15 different paths with longitudinal and lateral variety which are simulated in the forecasted environment and scored according to the closed-loop metrics provided by nuPlan.
The path with the highest score is then selected and if it has an expected at-fault collision within 2 seconds, the output is overwritten with a maximum braking force maneuver.
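As a rough illustration of this candidate enumeration, the sketch below builds the 15 paths from a centerline given as an array of (x, y) waypoints; the scoring and collision-check functions are placeholders standing in for the nuPlan closed-loop metrics and the at-fault-collision test, and are not part of the original implementation.

```python
import numpy as np

def generate_candidate_paths(centerline, speed_limit):
    """Enumerate the 15 candidates: 5 IDM target speeds x 3 lateral offsets."""
    speed_fractions = [0.2, 0.4, 0.6, 0.8, 1.0]   # 20%..100% of the speed limit
    lateral_offsets = [-1.0, 0.0, 1.0]            # metres from the centerline
    # Unit normals of the centerline, used to shift it laterally.
    d = np.gradient(centerline, axis=0)
    normals = np.stack([-d[:, 1], d[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    candidates = []
    for frac in speed_fractions:
        for offset in lateral_offsets:
            candidates.append({"path": centerline + offset * normals,
                               "target_speed": frac * speed_limit})
    return candidates

def select_best_path(candidates, score_fn, collision_within_2s_fn):
    """Score every candidate, keep the best, and brake if an at-fault collision is imminent."""
    best = max(candidates, key=score_fn)
    if collision_within_2s_fn(best):
        best = {"path": best["path"], "target_speed": 0.0, "max_braking": True}
    return best
```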
§.§ Multilayer Perceptron (MLP)
The multilayer perceptron is responsible for generating an output trajectory that is as similar as possible to the expert driver's. To achieve this, the neural network takes as inputs the ego vehicle's poses, velocities and accelerations along the longitudinal, lateral and angular axes, from the past 2 seconds up to the current time step, together with the path computed by the planner block. These inputs are scaled to 512-dimensional vectors using linear layers, concatenated, and fed into the MLP.
The MLP consists of two 512-dimensional linear layers with dropout (p=0.1) and ReLU activation functions. The output layer is a linear layer that regresses the future waypoints for the next 8 seconds. This output, called the "Neural Network Trajectory", is trained to minimize the L2 distance between its waypoints and those of the expert driver trajectory provided by the dataset, which offers more than 88 thousand 15-second scenarios with human driver trajectories for training purposes.
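A possible PyTorch sketch of this network is shown below; the input dimensions, the exact wiring of the concatenation, and the number of output waypoints are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class TrajectoryMLP(nn.Module):
    """Regresses future waypoints (next 8 s) from past ego states and the planner path."""
    def __init__(self, ego_dim, path_dim, num_waypoints, hidden=512, dropout=0.1):
        super().__init__()
        self.num_waypoints = num_waypoints
        self.ego_proj = nn.Linear(ego_dim, hidden)    # scale ego history to 512-d
        self.path_proj = nn.Linear(path_dim, hidden)  # scale planner path to 512-d
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
        )
        self.head = nn.Linear(hidden, num_waypoints * 2)  # (x, y) per future waypoint

    def forward(self, ego_history, planner_path):
        z = torch.cat([self.ego_proj(ego_history), self.path_proj(planner_path)], dim=-1)
        return self.head(self.mlp(z)).view(-1, self.num_waypoints, 2)

def imitation_loss(pred_waypoints, expert_waypoints):
    # L2 distance between predicted and expert waypoints, averaged over the batch.
    return torch.mean(torch.sum((pred_waypoints - expert_waypoints) ** 2, dim=-1))
```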
§.§ Model Predictive Trajectory (MPT)
The optimization-based component utilizes a MPT algorithm. This algorithm integrates inputs such as the "Neural Network Trajectory" generated by the MLP, the drivable area, the ego vehicle's poses and velocities and the observations from the perception system. Its primary function is to produce an optimized trajectory that ensures both collision-free navigation and adherence to kinematic feasibility.
Aiming to solve an optimization problem, we define the following soft and hard constraints:
1) Soft Constraint: the collision-free condition is considered as a soft constraint, since if the optimized trajectory is not collision-free, we take into consideration the previously generated trajectory;
2) Hard Constraint: since the trajectory near the ego vehicle must be smooth, the only hard constraint we have is that the trajectory points near the ego must be the same as in the previously generated trajectory, in order to avoid sudden steering maneuvers. This hard constraint is formulated as follows:
δ_k = δ_k^prev if (0 ≤ k ≤ N_fix)
Where:
* δ_k represents the steering angle at a current trajectory point;
* δ_k^prev represents the steering angle at the previous trajectory point. It ensures that the current steering angle remains consistent with the previous one;
* N_fix represents the number of fixed trajectory points. It determines the range over which the hard constraint is applied.
The objective function of the optimization problem minimizes the tracking errors and the steering acceleration, rate and angle of the ego vehicle.
It can be defined as follows:
J = w_y ∑_k y_k^2 + w_θ∑_kθ_k^2 + w_δ∑_kδ_k^2 + w_δ̇∑_kδ̇_k^2 + w_δ̈∑_kδ̈_k^2
Where at time step k, we can define the following variables:
* y_k: lateral distance to reference path;
* θ_k: heading angle against the reference path;
* δ_k: steering angle;
* δ̇_k: steering rate;
* δ̈_k: steering acceleration.
* w_y, w_θ, w_δ, w_δ̇, w_δ̈ are tuning weights.
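For illustration, the tracking cost above could be evaluated for a candidate trajectory as in the following sketch; the finite-difference computation of the steering rate and acceleration and the weight values are assumptions, and the actual constrained optimization (including the hard constraint on the first N_fix points) would be handled by a dedicated solver.

```python
import numpy as np

def mpt_tracking_cost(y, theta, delta, dt, weights):
    """Evaluate J = w_y*sum(y^2) + w_theta*sum(theta^2) + w_delta*sum(delta^2)
                    + w_drate*sum(delta_rate^2) + w_dacc*sum(delta_acc^2)."""
    delta_rate = np.gradient(delta, dt)        # steering rate (finite differences)
    delta_acc = np.gradient(delta_rate, dt)    # steering acceleration
    w_y, w_theta, w_delta, w_drate, w_dacc = weights
    return (w_y * np.sum(np.asarray(y) ** 2)
            + w_theta * np.sum(np.asarray(theta) ** 2)
            + w_delta * np.sum(np.asarray(delta) ** 2)
            + w_drate * np.sum(delta_rate ** 2)
            + w_dacc * np.sum(delta_acc ** 2))
```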
The MPT, by taking the observations of other agents as input, is also able to perform adaptive cruise planning maneuvers. The role of cruise planning is to keep a safe distance from dynamic vehicle objects with smooth velocity transitions.
The safe distance is calculated dynamically by the following equation:
d = v_ego t_idling + 1/2 a_ego t_idling^2 + v_ego^2/2a_ego - v_obstacle^2/2a_obstacle
where:
* d is the calculated safe distance;
* t_idling is the idling time for the ego to detect the front vehicle's deceleration;
* v_ego is the ego's current velocity;
* v_obstacle is the front obstacle's current velocity;
* a_ego is the ego's acceleration;
* a_obstacle is the obstacle's acceleration.
To maintain a safe distance while optimizing for smooth velocity transitions, we solve an optimization problem. The objective function minimizes the deviation from the desired velocity and smoothness of acceleration:
J = ∑_k (w_v(v_desired - v_ego, k)^2 + w_a a_ego, k^2)
subject to constraints on safe distance d, velocity, and acceleration. By solving this problem at each time step, the ego vehicle adapts to changes and ensures safe and efficient cruising. (Note that w_v, w_a are tuning weights).
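A minimal sketch of the two cruise-planning ingredients is given below, directly transcribing the safe-distance formula and the smoothness objective; constant decelerations and the default weights are assumptions.

```python
def safe_distance(v_ego, v_obstacle, a_ego, a_obstacle, t_idling):
    """Dynamic safe distance d, following the equation above."""
    return (v_ego * t_idling
            + 0.5 * a_ego * t_idling ** 2
            + v_ego ** 2 / (2.0 * a_ego)
            - v_obstacle ** 2 / (2.0 * a_obstacle))

def cruise_cost(v_ego_profile, a_ego_profile, v_desired, w_v=1.0, w_a=1.0):
    """Objective penalizing deviation from the desired speed and rough accelerations."""
    return sum(w_v * (v_desired - v) ** 2 + w_a * a ** 2
               for v, a in zip(v_ego_profile, a_ego_profile))
```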
§ EXPERIMENTS AND RESULTS
§.§ Baselines
We first analyze the results from the nuPlan open-loop (OL), closed-loop non-reactive (CL-NR), and closed-loop reactive (CL-R) simulations for baseline models, as shown in TABLE 1. Scores are computed using the simulator's built-in metrics. Open-loop simulations evaluate the planner's imitation of an expert driver's route, while closed-loop simulations assess the trajectory's safety, comfort, and collision avoidance. Each simulation assigns a score between 0 and 100 based on these criteria.
Upon closer examination of TABLE 1, a discernible pattern emerges within the results. Specifically, Urban Driver, PDM-Open and GC-PGP <cit.>, characterized as learning-based models, exhibit commendable performance in open-loop simulations but display diminished efficacy in closed-loop scenarios. Conversely, the rule-based IDM and the sample-based PDM-Closed models demonstrate an inverse behaviour: underperforming in open-loop simulations yet surpassing the learning-based models in closed-loop simulations. These findings suggest that learning-based models excel in predicting the motion of the ego vehicle and are capable of replicating human trajectories; however, unlike rule-, sample- or optimization-based approaches, they do not inherently ensure safe closed-loop driving.
§.§ ROS Simulator
Before testing the model directly on the real vehicle, several experiments have been conducted in the simulator.
Fig. 2 shows different experimental results.
The green line is the "Neural Network Trajectory", the direct output of the neural network. As expected, it is not able to provide safe closed-loop driving: in the corner cases of Fig. 2 it often crosses the lane boundaries, leading to unsafe and dangerous situations without the guarantee of a collision-free trajectory. Despite that, it shows good generalization capabilities, as the maps and scenarios considered during evaluation are completely different from those in the training stage.
However, the pink line, which represents the "MPT Trajectory", perfectly drives the vehicle within the bounds of the lane, redefining the multilayer perceptron's output into a safe and collision-free route.
The model is also able to perform collision avoidance maneuvers with static obstacles and adaptive cruise control driving with dynamic agents.
Thanks to these experimental results, it is possible to demonstrate the effectiveness of the safe closed-loop driving capabilities of the hybrid motion planner, which is indeed able to prevent collisions and unfeasible trajectories by computing a refined output through the optimization process.
However, assessing the model's ability to mimic human-like driving style requires a qualitative analysis.
To this aim, we examine several qualitative results.
The following results in Fig. 3-8 show some comparisons between a default optimization-based planner (on the left), and the hybrid motion planner that we propose in this paper (on the right).
In addition to the shape of the trajectories, the velocity (top) and acceleration (bottom) profiles are also provided in order to better evaluate human-likeness.
In Fig. 3, the trajectories of both the default and hybrid planners exhibit a striking similarity in shape. However, upon closer inspection, we notice an interesting distinction: the hybrid model's trajectory gracefully widens around curves, diverging from the lane centerline, mirroring human driver behavior more closely.
Furthermore, the velocity and acceleration profiles of the hybrid planner are considerably smoother compared to the optimization-based model. In the latter, abrupt maneuvers and accelerations are evident, resulting in a discontinuous overall motion.
In Fig. 4, although the velocity and acceleration profiles look very similar for the two planners, we can distinguish a notable difference in the shape of the trajectories. While the default planner almost perfectly follows the lane centerline, leading to a geometrical path, the hybrid model moves away from the centerline, driving along the two turns with a single maneuver.
Similarly, in Fig. 5 we obtain a human-like trajectory with the hybrid model, which widens around curves. Moreover, at the top of the image, we can notice an interesting behaviour in the shape of the trajectories. In that spot, there is an abrupt step in the right bound of the lane, which also affects the default planner trajectory. Conversely, the hybrid planner completely ignores the step in the lane bound, so its motion is not affected, leading to a more comfortable route.
Another interesting case is shown in Fig. 6, where an adaptive cruise control maneuver was simulated.
While the default planner abruptly accelerates at the beginning, reaching high velocities in a short time, and suddenly brakes when encountering the leading vehicle, leading to an uncomfortable and discontinuous motion, the hybrid planner employs a much smoother trajectory, inferring the right acceleration to avoid abrupt maneuvers.
As the experiment was conducted along a straight line, with no discernible differences between the two trajectories, we shift our focus to analyzing velocity and acceleration in the time domain in Fig. 7, where we can notice much smoother profiles in the hybrid model's motion.
In addition to our qualitative analysis, we shift our focus to quantitative results by inspecting the jerk profile in the time domain. Elevated jerk levels are characteristic of robotic maneuvers, whereas moderated levels reflect a more human-like driving style.
In Fig. 8, we can notice a high peak in the jerk profile of the optimization-based planner, while the hybrid motion planner's one remains contained.
§.§ Real World Driving
After having conducted several successful tests in the simulator environment, we now shift our case study to real-world scenarios. The model has been deployed on the Robobus, a vehicle designed and built by the company Pix Moving. The Robobus is a bi-directional, level 4, fully electric autonomous vehicle equipped with sensors such as lidars, radars, cameras, GNSS and IMU. It has been designed to transport up to six people, with a maximum speed of 30 km/h, and it is already operating in some areas in China and Japan. The experiments took place in real traffic scenarios, with other static and dynamic agents involved, as shown in Fig. 9. The planner showed stable and robust performance while navigating in traffic, especially at low speed (less than 15 km/h). Thanks to the optimization-based component, which refines the output of the neural network, the final trajectory was always within the lane boundaries and collision-free with respect to obstacles and other agents.
§ CONCLUSIONS
In our paper, we introduce a hybrid imitation-learning motion planner designed to ensure safe, collision-free trajectories that closely mimic human-like behavior. Our model exhibits impressive performance in simulation, demonstrating strong generalization across diverse maps, scenarios, and environments not seen during training. This underscores its robust capabilities. Moreover, our approach proves effective when deployed in real-world self-driving vehicles, particularly at low speeds. As we move forward, future research efforts should prioritize testing the model at higher speeds to better prepare it for real-world urban driving scenarios.
c1 Steven LaValle, Planning Algorithms, Cambridge University Press, 2006.
c2 Wilko Schwarting, Javier Alonso, Daniela Rus, Planning and Decision-Making for Autonomous Vehicles, Annual Review of Control, Robotics, and Autonomous Systems, 2018.
c3 Yanjun Huang, Hong Wang, Amir Khajepour, A Novel Local Motion Planning Framework for Autonomous Vehicles Based on Resistance Network and Model Predictive Control, IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020.
c4 Yadollah Rasekhipour, Amir Khajepour, Shih-Ken Chen, A Potential-Field-based Model Predictive Path Planning Controller for Autonomous Road Vehicles, IEEE Transactions on Intelligent Transportation Systems, 2017.
c5 Moritz Werling, Sören Kammel, Julius Ziegler, Optimal trajectories for time-critical street scenarios using discretized terminal manifolds, The International Journal of Robotics Research, 2011.
c6 Jie Ji, Amir Khajepour, William Melek, Path Planning and Tracking for Vehicle Collision Avoidance based on Model Predictive Control with Multi-constraints, IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2016.
c7 Chang Liu, Seungho Lee, Scott Varnhagen, Path Planning for Autonomous Vehicles using Model Predictive Control, IEEE Intelligent Vehicles Symposium, 2017.
c8 Alberto Franco, Vitor Santos, Short-term Path Planning with Multiple Moving Obstacle Avoidance based on Adaptive MPC, IEEE Xplore, 2019.
c9 Chaoyong Zhang, Duanfeng Chu, Shidong Liu, Trajectory Planning and Tracking for Autonomous Vehicle Based on State Lattice and Model Predictive Control, IEEE Intelligent transportation systems magazine, 2019.
c10 Changxi You, Jianbo Lu, Dimitar Filev, Advanced Planning for Autonomous Vehicles Using Reinforcement Learning and Deep Inverse Reinforcement Learning, Elsevier, 2018.
c11 Parth Kothari, Christian Perone, Luca Bergamini, DriverGym: Democratising Reinforcement Learning for Autonomous Driving, Machine Learning for Autonomous Driving Workshop, 2021.
c12 Tung Phan, Forbes Howington, Sheng Chu, Driving in Real Life with Inverse Reinforcement Learning, arXiv, 2022.
c13 Alex Kendall, Jeffrey Hawke, David Janz, Learning to Drive in a Day, arXiv, 2018.
c14 Wenyuan Zeng, Wenjie Luo, Simon Suo, End-to-end Interpretable Neural Motion Planner, Computer Vision Foundation, 2019.
c15 Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Argoverse: 3D Tracking and Forecasting with Rich Maps, Computer Vision and Pattern Recognition, 2019.
c16 John Houston, Guido Zuidhof, Luca Bergamini, One Thousand and One Hours: Self-driving Motion Prediction Dataset, Computer Vision and Pattern Recognition, 2020.
c17 Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Scalability in Perception for Autonomous Driving: Waymo Open Dataset, Computer Vision and Pattern Recognition, 2019.
c18 Holger Caesar, Juraj Kabzan, Kok Seang Tan, nuPlan: A closed-loop ML-based planning benchmark for autonomous vehicles, arXiv, 2022.
c19 Jie Cheng, Xiaodong Mei, Ming Liu, Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders, arXiv, 2023.
c20 Mayank Bansal, Alex Krizhevsky, Abhijit Ogale, ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst, arXiv, 2018.
c21 Oliver Scheel, Luca Bergamini, Maciej Wołczyk, Urban Driver: Learning to Drive from Real-world Demonstrations Using Policy Gradients, 5th Conference on Robot Learning, 2021.
c22 Stefano Pini, Christian S. Perone, Aayush Ahuja, Safe Real-World Autonomous Driving by Learning to Predict and Plan with a Mixture of Experts, Machine Learning for Autonomous Driving Workshop, 2022.
c23 Matt Vitelli, Yan Chang, Yawei Ye, SafetyNet: Safe planning for real-world self-driving vehicles using machine-learned policies, arXiv, 2021.
c24 Daniel Dauner, Marcel Hallgarten, Andreas Geiger, Parting with Misconceptions about Learning-based Vehicle Motion Planning, arXiv, 2023.
c25 Daniel Dauner, Marcel Hallgarten, Andreas Geiger, Supplementary Material for Parting with Misconceptions about Learning-based Vehicle Motion Planning, arXiv, 2023.
c26 Napat Karnchanachari, Dimitris Geromichalos, Kok Seang Tan, Towards learning-based planning: The nuPlan benchmark for real-world autonomous driving, arXiv, 2024.
c27 Edsger Wybe Dijkstra, A Note on Two Problems in Connexion with Graphs, Numerische Mathematik, 1959.
c28 M. Treiber, A. Hennecke, D. Helbing, Congested traffic states in empirical observations and microscopic simulations, Physical Review E, 2000.
c29 Marcel Hallgarten, Martin Stoll, Andreas Zell. From Prediction to Planning With Goal Conditioned Lane Graph Traversals, arXiv, 2023.
|
http://arxiv.org/abs/2409.02598v1 | 20240904102959 | SurgTrack: CAD-Free 3D Tracking of Real-world Surgical Instruments | [
"Wenwu Guo",
"Jinlin Wu",
"Zhen Chen",
"Qingxiang Zhao",
"Miao Xu",
"Zhen Lei",
"Hongbin Liu"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
Centre for Artificial Intelligence and Robotics (CAIR), HKISI-CAS MAIS, Institute of Automation, Chinese Academy of Sciences School of Artificial Intelligence, University of Chinese Academy of Sciences
SurgTrack: CAD-Free 3D Tracking of Real-world Surgical Instruments
Wenwu Guo1*, Jinlin Wu1,2*, Zhen Chen1, Qingxiang Zhao1, Miao Xu1
Zhen Lei1,2,3, Hongbin Liu1,2
July 2024
§ ABSTRACT
Vision-based surgical navigation has received increasing attention due to its non-invasive, cost-effective, and flexible advantages.
In particular, a critical element of the vision-based navigation system is tracking surgical instruments. Compared with 2D instrument tracking methods, 3D instrument tracking has broader value in clinical practice, but is also more challenging due to weak texture, occlusion, and the lack of Computer-Aided Design (CAD) models for 3D registration. To solve these challenges, we propose SurgTrack, a two-stage 3D instrument tracking method for CAD-free and robust real-world applications. In the first registration stage, we incorporate an Instrument Signed Distance Field (SDF) modeling the 3D representation of instruments, achieving CAD-free 3D registration. Due to this, we can obtain the location and orientation of instruments in 3D space by matching the video stream with the registered SDF model. In the second tracking stage, we devise a posture graph optimization module, leveraging the historical tracking results of the posture memory pool to optimize the tracking results and improve occlusion robustness. Furthermore, we collect the Instrument3D dataset to comprehensively evaluate the 3D tracking of surgical instruments. Extensive experiments validate the superiority and scalability of our SurgTrack, which outperforms the state-of-the-art methods by a remarkable margin. The code and dataset are available at https://github.com/wenwucode/SurgTrack.
§ INTRODUCTION
Developing computer-assisted surgery systems can improve the quality of interventional healthcare for patients <cit.>, offering significant benefits such as reduced operation times and a minimized risk of surgical complications. In particular, surgical navigation systems have become an indispensable component of modern surgery <cit.>; they ascertain the exact positioning of surgical instruments by tracking distinctive sections of the tools. Existing surgical navigation systems include electromagnetic-based <cit.>, optical-based <cit.>, and vision-based systems <cit.>. Among these, vision-based systems have garnered considerable interest because they are non-invasive, cost-effective, flexible, and not subject to line-of-sight limitations or electromagnetic disturbances <cit.>.
The 3D tracking algorithm is essential in vision-based surgical navigation systems <cit.>. However, most existing instrument tracking methods are based on object-tracking algorithms, detecting the object of interest and matching the detected region across frames. Early works <cit.> required markers on the surgical instruments and achieved tracking by recognizing and matching these markers across frames; this approach is invasive to the instruments and lacks scalability. Later works <cit.> proposed marker-free tracking methods, detecting instruments with handcrafted visual features and then tracking them with a Kalman filter. Limited by the generalizability of handcrafted visual features, these marker-free methods did not perform well in real-world applications. Recently, Fathollahi et al. <cit.> proposed a highly accurate instrument tracking method, which introduces Yolo-v5 <cit.> to improve instrument detection and applies ReID <cit.> technology to improve cross-frame matching. However, these methods focus on 2D tracking of instruments, which perceives only 2 degrees of freedom and therefore cannot provide sufficiently accurate information for surgical navigation.
Existing 2D tracking systems <cit.> are restricted to the x and y planes, accommodating in-plane rotations for a total of three degrees of freedom. In comparison, 3D object tracking approaches <cit.> match detected objects with pre-established computer-aided design (CAD) models to ascertain their 3D orientation. Represented through six degrees of freedom—spanning the x, y, and z axes, and including the rotational dimensions of pitch, yaw, and roll—this detailed spatial understanding is vital for vision-based navigational systems. However, the application of these 3D tracking methods to surgical environments is fraught with challenges. A primary challenge is the inaccessibility of CAD models for surgical instruments, as they are often proprietary due to patent protections. The absence of CAD models hinders most 3D tracking techniques in the realm of surgical instrument tracking. Additional obstacles are the low textural features and frequent occlusions of surgical instruments, which complicate their detection and sustained tracking.
Inspired by existing works <cit.>, we design a novel 3D surgical instrument tracking method, named SurgTrack, which is capable of accurately tracking the 6 degrees of freedom of surgical instruments in real 3D space. To solve the problem of missing CAD models, we incorporate an Instrument Signed Distance Field (SDF) model generating the 3D representation of the surgical instrument with RGB-D video frames. We also propose an Instrument SDF model to further accurately learn the 3D shape and texture of instruments. Through Instrument SDF, SurgTrack completes the registration of 3D tracking without CAD models. To solve tracking problems caused by occlusion and weak textures, we apply a posture memory pool to provide historical tracking results as a reliable reference. We also utilize a posture graph optimization module to optimize the ongoing tracking results with historical references and ensure that occlusions and weak textures do not cause tracking interruptions.
Furthermore, to facilitate a comprehensive analysis and evaluation of our method, we collect a 3D tracking dataset of surgical instruments, named Instrument3D. Our SurgTrack achieves remarkable 3D tracking performance, with an ADD-S of 88.82% and a reconstruction error of 12.85. We also conduct experiments on the general 3D object tracking dataset HO3D to demonstrate the generalization and scalability of our SurgTrack.
§ METHOD
§.§ Overview of SurgTrack
An overview of our SurgTrack framework is shown in Fig. <ref>. To achieve CAD-free registration, we first model the 3D shape of the surgical instrument using SDF (<ref>). Then, we track the 6-DoF pose of the instruments through the Posture Memory Pool and Posture Graph Optimization (<ref>).
§.§ CAD-free Instrument Registration
Instrument SDF Modeling.
Given the 3D point cloud {v | v ∈ℝ^3} captured by an RGB-D camera, we adapt the Signed Distance Function (SDF) to model the 3D representation of the surgical instrument as follows:
S = { v | Ψ(v)=0 },
where Ψ(v)=0 represents the points on the surface of the instrument. Therefore, we can derive the 3D model of the instrument from point cloud data, eliminating the need for a pre-existing Computer-Aided Design (CAD) model. This 3D model facilitates the registration process for 3D tracking. However, the SDF methodology faces inherent limitations when dealing with complex scenarios, such as occlusions and low-texture regions.
Occlusion and Texture Optimization. To address this, we incorporate the occlusion constraint and shape constraint in the SDF model. For occlusions, we introduce a positive value δ to alleviate boundary ambiguities between background and instrument caused by partial occlusions:
ℒ_occ = 1/|V_occ|∑_v∈ V_occ(Ψ(v)-δ)^2.
For surfaces with weak textures, we consider points near the surface in the SDF modeling process, enabling our SurgTrack to better capture the surface geometry and handle areas with weak textures, as follows:
ℒ_surf = 1/|V_surf|∑_v∈ V_surf(Ψ(v)+d_v-d_Δ)^2.
In this way, the total loss function ℒ is defined as follows:
ℒ = αℒ_occ + βℒ_surf,
where α and β balance the contributions of the two components.
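A sketch of these two constraint terms in PyTorch is given below, assuming `sdf` is a network that maps 3D points to signed distances Ψ(v) and that the occluded and near-surface point sets have already been sampled; the tensor shapes and argument names are illustrative.

```python
import torch

def occlusion_loss(sdf, v_occ, delta):
    """L_occ: push occluded points to a small positive signed distance delta."""
    psi = sdf(v_occ).squeeze(-1)
    return torch.mean((psi - delta) ** 2)

def surface_loss(sdf, v_surf, d_v, d_delta):
    """L_surf: constrain near-surface points using their depth residual terms d_v and d_Delta."""
    psi = sdf(v_surf).squeeze(-1)
    return torch.mean((psi + d_v - d_delta) ** 2)

def total_sdf_loss(sdf, v_occ, v_surf, d_v, delta, d_delta, alpha, beta):
    """Weighted combination L = alpha * L_occ + beta * L_surf."""
    return (alpha * occlusion_loss(sdf, v_occ, delta)
            + beta * surface_loss(sdf, v_surf, d_v, d_delta))
```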
§.§ Instrument Tracking
§.§.§ Tracking Initialization.
In the tracking stage, we estimate a coarse pose ξ̃_t by matching the current frame and its adjacent frames with RANSAC algorithm as follows:
ξ̃_t = min_R, t∑_i R p_i + t - q_i ^2.
In the above equation, the RANSAC algorithm minimizes the distance between the reconstructed points p_i and their corresponding scene points q_i and estimates the coarse pose ξ̃_t; R and t denote the rotation matrix and the translation vector.
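The coarse-pose step can be illustrated with the following sketch, assuming putative correspondences (p_i, q_i) are already available from feature matching; the minimal-sample size, iteration count, and inlier threshold are illustrative, and a production system would typically use an existing registration library instead.

```python
import numpy as np

def fit_rigid_transform(p, q):
    """Least-squares R, t aligning source points p (N, 3) to target points q (Kabsch)."""
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_mean).T @ (q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = q_mean - R @ p_mean
    return R, t

def ransac_pose(p, q, iters=200, inlier_thresh=0.01, seed=0):
    """Robustly estimate the coarse pose from noisy correspondences (numpy arrays)."""
    rng = np.random.default_rng(seed)
    best_R, best_t, best_count = np.eye(3), np.zeros(3), 0
    for _ in range(iters):
        idx = rng.choice(len(p), size=3, replace=False)       # minimal sample
        R, t = fit_rigid_transform(p[idx], q[idx])
        residuals = np.linalg.norm((p @ R.T + t) - q, axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_R, best_t = fit_rigid_transform(p[inliers], q[inliers])  # refit on inliers
    return best_R, best_t
```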
§.§.§ Tracking Optimization.
Following the initial rough pose estimation obtained using RANSAC, the pose ξ̃_t serves as the initial estimate in the subsequent optimization phase. This pose is further refined by integrating the pose memory pool with the pose graph to improve accuracy and robustness. First, to address challenges such as long-term tracking drift, data loss, and occlusions, it is crucial to preserve the pose data from previous frames. We implement a posture memory pool 𝒫 that stores this information as follows:
𝒫 = { (ξ_i, M_i) | i = 1, 2, …, N },
where ξ_i ∈SE(3) represents the optimized pose of the i^th frame, M_i contains the 3D point cloud data associated with the i^th frame, and N is the number of keyframes currently stored in the posture memory pool.
With the initial pose ξ̃_t, we construct a posture graph using selected relevant frames from the posture memory pool. The selection is based on criteria such as the RANSAC matching threshold and frame overlap to ensure reliable references. Then, the posture graph is constructed as follows:
G=(𝒱,ℰ),
where the nodes 𝒱 consist of the current frame ℱ_t and the selected reference frames 𝒫_pg, as 𝒱=ℱ_t∪𝒫_pg with |𝒱|=K+1.
Based on the posture graph, we refine the tracking results of the current frame through the following loss function, resulting in the final optimized pose ξ_t ∈SE(3):
ξ_t ←min_ξ_t(w_sℒ_SDF(t)+∑_i∈𝒱,j∈𝒱,i≠ j[w_fℒ_3D(i,j)+w_pℒ_2D(i,j)]),
where ℒ_3D(i,j) is the 3D distance loss, ℒ_2D(i,j) is the 2D projection loss, ℒ_SDF(t) is the instrument SDF depth loss, and the scalar weights w_f,w_p,w_s are empirically set to 1. Specifically, the 3D distance loss ℒ_3D is calculated as:
ℒ_3D(i,j)=∑_(p_m,p_n)∈ C_i,jρ(ξ_i^-1p_m-ξ_j^-1p_n_2).
This 3D distance loss measures the Euclidean distance between corresponding RGB-D features p_m,p_n∈ℝ^3, using the Huber loss function ρ to enhance the robustness of our SurgTrack.
On the other hand, the 2D projection loss ℒ_2D is calculated as:
ℒ_2D(i,j)=∑_p∈ I_iρ(|n_i(p)·(T_ij^-1π_D_j^-1(π_j(T_ijp))-p)|).
This 2D projection loss assesses the pixel-wise point-to-plane distance after projection and transformation, comparing node i to the plane in node j.
Finally, the instrument SDF depth loss ℒ_SDF is calculated as follows:
ℒ_SDF(t)=∑_p∈ I_tρ(|Ψ(ξ_t^-1(π_D^-1(p)))|).
This instrument SDF depth loss measures the distance between the current frame and the implicit surface defined by the Instrument SDF, where Ψ(·) is the signed distance function indicating proximity to the surface. Note that this loss is considered only after the initial training of the object field has converged.
In this way, the optimization strategy for our SurgTrack, starting from the rough pose ξ̃_t and resulting in the final optimized pose ξ_t ∈SE(3), integrates 3D spatial information, instrument shape, and depth data from a single viewpoint to complete pose optimization, improving robustness against reflections, weak textures, and long-term tracking challenges.
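The structure of this refinement objective can be sketched as below, assuming the per-pair 3D, 2D, and SDF loss terms are implemented elsewhere; the unit weights follow the values stated above.

```python
def pose_graph_objective(xi_t, nodes, loss_3d, loss_2d, loss_sdf, w_f=1.0, w_p=1.0, w_s=1.0):
    """Total refinement cost for the current pose xi_t over the posture graph nodes."""
    total = w_s * loss_sdf(xi_t)                 # single-view Instrument SDF depth term
    for i in range(len(nodes)):
        for j in range(len(nodes)):
            if i != j:                           # pairwise terms over all ordered node pairs
                total = total + w_f * loss_3d(nodes[i], nodes[j]) \
                              + w_p * loss_2d(nodes[i], nodes[j])
    return total
```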
§ EXPERIMENTS
§.§ Experimental Settings
Datasets. We collect a 3D tracking dataset of surgical instruments in RGB-D videos, named Instruments3D. The Instruments3D dataset consists of 13 videos across 5 surgical instruments, including ultrasound bronchoscopes, flexible and rigid endoscopes, thoracoscopes, and ultrasound probes. The videos capture hand-held manipulation of the instruments, recorded at close range using an Intel RealSense camera. The ground truth data is derived through multi-view registration. We also conduct experiments on the general object 3D tracking dataset, HO3D <cit.>.
Evaluation metrics. We follow the classical evaluation protocol of 3D object tracking <cit.>. We use the ADD and ADD-S as the accuracy metric of 3D tracking, with their values ranging from 0 to 1, where higher values signify better accuracy. We use the Chamfer Distance (CD) as a measure of reconstruction error, where a smaller value indicates a more precise reconstruction.
§.§ Comparison Results on Instrument3D and HO3D
Comparison on Instrument3D.
The Instrument3D dataset presents a complex challenge due to the frequent occlusions and severe motion blur encountered during the manipulation of surgical instruments. Furthermore, the inherent characteristics of these instruments such as their weak texture, reflective surfaces, and slender profiles compound the difficulty. Despite these difficulties of the Instrument3D dataset, our SurgTrack maintains the capability of robust, long-term tracking in most cases, as shown in Fig. <ref>. The comparison results in Table <ref> confirm the remarkable advantage of our SurgTrack over state-of-the-art 3D tracking methods.
Comparison on HO3D.
As shown in Table <ref> and Fig. <ref>, on the HO3D dataset, we achieve the best results compared with other tracking schemes. Our algorithm shows strong capabilities in both ADD-S and ADD. While BundleTrack matches our performance in ADD-S, it falls short in all other metrics where we excel significantly, and it also demands more than 300 rounds of training. This demonstrates the strong generalization ability of our method to general objects.
§.§ Ablation Study
To comprehensively evaluate our SurgTrack framework for 3D tracking of surgical instruments, we investigate the impact of each module. These modules include occlusion and texture Optimization, posture memory pool, and posture graph. As shown in Table <ref>, the occlusion and texture optimization is helpful for tracking optimization, which can increase ADD-S by 12.43% and ADD by 21.56%. When constructing the posture graph, selecting the most matching pose subset instead of randomly selecting can reduce the CD error by nearly 3cm and increase the ADD by 41.29%. In this way, these comparisons further validate the effectiveness of our SurgTrack with tailored modules.
§ CONCLUSION
In this study, we collect a new multi-category surgical instrument 3D tracking dataset, conduct a comprehensive study on 3D surgical instrument tracking, and propose a framework for 3D instrument tracking. We use the Instrument SDF to generate the 3D representation of surgical instruments, achieving CAD-free 3D tracking registration. In the tracking stage, we use the posture memory pool combined with the posture graph for pose optimization, which greatly improves 3D tracking accuracy. We also use the Instrument SDF to further improve robustness to occlusion, weak texture, and long-term tracking. Experiments show that our method offers significant superiority and scalability on both public datasets and our surgical instrument 3D tracking dataset.
§.§.§
This work was supported by the National Natural Science Foundation of China (Grant No.#62306313 and No.#62206280), and the InnoHK program.
§.§.§
The authors declare no competing interests.
|
http://arxiv.org/abs/2409.02851v1 | 20240904162133 | Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models | [
"Zhibin Liu",
"Haoye Dong",
"Aviral Chharia",
"Hefeng Wu"
] | cs.CV | [
"cs.CV"
] |
Figure: Human-VDM for generating 3D humans from a single image. Given a single RGB human image, Human-VDM aims to generate a high-fidelity 3D human. Human-VDM preserves face identity, delivers realistic texture, ensures accurate geometry, and maintains a valid pose of the generated 3D human, surpassing current state-of-the-art models.
§ ABSTRACT
Generating lifelike 3D humans from a single RGB image remains a challenging task in computer vision, as it requires accurate modeling of geometry, high-quality texture, and plausible unseen parts. Existing methods typically use multi-view diffusion models for 3D generation, but they often face inconsistent view issues, which hinder high-quality 3D human generation. To address this, we propose Human-VDM, a novel method for generating a 3D human from a single RGB image using Video Diffusion Models. Human-VDM provides temporally consistent views for 3D human generation using Gaussian Splatting. It consists of three modules: a view-consistent human video diffusion module, a video augmentation module, and a Gaussian Splatting module. First, a single image is fed into a human video diffusion module to generate a coherent human video. Next, the video augmentation module applies super-resolution and video interpolation to enhance the textures and geometric smoothness of the generated video. Finally, the 3D Human Gaussian Splatting module learns lifelike humans under the guidance of these high-resolution and view-consistent images. Experiments demonstrate that Human-VDM achieves high-quality 3D human generation from a single image, outperforming state-of-the-art methods in both qualitative and quantitative evaluations.
§ INTRODUCTION
Generating 3D humans from a single RGB image has gained significant attention in recent years due to its versatile applications in filmmaking, video games, human-robotic interaction, etc. However, existing approaches for 3D human generation largely rely on multi-view diffusion models, which often suffer from inconsistent views and lead to artifacts. To address this problem, we propose a 3D Human Gaussian Splatting framework that allows users to generate 3D humans from a single 2D image input while ensuring accurate geometry and realistic appearance. However, generating 3D humans using only a single RGB image presents a significant challenge due to its inherent ambiguity, which necessitates inferring unseen geometry and appearance that are not directly captured in a 2D image.
Current approaches address this challenge by incorporating parametric human shape models, such as SCAPE <cit.> and SMPL <cit.>. However, these methods exclusively focus on reconstructing the human shape, neglecting the appearance details crucial for a fully realistic 3D representation. Earlier works, like PIFu <cit.>, attempted to address this gap with a data-driven approach. They used CycleGAN <cit.> and residual blocks <cit.> trained on image-3D pairs. However, such methods often struggle with novel appearances or poses mainly due to the lack of sufficient 3D training information. Subsequent methods, such as ECON <cit.> and 2K2K <cit.>, enhanced performance by incorporating depth or normal estimation into the generation process. SIFU <cit.> proposed a 3D human generation method using a side-view based Transformer with 3D aware Refinement. Despite the improvements, these methods often lack detail or result in inaccurate geometry, particularly with high-resolution input images.
Recently, SiTH <cit.> integrated a generative diffusion model into the 3D human generation pipeline to produce realistic textures and geometries, especially in unobserved regions. Ultraman <cit.> introduced a multi-view image generation model that helped in providing essential appearance priors aiding the generation process. Although diffusion models <cit.>, trained on extensive image datasets, have demonstrated potential for creating 3D humans, multi-view diffusion often struggles with generating view-consistent images and tends to introduce artifacts in the generated 3D humans.
This paper proposes Human-VDM, a novel Gaussian Splatting framework for generating 3D humans from a single image using video diffusion models. Human-VDM is comprised of three distinct modules: a view-consistent human video diffusion module, a video augmentation module, and a 3D human Gaussian Splatting module. Human-VDM first generates a `view-consistent' human video, then enhances the quality of the frames through super-resolution and video frame interpolation, and finally employs 3D Gaussian Splatting (3DGS) <cit.> to effectively generate the 3D human model.
Initially, we fine-tune SV3D <cit.>, a latent video diffusion model specifically designed for generating object videos, to enable it to generate view-consistent human videos. However, a direct application of video diffusion models <cit.> to the 3D human generation can result in geometric artifacts and blurry textures. Additionally, the generated video consists of only 21 frames at a low resolution of 576 × 576, which is insufficient for high-quality 3D human generation. To provide more view-consistent frames and realistic texture for 3D human generation, we carefully designed a video augmentation module that includes super-resolution and frame interpolation components. The generated human video is enhanced through this module by undergoing super-resolution and frame interpolation, which results in smooth, high-quality frames at a resolution of 1080 × 1080. Lastly, we introduce a 3D human Gaussian splatting module to generate realistic 3D human models. For this, we utilize SMPL <cit.> along with an optimizable feature tensor training strategy to optimize the parameters of the 3D Gaussians, thereby generating a high-quality 3D human from a single image. Figure <ref> and <ref> demonstrate that Human-VDM achieves state-of-the-art (SOTA) performance and generates realistic 3D humans from a single-view RGB image input. Our contributions can be summarized as follows:
* We propose a novel single-view 3D human generation framework that leverages the human video diffusion model to produce view-consistent human frames.
* We carefully designed a video augmentation model that consists of super-resolution and video frame interpolation to enhance the quality of the generated video.
* We introduce an effective Gaussian Splatting framework for 3D human reconstruction with offset prediction.
* Extensive experiments demonstrate that the proposed Human-VDM can generate realistic 3D humans from single-view images, outperforming state-of-the-art methods in both quality and effectiveness.
§ RELATED WORKS
3D Human Generation. PIFu <cit.> was among the first methods to introduce pixel-aligned features and neural fields <cit.> for reconstructing human figures from images by fitting parametric human shape models such as SMPL <cit.> and SCAPE <cit.>. PIFuHD <cit.> further enhanced this framework with high-resolution normal guidance. Subsequent methods improved upon this initial approach by integrating additional human body priors. For instance, PaMIR <cit.> and ICON <cit.> utilized skinned body models to guide the reconstruction process, while ARCH <cit.>, ARCH++ <cit.>, and CAR <cit.> extended this approach by mapping global coordinates into canonical coordinates, enabling reposing. PHOHRUM <cit.> and S3F <cit.> introduced techniques to disentangle shading and albedo, facilitating relighting. Concurrently, another set of methods replaced neural representations with traditional Poisson surface reconstruction <cit.>. Despite these advancements, such approaches have been primarily tailored to human bodies and often struggle with the complex topologies of loose clothing. To address this limitation, ECON <cit.> and 2K2K <cit.> integrated depth or normal estimation to enhance the reconstruction process. More recently, Ultraman <cit.> introduced a model to map texture thereby optimizing the texture details thus helping to maintain the color consistency during the final reconstruction. SIFU <cit.> also proposed a novel approach that combined the 3D Consistent Texture Refinement pipeline with a side-view Decoupling Transformer.
3D Human Generation with Diffusion models. Diffusion models <cit.> trained on large image datasets have exhibited remarkable capabilities in generating 3D objects from text prompts. Earlier works, such as Fantasia3d <cit.> and Magic3d <cit.>, predominantly followed an optimization-based workflow where 3D representations, such as NeRF <cit.>, were updated through neural rendering <cit.>. Although a few studies, such as TeCH <cit.>, adapted this workflow for 3D human reconstruction, they struggled to achieve accurate appearance and geometric representations of the human body due to the inherent ambiguities in text prompt condition. Recently, SiTH <cit.> integrated a generative diffusion model to produce full-body texture and geometry, including unobserved regions, within the reconstruction workflow. However, these methods still face challenges in capturing detailed clothing. In this paper, we leverage a video diffusion model (VDM) to generate an orbital video for 3D human reconstruction.
§ HUMAN-VDM
Given a single RGB image I of a person, Human-VDM aims to generate its 3D human model (see Figure <ref>). Human-VDM comprises several key modules: (i) the Human Video Diffusion module, (ii) the Video augmentation module, which includes the super-resolution and frame interpolation sub-modules, and (iii) the Human Gaussian Splatting module. First, the Human Video Diffusion module generates view-consistent videos of the input image. This video is then processed by the Video Augmentation module, where super-resolution enhances the resolution to 1080×1080, while video frame interpolation (VFI) smoothens the video frames. Finally, the augmented video is fed into the Human Gaussian Splatting module to generate a high-fidelity 3D human model.
§.§ Human Video Diffusion Module
To generate the video V̂, we input the front image of a human, denoted as I, into a latent video diffusion model which we fine-tuned for high-quality human video generation. We specifically use SV3D <cit.>, a latent video diffusion model designed for generating videos from a single image, capable of producing consistent multi-view images. However, since SV3D was originally designed for reconstructing general objects, its generated video quality for human body images is not satisfactory. Therefore, to enhance its capability for human video generation, we fine-tuned SV3D on Thuman 2.0 <cit.> dataset which includes a variety of high-quality human body scans. SV3D produces a raw orbital video, V̂ = [f̂_1, f̂_2, f̂_3, …, f̂_21], with a resolution of 576 × 576, illustrating the human from different viewpoints. The videos generated by the fine-tuned SV3D exhibit superior shape, appearance, and detailed rendering of areas not directly captured in a 2D image. We represent this generation process as follows:
V̂ = SV3D(I),
where `SV3D' denotes the generative process of the fine-tuned SV3D model.
§.§ Video Augmentation Module
The 21-frame human video V̂, with a resolution of 576 × 576, has limited expressive capacity for detailed 3D human reconstruction. To address this, we introduce the Video Augmentation Module, which includes super-resolution and frame interpolation. Super-resolution helps in improving the quality of textures while video frame interpolation improves the geometric smoothness of the 3D human and the quality of the previously invisible areas.
Video Super-resolution sub-module. For image super-resolution on each frame of V̂, we employ CodeFormer <cit.>, a transformer-based model designed primarily for enhancing facial image resolution. CodeFormer performs Low Quality (LQ) to High Quality (HQ) mapping by first learning a discrete codebook and an HQ decoder D_H through self-reconstruction learning. During Codebook Lookup, a transformer and an LQ encoder E_L are additionally introduced to accurately model the codebook code combination. For facial images, increasing the resolution of each frame of V̂ by 4× and then resizing it to 1080×1080 yields clear and realistic images that significantly benefit 3D reconstruction. Similarly, we increase the resolution of each frame in the raw orbital video V̂ by 4× and resize it to 1080×1080, resulting in a high-resolution video V^'=[f^'_1,f^'_2,...,f^'_21] with improved texture quality. This process is formulated as follows:
f^'_i = Resize(SuperResolution (f̂_i)), 1≤ i≤ 21,
where `SuperResolution' denotes the operation of CodeFormer, while `Resize' denotes the operation of resizing the image to 1080×1080.
Video Frame Interpolation (VFI) sub-module. To enhance video consistency and interpolate frames, we employ PerVFI <cit.>. VFI provides additional visual information from diverse angles, improving the geometric smoothness of the 3D human and the quality of the invisible areas. PerVFI performs perception-oriented VFI and inputs two reference frame images I_0 and I_1 to reconstruct intermediate frames. First, bidirectional optical flows, i.e., F_0→1 and F_1→0 are estimated using a motion estimator. Additionally, two encoders capture multi-scale features. These features are then blended using asymmetric synergistic blending to obtain intermediate features f_t. These features are finally decoded to obtain the intermediate frame using a conditional flow generator, which samples from a normal distribution. We input the 21-frame high-resolution video frames V^' into PerVFI, resulting in an 81-frame high-resolution augmented video V=[f_1,f_2, ..., f_81]. This is formulated as follows:
V = [f_1, f_2, ..., f_81] = VFI(V^'),
where `VFI' denotes the frame interpolation operation.
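The augmentation stage can be summarized by the following sketch; `codeformer_upscale`, `pervfi_interpolate`, and `resize` are hypothetical callables standing in for CodeFormer, PerVFI, and an image resize routine, while the frame counts and resolutions follow the text.

```python
def augment_video(raw_frames, codeformer_upscale, pervfi_interpolate, resize):
    """21 frames @ 576x576  ->  81 frames @ 1080x1080."""
    # Step 1: 4x super-resolution per frame, then resize to 1080x1080 for sharper textures.
    hi_res = [resize(codeformer_upscale(f, scale=4), (1080, 1080)) for f in raw_frames]
    # Step 2: perception-oriented frame interpolation from 21 to 81 view-consistent frames.
    return pervfi_interpolate(hi_res, target_num_frames=81)
```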
§.§ 3D Human Gaussian Splatting Module
We leverage 3D Gaussian Splatting <cit.> to model the 3D human from the augmented human video V. 3D Gaussian Splatting employs point-based representation, which facilitates high-quality real-time rendering by modeling the 3D object as a collection of parameterized static 3D Gaussians. Each Gaussian is characterized by a color c ∈ℝ^3, a 3D center position x ∈ℝ^3, opacity α∈ℝ, a 3D scaling factor s ∈ℝ^3, and a 3D rotation q ∈ℝ^4.
In this module, we incorporate an appearance network in conjunction with an optimizable feature tensor to enhance the representation of 3D Gaussian models refined from video data <cit.>. For each i^th frame f_i in the augmented video V, we first extract the SMPL model of the human body. We then sample points on the surface of this model and map their positions onto a UV position map, denoted by m. We introduce an optimizable feature tensor to capture the appearance of the reconstructed human. The parameters for each Gaussian are predicted by a Gaussian parameter decoder using the optimizable feature concatenated with m as input. These predictions form the 3D Gaussians in the canonical space. Using Linear Blend Skinning (LBS), these canonical 3D Gaussians can be reposed into motion space for rendering. This is formulated as follows:
m = M(θ̃,β)
P = Decode(cat(t,m)),
f_i^r = Splatting(LBS(D,J(β),θ̂_i),P),
where θ̃ denotes the pose parameters of the SMPL model in canonical space and β denotes the average shape parameters calculated from V. M is the operation of mapping the positions of the sampled points on the surface of the SMPL model onto a UV map; t denotes the optimizable feature tensor, and Decode denotes the process of decoding the aligned feature tensors to predict the parameters of the Gaussians P. D = T(β) + dT denotes the locations of the 3D Gaussians in canonical space, formed by adding corrective point displacements dT to the template mesh surface T(β); J(β) produces 3D joint locations; θ̂_i represents the refined pose parameters optimized from θ_i, which denote the pose parameters obtained from f_i; `LBS' is the operation of Linear Blend Skinning; and `Splatting' denotes the rendering process, resulting in a rendered image f_i^r.
Training Objectives. For formulating the loss function, we take the current frame image f_i as the ground truth and calculate the loss with the rendered image f_i^r for optimization. This is formulated as follows:
ℒ = λ_RGBℒ_RGB +
λ_SSIMℒ_SSIM + λ_LPIPSℒ_LPIPS
+λ_Offsetℒ_Offset
+ λ_Scaleℒ_Scale
+ λ_fℒ_f,
where ℒ_RGB is the L1-loss between the ground truth and the rendered frame. ℒ_SSIM and ℒ_LPIPS denote the SSIM and LPIPS losses, respectively. ℒ_Offset, ℒ_Scale and ℒ_f calculate the L2-norms of the predicted offsets, the predicted scales, and the feature map, respectively. The weight coefficients λ_RGB, λ_SSIM, λ_LPIPS, λ_Offset, λ_Scale and λ_f are set to 0.8, 0.2, 0.2, 10, 1.0 and 1.0, respectively.
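As a rough sketch of how these terms could be combined with the stated weights, assuming the SSIM and LPIPS terms are supplied by external implementations passed in as callables and array shapes are illustrative only:

import numpy as np

WEIGHTS = dict(rgb=0.8, ssim=0.2, lpips=0.2, offset=10.0, scale=1.0, feat=1.0)

def total_loss(pred, gt, offsets, scales, feat, ssim_loss, lpips_loss):
    l_rgb = np.abs(pred - gt).mean()                        # L1 photometric term
    reg = (WEIGHTS["offset"] * (offsets ** 2).mean()        # L2 regularizers
           + WEIGHTS["scale"] * (scales ** 2).mean()
           + WEIGHTS["feat"] * (feat ** 2).mean())
    return (WEIGHTS["rgb"] * l_rgb
            + WEIGHTS["ssim"] * ssim_loss(pred, gt)
            + WEIGHTS["lpips"] * lpips_loss(pred, gt)
            + reg)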
§ EXPERIMENTS AND RESULTS
Dataset. Most works use the popular Thuman 2.0 dataset <cit.>, which comprises 2,500 high-quality human body scans, each accompanied by a detailed 3D model and texture mapping. The dataset includes a wide range of action poses and provides the SMPL-X <cit.> parameters along with corresponding grids.
Evaluation Metrics. Following previous works on 3D human generation, we use the four major metrics to evaluate the performance of Human-VDM. These include CLIP-Similarity <cit.>, LPIPS (Learned Perceptual Image Patch Similarity) <cit.>, SSIM <cit.> and PSNR. CLIP <cit.> measures the similarity between two images, providing a more representative evaluation of image feature similarity. LPIPS <cit.>, measures differences based on learned perceptual image patch similarity, aligning more closely with human perception. Likewise, SSIM (Structural Similarity Index) <cit.> is used to compare the luminance, contrast, and structure between two images. Lastly, PSNR (Peak Signal-to-Noise Ratio) assesses image quality based on pixel-level error, making it an error-sensitive evaluation metric.
Training details. To produce high-quality human videos, we fine-tuned SV3D using the Thuman 2.0 dataset <cit.> to enhance its 3D human video generation capabilities. We selected 475 samples from Thuman 2.0, excluding those used in subsequent quantitative comparisons. For each sample, 21 images were rendered from various angles following <cit.>. All images corresponding to a sample are rendered at the same horizontal position with a constant angular interval of 360/21 degree to ensure the consistency of rendered multi-view images. The first rendered image of each body was employed as the input, while the remaining images served as ground truth for fine-tuning SV3D. We freeze the image encoder and decoder of the original SV3D <cit.> model and optimize the U-Net weights <cit.>. The learning rate was set to and fine-tuned on one NVIDIA A800 GPU with a batch size of 13.
§.§ Qualitative Comparison
Figure <ref> presents the qualitative 3D human generation results from Human-VDM on a variety of input images that differ in gender, body posture, lighting, color, and clothing styles. The results demonstrate Human-VDM's strong performance with high appearance consistency, texture, and geometry qualities. Next, we compare Human-VDM with recent SOTA works on single-image based 3D human generation (see Figure <ref>), including PIFu <cit.>, PaMIR <cit.>, TeCH <cit.>, Ultraman <cit.>, SiTH <cit.> and SIFU <cit.>. Compared to Human-VDM, PaMIR <cit.> exhibits significant shortcomings in the geometry of the generated 3D human, e.g., the body of the generated human is incomplete for the first image. On the other hand, TeCH <cit.>, PIFu <cit.>, and SiTH <cit.> reconstruct remarkable geometries but contain apparent artifacts. Likewise, SIFU <cit.> displays misalignment in character motion and suboptimal texture quality on the back of the generated human. Ultraman <cit.> obtains good geometry but fails to predict a realistic appearance for unseen views. Therefore, the proposed Human-VDM outperforms SOTA models in terms of texture quality and appearance consistency.
§.§ Quantitative Comparison
Following previous methods <cit.>, we randomly selected 50 samples from Thuman 2.0 <cit.>. Four views of the ground truth (GT), i.e., front, back, left, and right, were used to compute scores between the reconstructed results and the GT across these views. As reported in Table <ref>, Human-VDM achieves the lowest LPIPS and the highest CLIP score, indicating that the rendered images produced by our method are highly consistent with the input images. Additionally, Human-VDM achieves the highest SSIM and PSNR scores, further demonstrating that the rendered images of the generated 3D human are most closely aligned with the ground truth. All reported scores demonstrate the superiority of the proposed Human-VDM over existing SOTA methods.
§.§ User Study
The discussed metrics may not always fully capture the quality of generated 3D humans in terms of realism and other details. Thus following previous works, a user preference study was conducted to evaluate the performance of Human-VDM against existing SOTA methods. We compare Human-VDM with six recent SOTA models using 10 different samples, each with four views of generated 3D humans in different samples. For each sample, 30 volunteers were asked to vote on their impressions regarding four key aspects: geometry quality, texture quality, face quality, and overall quality. For a fair comparison, the results for the other six SOTA models were generated using their official code, with all settings left at their default values. As shown in Table <ref>, the proposed Human-VDM surpasses SOTA models in the aforementioned aspects.
Most volunteers considered Human-VDM to generate the best results, especially in terms of geometry and texture. Although Human-VDM does not particularly dominate in face quality, it achieves the best face consistency with the input image, as shown in Figure <ref>. More than 53% of the volunteers judged that Human-VDM outperforms the other SOTA models, which confirms Human-VDM's superiority.
§.§ Ablation Study
We performed ablation studies by systematically excluding various components to assess the effectiveness of the proposed modules through both quantitative and qualitative comparisons. For this analysis, we randomly selected 30 samples from the Thuman 2.0 dataset <cit.>. We compared the full model with the variants excluding the proposed modules using the CLIP Similarity <cit.>, SSIM <cit.>, LPIPS <cit.>, and PSNR metrics. The evaluation covered rendered results from four viewpoints: front, back, left, and right. We additionally report results solely for the front view as well. Table <ref> presents the quantitative comparisons, while the qualitative visual comparisons are illustrated in Figure <ref>.
Quantitative results demonstrate that the proposed full model achieves superior CLIP Similarity and SSIM across both the single view and four views. The visual ablation results further establish that the 3D human generated by the full model exhibits more photorealistic textures and precise geometry. Results produced without the finetuned SV3D are less lifelike and realistic since the videos generated by the original SV3D are not satisfactory. Without Super-Resolution, the video frames are not distinct enough for the Human Gaussian Splatting module, which results in blurs and artifacts in the reconstructed humans. Due to the limited information provided by only 21 frames, results generated without frame interpolation exhibit apparent artifacts in novel views. This confirms the significance and contribution of the video augmentation module. In general, the finetuned SV3D provides high-quality human orbital video for realistic reconstruction; the super-resolution module enhances the quality of video frames to generate more distinct results, and the VFI module enables the model to generate remarkable results in novel views. Although the full model shows a slight decrease in LPIPS and PSNR, the visual results indicate that the 3D human reconstructed by the complete model is of higher quality. Overall, the full model, i.e., the one including all proposed components, achieves better performance. This confirms the effectiveness of the proposed modules.
§ CONCLUSION AND FUTURE WORK
We propose a novel 3DGS-based framework for generating 3D humans from a single RGB image leveraging human video diffusion models. We first generate a view-consistent orbital video around the human and then augment the video through super-resolution and video frame interpolation. Finally, we reconstruct a remarkable 3D human using 3D Gaussian with the enhanced video. Both quantitative and qualitative experiments demonstrate that Human-VDM excels in generating 3D humans from a single image, outperforming state-of-the-art methods.
Limitations and Future works. Human-VDM has two limitations. First, it is challenging to accurately generate precise finger geometry due to the intricate and small size of finger poses. Second, applying large video diffusion models limits the model's overall ability to achieve a real-time 3D human generation. Future works can focus on addressing these limitations by enhancing geometry generation for complex and small finger poses, as well as developing more efficient models that can achieve real-time 3D human generation.
Supplementary Material
In the supplementary material, we provide a more detailed explanation of the model architecture, as well as training specifics, such as loss function weights, dataset descriptions, and definitions of the evaluation metrics. Additionally, we include further visual results and an analysis of failure cases.
§ MODEL ARCHITECTURE DETAILS
§.§ Human Video Diffusion Module
Module Architecture. The Video Diffusion Module of Human-VDM is based on SV3D <cit.>. SV3D's architecture builds upon SVD <cit.> and consists of a UNet <cit.> model with multiple layers. Each layer comprises a sequence of 1 residual block with Conv3D layers, followed by spatial and temporal transformer blocks integrated with attention layers. After being embedded into the latent space via the visual autoencoder (VAE) of SVD, the conditioning image is concatenated with the noisy latent state input z_t at noise timestep t before being fed into the UNet. The CLIP-embedding <cit.> matrix of the input image is provided to the cross-attention layers of each transformer block <cit.>, serving as the key and value, with the layer's feature acting as the query. Along with the diffusion noise timestep, the camera trajectory is also incorporated into the residual blocks. The camera pose angles e_i and a_i are first embedded into the position embeddings. These camera pose embeddings are then concatenated, linearly transformed, and combined with the noise timestep embedding. The composite embedding is fed into every residual block, where it is added to the block’s output after another linear transformation to match the feature size.
Static Orbits. The original SV3D model <cit.> consists of two main orbits: (1) the static orbit and (2) the dynamic orbit. Our study utilizes the static orbit, where the camera moves around the object at evenly spaced azimuth angles while maintaining the same elevation angle as in the conditioning image.
Fine-tuning SV3D for Human Video Diffusion. The original SV3D is fine-tuned upon SVD-xt <cit.> on the Objaverse dataset <cit.>, which contains synthetic 3D objects covering a wide diversity. For each object, <cit.> renders 21 frames around it on a random color background at 576×576 resolution, field-of-view of 33.8 degrees. We adopt the same rendering strategy for the Thuman 2.0 dataset <cit.> to fine-tune SV3D for high-quality human video generation.
§.§ Video Augmentation Module
Video Super-Resolution sub-module.
CodeFormer <cit.> is a transformer-based model <cit.> for enhancing the resolution of human images. Upon learning a discrete codebook, an encoder E_H embeds the high-quality human image I_h ∈ℝ^H× W× 3 as a compressed feature Z_h ∈ℝ^m× n× d. Each “pixel” in Z_h is then replaced by the nearest entry in the learnable codebook 𝒞 = {c_k ∈ℝ^d}_{k=0}^{N}. Afterward, the quantized feature Z_c ∈ℝ^m× n× d along with the code token sequence s ∈{0, ⋯, N-1}^{m· n} are produced as follows:
Z_c^(i,j) = arg min_{c_k∈𝒞} ‖ Z_h^(i,j) - c_k ‖_2,
s^(i,j) = arg min_{k} ‖ Z_h^(i,j) - c_k ‖_2.
Given Z_c, the high-quality human image I_rec is reconstructed by the decoder D_H. The m × n code token sequence, denoted as s, constitutes a novel latent discrete representation, which encodes the specific indices corresponding to entries in the learned codebook, i.e., Z^(i,j)_c = c_k when s^(i,j) = k.
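The nearest-neighbour lookup of the two equations above can be sketched as a plain NumPy illustration (not the CodeFormer implementation):

import numpy as np

def quantize(Z_h, codebook):
    """Z_h: (m, n, d) encoder features; codebook: (N, d) learned entries."""
    m, n, d = Z_h.shape
    flat = Z_h.reshape(-1, d)                                             # (m*n, d)
    dists = np.linalg.norm(flat[:, None, :] - codebook[None, :, :], axis=-1)
    s = dists.argmin(axis=1)                                              # code token sequence
    Z_c = codebook[s].reshape(m, n, d)                                    # quantized feature
    return Z_c, s.reshape(m, n)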
Subsequently, with the codebook 𝒞 and decoder D_H held constant, a Transformer module <cit.> is introduced for predicting the code sequence, capturing the global human composition from low-quality inputs. To extract the low-quality features Z_l ∈ℝ^m × n × d using E_L, the features are first unfolded to m · n vectors Z_l^v ∈ℝ^(m · n) × d, which are subsequently fed into the Transformer. In the Transformer, the s^th self-attention block performs the following operation:
X_s+1=Softmax(Q_sK_s)V_s+X_s,
where X_0 = Z^v_l, and X_s is used to obtain the queries Q_s, keys K_s, and values V_s through linear layers.
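A minimal sketch of one such block follows; the projection matrices W_q, W_k, and W_v are assumed to be given, and the key matrix is transposed so that the shapes are compatible (a detail omitted in the equation above).

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(X_s, W_q, W_k, W_v):
    Q, K, V = X_s @ W_q, X_s @ W_k, X_s @ W_v       # linear layers
    return softmax(Q @ K.T) @ V + X_s               # residual self-attention update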
Video Frame Interpolation (VFI) sub-module. PerVFI is a novel model of frame interpolation. Given two reference frame images, I_0 and I_1 ∈ℝ^H× W× 3, with height H and width W, PerVFI is designed for reconstructing the intermediate frame I_t within the target time t ∈ (0, 1). It incorporates an asymmetric synergistic blending (ASB) module and a conditional normalizing flow-based generator.
After estimating bidirectional optical flows, PerVFI presents a pyramidal architecture, which can better capture multiscale information to extract features at different scales.
Specifically, a feature encoder E_θ is used to encode the two images into pyramid features with L levels, which can be denoted as f_i=E_θ(I_i), i=0,1. Subsequently, a feature blending module, denoted as B_θ, blends the pyramidal features to produce intermediate pyramid features. Afterward, a conditional normalizing flow-based generator G_ϕ, which is invertible, decodes f_t into the output frame I_t. The output is formulated as I_t=G^-1_ϕ(r;f_t), where r ∼𝒩(0,τ) ∈ℝ^H × W × 3 represents a variable drawn from a normal distribution with a temperature parameter τ; f_t is the feature pyramid with L levels.
§.§ 3D Human Gaussian Splatting Module
In 3D Gaussian, human appearances are determined by point displacements dT and properties P. Modeling dynamic human appearances involves estimating these evolving properties. We propose a dynamic appearance network coupled with an optimizable feature tensor to effectively capture dynamic human appearances across various poses. The dynamic appearance network is designed to learn a mapping from a 2D manifold representing the underlying human shape to the dynamic properties of 3D Gaussians as follows:
f_ϕ:𝒮^2⊂ℝ^3→ℝ^7,
where the 2D human manifold 𝒮^2 is depicted by a UV positional map I ∈ℝ^H× W ×3, in which each valid pixel stores the position (x, y, z) of one point on the posed body surface. The final predictions consist of the per-point offset Δ𝐱̂∈ℝ^3, color 𝐜̂∈ℝ^3, and scale ŝ∈ℝ.
Human poses θ and translations t estimated from monocular videos are usually inaccurate. Hence, the 3D Gaussians reposed in motion space may be inaccurately represented, potentially resulting in unsatisfactory rendering outcomes. To address this issue, we jointly optimize human motions and appearances. We update the estimated body poses and translations by calculating (Δθ,Δ𝐭) to refine human motions, which can be formulated as follows:
Θ̂=(θ+Δθ,𝐭+Δ𝐭).
We modify θ in the equation of animatable Gaussians in the main article using Θ̂ to render the proposed animatable 3D Gaussians differentiable with respect to the motion conditions. Finally, the current frame image is taken as the ground truth to calculate the loss with the rendered image.
§.§ Training Objectives
We use the current frame image, i.e., f_i, and the rendered image, i.e., f_i^r, for supervising the Human-VDM model. The total loss consists of six different loss functions which include ℒ_RGB, ℒ_SSIM, ℒ_LPIPS, ℒ_Offset, ℒ_Scale and ℒ_f. In this section, we describe the loss functions in greater detail.
ℒ_RGB is the L1-loss between the ground truth and the rendered frame and is formulated as:
ℒ_RGB(x,y)=1/HW∑_h,w^HW|y_hw-x_hw|,
ℒ_SSIM <cit.>, or the Structural Similarity Index Metric Loss is a perceptual metric to measure the similarity between two images, taking luminance, contrast, and structure into account. We define the SSIM loss as follows:
ℒ_SSIM(x,y) = 1 - SSIM(x,y)
=1 - (2μ_xμ_y+c_1)(2σ_xy+c_2)/(μ_x^2+μ_y^2+c_1)(σ_x^2+σ_y^2+c_2),
where μ_x and μ_y stand for the means of x and y; σ_x^2 and σ_y^2 represent the variances of x and y, while σ_xy denotes the covariance of x and y.
ℒ_LPIPS <cit.> measures image similarity, which evaluates the perceptual difference between two images through deep learning models. In this paper, we utilize AlexNet <cit.> for extracting features of images. We calculate ℒ_LPIPS as:
ℒ_LPIPS(x,y)=∑_l1/H_lW_l∑_h,w||w_l⊙(f̂_xhw^l-f̂_yhw^l)||_2^2,
where f̂_xhw^l represents the feature output of image x in layer l at the pixel hw, and f̂_yhw^l means the same of image y. w_l is a trainable parameter in layer l.
ℒ_Offset, ℒ_Scale, and ℒ_f calculate the L2-norms of the predicted offsets and scales on the canonical surface and of the feature map, respectively. We formulate them as follows:
ℒ_Offset=1/N∑_i=1^N(Δx̂_i)^2,
where Δx̂_i denotes the predicted offset of the i^th Gaussian.
ℒ_Scale=1/N∑_i=1^N(ŝ_i)^2,
where ŝ_i denotes the predicted scale of the i^th Gaussian.
ℒ_f=1/F∑_i=1^F(t_i)^2,
where t_i denotes the optimized feature.
§ IMPLEMENTATION DETAILS
In this section, we present additional details on the model implementation. The Gaussian decoder is implemented as an MLP. A total of 202,738 Gaussians were initially sampled on the surface of the canonical SMPL model. The adjustable coefficient w, which represents the reliance on the low-quality input image, is set to 0.7 in the Super-Resolution module. For each sample, we train the dynamic appearance network on a single NVIDIA RTX 3090 GPU for 1000 epochs with a batch size of 2. The learning rate of the network is set to .
§ ADDITIONAL RESULTS
In this section, we present additional results, including in-the-wild testing and failure cases.
§.§ In-the-wild visual results
To demonstrate the superiority of Human-VDM, we provide more visual comparison results. This includes additional results as shown in Figure <ref>, including results on challenging in-the-wild cases illustrated in Figure <ref>.
§.§ Failure Cases
In this subsection, we present several cases of failure in Human-VDM.
Although Human-VDM performs exceptionally well in generating 3D humans from a single RGB image, it still has a few limitations and failure cases, as discussed in the main text. Figure <ref> shows the failure cases of Human-VDM. For example, when the human in the input image interacts with their hands against their body, some artifacts may appear at the contact region.
|
http://arxiv.org/abs/2409.03187v1 | 20240905022504 | How noise affects memory in linear recurrent networks | [
"JingChuan Guan",
"Tomoyuki Kubota",
"Yasuo Kuniyoshi",
"Kohei Nakajima"
] | cs.NE | [
"cs.NE",
"cond-mat.dis-nn",
"cs.LG",
"math.DS"
] |
[email protected]
Intelligent Systems and Informatics Laboratory, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, Japan.
§ ABSTRACT
The effects of noise on memory in a linear recurrent network are theoretically investigated.
Memory is characterized by the ability of the network to store previous inputs in its instantaneous state while the network receives correlated or uncorrelated noise.
Two major properties are revealed:
First, the memory reduced by noise is uniquely determined by the noise's power spectral density (PSD).
Second, the memory will not decrease regardless of noise intensity if the PSD is in a certain class of distribution (including power law).
The results are verified using human brain signals, showing good agreement.
Keywords: noise, memory, information processing, RNN, autocorrelation.
How noise affects memory in linear recurrent networks
Kohei Nakajima
September 9, 2024
=====================================================
Introduction.—Understanding the effects of noise on information processing is a crucial problem in comprehending any physical system.
For instance, in the field of quantum computation,
the interaction between a quantum device and its environment introduces noise into the device, which impairs the accuracy of quantum computation <cit.>.
In nature, living organisms process information from the external environment
by extracting the necessary inputs from a large amount of signals containing noise, where noise works as a type of disturbance.
Short-term memory plays an essential role among various types of information processing,
which requires past input history.
This includes various tasks required in daily lives:
mental calculation <cit.>, recalling brief number of items <cit.>, and motor controls involving precise time perception <cit.>.
Additionally, many recent studies have reported that various types of physical systems can be utilized as computational resources <cit.>
where their short-term memories are exploited to solve tasks <cit.>.
In the theoretical studies exploring recurrent neural networks (RNNs),
the memory has been characterized by memory function (MF) <cit.>
and information processing capacity <cit.>, which can comprehensively reveal the memory in the network
<cit.>.
Using these measures,
the dependency of memory on parameters has been investigated.
Some studies <cit.>
numerically revealed that its network topology <cit.> affects the memory.
From the perspective of noise, other researches <cit.> have reported that random noise reduces the past inputs held in the network and that,
as the noise-to-signal ratio (NSR) increases,
the reduction becomes more critical.
Therefore, the random noise dominantly has the negative impact on the information processing based on the short-term memory.
Those researches <cit.> have focused on the case of random noise,
which is termed independent and identically distributed (i.i.d.) noise;
however, real-world systems receive not only i.i.d. noise but also correlated noise.
For example, 1/f-like noise <cit.>,
whose power spectral density (PSD) follows 1/f^β, can be ubiquitously observed.
Accordingly,
it is imperative to analyze the effects of general noise on information processing, not only those of uncorrelated noise.
In this paper, we theoretically reveal properties of general noise regarding its influence on information processing in a linear RNN that receives input and noise.
We derive an analytical solution of MF to investigate the effects of noise on memory.
Based on the analytical solution, we show two properties.
First, we derive a simplified representation of the total memory by taking sufficiently large number of nodes to reveal the effect induced by noise correlation.
Second, we introduce a novel way to express MF, and clarify the impact of noise intensity on MF.
We demonstrate these effects of noise using experimental data obtained from the human brain.
Methods.—We consider an RNN with a fixed internal weight, which is called an echo state network (ESN) <cit.>.
A discrete-time RNN updates the state as follows:
x_t+1 = f(Wx_t + w_1u_t+1 + w_2v_t+1),
where x_t∈ℝ^N, u_t∈ℝ,
and v_t∈ℝ denote the state, input, and noise at t-th step, respectively,
while N is the number of nodes; f(·) is the activation function and W∈ℝ^N× N is the internal weight matrix; and
w_1, w_2∈ℝ^N are the weight vectors of input and noise, respectively.
In the present paper, we have adopted the uniformly random input u_t∈[-1, 1] and the linear activation function f(·).
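A minimal simulation of this linear network, with the noise injected through the same weight vector as the input (the setting analyzed below), can be sketched as follows; the function name and interface are ours, not from a library.

import numpy as np

def run_esn(W, w, u, v):
    """W: (N, N) internal weights; w: (N,) shared input/noise weights;
    u, v: length-T input and noise series. Returns the (T, N) state matrix."""
    T, N = len(u), W.shape[0]
    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = W @ x + w * (u[t] + v[t])    # linear activation f(x) = x
        X[t] = x
    return X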
We evaluate the short-term memory in the RNN using the MF <cit.>,
which represents how well the past injected input u_t-τ can be emulated by a linear approximation with the network state:
û_t-τ = w_out^⊤x_t,
where τ is the delay from the current time.
After determining the readout weight vector w_out by minimizing a loss function of the mean squared error (MSE) 1/T∑_t=1^T (û_t-τ-u_t-τ)^2,
the MF is obtained as follows:
M[u_t-τ] = 1 - min_w_out⟨ (û_t-τ-u_t-τ)^2 ⟩ / ⟨ u_t-τ^2⟩ (≤ 1),
where T is the sampled time length and
⟨·⟩ denotes the time average.
The upper bound 1 is satisfied when the system has fully memorized the input required to reconstruct the target.
The sum of the MF with respect to all τ represents the memory capacity (MC) of the full system,
C_sum,u = ∑_τ=0^∞ M[u_t-τ].
The upper bound of C_sum,u is the number of linearly independent time series in the state, which is called the rank and is ideally N.
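The MF and MC can also be estimated numerically from a simulated state matrix by solving the least-squares readout directly, as in the sketch below (washout discards the initial transient, and delays up to the washout length are assumed).

import numpy as np

def memory_function(X, u, tau, washout=100):
    """Numerical M[u_{t-tau}] from states X (T x N) and input u (length T)."""
    S = X[washout:]
    target = u[washout - tau : len(u) - tau]               # u_{t-tau}; requires tau <= washout
    w_out, *_ = np.linalg.lstsq(S, target, rcond=None)     # least-squares readout
    mse = np.mean((S @ w_out - target) ** 2)
    return 1.0 - mse / np.mean(target ** 2)

def memory_capacity(X, u, max_delay=100):
    return sum(memory_function(X, u, tau) for tau in range(max_delay + 1))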
To demonstrate an applicable range of our results,
we have utilized not only noise models <cit.> but also experimental data of human brain activities that show a 1/f-like property <cit.>.
We adopted the EEG signals <cit.> measured from three brain areas:
midline frontal (Fz), vertex (Cz), midline parietal (Pz).
Signals in the brain are expected to carry a large amount of information, which includes not only memory but also irrelevant signals coming from different regions of the brain.
In the current study, we have injected both the input and the EEG signal into the RNN and regard the EEG as noise.
Results.—In the present study, we derive the MF and MC of RNNs that receive input and noise by direct substitution.
We begin the derivation from the following MF <cit.>, which is equivalent to Eq. (<ref>):
M[u_t-τ] = U_τ^⊤X(X^⊤X)^-1X^⊤U_τ / (U_τ^⊤U_τ),
where X∈ℝ^T× N is a matrix whose column represents the states time series and U_τ is the delayed input series.
In this derivation, we impose two assumptions on the system:
(i) the input u_t and the noise v_t are uncorrelated;
(ii) input and noise share the same weight vector, that is w = w_1 = w_2.
Under these conditions, we derived analytical solutions of the MF and MC using a matrix H and the autocorrelation matrix of u+v, C_uv = E + r C_v, where E is the identity matrix, r = ⟨ v^2 ⟩ / ⟨ u^2 ⟩ is the NSR, and C_v is the autocorrelation matrix of v.
H is defined as
H =
[ H_K-1 H_K-2 ⋯ H_0 ], where
H_τ =
[ λ_1^τ λ_2^τ ⋯ λ_N^τ ]^⊤,
K is the time length and should be sufficiently large,
and λ_i is the i-th eigenvalue of W.
The (i,j) component of C_v is defined as (C_v)_ij=C(|i-j|), 1≤ i,j ≤ K, i,j ∈ℕ,
where C(τ) is the autocorrelation function of v normalized by the variance.
We call this an analytical solution involving inverse matrix (ASI):
M[u_t-τ] =
H_τ^⊤
(HC_uvH^⊤ )^-1H_τ,
C_sum,u = ∑_τ=0^K-1 M[u_t-τ] =
tr[
H^⊤
( HC_uvH^⊤ )^-1H],
which are derived in Sec. 1 of Supplementary material.
From ASI, we could confirm that
the MF and MC only depend on the eigenvalues λ_i and the autocorrelation of v
(detailed investigation in Section 2 of Supplementary material).
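As an illustration, the ASI can be evaluated directly from the eigenvalues of W and the normalized autocorrelation function C(τ) of the noise. The sketch below assumes real eigenvalues for simplicity, a finite but large K, and a C that accepts array arguments; the function name is ours.

import numpy as np

def asi_memory(eigvals, C, r, K=500):
    """eigvals: eigenvalues of W (length N); C: normalized autocorrelation
    function of the noise with C(0) = 1; r: noise-to-signal ratio."""
    eigvals = np.asarray(eigvals, dtype=float)
    taus = np.arange(K - 1, -1, -1)                         # H = [H_{K-1}, ..., H_0]
    H = eigvals[:, None] ** taus[None, :]                   # (N, K); column j holds lambda^(K-1-j)
    idx = np.arange(K)
    Cv = C(np.abs(idx[:, None] - idx[None, :]))             # Toeplitz autocorrelation matrix
    Cuv = np.eye(K) + r * Cv
    A = np.linalg.inv(H @ Cuv @ H.T)                        # (N, N)
    mf = np.array([H[:, K - 1 - tau] @ A @ H[:, K - 1 - tau] for tau in range(K)])
    return mf, mf.sum()                                     # MF over delays and the MC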
Based on ASI, we derived the following analytical results.
First, we focus on the MC of a sufficiently large RNN (i.e., the number of nodes and time-series length are infinite N=K→∞).
Under this assumption, and provided that the rank of the system is full, we can derive the following formula for C_sum,u from Eq. (<ref>):
C_sum,u = ∑_i=1^N 1/(1 + r λ[C_v]_i),
where λ[C_v]_i is the eigenvalues of C_v (derived in Section 4 of Supplementary material).
This formula yields two important properties of MC.
(i) The MC becomes independent of λ_i and is determined only by C_v, whose eigenvalues are equivalent to the PSD of v <cit.>.
The result suggests that, in addition to NSR, the PSD is also crucial in determining the effect of noise, which are numerically verified in Section 4 of Supplementary material.
(ii) The MC of the infinite-dimensional RNN with an arbitrary autocorrelated noise is greater than that with random noise.
We can explain this property by introducing the minimum value of C_sum,u:
C_sum,u ≥ N/(1+r),
which is proven under the two conditions that tr[ C_v ]=K and that the function 1/(1+x) is downward convex
(Theorem 4.1 in Supplementary material).
The minimum value is the MC with any type of random noise,
meaning that equality holds if the PSD of v is flat and that C_sum,u becomes larger if the PSD is not flat.
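The formula and the bound above can be checked numerically: for a flat PSD (i.i.d. noise) the sum equals N/(1+r), while a correlated noise with a skewed eigenvalue distribution yields a larger MC at the same NSR. A small sketch with illustrative autocorrelation functions:

import numpy as np

def mc_from_autocorrelation(C, r, N):
    idx = np.arange(N)
    Cv = C(np.abs(idx[:, None] - idx[None, :]))        # Toeplitz C_v
    lam = np.linalg.eigvalsh(Cv)                        # eigenvalues (~ PSD values)
    return np.sum(1.0 / (1.0 + r * lam))

white   = lambda k: (k == 0).astype(float)              # flat PSD (i.i.d. noise)
ar_like = lambda k: 0.9 ** k                            # exponentially correlated noise
print(mc_from_autocorrelation(white,   r=10.0, N=256))  # = N / (1 + r)
print(mc_from_autocorrelation(ar_like, r=10.0, N=256))  # larger than the bound above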
Second, we focus on MF of both the input and noise.
The preceding result has focused on the memory of input held in RNNs, which also keeps noise as memory;
however, the MF is conventionally defined for i.i.d. signals, and the MF of an autocorrelated noise has not been derived thus far.
To address this problem,
we define the MF from another perspective <cit.>,
which is the square norm of the coefficient in the state expanded by orthonormal bases and is equivalent to the definition of Eq. (<ref>).
We use an autocorrelated noise model v_t represented by a sum of two factors: i.i.d. noise ñ_t and time-dependent function a_t.
A delayed noise of v_t includes delayed series of ñ_t and orthogonal basis ã_t obtained by decomposing a_t.
Subsequently, k-th delayed noise v_t-k can be decomposed into
v_t-k = c^n_kñ_t-k + ∑_i=0^k c^a_kiã_t-i,
where the two elements have time averages ⟨ñ_t⟩ = ⟨ã_t⟩ = 0.
The bases {ñ_t} are innately defined in the noise and
{ã_t-i} are defined using the Gram–Schmidt orthogonalization:
â_t-k = a_t-k - ∑_i=0^k-1⟨ã_t-i a_t-k⟩ ã_t-i,
ã_t-k = â_t-k/||â_t-k||.
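Numerically, the orthonormal bases {ã_t-k} can be built by the Gram–Schmidt procedure above, discarding directions that are already spanned; for a finite-basis component such as a sinusoid this terminates with N_a bases. A minimal sketch (the function name is ours):

import numpy as np

def build_orthonormal_bases(a, max_delay, tol=1e-10):
    """a: length-T series of the time-dependent component a_t."""
    T = len(a)
    bases = []
    for k in range(max_delay + 1):
        ak = np.asarray(a[max_delay - k : T - k], dtype=float)   # a_{t-k}
        ak = ak - ak.mean()                                      # zero time average
        for b in bases:
            ak = ak - (b @ ak) * b                               # remove existing projections
        norm = np.linalg.norm(ak)
        if norm > tol:                                           # keep only new directions
            bases.append(ak / norm)
    return bases                                                 # len(bases) ~ N_a for finite-basis noise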
In accordance with the polynomial expansion of state, we have introduced the MFs about the system with an i.i.d. input u_t and an autocorrelated noise v_t.
Because the RNN includes only linear terms, the state time series is expanded by three types of time-series bases: {u_t-τ}, {ñ_t-τ}, and {ã_t-τ}.
The delayed input time-series {u_t-τ} organize linearly independent bases because the input at each step is i.i.d.
Additionally, because of the orthogonalization, {ñ_t-τ} and {ã_t-τ} are also appended to the bases of the orthogonal system.
As a result, {u_t-τ}, {ñ_t-τ}, and {ã_t-τ} span the complete orthonormal system for the linear RNN.
Using these bases, we can perfectly expand the state,
and define the MFs on the bases u_t-τ, ñ_t-τ, and ã_t-τ
as M[u_t-τ], M[n_t-τ], and M[a_t-τ],
respectively (see Sec. 5 of Supplementary material for derivation).
We can define C_sum,u and C_sum,v as the MC of each signal, and C_sum,tot as the total MC of the system, which can be computed as
C_sum,u = ∑_τ=0^∞ M[u_t-τ],
C_sum,v = ∑_τ=0^∞( M[n_t-τ] + M[a_t-τ] ),
and C_sum,tot = C_sum,u + C_sum,v.
According to the completeness property <cit.>,
C_sum,tot = N holds because the system depends only on the past input and noise series that span the complete system.
This definition of MF enables us to elucidate the limitations of autocorrelated noise effect.
Comparing the MF of i.i.d. elements and that of time-dependent function,
their difference can be characterized by the number of bases.
In a system where infinite time has passed, the number of bases u_t-τ would be infinite because the delayed time series would be linearly independent.
For the same reason, the number of bases {n_t-τ} is also infinite,
while that of ã_t-τ can be both finite (e.g., sinusoidal curve) and infinite (e.g., 1/f noise).
Here, we begin with the assumption that the number N_a of bases ã_t-τ is finite,
showing that, with a sufficiently large N, the sum of MFs about a_t-τ would become 0:
lim_N→∞1/N∑_τ=0^∞ M[a_t-τ] = 0
which is derived based on the following generating procedure of the base ã_t-τ.
A base of ã_t-τ is newly generated when τ increments (Eq. <ref>).
Subsequently, the new base is removed if the current input can be expressed by linear combination of existing orthogonal polynomials. This procedure is repeated until the new base does not appear, and we finally obtain the finite number N_a of bases in some cases.
For example, if a_t = cos(ω t), the bases are cos(ω t) and sin(ω t), indicating N_a=2.
This mechanism suggests that RNNs integrate the existing memory of the past inputs and overlapped information of the current input due to autocorrelation.
With a sufficiently large N, we obtain
∑_τ=0^∞ M[a_t-τ] ≤ N_a,
which produces
Eq. (<ref>)
(MFs of a in Fig. <ref>a, b).
Accordingly, combined with the completeness property, we derive lim_N→∞ 1/N ∑_τ=0^∞( M[u_t-τ] + M[n_t-τ] ) = 1.
In addition, the MFs of these i.i.d. elements can be characterized by Eq. (<ref>),
such that the variances ⟨ u^2 ⟩ and ⟨ n^2 ⟩ determine the ratio between the MFs of i.i.d. elements (see Sec. 5 of Supplementary material for derivation),
indicating that they are just scaled (MFs of u and n in Fig. <ref>a–c).
As the MC of a_t-τ gradually becomes 0,
an increase in N leads to the enhancement of C_sum,u/N,
which suggests that the inhibitory effect of noise becomes smaller.
In the infinite-dimensional RNN,
C_sum,u/N (and likewise C_sum,v/N) will converge to a certain value determined by the ratio of ⟨ u^2 ⟩ and ⟨ n^2 ⟩ (Fig. <ref>e):
lim_N→∞ C_sum,u/N = ⟨ u^2 ⟩/(⟨ u^2 ⟩+⟨ n^2 ⟩).
If the noise v is composed only of random components n_t-τ,
the ratio between C_sum,u and C_sum,v is independent of N
and there is no enhancement of C_sum,u caused by increasing N (Fig. <ref>c, d).
In a system with finite N_a (Fig. <ref>d),
we can confirm that, as the proportion of n_t-τ within v decreases,
the disturbance effect of noise becomes smaller, which elucidates the result of Eq. (<ref>).
Conversely, if v is composed only of a_t-τ, the noise has little inhibitory effect independent of r:
lim_N→∞ C_sum,u/N = 1
(lim_N→∞ C_sum,v/N = 0).
Furthermore, even if N_a is infinite, the system could still fulfill Eq. (<ref>).
We proved Eq. (<ref>) in two cases:
(i) lim_N→∞ C_sum,v < ∞ and
(ii) lim_N→∞ C_sum,v = ∞, under certain conditions.
(i) Under d'Alembert's test condition of lim_n→∞ λ̂[C_v]_{n+1}/λ̂[C_v]_{n} < 1, Eq. (<ref>) holds,
where λ̂[C_v]_n expresses a sorted version of λ[C_v]_n in descending order.
(ii) In addition, even if lim_N→∞ C_sum,v = ∞,
Eq. (<ref>) can still hold.
For example, we have proven the case of 1/f^β noise (β≥ 1) (see proofs in Sec. 6 of Supplementary material).
Both the examples of PSDs shown here are characterized by the skewed distribution.
We say that a PSD is skewed if the distribution of λ̂[C_v]_n satisfies Eq. (<ref>).
Note that, since PSD represents the magnitude of coefficients of the Fourier series, in which the state is expanded by linearly independent bases of sinusoidal wave,
these results show that the distribution of these magnitudes determines the noise effects.
To demonstrate cases in which a very large noise does not hinder information processing in RNN,
we examined the dependency of the normalized MC (C_sum,u/N) on the parameters β and r.
Counterintuitively, even if the NSR is large (e.g., r = 100),
the RNN keeps C_sum,u/N ≈ 1 with a large β (>2.0).
Note that a blue region with β≥ 1 does not satisfy Eq. (<ref>) because C_sum,v/N = o(log(log N)/log N)
converges to 0 only slowly, implying that N ∼ 10^4 is not sufficiently large.
To mitigate the disturbance, the system requires N∼ 10^10^2,
implying that the real-world systems (e.g., the number of neurons in brain N∼ 10^7 <cit.>) cannot fully hold the MC.
Finally, we numerically verified the effects of noise on the memory using EEG signals from three human brain areas <cit.>.
Comparing the MF and MC of the systems using the autocorrelated noise with those using noises randomly shuffled in time direction (Fig. <ref>),
the former consistently have significantly higher values than the latter.
Even though the noise intensity is much larger than the input intensity (r=100), the MC keeps C_sum,u/C_sum,tot > 0.7 (“Fz” in Fig. <ref>, right), where C_sum,u is numerically normalized by C_sum,tot.
As N increases, C_sum,u/C_sum,tot seems to converge to a fixed value,
which may indicate the ratio of ⟨ u^2 ⟩ and ⟨ n^2 ⟩ according to Eq. (<ref>).
The three different convergent values may imply that the ratio of random and autocorrelated elements varies among the regions of the brain.
In addition, by incorporating a threshold to determine where the convergence occurs, we can find a sufficiently large N to bring out the maximum MC under the noise.
For example, under the threshold in which C_sum,u/C_sum,tot is perturbed by less than 5× 10^-3, those N of “Fz”, “Cz”, and “Pz” are computed as 40, 39, and 37, respectively.
Overall, we could confirm that the autocorrelated noise in the real-world always has smaller disturbance than random noises and that the memory of the input is retained even with the very strong irrelevant signals.
There are two limitations in our study.
We assumed that the input weights were shared between the input and noise.
This can be interpreted as a setting in which we inject the input containing noise to the RNN and ignore other noises.
This enables us to analytically evaluate the effects of the initially added noise that entails any intensity and any correlation.
We need to consider the noises coming from other input pathways in the future.
Next, we have only focused on linear RNNs in the current study.
However, previous studies have confirmed the presence of both linear and nonlinear information processing in real-world systems <cit.>,
indicating that nonlinear cases should be investigated in future.
Summary.—In the present study,
we used a linear RNN with random input and noise,
including correlated ones, to investigate the effects of noise on memory in general.
We derived an analytical solution of MF and MC dependent only on the autocorrelation of noise and the eigenvalues of internal weight,
which revealed the following three results:
First, in infinite-dimensional systems,
the MC becomes independent of the internal weight and is determined only by the PSD of noise.
Therefore, our results hold for any type of linear RNNs regardless of whether its internal weights are random or trained.
By using this solution, we proved that the MC with autocorrelated noise is larger than that with a random noise.
Second, noise has little inhibitory effect in a sufficiently large system, regardless of its intensity,
if the intensities of the linearly independent bases in the noise satisfy a certain condition.
In the general case where the number of bases is infinite,
this result is satisfied when the intensities have a skewed distribution that decays at a sufficiently fast rate, such as 1/f noise.
This condition is also effective in the case of a finite number of bases, such as sinusoidal noise, because the distribution then always fulfills the skewed condition.
Third, we used EEG data to verify the above analytical results.
We demonstrated that, as a form of noise, the EEG series had a small inhibitory effect,
despite its strong intensity.
Moreover, different brain regions showed different ratios of the random component and required different system sizes to bring out the maximum MC.
From these results, our research has clarified the effects of general noise on information processing, providing an analytical explanation.
|
http://arxiv.org/abs/2409.02847v1 | 20240904161836 | Superfluid-tight cryogenic receiver with continuous sub-Kelvin cooling for EXCLAIM | [
"Sumit Dahal",
"Peter A. R. Ade",
"Christopher J. Anderson",
"Alyssa Barlis",
"Emily M. Barrentine",
"Jeffrey W. Beeman",
"Nicholas Bellis",
"Alberto D. Bolatto",
"Victoria Braianova",
"Patrick C. Breysse",
"Berhanu T. Bulcha",
"Giuseppe Cataldo",
"Felipe A. Colazo",
"Lee-Roger Chevres-Fernandez",
"Chullhee Cho",
"Danny S. Chmaytelli",
"Jake A. Connors",
"Nicholas P. Costen",
"Paul W. Cursey",
"Negar Ehsan",
"Thomas M. Essinger-Hileman",
"Jason Glenn",
"Joseph E. Golec",
"James P. Hays-Wehle",
"Larry A. Hess",
"Amir E. Jahromi",
"Trevian Jenkins",
"Mark O. Kimball",
"Alan J. Kogut",
"Samuel H. Kramer",
"Nicole Leung",
"Luke N. Lowe",
"Philip D. Mauskopf",
"Jeffrey J. McMahon",
"Vilem Mikula",
"Mona Mirzaei",
"Samuel H. Moseley",
"Jonas W. Mugge-Durum",
"Jacob Nellis",
"Omid Noroozian",
"Kate Okun",
"Trevor Oxholm",
"Tatsat Parekh",
"Ue-Li Pen",
"Anthony R. Pullen",
"Maryam Rahmani",
"Mathias M. Ramirez",
"Cody Roberson",
"Samelys Rodriguez",
"Florian Roselli",
"Deepak Sapkota",
"Konrad Shire",
"Gage L. Siebert",
"Faizah Siddique",
"Adrian K. Sinclair",
"Rachel S. Somerville",
"Ryan Stephenson",
"Thomas R. Stevenson",
"Eric R. Switzer",
"Jared Termini",
"Peter T. Timbie",
"Justin Trenkamp",
"Carole E. Tucker",
"Elijah Visbal",
"Carolyn G. Volpert",
"Joseph Watson",
"Eric Weeks",
"Edward J. Wollack",
"Shengqi Yang",
"Aaron Yung"
] | astro-ph.IM | [
"astro-ph.IM"
] |
§ ABSTRACT
The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a balloon-borne telescope designed to survey star formation over cosmological time scales using intensity mapping in the 420 – 540 GHz frequency range. EXCLAIM uses a fully cryogenic telescope coupled to six on-chip spectrometers featuring kinetic inductance detectors (KIDs) to achieve high sensitivity, allowing for fast integration in dark atmospheric windows. The telescope receiver is cooled to ≈ 1.7 K by immersion in a superfluid helium bath and enclosed in a superfluid-tight shell with a meta-material anti-reflection coated silicon window. In addition to the optics and the spectrometer package, the receiver contains the magnetic shielding, the cryogenic segment of the spectrometer readout, and the sub-Kelvin cooling system. A three-stage continuous adiabatic demagnetization refrigerator (CADR) keeps the detectors at 100 mK while a ^4He sorption cooler provides a 900 mK thermal intercept for mechanical suspensions and coaxial cables. We present the design of the EXCLAIM receiver and report on the flight-like testing of major receiver components, including the superfluid-tight receiver window and the sub-Kelvin coolers.
§ INTRODUCTION
Conventional galaxy surveys create large catalogs of galaxies to study the formation and evolution of large-scale structures in the universe. These surveys are often biased to detect only the brightest galaxies and have small survey areas on the sky, limiting their ability to capture a complete picture of galaxy populations in the cosmological context. The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM)<cit.> employs an emerging technique of line intensity mapping <cit.> that surveys the unresolved, integral surface brightness of redshifted line emission from galaxies. This approach measures the cumulative emission of all sources over large volumes, allowing a blind, complete census. In particular, EXCLAIM aims to map the redshifted emission of carbon monoxide (CO) and singly-ionized carbon ([CII]) in cross-correlation with spectroscopic galaxy surveys in windows over the 0 < z < 3.5 redshift range. As this range encompasses the period of “cosmic high noon” when the rate of star formation peaked <cit.>, EXCLAIM's measurements will be crucial in refining galaxy evolution models during this critical period.
EXCLAIM is a balloon-borne telescope designed to map diffuse emission from 420 to 540 GHz (714 to 555 μm) with a spectral resolving power of R = 512, covering several CO rotational lines (ν_CO,J = 115J GHz for J = 4 – 7) in galaxies with redshifts z < 1 and [CII] (ν_[CII] = 1.889 THz) over redshifts 2.5 < z < 3.5 <cit.>. EXCLAIM's primary extragalactic science comes from a ∼ 300 deg^2 survey along the celestial equator in cross-correlation with galaxy and quasar catalogs in the overlapping Stripe-82 region mapped by multiple surveys, particularly the Baryon Oscillation Spectroscopic Survey (BOSS)<cit.>. The EXCLAIM survey also includes several ∼100 deg^2 regions on the Galactic plane to study CO (J=4–3) and neutral carbon ([CI]) emission as tracers for star formation and molecular gas. For further details on the EXCLAIM survey and science forecasts, refer to Refs. and .
Based on the Absolute Radiometer for Cosmology, Astrophysics, and Diffuse Emission II (ARCADE II)<cit.> and the Primordial Inflation Polarization ExploreR (PIPER)<cit.> heritage, EXCLAIM employs a fully cryogenic telescope housed in an open 3000 L liquid helium (LHe) bucket dewar with superfluid fountain effect pumps<cit.> that cool the optics to <5 K and maintain the receiver cryostat at ≈1.7 K. Inside the receiver, six integrated spectrometers<cit.> coupled to kinetic inductance detectors (KIDs) provide the R = 512 spectral resolving power over the EXCLAIM frequency band. At target balloon float altitudes above 27 km, low total atmospheric column depth and pressure broadening cause the atmospheric emission to resolve into narrow lines, allowing the high-resolution integrated spectrometers to access low-background windows between the lines. The all-cryogenic instrument design enables EXCLAIM to fully utilize the dark atmospheric windows in the stratosphere by allowing access to spectral channels ≈50× darker<cit.> than those with ambient temperature optics. This high sensitivity drastically increases mapping speed, enabling EXCLAIM to achieve its science goals with a single-day conventional balloon flight that would otherwise take weeks.
At float, the EXCLAIM receiver sits in a superfluid helium bath, and the receiver core is initially cooled to ≈ 1.7 K via a thermal feedthrough (see Figure <ref>). All receiver interfaces must therefore remain superfluid tight to prevent superfluid helium from entering the receiver volume, where superfluid films or gas could hinder sub-Kelvin operation. A three-stage continuous adiabatic demagnetization refrigerator (CADR), backed by a ^4He sorption cooler, maintains the on-chip spectrometers at their operating temperature of 100 mK during the flight. Here we describe the design and testing of the EXCLAIM receiver that provides superfluid-tight enclosure for continuous sub-Kelvin operation of the sorption-cooler-backed CADR. As the availability of LHe limits the test time for the integrated receiver, we report on the independent testing and qualification of the major receiver components before integrating the receiver core.
This paper is organized as follows. We provide a general overview of the EXCLAIM instrument in Section <ref>, and focus on the receiver design in Section <ref>. In Section <ref>, we report on the lab testing of major receiver components in flight-like conditions. Finally, Section <ref> summarizes the current status and path towards the first flight from Fort Sumner, New Mexico, planned for September 2025.
Left: CAD model of the EXCLAIM gondola showing the fully cryogenic telescope housed in an open helium bucket dewar. The green region shows the instrument beam to the sky. Right: Zoomed-in sectional view of the receiver cryostat highlighting the major optical and thermal components. All receiver interfaces must remain superfluid tight.
§ INSTRUMENT OVERVIEW
As shown in Figure <ref>, EXCLAIM employs a fully cryogenic telescope housed in a 3000 L LHe bucket dewar with a 2.0 m deep and 1.5 m diameter interior. Since the dewar size drives the overall mass of the gondola, it was chosen to be within a reasonable margin of the total balloon payload mass limit of 3400 kg. During ground operations and ascent, a lid covers the dewar to insulate the instrument, reduce boiloff, and limit the atmosphere from freezing on the optics. The lid is kept open for the science operation at float, letting the boiloff gas keep the optics dry and clean and eliminating the need for an ambient temperature window. EXCLAIM is expected to have a similar boiloff rate of 110 L/h when the superfluid pumps are operational as observed in PIPER <cit.>. When launched with a maximum practical helium load of ∼ 2600 L, this boiloff rate allows for ≳ 19 hours of cryogenic hold time at float, longer than the planned baseline hold time of ≳ 12 h for the EXCLAIM sub-Kelvin operation.
With the telescope boresight fixed at 45^∘ elevation, EXCLAIM uses a 90-cm parabolic primary mirror and parabolic secondary mirror in an off-axis Gregorian configuration to produce a collimated beam that couples to the receiver. A 30-cm folding flat mirror redirects the rays from the primary to the secondary mirror so that the telescope can fit within the dewar. The telescope optics, along with a 10-cm silicon lens inside the receiver (see Section <ref>), provides 4.2^' full width at half maximum (FWHM) resolution in the center of the EXCLAIM band at 480 GHz over a 22.5^' field of view <cit.>. This angular resolution is sufficient to produce a survey that covers spatial scales from the linear regime (k ≲ 0.1 hMpc^-1) up to scales where shot noise dominates (k ≳ 5 hMpc^-1) in the line intensity signal <cit.>. All the mirrors were machined from monolithic aluminum at the Johns Hopkins Applied Physics Laboratory. The fabricated mirrors meet the 20 μm root mean square (RMS) surface figure requirement and have been delivered to NASA Goddard for installation and alignment into the telescope frame. See Ref. for further details on the telescope optics.
The telescope optics will be operated at < 5 K to ensure low optical loading on each spectrometer channel (∼ 0.1 fW defined at the receiver cold stop) to ensure near background-limited performance across the band, enabling access to dark regions between atmospheric lines. The rapid helium boiloff is sufficient to maintain the telescope at the bath temperature during ascent. At float altitudes of ≳ 27 km, the ≲ 1 kPa ambient pressure decreases the boiling point of helium to below 1.7 K, lower than the 2.2 K superfluid transition temperature <cit.>. Once the float altitude is reached and the boiloff rate decreases, the superfluid fountain effect pumps at the bottom of the dewar (see Figure <ref>) are turned ON. These pumps spray superfluid helium onto each optical surface to maintain < 5 K temperature during the science operation. The design and performance of these superfluid pumps during the two PIPER flights are described in Ref. .
§ RECEIVER DESIGN
The receiver cryostat is positioned within the telescope frame using a symmetric hexapod of locking turnbuckles such that the folding flat mirror directs the instrument beam onto the secondary mirror mounted on top of the receiver lid as shown in Figure <ref>. The secondary mirror produces a collimated beam that passes into the receiver cryostat through a meta-material anti-reflection (AR) coated silicon vacuum window (Section <ref>). The 44.6-cm diameter and 72.5-cm height stainless steel receiver shell sits partially submerged in LHe, preventing the receiver window from being submerged during science observation. All the receiver interfaces employ superfluid-tight seals to prevent superfluid helium from entering the receiver volume, where superfluid films or gas could hinder sub-Kelvin cooling.
The receiver houses (1) the spectrometer package, (2) the optical filtering, baffling, and lens, (3) the magnetic shielding, (4) the sub-Kelvin cooling system, (5) the cryogenic segment of the spectrometer readout, (6) electrical interfaces to the ambient temperature electronics, and (7) thermal interfaces to the helium bath. In the following subsections, we describe the design of the major optical and thermal components inside the receiver.
§.§ Optics
The 114-mm diameter open aperture on the receiver lid uses a silicon window of 9 mm thickness, required to support the ambient atmospheric pressure prior to launch. It employs an indium seal to create a superfluid-tight interface with the receiver lid. We demonstrate this superfluid-tight seal in flight-like conditions in Section <ref>. The meta-material AR coating is implemented through sub-wavelength features cut into the window surface with a custom dicing saw <cit.>. As shown in Figure <ref>, EXCLAIM uses an optimized two-layer design to achieve low reflection (< -27 dB for normal incidence) across the signal band. A 24-cm focal length plano-convex silicon lens (see Figure <ref>) that focuses light onto the spectrometer package also uses the same AR coating design.
In addition to the cold telescope optics, EXCLAIM requires effective infrared (IR) rejection and stray light control to reduce excess loading on the detectors for accessing the dark spectral channels. We use two 10-cm diameter novel IR-blocking filters composed of diamond scattering particles embedded in a polyimide aerogel substrate. The substrate has an ultra-low density (0.1 – 0.2 g/cm^3) with a low index of refraction (n ∼ 1.1), eliminating the need for AR coating and allowing for high transmission across the band <cit.>. The size and density of the diamond particles were tuned to produce a low-pass filter with ∼ 1 THz cutoff <cit.>. To reject radiation immediately above and below the EXCLAIM band, we use two heat-pressed stacks of metal-mesh filters on polypropylene film with dielectric spacers <cit.>. These band-defining high- and low-pass filters and the lens are tilted at alternating angles (3^∘ for the lens and 2^∘ for the filters with respect to the chief ray) to suppress the formation of cavity modes and optical ghosts by terminating reflections in baffling.
A collection of baffles, blackened with Epotek 377 epoxy loaded with silica and graphite powders <cit.>, are strategically placed at multiple places inside the receiver (see Figure <ref>) for stray light control. A cold (1.7 K) optical stop with 7.6-cm diameter aperture placed in between the lens and the spectrometer package truncates the beam at < -15 dB across the band. In combination with the blackened baffles in the collimated region, the cold stop provides < -40 dB stray light spill onto the warmer elements at the top of the dewar to ensure < 0.1 fW excess loading per spectrometer channel. An in-flight calibration source <cit.>, consisting of a sapphire square with an integrated heater thermally isolated from the bath, is located in the volume behind the cold stop. This calibrator emits into the near sidelobes of the spectrometer lenslets for in-situ characterization of spectrometer response, uniformity, and time-varying responsivity <cit.>.
§.§ Integrated Spectrometer
The receiver optics focus the incoming light onto six 4-mm diameter hyper-hemispherical silicon lenslets with 126-μm Parylene-C AR-coating on the focal plane. Each lenslet couples light to an individual spectrometer chip using a dipole slot antenna. The six integrated spectrometers (μ-Spec)<cit.> incorporate a Rowland grating spectrometer implemented in a parallel plate waveguide on a low-loss single-crystal silicon chip, employing superconducting niobium microstrip planar transmission lines and thin-film aluminum KIDs <cit.>. The μ-Spec design offers several advantages: (1) an order of magnitude reduction in size compared to a free-space grating spectrometer, (2) lithographic control of all components, (3) high efficiency and resolution due to the low dielectric loss of single-crystal silicon, and (4) high immunity to stray light and crosstalk due to the microstrip architecture and thin dielectric <cit.>. Under the 0.16 fW loading expected at the input to the KIDs, we estimate the noise-equivalent power NEP_det < 8 × 10^-19 W/√(Hz), within the sensitivity requirements for the EXCLAIM science mission. While the flight-candidate spectrometer wafers are currently under fabrication, a prototype wafer with R = 64 spectral resolution has been extensively characterized in the lab <cit.>.
The KIDs are read through ambient-temperature electronics based on Xilinx ZCU111 Radio-Frequency System-on-a-Chip (RFSoC) FPGA boards that significantly reduce the size, weight, and power requirements and provide larger instantaneous bandwidth compared to previous generation readout systems <cit.>. To read out the six spectrometer chips, we plan to use three ZCU111 boards, each equipped with two 512-MHz bandwidth readout chains. Each readout chain uses a low-noise amplifier (Low Noise Factory LNC2_4A[www.lownoisefactory.com/product/lnf-lnc2_4a-2]) connected to the spectrometer chip with a pair of 2.19 mm OD stainless steel coaxial cables. The six outbound coaxial cables have a beryllium copper centerline that provides lower attenuation at the expense of higher thermal conduction. A 900 mK stage thermally suspended from the 1.7 K bath by a carbon fiber tube truss provides a thermal intercept for the coaxial cable lines going into the 100 mK spectrometer package. The thermal breaks in the cryogenic readout chains use 2.19 mm OD NbTi coaxial cables.
§.§ Sorption Cooler
Since the stainless steel receiver shell is a poor thermal conductor, we use a thermal feedthrough (Figure <ref>) to bring a high-purity copper rod into the receiver through a superfluid-tight ceramic seal. The reservoir around the thermal feedthrough outside the receiver lid is supplied by superfluid pumps to maintain it at ∼ 1.7 K. From the 1.7 K bath, we use a single-stage ^4He sorption cooler <cit.> to provide a ∼ 900 mK stage for pre-cooling the 100 mK stage with the spectrometer package and thermally intercepting the coaxial cables and mechanical suspensions. The sorption cooler provides a significant margin in flight operation by reducing the heat capacity and thermal loading of the CADR (described in Section <ref>) and allows testing in unpumped helium bath and cryocooler systems in the lab.
The sorption cooler has a simple operation procedure. Once the cold head gets close to the bath temperature (below the 5.2 K critical liquefaction point of ^4He), we heat the cooler pump to 45 – 55 K using a 300 Ω heater on the pump. This keeps the pump warm enough to prevent helium from being adsorbed by the charcoal inside the pump. As the cold head temperature stabilizes, we cool the pump through a gas-gap heat switch. This allows the charcoal to adsorb gaseous helium, lowering the pressure inside the pump while letting the liquid helium in the cold head to continue cooling down until its vapor pressure is in equilibrium with the internal pressure <cit.>. We estimate that the sorption fridge with 28 J of cooling energy can sufficiently cool the receiver sub-Kelvin stages from the 1.7 K float conditions and provide significant margins over the 12 hours baseline cold operation <cit.>.
§.§ CADR
EXCLAIM uses a three-stage CADR, shown in Figure <ref>, to provide continuous cooling to the 100 mK stage thermally connected to the spectrometer package. Each CADR stage consists of a paramagnetic salt crystal surrounded by a superconducting magnet made from NbTi wire wound onto a mandrel <cit.>. While the salt pills for the two warmer stages (S2 and S3) are suspended within the mandrel bore by Kevlar assemblies with minimal thermal conduction, the coldest stage (S1) pill does not require the thermal suspension from its magnet due to low hysteresis heating. Based on the PIPER heritage <cit.>, all three stages use chromium potassium alum (CPA) salt as the refrigerant for its relatively high cooling power at low operating temperatures, non-corrosive property, and ease of crystal growth <cit.>. Passive gas-gap heat switches <cit.>, consisting of a thin titanium chamber filled with ^3He and gold-plated copper fins, thermally connect S2 to S3 and S3 to the sorption cooler. As the cold end of the heat switch drops below a target temperature determined by the ^3He fill pressure, ^3He adsorbs onto the fins, thermally isolating the two ends. The S2 and S1 stages are connected with an active superconducting heat switch (SCHS), where a thermal connection is established by applying current to a small Helmholtz coil that drives an interconnecting superconducting lead normal. The SCHS is used between S1 and S2 because ^3He does not have sufficient vapor pressure to act as a thermal conductor below ∼ 250 mK. It also enables rapid control of the S2–S1 thermal exchange, providing flexibility in maintaining the continuous S1 temperature.
The operating concept of the EXCLAIM three-stage CADR is shown in the schematic in Figure <ref>. During the continuous operation, S2 and S3 perform conceptually the same thermodynamic cycle. In leg a, the stage isothermally magnetizes against an exchange at T_recycle, reducing the entropy of the magnetic spins. In leg b, the stage thermally decouples from the exchange and adiabatically demagnetizes, cooling the salt pill to T_cold (below T_recycle of the lower stage). In leg c, it isothermally absorbs heat from the lower stage until it is out of cooling power or the lower stage is fully charged. Finally, in leg d, it thermally decouples from the lower stage and adiabatically magnetizes, bringing the salt pill back to T_recycle. During the continuous operation, S1 is servoed at the 100 mK base temperature, and the active SCHS is used to extract heat from S1 when S2 gets colder. Table <ref> summarizes the properties of the EXCLAIM CADR stages, including the inductance (L), maximum current (I_max), magnetic field-to-current ratio (B/I), and the T_recycle and T_cold values from the lab test setup with a fourth stage (see Section <ref>).
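To make the cycling logic concrete, the following is a minimal, purely illustrative Python sketch of the four-leg cycle for one of the warmer stages (S2 or S3). The controller object, method names, ramp rates, and thresholds are hypothetical placeholders and do not represent the flight-electronics implementation; in particular, the gas-gap heat switches engage passively with temperature and are shown as explicit calls only for clarity.

def recycle_stage(stage, lower_stage, T_recycle, T_cold, I_max, dI=1e-3):
    # Leg a: isothermally magnetize against the exchange at T_recycle,
    # reducing the entropy of the magnetic spins.
    stage.connect_upper_heat_switch(True)
    while stage.current() < I_max:
        stage.ramp_current(+dI)
    # Leg b: decouple from the exchange and adiabatically demagnetize
    # until the salt pill reaches T_cold.
    stage.connect_upper_heat_switch(False)
    while stage.temperature() > T_cold:
        stage.ramp_current(-dI)
    # Leg c: isothermally absorb heat from the lower stage until the
    # cooling power is exhausted or the lower stage is fully charged.
    stage.connect_lower_heat_switch(True)
    while stage.current() > 0 and lower_stage.current() < lower_stage.I_max:
        stage.servo_temperature(T_cold)   # demagnetize slowly to hold T_cold
    # Leg d: decouple from the lower stage and adiabatically magnetize
    # back up to T_recycle, ready for the next cycle.
    stage.connect_lower_heat_switch(False)
    while stage.temperature() < T_recycle:
        stage.ramp_current(+dI)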
Each CADR stage has one primary and one redundant ruthenium oxide (ROX) thermometer read out through 76 μm CuNi-clad NbTi wires to limit parasitic conduction. The electrical interface to the CADR is provided through a custom printed circuit board (PCB), shown in Figure <ref>, which brings in high-current lines from the receiver and provides voltage taps to measure the drop across the superconducting coil in a four-wire configuration. Outside the receiver, vapor-cooled high-current copper leads are directly soldered to the pins of a superfluid-tight feedthrough (see Figure <ref>). This feedthrough also carries lines for the sorption cooler pump heater and its gas-gap heat switch. Inside the receiver, we use 0.31 mm diameter NbTi wires with 0.5 mm copper cladding to transmit up to ∼ 4 A current (Table <ref>) to the CADR. These NbTi wires are soldered to the feedthrough on one end and gold-plated copper bobbins on the other end. The bobbins are attached to the PCB with fasteners and spring washers. The voltage taps are routed out through an MDM31 connector on the PCB and follow the same wiring scheme used for CADR thermometry and elsewhere in the receiver.
§ LAB TESTING
Once the receiver is fully integrated, it will be tested in flight-like conditions in a smaller LHe dewar (48.6-cm diameter and 152-cm depth) in the lab and transferred to the telescope with no changes in configuration for the flight. However, the duration of this integrated receiver test is limited by the availability of LHe, requiring ∼ 100 L of LHe per day of testing. Therefore, we have independently tested and qualified major receiver components before their integration into the receiver.
§.§ Window Test
As described in Section <ref>, all receiver interfaces must remain superfluid tight, including the interface between the receiver lid and the silicon window. To our knowledge, EXCLAIM is the first instrument to employ a silicon vacuum window in the receiver with stringent requirements for superfluid tightness. We, therefore, test the ability of the indium seal to create a superfluid-tight interface between the receiver lid and the window. We mount a copy of the flight window onto a test baseplate with the same interface as the flight receiver lid on one side and a superfluid-tight vacuum interface to connect to a leak checker pump on the other side. The schematic of the test setup is shown in Figure <ref>. Following a similar procedure to that described in Ref. , the indium seal is compressed using titanium fasteners with a stack of spring washers on a retainer ring. The test window hangs inside a small reservoir supplied with superfluid helium by pumps, as shown in Figure <ref>. With the leak checker continuously pumping on the test volume behind the window, we first pour LHe into the test dewar and then pump on the system to create flight-like conditions. As indicated by the ∼ 1.4 K temperature of the ROX on top of the window test base (Figure <ref>), we were able to submerge the window along with the indium seal in superfluid helium. Throughout the cool down, the test volume maintained 2 × 10^-3 mbar pressure with a background He leak rate of 5× 10^-8 mbar l/s, demonstrating a superfluid-tight seal. We plan to repeat this test with the AR-coated flight window and similarly test the high-current feedthrough interface.
§.§ Sorption-cooler Test
We mounted the EXCLAIM sorption cooler on a 3-K bath provided by a pulse-tube cryocooler in the lab to verify its operation and performance. We followed the operational procedure described in Section <ref>. After cycling the cooler, the cold head reached 850 mK without any external load. Since the CADR S3 rejects heat to the sorption cooler, the loading on the sorption cooler during the flight depends on the frequency of S3 recycles. For nominal CADR operation at float, we estimate 82 μW of loading on the sorption cooler <cit.>. To test its cooling capacity, we supplied 100 μW of power (18 μW higher than the expected load) using a 2 kΩ heater on the cold head. During the initial testing, the cold head was able to maintain ∼ 890 mK for ≳35 h with 100 μW load. While this measured cooling capacity meets the baseline cold operation requirements for the flight, we plan to further optimize the operational procedure since the duration over which the head remains cold depends on the efficiency with which the helium charge is initially condensed <cit.>. The integrated receiver test will also help us optimize the frequency of the S3 recycles needed during the flight to better understand the loading on the sorption cooler.
§.§ CADR Test
As seen in Figure <ref>, the EXCLAIM CADR has now been assembled on its mounting ring and is ready to be integrated into the receiver. As other receiver parts needed to integrate the receiver core are being fabricated, we tested the three-stage EXCLAIM CADR using a gadolinium lithium fluoride (GLF) Stage 4 (S4) instead of the sorption cooler. For this test, shown in Figures <ref> and <ref>, S4 was thermally connected to a 3 K heat sink provided by a Gifford-McMahon (GM) cryocooler via a passive gas-gap heat switch. In addition to testing and verifying the CADR performance, this test was also used to qualify the EXCLAIM flight electronics and software for controlling and automating the CADR operation.
In the test setup, the S4 – bath passive heat switch is thermally conductive down to ∼ 1.7 K. With this condition, even when all four stages are fully charged to their respective I_max values (Table <ref>), the CADR does not have sufficient cooling capacity to pull S1 down to its operating temperature from the 3-K bath with a single demagnetization step. Instead, we use a “bootstrapping” technique to build the cooling capacity of the warmer stages over multiple cycles, as shown in Figure <ref>. During the first demagnetization step, as S4 approaches 1.2 K, we start re-building current in S3 while keeping S4 at 1.2 K through a proportional-integral-derivative (PID) control loop in the flight electronics. At this point, the S4 – bath heat switch is non-conducting, allowing S3 to magnetize isothermally. When S4 runs out of cooling power, we ramp down S3 to ∼ 0.5 K (making the S3 – S4 heat switch non-conducting) and let S4 recycle. Following a similar procedure, we build current in S2 as shown in Figure <ref>. Once S3 and S2 have built sufficient current, the SCHS is turned ON, letting S2 cool S1 down to ∼ 100 mK. Finally, we magnetize S1 in preparation for the continuous operation. While this CADR initialization algorithm is time-consuming (∼ 2 hours), it enables prolonged testing of the continuous operation (Figure <ref>) from a 3-K bath with a simple passive heat switch. For the flight, the CADR will be operating from a 900-mK bath, significantly reducing the initialization time, which we plan to test and optimize during the integrated receiver test.
Once the CADR has been initialized with S1 at ∼ 100 mK temperature and ∼ 0.8 A current, the continuous operation (described conceptually in Figure <ref> and Section <ref>) can begin. Figure <ref> shows two typical cycles of continuous operation from the lab test with S1 temperature stable at 99 mK, showing variations below ∼ 1 mK level. The S2, S3, and S4 cyclic operations are autonomously controlled <cit.> from the flight electronics, thus requiring no user input from the ground during the flight. The S1 is PID-controlled at the operating temperature, and the SCHS is manually controlled to allow the operator to recharge S1 as needed. During the lab test, we also qualified the CADR operation with various heater loads on S1 to simulate the parasitic loading during the flight. The CADR initialization and continuous operation, including the S1 temperature stability, will be further optimized during the integrated receiver test with the sorption cooler in place.
§ SUMMARY
EXCLAIM is a balloon-borne mission designed to survey star formation over cosmological time scales using intensity mapping. It uses an open LHe bucket dewar to house a fully cryogenic telescope that allows fast integration in dark atmospheric windows. Since the receiver sits in a superfluid helium bath at float, all the receiver interfaces, including the silicon window, were designed to be superfluid-tight. The receiver houses a three-stage CADR that maintains the spectrometer package at 100 mK and a sorption cooler that provides a 900 mK intermediate thermal stage. Given the limited test time with LHe for the integrated receiver, we have independently tested and qualified major receiver components. Next, we plan to integrate the receiver core and test it in a custom pulse-tube-cooled system, followed by a LHe dewar test before the planned flight in September 2025.
EXCLAIM began in April 2019 as a 5-year NASA Astrophysics Research and Analysis (APRA 1263 17-APRA17-0077) grant. S.D. is supported under NASA-JHU Cooperative Agreement 80NSSC19M005.
|
http://arxiv.org/abs/2409.03149v1 | 20240905005625 | Non-stationary and Sparsely-correlated Multi-output Gaussian Process with Spike-and-Slab Prior | [
"Wang Xinming",
"Li Yongxiang",
"Yue Xiaowei",
"Wu Jianguo"
] | stat.ML | [
"stat.ML",
"cs.LG",
"cs.MA",
"cs.SY",
"eess.SY"
] |
Non-stationary and Sparsely-correlated Multi-output Gaussian Process with Spike-and-Slab Prior[The code capsule has been submitted to Code Ocean with provisional DOI: 10.24433/CO.4010696.v1. ]
Xinming Wang
Department of Industrial Engineering and Management, Peking University, China. [email protected]
Yongxiang Li
Department of Industrial Engineering and Management, Shanghai Jiaotong University, China
Xiaowei Yue
Department of Industrial Engineering, Tsinghua University, China.
Jianguo Wu
Department of Industrial Engineering and Management, Peking University, China. [email protected]
Multi-output Gaussian process (MGP) is commonly used as a transfer learning method to leverage information among multiple outputs. A key advantage of MGP is providing uncertainty quantification for prediction, which is highly important for subsequent decision-making tasks. However, traditional MGP may not be sufficiently flexible to handle multivariate data with dynamic characteristics, particularly when dealing with complex temporal correlations. Additionally, since some outputs may lack correlation, transferring information among them may lead to negative transfer. To address these issues, this study proposes a non-stationary MGP model that can capture both the dynamic and sparse correlation among outputs. Specifically, the covariance functions of MGP are constructed using convolutions of time-varying kernel functions. Then a dynamic spike-and-slab prior is placed on correlation parameters to automatically decide which sources are informative to the target output in the training process. An expectation-maximization (EM) algorithm is proposed for efficient model fitting. Both numerical studies and a real case demonstrate its efficacy in capturing dynamic and sparse correlation structure and mitigating negative transfer for high-dimensional time-series data. Finally, a mountain-car reinforcement learning case highlights its potential application in decision making problems.
Transfer learning, Gaussian process, non-stationary correlation, negative transfer.
§ INTRODUCTION
Gaussian process (GP) provides an elegant and flexible Bayesian non-parametric framework for modeling nonlinear mappings <cit.>. Characterized solely by mean and covariance functions, it is capable of capturing complex input-output relationships, as well as measuring prediction uncertainty which is critical for decision-making. As a result, GP has been widely applied in various fields, such as
Bayesian optimization <cit.>, experiment design <cit.>, and product quality monitoring <cit.>.
However, standard GP is designed for only one single output, which limits its use in multi-output or multi-task scenarios arising in various fields, such as Bayesian optimization <cit.>, traffic network <cit.>, and computer simulation emulator <cit.>. Consequently, multi-output Gaussian process (MGP) has been gaining increasing attention from researchers and has emerged as an important member in the vast family of transfer learning <cit.> and multi-task learning <cit.> methods.
Stationary GP and MGP models are commonly used with covariance functions that depend only on the distance between data points. However, this invariance to translations of the input space makes them unsuitable for non-stationary environments, where the data characteristics vary across the input domain <cit.>.
This phenomenon is quite common in time-series data. For instance, in the energy field, the mean of power consumption in a household is different in every season <cit.>.
In clinical studies, sepsis is very likely to cause changes in the cross-correlation among vital signs in the early onset <cit.>.
In kinesiology, the cooperation patterns of human joints vary across different gestures, e.g., both hands move jointly in a `shoot' action but separately in a `throw' action <cit.>.
In such cases, non-stationary models, which allow all or a subset of the parameters to vary, are generally more appropriate. Modeling and capturing such structural changes is important for subsequent decision-making tasks, such as identifying the risk of disease and initiating medical care <cit.>.
Two main kinds of methods have been proposed to capture the dynamic characteristics of non-stationary data.
The first category assumes that the parameters are the same within local regions but differ across regions. For example, a Bayesian tree-based GP <cit.> uses a tree structure to partition the input space of a computer simulation emulator. Another method, called jump GP, cuts the input space into several segments to model piece-wise continuous functions <cit.>. This model is optimized using the expectation-maximization (EM) algorithm or variational inference. Besides, a clustering-based GP <cit.> partitions spatial data into groups by calculating a cluster dissimilarity and constructs stationary GPs for each group of data. A space-partitioning based GP has further been extended to active learning to accelerate the design of experiments for heterogeneous systems <cit.>. However, these methods are not suitable for data with gradually-changing characteristics.
To address this issue, the methods in the second category abandon the locally-stationary assumption. They allow all or some parameters to be input/time-dependent, and model those parameters by additional GPs. For instance, the non-stationary GPs introduced in <cit.> and <cit.> place GP priors on the amplitude and length-scale parameters of a squared-exponential covariance function.
In addition, based on these single-output non-stationary GPs, researchers have explored MGP models for multivariate data with dynamic characteristics. For example, a non-stationary MGP is established to model the varying correlation between vital signals, where a GP prior is imposed on the time-dependent correlation matrix <cit.>.
However, in this state-of-the-art MGP model, the GP prior has no shrinkage effect, which does not encourage a sparse estimation of cross-correlation among multiple outputs.
The pursuit of sparse cross-correlation estimation is rooted in negative information transfer, which is another critical challenge when using MGP. Transfer learning is very promising for leveraging information from correlated domains, called source domains, to a data-insufficient domain, called the target domain. However, not all the data from the source domains are necessarily correlated with the target domain. If knowledge is transferred from uncorrelated domains, it may degrade the performance of target learning, a phenomenon known as negative transfer <cit.>. For example, the motion signal of a specific human joint may only be correlated with a subset of the other joints. In order to recover the joint's motion information by borrowing information from others, it is necessary to detect which joints share similar moving trajectories with the target joint. Therefore, it is crucial that researchers or engineers can make the best choice of which sources to transfer information from.
Negative transfer exists widely in transfer learning, often stemming from the excessive inclusion of source data. To handle this issue, one straightforward approach is to measure the relatedness of each source to the target and choose the most related one for information transfer. For example, the method proposed in <cit.> takes Jensen-Shannon (JS) divergence as a criterion and selects the source with the least divergence for knowledge transfer. However, such a method only takes pairwise transferability into account and ignores the global structure between the target and the sources. Moreover, this choice is made independently of the specific model before training, which is far from an optimal decision. An alternative approach is the regularized MGP <cit.>, which jointly models all outputs and selects informative sources during the training process. However, all these approaches assume that the source-target cross-correlation is fixed over time, and thus cannot model the dynamic and sparse structure among multiple outputs.
To this end, we propose a non-stationary MGP model to capture the varying characteristics of data and mitigate the negative transfer simultaneously. Specifically, we focus on modeling the dynamic and sparse correlations between the sources and the target.
In the proposed framework, we first construct a convolution-process-based MGP for transfer learning, whose covariance function parameters are allowed to vary in time space. We then apply a spike-and-slab prior to the parameters that are related to the sparse correlation between the sources and the target. The slab part mainly accounts for smoothly-changing or constant correlation parameters, while the spike part is responsible for shrinking some parameters to zero, thereby removing the corresponding uninformative sources.
To the best of our knowledge, this is the first research on MGP that simultaneously handles dynamic relationship and negative transfer.
Our contributions can be summarized as follows:
* A novel non-stationary MGP model is established using the convolution of latent processes and time-dependent kernel functions, which is suitable for modeling multiple outputs with varying characteristics.
* A dynamic spike-and-slab prior is applied to capture the temporal and sparse correlations among outputs, deciding from which sources to transfer information to the target.
* The mixture weight of the spike and slab priors is automatically adjusted during the training process using an EM-based optimization algorithm, which can effectively prevent placing shrinkage effects on non-zero elements.
The rest of this paper is organized as follows. In Section <ref>, we revisit the related literature and the static MGP model. Section <ref> presents the proposed non-stationary MGP and an efficient EM algorithm for model training. In Section <ref>, we evaluate the effectiveness of our model on simulated data. In Section <ref>, we perform one time-series analysis case on human gesture data <cit.> and one control policy optimization case on the mountain-car problem <cit.>. In Section <ref>, we conclude the paper with a discussion.
§ PRELIMINARIES
In this section, we first review research related to our work. We then introduce the static MGP based on the convolution process, which has been widely applied in various areas due to its flexibility <cit.>.
§.§ Related work
To deal with non-stationary data, a natural extension of GPs is to relax the restriction that the parameters of the covariance functions are invariant throughout the input space. Most of the existing approaches either encourage the parameters to be constant in a local area and construct a piece-wise model (the first category, e.g., <cit.>),
or allow them to vary at each point and model them using other GPs (the second category, e.g., <cit.>). The methods in the second category have a similar structure to that of a two-layer Deep Gaussian Process (Deep GP), where the input is first transformed by the first GP layer into a latent input, and then fed into the second GP layer to obtain the output <cit.>. However, the parameters of Deep GP are stationary, which differs from the second category where the covariance parameters are dynamic.
Non-stationary GPs mainly focus on modeling dynamic mean, smoothness, and amplitude parameters. With regard to MGP, dynamic correlation is another key characteristic that needs to be considered. In the classical Linear Model of Coregionalization (LMC), each output is a linear combination of several latent Gaussian processes, and the covariance matrix is modeled by a Kronecker product of a correlation matrix and a single GP's covariance matrix <cit.>. The existing non-stationary MGPs are mainly extensions of the classical LMC model.
For example, the approach in <cit.> allows the correlation matrix to vary with inputs to model the dynamic relationship among outputs.
In <cit.>, a non-stationary MGP combines the time-varying correlation (across outputs) and smoothness (within each output) together. However, as extensions of the traditional LMC, these methods also suffer from the limitation that all outputs possess the same covariance structure. More flexible MGPs are proposed by constructing each output through the convolution process and modeling them with individual parameters <cit.>. However, these approaches are for stationary data. Furthermore, all existing approaches fail to capture a sparse correlation structure in a non-stationary environment.
Spatial-temporal modeling of non-stationary data is closely related to our work. In comparison with normal time-series modeling, spatial-temporal analysis requires to model the spatial correlation to enhance the prediction accuracy. A large number of spatial-temporal models have been investigated, such as spatial-temporal auto-regressive integrated moving average method (ST-ARIMA) <cit.>, spatial-temporal k-nearest neighbors (ST-KNN) <cit.>, spatial-temporal random fields <cit.>, and spatial-temporal deep neural networks <cit.>. Based on the aforementioned methods, a number of recent works try to extend them to handle non-stationary spatial-temporal data. One popular and efficient solution is utilizing some change detection algorithm to partition the time-series into several stationary periods, and then applying the stationary model for each period, e.g., a ST-KNN with a wrapped K-means partition algorithm <cit.>, an auto-regressive model coupled with a block-fused-Lasso change-point detection algorithm <cit.>. Besides partitioning the time-series into stationary parts, the method proposed by <cit.> maps the non-stationary space-time process into a high-dimensional stationary process through augmenting new dimensions. Another type of non-stationary spatial-temporal model is Bayesian random fields with non-stationary space-time kernels <cit.>, whose hyper-parameters change over time or location. Deep learning methods are also explored on non-stationary data recently, such as non-stationary recurrent neural networks <cit.>, long short-term memory networks <cit.>, and transformer-based networks <cit.>. In contrast to the spatial-temporal model, MGP does not impose a restriction that the source outputs must be sampled during the same period as the target outputs. Furthermore, it does not depend on spatial distance to establish correlations among outputs. As a result, the MGP model is capable of accommodating a wider range of scenarios.
It is important to mention that we use a dynamic spike-and-slab prior in our model. The classical spike-and-slab prior is a Bayesian selection approach <cit.> that has been used for feature selection in (generalized) linear models, additive models, and Gaussian processes <cit.>. With this prior, smaller parameters tend to be more influenced by the spike prior to reach zero, while the larger ones are mainly dominated by the slab part and bear little shrinkage. However, the classical spike-and-slab prior cannot account for the modeling of dynamic and sparse correlation parameters in our model. Therefore, we propose to extend this prior to a dynamic version. Although current works have explored the dynamic variable selection for the varying-coefficient linear models <cit.>, no work utilizes a dynamic spike-and-slab prior to model the dynamic and sparse correlation among outputs in a non-stationary MGP.
§.§ Static MGP based on convolution process
Consider a set of m outputs f_i : 𝒳↦ℝ, i=1,...,m, where 𝒳 is a d-dimensional input domain applied to all outputs. Suppose that the observation y_i is accompanied with independent and identically distributed (i.i.d.) noise ϵ_i ∼𝒩(0, σ_i^2), i.e.,
y_i (x) = f_i(x) + ϵ_i.
where x∈ℝ^d is the input.
Denote the n_i observed data for the ith output as 𝒟_i={X_i, y_i}, where X_i=(x_i,1,...,x_i,n_i)^T and y_i=(y_i,1,...,y_i,n_i)^T are the collections of input points and associated observations respectively. Let the total number of observations be represented by N = ∑_i n_i for m outputs. Denote the data of all outputs as 𝒟={X, y}, where X=(X_1^T,...,X_m^T)^T ∈ℝ^N × d and y=(y_1^T,...,y_m^T )^T ∈ℝ^N.
In an MGP model, the observation vector y follows a joint Gaussian distribution:
y| X∼𝒩[ 0, K ],
where K=K(X,X) ∈ℝ^N × N is a block-partitioned covariance matrix.
The (i,i^')-th block of K, K_i,i^'= cov_i,i^'^f(X_i,X_i^') + τ_i, i^'σ_i^2 I∈ℝ^n_i × n_i^', represents the covariance matrix between output i and output i^' (τ_i, i^' equals 1 if i = i^', and 0 otherwise). The function cov_i,i^'^f(x, x^') measures the covariance between f_i(x) and f_i^'(x^'). In the covariance matrix K, the cross-covariance blocks K_i,i^' (i ≠ i^') are the most important parts for realizing information transfer, as they model the correlation between different outputs.
As the convolution of a GP and a smoothing kernel is still a GP, we can construct each output f_i through convolving a group of shared latent processes {z_j(x)}_j=1^h and kernel functions {g_ji(x)}_j=1^h in the following way <cit.>:
f_i(x) = ∑_j = 1^h α_jig_ji(x)∗ z_j(x) = ∑_j = 1^h α_ji∫_-∞^∞ g_ji(x-u) z_j(u) d u
where ∗ represents convolution operation, α_ji is the amplitude parameter, and h is the number of shared latent processes. Usually, {z_j(x)}_j=1^h are independent white Gaussian noise processes with cov(z(x), z(x^')) = δ(x-x^'), where δ(·) is the Dirac delta function. Thus, the covariance function can be derived as:
cov_i,i^'^f (x, x^') = cov[f_i(x), f_i^'(x^')]
= ∑_j=1^h cov{α_ji g_ji(x)∗ z_j (x), α_ji^' g_ji^'(x^')∗ z_j (x^')}
=∑_j=1^h α_jiα_ji^'∫_-∞^∞ g_ji(u)g_ji^'(u-v)d u,
where v=x-x^'.
In such a way, the covariance between f_i(x) and f_i^'(x^') depends on their difference x-x^', the amplitude parameters, and the hyperparameters in the kernels g_ji and g_ji^'. Compared with the classical LMC model f_i(x) = ∑_j = 1^h α_ji q_j(x),
where q_j(x) is a latent GP with covariance k_j(x, x^'), the convolution-process-based MGP is more flexible, as it does not restrict all outputs to have the same auto-covariance pattern.
At a new point x_*, the posterior distribution of y_i(x_*) given data {X,y} is:
y_i(x_*)| X,y∼𝒩( μ(x_*), Σ(x_*) ),
where the predictive mean μ(x_*) and variance Σ(x_*) can be expressed as:
μ(x_*) =K_*^T K^-1y,
Σ(x_*) = cov_ii^f(x_*, x_*) + σ_i^2 - K _*^T K^-1K_*,
where K_*^T = [ cov_i1^f(x_*, X_1)^T, ..., cov_im^f(x_*, X_m)^T] is the covariance between the new point x_* and all observed data. From the posterior distribution, we can find that the covariance function plays a crucial role in prediction. For instance, the predicted mean is the linear combination of output data, where the weight is decided by the covariance matrix. However, in the static MGP, the covariance between two data points depends solely on their distance and does not change dynamically. Additionally, some outputs may be uncorrelated with others, therefore the estimated covariance matrix should possess a sparse structure to avoid negative transfer between the uncorrelated outputs. In the following section, we will propose a novel non-stationary MGP to simultaneously address both problems.
§ MODEL DEVELOPMENT
We propose a non-stationary MGP model for transfer learning that can capture sparse source-target correlation in a dynamic environment. Specifically, we assume that the correlations between the target and each source vary over time. Besides, some sources may not be related to the target during certain time periods.
Under such a circumstance, a spike-and-slab prior is utilized to model the varying and sparse correlation structure.
§.§ The proposed model.
The structure of our hierarchical model is illustrated in fig: graphical structure. The first layer constructs outputs through the convolution of time-dependent kernel functions and latent white Gaussian noise processes, and the second layer consists of priors on function parameters designed to encourage desired properties, such as smoothness and sparsity.
§.§.§ Dynamic MGP
In this subsection, we introduce the major part of the proposed non-stationary MGP, which corresponds to the first layer in fig: graphical structure.
For ease of presentation, we slightly abuse the notation used in the previous section. Specifically, we take the first m-1 outputs f_i : 𝒳↦ℝ, i=1,...,m-1 as the sources, and the last one f_m : 𝒳↦ℝ as the target. We still assume that the observation y_i is accompanied by i.i.d. measurement noise ϵ_i ∼ N(0, σ_i^2). Let ℐ={1,2,...,m} be the index set of all outputs, and ℐ^S=ℐ/{m} contain the indices of all sources. For simplicity yet without loss of generality, we assume the source and target data are sampled at times t ∈{1, 2, ... ,n}, i.e., X_i = (x_i, 1, ..., x_i, t, ..., x_i, n)^T and y_i = (y_i,1,...,y_i,t,...,y_i,n)^T. In Appendix A, we will show a more general case where each output is observed only at a subset of time stamps.
Based on the above assumption, our dynamic MGP model is formulated as:
y_i (x_t) = f_i(x_t) + ϵ_i =α_ii, t g_ii, t(x_t)∗ z_i(x_t) + ϵ_i , i ∈ℐ^S
y_m (x_t) = f_m(x_t) + ϵ_m =∑_j ∈ℐα_jm, tg_jm, t(x_t) ∗ z_j(x_t) + ϵ_m
where α_ii, t and α_jm,t are time-varying amplitude parameters, g_ii, t(x) and g_jm, t(x) are time-varying kernel functions, and {z_i(x)}_i=1^m are latent white Gaussian noise processes independent of each other.
This model is highly flexible as various types of kernel functions can be utilized. We choose to employ a Gaussian kernel which is widely used due to its flexibility <cit.>. The kernel is given by
g_ij, t(x)= (2π)^-d/4|θ_ij, t|^-1/4exp(-1/2x^Tθ_ij, t^-1x),
where θ_ij, t is a diagonal matrix representing the length-scale for each input feature. More importantly, such a Gaussian kernel can yield closed-from covariance functions through the convolution operation in eq:cov in convolution process <cit.>.
This flexible model allows each source to have its own kernel g_ii, thereby accommodating heterogeneity among the sources. In order to transfer knowledge from the sources to the target, the target is connected to {z_i}_i=1^m-1 through the kernel functions g_im, t. Regarding the parameters, {α_ii, t, θ_ii, t}_i=1^m are responsible for the non-stationary behavior within each output, while {α_im, t, θ_im, t}_i=1^m-1 capture the dynamic correlation between the target and the sources. More specifically, the amplitude parameter α_im, t controls the knowledge transfer. For example, if α_im, t=0, then f_i will not transfer information to f_m at time t.
As the latent processes are independent of each other, the cross-covariance blocks between different sources are zero. Therefore, the covariance matrix can be re-partitioned as:
K=
[ [ K_1,1 ⋯ 0 K_1,m; ⋮ ⋱ ⋮ ⋮; 0 ⋯ K_m-1,m-1 K_m-1,m; K_1,m^T ⋯ K_m-1,m^T K_m,m ] ]
=
[ [ K_(ss) K_(sm); K_(sm)^T K_mm ] ],
where the (i,i^')-th block K_i,i^'= cov_i,i^'^f(X_i,X_i^') + τ_i, i^'σ_i^2 I, the block-diagonal matrix K_(ss) represents the covariance of source outputs, and K_(sm) represents the cross-covariance between the sources and the target.
Based on eq:cov in convolution process,
we can obtain covariance functions for the proposed non-stationary MGP model, as shown below:
cov_ii^f(x_t, x_t^')
= α_ii, tα_ii, t^'|θ_ii, t|^1/4 |θ_ii, t^'|^1/4/|θ_ii, t + θ_ii, t^'|^1/2exp[ -1/2(x_t - x_t^')^T(θ_ii, t + θ_ii, t^')^-1(x_t - x_t^')],
cov_im^f(x_t, x_t^')
= α_ii, tα_im, t^'|θ_ii, t|^1/4 |θ_im, t^'|^1/4/|θ_ii, t + θ_im, t^'|^1/2exp[ -1/2(x_t - x_t^')^T(θ_ii, t + θ_im, t^')^-1(x_t - x_t^')],
cov_mm^f(x_t, x_t^')
= ∑_j ∈ℐα_jm, tα_jm, t^'|θ_jm, t|^1/4 |θ_jm, t^'|^1/4/|θ_jm, t + θ_jm, t^'|^1/2exp[ -1/2(x_t - x_t^')^T(θ_jm, t + θ_jm, t^')^-1(x_t - x_t^')],
where i ∈ℐ^S.
Equations (<ref>- <ref>) represent the covariance within the sources, between the sources and the target, and within the target, respectively. To ensure the positivity of those hyper-parameters, we utilize a soft-plus transformation for them <cit.>:
α_ij,t = log[1 + exp(α̃_ij,t)], θ_ij,t = log[1 + exp(θ̃_ij,t)],
where α̃_ij,t, θ̃_ij,t are underlying parameters to estimate, whose range is [-∞, ∞].
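As a concrete illustration, the short NumPy sketch below evaluates the source-target cross-covariance above for scalar inputs (d = 1), with the soft-plus transform applied to the underlying parameters; the variable names are ours, and the snippet only mirrors the closed form rather than the actual implementation.

import numpy as np

def softplus(u):
    # Maps an unconstrained parameter to a positive one, as in the text.
    return np.log1p(np.exp(u))

def cross_cov(x_t, x_s, alpha_ii_t, alpha_im_s, theta_ii_t, theta_im_s):
    # Non-stationary cross-covariance cov_im^f(x_t, x_s) for d = 1, where the
    # alpha and theta arguments are the (already positive) transformed values.
    norm = (theta_ii_t ** 0.25) * (theta_im_s ** 0.25) / np.sqrt(theta_ii_t + theta_im_s)
    quad = -0.5 * (x_t - x_s) ** 2 / (theta_ii_t + theta_im_s)
    return alpha_ii_t * alpha_im_s * norm * np.exp(quad)

# Hypothetical underlying parameter values at two time stamps.
a_ii, a_im = softplus(0.3), softplus(-0.2)
th_ii, th_im = softplus(1.0), softplus(0.5)
print(cross_cov(1.0, 3.0, a_ii, a_im, th_ii, th_im))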
The proposed covariance functions can be viewed as an extension of the non-stationary kernels <cit.> from the single-output to the multi-output case. Specifically, the auto-covariance of each source in eq: dynamic source auto-cov is the same as the covariance for a single-output non-stationary GP. From the cross-covariance between each source and the target, we can clearly see that the amplitude parameter α_im, t controls whether the cross-correlation is zero or not. The validity of the proposed covariance functions is outlined in Proposition <ref>:
The proposed non-stationary MGP covariance matrix in eq: covariance matrix is positive-definite, i.e., ∀y≠0,
y^T Ky > 0.
The proof is provided in Appendix B.
Based on eq: covariance matrix, the joint distribution of all sources and the target is expressed as:
[ [ y_(s); y_m ] ]| X∼𝒩(
[ [ 0; 0 ] ],
[ [ K_(ss) K_(sm); K_(sm)^T K_ mm ] ]),
where y_(s) is the collection of all source data. For notational convenience, we partition the parameters in a similar way,
α_(s) = {α_(s), t}_t=1^n,
θ_(s) = {θ_(s), t}_t=1^n,
σ_(s) = {σ_i}_i=1^m-1,
α_m = {α_m, t}_t=1^n ,
θ_m = {θ_m, t}_t=1^n,
where α_(s), t={α_ii, t}_i=1^m-1, θ_(s), t={θ_ii, t}_i=1^m-1, α_m, t={α_im, t}_i=1^m, and θ_m, t = {θ_im, t}_i=1^m. Furthermore, we denote the collection of all parameters as Φ = {Φ_(s), Φ_m}, where Φ_(s) = {α_(s), θ_(s), σ_(s)} and Φ_m = {α_m, θ_m, σ_m}.
In the proposed model, the most important and challenging task is to estimate those time-varying parameters. If no restriction is applied, model training may suffer from a serious over-fitting problem. To address this issue, Gaussian processes are typically employed to model the kernel parameters, e.g.,
log(α_t) ∼𝒢𝒫(0, k_α(t, t^')), log(θ_t) ∼𝒢𝒫(0, k_θ(t, t^')), where k_α, k_θ are covariance functions for the amplitude and length-scale parameters respectively. Although this technique can force the parameters to vary smoothly and reduce over-fitting, it cannot model a sparse correlation between the sources and the target. Consequently, this approach cannot avoid the negative transfer caused by unrelated sources.
§.§.§ Dynamic spike-and-slab prior.
The classical spike-and-slab prior <cit.> only handles the shrinkage of a single parameter and cannot model smoothly-varying parameters.
To take both the dynamic and sparse property of correlation into account, we propose a dynamic spike-and-slab prior placing on α_m:
α_im, t|γ_i, t, α_im, t-1 ∼(1-γ_i, t) p_spike(α_im, t)+γ_i, tp_slab(α_im, t| α_im, t-1)
γ_i, t|η ∼ Bern(η),
where γ_i, t∈{0,1} is a binary sparsity indicator for α_im, t following a Bernoulli distribution, p_spike(α_im, t) is a zero-mean spike prior pushing the parameter toward zero, p_slab(α_im, t| α_im, t-1) is a slab prior connecting α_im, t-1 and α_im, t, and η is a prior weight between the spike and the slab. If there is no prior information regarding the weight, we can set η to 0.5. The spike-and-slab prior is shown in the second layer of the graphical structure in fig: graphical structure. As for all the other parameters, we do not force them to possess sparsity, so only the slab prior is placed on them to control the smoothness, i.e.,
α_ii, t| α_ii, t-1 ∼p_slab(α_ii, t| α_ii, t-1),
θ_ii, t| θ_ii, t-1 ∼p_slab(θ_ii, t| θ_ii, t-1),
θ_im, t| θ_im, t-1 ∼p_slab(θ_im, t| θ_im, t-1).
Note that θ_ij, t is a diagonal matrix, so that the slab prior is placed on its d diagonal elements independently, i.e., p_slab(θ_ii, t| θ_ii, t-1) = ∏_l=1^d p_slab({θ_ii, t}_l| {θ_ii, t-1}_l).
By using the conditional distributions as priors, we can control the change of amplitude or smoothness from the previous time step to the current one, e.g., from α_ii, t-1 to α_ii, t.
Compared with the dynamic spike-and-slab prior used in linear models <cit.>, our method does not constrain the slab prior to be a stable autoregressive process. Besides, we use a simpler but more flexible prior for γ_i,t, while the work in <cit.> uses a prior conditional on the coefficients of the linear model.
The spike prior is responsible for shrinking the parameters to zero and cutting down the information transfer channel to the target. Common choices for this prior include point mass distribution, Laplace distribution, and Gaussian distribution with a small variance. Considering the shrinkage performance and optimization convenience, we choose the Laplace distribution as the spike prior, i.e.,
p_spike(α_im, t) = 1/2ν_0exp(-| α_im, t |/ν_0),
where ν_0 is the length-scale of the Laplace distribution. When the model is optimized by maximizing the log-posterior, this prior becomes an L_1-norm penalty and is thus able to shrink parameters.
The slab prior encourages the smoothness of parameter change. In this work, we consider two types of slab priors. The first one is a hard slab prior,
p_slab^hard(α_im, t| α_im, t-1) = 1/2ν_1exp(-| α_im, t - α_im, t-1|/ν_1),
which encourages α_im, t to remain constant in a continuous period, approximating a piecewise model. In the second one, the parameters are allowed to change smoothly,
p_slab^soft(α_im, t| α_im, t-1) = 1/√(2 πν_1)exp(-( α_im, t - ρα_im, t-1)^2/2ν_1),
where ν_1 is the variance of the Gaussian distribution, and ρ < 1 is an autoregressive coefficient. A similar smoothing approach can also be found in <cit.>. These two slab priors make the current parameter value exactly or roughly concentrated around the previous value. Typically, we set ν_0 to be much smaller than √(ν_1) in the soft slab prior (or ν_1 in the hard slab prior) to make the two priors more separable and to put more penalty on sparsity. Besides, the values of α_im at multiple time steps before t can be included in the soft slab prior, e.g., p_slab^soft(α_im, t| α_im, t-1, α_im, t-2, ...). We choose the simplest form p_slab^soft(α_im, t| α_im, t-1) due to its wide application and robust performance in practice.
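For clarity, the log-densities of the spike and the two slab priors can be written compactly as below; the hyper-parameter values at the end are arbitrary placeholders chosen only to reflect the guideline that ν_0 is much smaller than √(ν_1) (soft slab) or ν_1 (hard slab).

import numpy as np

def log_spike(alpha_t, nu0):
    # Laplace spike centered at zero: shrinks alpha_t toward 0.
    return -np.log(2.0 * nu0) - np.abs(alpha_t) / nu0

def log_slab_hard(alpha_t, alpha_prev, nu1):
    # Laplace slab on the increment: favors piece-wise constant trajectories.
    return -np.log(2.0 * nu1) - np.abs(alpha_t - alpha_prev) / nu1

def log_slab_soft(alpha_t, alpha_prev, nu1, rho=0.98):
    # Gaussian autoregressive slab: favors smoothly varying trajectories.
    return -0.5 * np.log(2.0 * np.pi * nu1) - (alpha_t - rho * alpha_prev) ** 2 / (2.0 * nu1)

nu0, nu1 = 0.01, 0.1   # placeholder values with nu0 << sqrt(nu1)
print(log_spike(0.05, nu0), log_slab_hard(0.05, 0.04, nu1), log_slab_soft(0.05, 0.04, nu1))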
At time t, η can be interpreted as the prior probability that α_im, t belongs to the slab process. It influences the strength of shrinkage effect that α_im, t bears. Ideally, for non-zero α_im, t, the posterior mean of γ_i,t should be close to one so that α_im, t is barely impacted by the spike prior. In the optimization algorithm developed in the next subsection, we will show that the estimated mean of γ_i,t is automatically adjusted based on the estimated α_im, t to avoid shrinking non-zero elements. This makes our method superior to traditional Lasso methods where the sparse penalty weights are identical for zero and non-zero parameters.
Finally, based on the above discussion, the whole hierarchical model of the proposed non-stationary MGP can be expressed as follows:
y_(s), y_m | Φ_(s), Φ_m ∼𝒩 (0, K)
α_ii, t| α_ii, t-1∼p_slab(α_ii, t| α_ii, t-1), i∈ℐ^S,
θ_ii, t| θ_ii, t-1∼p_slab(θ_ii, t| θ_ii, t-1), i∈ℐ^S,
α_im, t|γ_i, t, α_im, t-1∼ (1-γ_i, t) p_spike(α_im, t) +γ_i, tp_slab(α_im, t| α_im, t-1), i∈ℐ,
γ_i, t|η∼ Bern(η), i∈ℐ,
θ_im, t| θ_im, t-1∼p_slab(θ_im, t| θ_im, t-1), i∈ℐ.
§.§ Expectation-maximization-based optimization algorithm
The most widely used algorithm for Bayesian models is Markov Chain Monte Carlo (MCMC) sampling, but it is computationally inefficient for the proposed non-stationary model with a considerable number of time-varying parameters. Therefore, we develop an efficient EM algorithm. Instead of directly maximizing the posterior p(Φ|y)=p(Φ_(s), Φ_m | y_(s), y_m), we proceed iteratively in terms of the complete log-posterior log p(Φ, γ | y), where the binary parameters γ are treated as “missing data”. Since this function is not observable, in the Expectation step (E-step), we calculate its conditional expectation given the observed data and the currently estimated parameters. Then, in the Maximization step (M-step), we maximize the expected complete log-posterior with respect to Φ.
More precisely, the E-step and M-step at the (k+1)th iteration can be expressed as:
E-step: Q(Φ | Φ^(k)) = E_γ | Φ^(k), y{log p(Φ, γ | y)},
M-step: Φ^(k+1) = arg max_Φ{ Q(Φ | Φ^(k)) }
where E_γ | Φ^(k), y (·) is the conditional expectation on posterior of γ, and Φ^(k) is the optimized parameters at the kth iteration. For simplicity, we use E_γ (·) to denote E_γ | Φ^(k), y (·).
Based on Bayes' Theorem and the properties of the multivariate normal distribution, the expectation of log p(Φ, γ | y) can be expressed as (derivation details can be found in Appendix C):
E_γ{log p(Φ, γ | y) }
= -1/2{y_(s)^T K_(ss)^-1y_(s) + log|K_(ss)| + (y_m-μ)^T Σ^-1 (y_m-μ) + log|Σ| }
+ ∑_i=1^m-1∑_t=2^n [ logp_slab(θ_ii, t| θ_ii, t-1) + logp_slab(α_ii, t| α_ii, t-1) ]
+ ∑_i=1^m ∑_t=2^n [ logp_slab(θ_im, t| θ_im, t-1) + (1-E_γγ_i, t) logp_spike(α_im, t)
+ E_γγ_i, tlogp_slab(α_im, t| α_im, t-1) ] + const.,
where μ=K_(sm)^T K_(ss)^-1y_(s) is the conditional mean of target given the sources and Σ=K_mm-K_(sm)^T K_(ss)^-1K_(sm) is the conditional covariance.
In the E-step, since γ is only dependent on Φ_m, the posterior of γ_i, t is calculated as:
p(γ_i, t| Φ^(k)_m) = p(α_im, t^(k) | γ_i, t) p(γ_i, t)/p(α_im, t^(k))∝ p(α_im, t^(k) | γ_i, t) p(γ_i, t)
=[ (1-γ_i, t) p_spike(α_im, t^(k)) +γ_i, tp_slab(α_im, t^(k)| α_im, t-1^(k)) ] ·η^γ_i, t(1-η)^(1-γ_i, t).
Then the conditional expectation of γ_i, t can be updated as:
E_γγ_i, t = ηp_slab(α_im, t^(k)| α_im, t-1^(k))/(1-η) p_spike(α_im, t^(k)) +ηp_slab(α_im, t^(k)| α_im, t-1^(k)),
The posterior mean E_γγ_i, t can be interpreted as the posterior probability of classifying the current parameter α_im, t into a slab process as opposed to a spike process, based on the past value α_im, t-1. For example, we set η to 0.5 as a non-informative prior, and take a small ν_0 (e.g., 0.01) for the spike prior and a large ν_1 (e.g., 0.1) for the soft slab prior. Based on eq: gamma update, supposing that (α_im, t-α_im, t-1)^2 is small and |α_im, t| is large, the expectation E_γγ_i, t will tend towards one, indicating that α_im, t is more likely from the slab prior.
On the other hand, if |α_im, t| is small, E_γγ_i, t is close to zero, enforcing strong shrinkage on α_im, t.
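The E-step update above amounts to a single reweighting of the spike and slab densities; the self-contained sketch below (with the placeholder hyper-parameters η = 0.5, ν_0 = 0.01, ν_1 = 0.1 used in the discussion above) illustrates how the posterior weight adapts to the current estimate of α_im, t.

import numpy as np

def e_step_gamma(alpha_t, alpha_prev, eta=0.5, nu0=0.01, nu1=0.1, rho=0.98):
    # Laplace spike density at alpha_t and Gaussian (soft) slab density
    # conditioned on alpha_prev, as defined in the text.
    spike = np.exp(-np.abs(alpha_t) / nu0) / (2.0 * nu0)
    slab = np.exp(-(alpha_t - rho * alpha_prev) ** 2 / (2.0 * nu1)) / np.sqrt(2.0 * np.pi * nu1)
    # Posterior probability that alpha_{im,t} belongs to the slab process.
    return eta * slab / ((1.0 - eta) * spike + eta * slab)

# A large, slowly-varying alpha is assigned to the slab (weight close to 1),
# while a small alpha is dominated by the spike (weight close to 0).
print(e_step_gamma(1.0, 0.95))
print(e_step_gamma(0.02, 0.01))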
In the M-step, we can optimize the objective function in eq: EM objective with various gradient ascent methods, such as (stochastic) ADAM, L-BFGS, etc. <cit.>.
This objective function is actually a standard Gaussian process log-likelihood with additional regularization terms. The regularization terms penalize the difference between parameters at successive time points and shrink the amplitude parameters to facilitate source selection. The weights of the regularization terms are modulated by the expectation of γ. Ideally, for non-zero α_im, t, E_γγ_i,t will equal one, so no shrinkage effect will be placed on it. In other words, the strength of the sparsity penalty is automatically adjusted through eq: gamma update, and this adjustment has an explicit statistical interpretation.
In our case studies, we find the algorithm converges rapidly, e.g., achieving convergence after only five iterations.
The whole algorithm is summarized in Algorithm <ref>, where an ADAM method <cit.> is utilized in the M-step.
Note that the large parameter space of Φ poses a challenge in identifying the modes of the full posterior. To speed up the convergence, we propose to initialize the source parameter Φ_(s) by maximizing the sum of sources’ marginal log-likelihood and source parameter prior:
arg max_Φ_(s){ log p(y_(s)| Φ_(s)) + log p ( Φ_(s) ) }
For target parameters, we find simple random initialization can achieve satisfactory performance in experiments.
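A schematic sketch of the resulting EM loop is shown below; the callables e_step and grad_q stand in for the update in eq: gamma update and the gradient of the objective in eq: EM objective, respectively, and plain gradient ascent replaces the ADAM optimizer used in practice, so the snippet only conveys the structure of the algorithm rather than its exact implementation.

def fit_dmgp_ss(phi_init, e_step, grad_q, k_out=5, k_in=400, lr=1e-2):
    # phi_init : dict of initial parameters (sources initialized by maximizing
    #            their penalized marginal likelihood, target randomly).
    # e_step   : callable returning E[gamma] for the current parameters.
    # grad_q   : callable returning the gradient of Q(phi | phi_k) w.r.t. phi.
    phi = dict(phi_init)
    for _ in range(k_out):
        gamma_mean = e_step(phi)              # E-step: update E[gamma_{i,t}]
        for _ in range(k_in):                 # M-step: gradient ascent on Q
            grads = grad_q(phi, gamma_mean)
            for name in phi:
                phi[name] = phi[name] + lr * grads[name]
    return phi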
§.§ Computational Challenge
There are three main computational challenges that we need to address. The first one is the calculation of integration for a convolution kernel. To avoid an intractable integration for the covariance, we utilize the Gaussian kernel in eq: Gaussian kernel and derive closed-form covariance functions in Equations (<ref>- <ref>).
The second challenge is calculating the inverse of the covariance matrix, which is a critical issue for all GPs. The computational complexity of Algorithm <ref> is approximately O(∑_i=1^m n_i^3), where n_i is the number of data points for each output. In comparison, the computational cost for traditional MGP is O((∑_i=1^m n_i)^3) with the same size of data and non-separable covariance. The main computational load of our model is in calculating the inverse of K_(ss) and Σ in the source marginal distribution log p(y_(s)| Φ_(s) ) and the target conditional distribution log p(y_m | Φ_m, y_(s), Φ_(s)) respectively in eq: EM objective. Since all the latent processes are independent of each other, K_(ss) is a block-diagonal matrix and the complexity is reduced from O((∑_i=1^m-1 n_i)^3) to O(∑_i=1^m-1 n_i^3). The calculation of the inverse of Σ is O(n_m^3). Therefore, the overall computational complexity is reduced to O(∑_i=1^m n_i^3).
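The reduction comes from never forming the full source covariance: since K_(ss) is block-diagonal, terms such as K_(ss)^-1y_(s) can be computed one source at a time. A minimal NumPy illustration with hypothetical block sizes:

import numpy as np

def blockdiag_solve(K_blocks, y_blocks):
    # Solve K_(ss)^{-1} y_(s) source by source: cost sum_i O(n_i^3)
    # instead of O((sum_i n_i)^3) for a dense joint covariance.
    return [np.linalg.solve(K_i, y_i) for K_i, y_i in zip(K_blocks, y_blocks)]

rng = np.random.default_rng(0)
K_blocks, y_blocks = [], []
for n in (50, 80, 60):                       # three sources of different lengths
    A = rng.standard_normal((n, n))
    K_blocks.append(A @ A.T + n * np.eye(n)) # a positive-definite block
    y_blocks.append(rng.standard_normal(n))
alphas = blockdiag_solve(K_blocks, y_blocks)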
Finally, it is a hard task to estimate a considerable number of time-varying parameters. Therefore, we develop the EM-based algorithm to fit the model rather than using a sampling method. Based on the results of a non-stationary linear model <cit.>, the MCMC and EM algorithm lead to very close prediction errors, but the running time of MCMC is about ten times longer than that of the EM.
§.§ Tuning Parameter Selection
The tuning parameters of our model are ν_0, ν_1 (for the hard slab prior), and ν_2 (for the soft slab prior). Here, since the key of our method is selecting the most informative sources, we propose to maximize the following criterion:
B(ν) = N log p(y|X) - log(N) c_ν(α_m),
where N is the number of data points, log p(y|X) is the log-likelihood of both the source and the target outputs, and c_ν(α_m) is the number of nonzero elements in α_m given ν. A similar criterion is proposed in <cit.>. Note that ν = {ν_0, ν_1} for the hard slab prior and ν = {ν_0, ν_2} for the soft slab prior. This criterion is similar to the Bayesian Information Criterion (BIC). The first term tends to select a more complex model with a larger likelihood for all outputs, while the second term prefers simpler models in which fewer sources are chosen to transfer information to the target. To reduce the computation, we first determine the ratios r_1 = ν_1/ν_0 and r_2 = ν_2/ν_0 to make the spike and slab priors separable <cit.>. Then we design a two-dimensional search grid for (ν_1, ν_1/r_1) with the hard slab or (ν_2, ν_2/r_2) with the soft slab. The optimal value of ν is searched over this two-dimensional grid. For example, we set r_1∈{5, 10} and (ν_1, ν_0) ∈{(1/5, 1/25), (1/5, 1/50), (1/10, 1/50), (1/10, 1/100), (1/15, 1/75), (1/15, 1/150)} for a hard slab prior.
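The resulting selection procedure is a simple grid search; in the sketch below, fit_and_score is a hypothetical helper that fits DMGP-SS for a given (ν_1, ν_0) pair and returns the fitted log-likelihood together with the number of non-zero elements in α_m.

import numpy as np

def select_nu(grid, fit_and_score, n_data):
    # Pick the pair maximizing B(nu) = N * loglik - log(N) * c_nu(alpha_m).
    best, best_score = None, -np.inf
    for nu1, nu0 in grid:
        loglik, n_nonzero = fit_and_score(nu1, nu0)   # hypothetical model fit
        score = n_data * loglik - np.log(n_data) * n_nonzero
        if score > best_score:
            best, best_score = (nu1, nu0), score
    return best

# Example grid for the hard slab prior, matching the pairs listed in the text.
grid = [(1/5, 1/25), (1/5, 1/50), (1/10, 1/50), (1/10, 1/100), (1/15, 1/75), (1/15, 1/150)]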
§.§ Model Prediction
Since the EM algorithm only estimates the value of parameters at the observed time points, given a new input x_t^* of interest, we first need to estimate the target parameter α_m, t^* and θ_m, t^* at the new time point t^*, then derive the predictive distribution of y_m (x_t^*).
§.§.§ Forecasting.
In the forecasting task, t^* > n. The estimated value of θ_im, t^* is,
θ_im, t^* = θ_im, n for the hard slab prior, and θ_im, t^* = ρ^t^*-nθ_im, n for the soft slab prior,
which is actually the mode of p_slab(θ_im, t^* | θ_im, n). As for α_im, t^*, if E_γγ_i, n≥ 0.5, we consider it to come from the slab process and estimate it using the same method as θ_im, t^*. Otherwise, it is classified into the spike process and shrunk to zero <cit.>.
§.§.§ Recovery.
In the recovery task, the target data are unobserved at some specific time points and we aim to recover the missing data, i.e., 1<t^*<n. Define t_be and t_af to be the nearest observed time points before and after t^* respectively. Denote the nearest observation time to t^* as t_near, i.e.,
t_near = arg min_t∈{t_be, t_af} |t-t^*| .
As the parameters before and after t_* are already optimized by the EM algorithm, the estimation of θ_im, t^* becomes:
θ_im, t^* = θ_im, t_near for the hard slab prior, and θ_im, t^* = LSE (θ_im, t_be, θ_im, t_af) for the soft slab prior,
where LSE( · ) represents the least-squares estimation for the autoregressive process introduced in <cit.>.
We also let α_im, t^* = 0, if E_γγ_i, t_near<0.5. Otherwise, we estimate its value in the same way as θ_im, t^*.
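Both rules can be summarized in a short sketch; here the fitted parameter trajectories and E_γγ values are assumed to be available from the EM algorithm, and simple averaging of the two neighboring estimates is used as a stand-in for the least-squares autoregressive estimate cited above.

import numpy as np

def predict_params(t_star, obs_times, theta_path, alpha_path, gamma_mean,
                   rho=0.98, soft=True):
    # obs_times : sorted array of time points at which the EM algorithm fitted parameters.
    # theta_path, alpha_path, gamma_mean : fitted values and E[gamma] at obs_times.
    obs_times = np.asarray(obs_times, dtype=float)
    if t_star > obs_times[-1]:
        # Forecasting: use the mode of the slab prior given the last estimate.
        idx_near = len(obs_times) - 1
        factor = rho ** (t_star - obs_times[-1]) if soft else 1.0
        theta = factor * theta_path[idx_near]
        alpha_slab = factor * alpha_path[idx_near]
    else:
        # Recovery: use the nearest fitted time points before and after t_star.
        i_af = int(np.searchsorted(obs_times, t_star))
        i_be = i_af - 1
        idx_near = i_be if (t_star - obs_times[i_be]) <= (obs_times[i_af] - t_star) else i_af
        if soft:
            # Averaging stand-in for the least-squares AR estimate used in the text.
            theta = 0.5 * (theta_path[i_be] + theta_path[i_af])
            alpha_slab = 0.5 * (alpha_path[i_be] + alpha_path[i_af])
        else:
            theta = theta_path[idx_near]
            alpha_slab = alpha_path[idx_near]
    # Alpha is shrunk to zero when its nearest neighbor is classified as spike.
    alpha = 0.0 if gamma_mean[idx_near] < 0.5 else alpha_slab
    return theta, alpha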
Then, given the parameter α_m, t^* and θ_m, t^*, and the new input point x_t^*, the joint distribution of y_m( x_t^* ) and observations y_m can be expressed as:
[ y_m; y_m( x_t^* ) ]∼𝒩[ [ μ; μ_t^* ],
[ Σ Σ_t^*; Σ_t^*^T Σ_t^*, t^* ] ],
where μ_t^*=K_(s*)^TK_(ss)^-1y_(s), Σ_t^* = K_m*-K_(sm)^TK_(ss)^-1K_(s*), and Σ_t^*, t^* = K_t^*, t^*-K_(s*)^TK_(ss)^-1K_(s*).
In the above equations, K_(s*) is the cross-covariance matrix of the sources and the new input x_t^*, K_(m*) is the covariance of the target observation and the new point, and K_t^*, t^* is the variance at x_t^*.
Then, the posterior distribution of y_m( x_t^* ) can be derived as:
y_m( x_t^* ) ∼𝒩(μ_t^*+Σ_t^*^T Σ^-1 (y_m - μ),
Σ_t^*, t^* - Σ_t^*^T Σ^-1Σ_t^*).
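Putting the pieces together, the prediction step reduces to two nested Gaussian conditionings; the NumPy sketch below assumes that the covariance blocks have already been built from the fitted (and, at t^*, estimated) parameters using the covariance functions of the proposed model.

import numpy as np

def predict_target(K_ss, K_sm, K_mm, K_sstar, K_mstar, K_starstar, y_s, y_m):
    # K_ss    : block-diagonal covariance of all source data.
    # K_sm    : cross-covariance between source data and target data.
    # K_mm    : covariance of target data (including noise).
    # K_sstar, K_mstar, K_starstar : covariances involving the new input x_{t*}.
    Kss_inv_ys = np.linalg.solve(K_ss, y_s)
    mu = K_sm.T @ Kss_inv_ys                               # conditional mean of target data
    Sigma = K_mm - K_sm.T @ np.linalg.solve(K_ss, K_sm)    # conditional covariance
    mu_star = K_sstar.T @ Kss_inv_ys
    Sigma_star = K_mstar - K_sm.T @ np.linalg.solve(K_ss, K_sstar)
    Sigma_ss = K_starstar - K_sstar.T @ np.linalg.solve(K_ss, K_sstar)

    resid = np.linalg.solve(Sigma, y_m - mu)
    mean = mu_star + Sigma_star.T @ resid
    var = Sigma_ss - Sigma_star.T @ np.linalg.solve(Sigma, Sigma_star)
    return mean, var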
§ NUMERICAL STUDY
In this section, we verify the effectiveness of the proposed non-stationary MGP with the spike-and-slab prior (denoted as DMGP-SS) using synthetic data. In Section <ref>, we briefly describe the general settings for the numerical study and benchmark methods. In Section <ref>, we introduce the design of simulation cases, where the cross-correlation of the sources and the target are dynamic and sparse. In Section <ref>, we demonstrate our model's capability in detecting the underlying correlation pattern as well as improving target prediction performance on the synthetic data.
§.§ General settings
Similar to the assumption made in the model development section, we generate m sequences consisting of m-1 sources and 1 target, sampled at the same timestamps. The input space is simply time. To investigate the source selection capability of DMGP-SS, we assign only m_t < m-1 sources to be correlated with the target at time t. Besides, a source will remain either correlated or uncorrelated continuously for a certain period of time.
For comparison, we consider three benchmarks:
* GP. The target is modeled using a single-output GP, with a squared-exponential covariance function.
* MGP-L1. It is a state-of-the-art static method introduced in <cit.>. MGP-L1 models the target and sources in one MGP model, with the same covariance structure as in eq: covariance matrix. The scaling parameters {α_im}_i=1^m-1 are penalized by an L_1 term to achieve source selection. The regularized log-likelihood of this model is:
logℱ = -1/2y^T K^-1y - 1/2log |K| - λ∑_i=1^m-1|α_im| - const.
where K is calculated using the static covariance functions in <cit.> and λ is a tunning parameter.
* DMGP-GP. This is a state-of-the-art non-stationary MGP model, which constructs an LMC model for all outputs and assumes that the hyper-parameters follow other GPs <cit.>. More details can be found in Appendix D. In this model, the covariance of the m outputs is
cov[y(x_t), y(x_t^')] = A_t A_t'^T k(x_t, x_t^') + diag{σ_i},
where A_t A_t^'^T ∈ℝ^m × m is the correlation matrix of the m outputs, and diag{σ_i} is the diagonal matrix with {σ_i}_i=1^m. In this study, we focus on the correlation between the sources and the target, which corresponds to the last column of A_t A_t^T (except the last element (A_t A_t^T)_mm), i.e., (A_t A_t^T)_1:m-1, m.
All methods are implemented in Python 3.8 and executed on a computer with an Intel(R) Core(TM) i5-7400 CPU at 3.00 GHz and 16 GB RAM. Both GP and MGP-L1 are implemented using GPflow <cit.>. DMGP-GP is implemented using TensorFlow and optimized with ADAM <cit.>, with a maximum iteration limit of 500. The EM algorithm for DMGP-SS is also based on TensorFlow, and we use stochastic ADAM with four batches in the M-step. We set k_out and k_in in Algorithm <ref> to 5 and 400, respectively.
For MGP-L1, the weight λ of the L_1 penalty is a tuning parameter. For DMGP-GP, we use squared-exponential functions for k_α and k_θ. For simplicity, we apply the same tuning parameters for both kernels, i.e., the amplitude α^# and length-scale θ^#. Those parameters are tuned by cross-validation. In the case of DMGP-SS, the prior sparsity parameter η is set to 0.5. We repeat each case 50 times and report the prediction performance by averaging the results.
§.§ Simulation cases
The main objective of this section is to demonstrate the effectiveness of our method in capturing the non-stationary and sparse cross-correlation between the sources and the target. For simplicity, we hold the other characteristics constant over time, e.g., the smoothness of each output.
Specifically, we design two simulation cases with different cross-correlation patterns. The first case involves a piece-wise constant cross-correlation, while the second case has a smoothly-changing correlation. In each case, the input data {x_t}_t=1^130 are evenly spaced in [1, 130]. The observed data are generated from sine functions with measurement noise ϵ_t ∼𝒩(0,0.3^2).
Case 1.
In this case, we define four kinds of source functions:
Y_1(x_t)=3sin(π x_t /20 + e_1) + ϵ_t,
Y_2(x_t)=2sin(2 π x_t /20 + e_2) exp[0.5(x_t%40-1)] + ϵ_t,
Y_3(x_t)=3sin(4 π x_t /20 + e_3) + ϵ_t,
Y_4(x_t)=2sin(5 π x_t /20 + e_4) + ϵ_t,
where e_i∼𝒩(0,0.2^2) is a random phase that makes the sampled outputs differ from each other, and “%” denotes the remainder operation. The term exp[0.5(x_t%40-1)] makes the shape of Y_2 deviate from a standard sine function. In each experiment, we generate 4k sources by sampling each kind of function k times, i.e., m=4k+1. Specifically, the sources {y_i+4j | 0 ≤ j ≤ k-1 } are sampled from Y_i.
Then, we define the dynamic target output as:
f_m(x_t)=a_1, tsin(π x_t/20)+a_2, tsin(2 π x_t/20) + a_3, tsin(4π x_t/20) + a_4,tsin(5 π x_t /20)
In this case, we simulate a piece-wise constant cross-correlation by setting:
a_1, t = (2 + 2a_1) I_t<40,
a_2, t = (2+ 2a_2) I_40 ≤ t < 80 + (1 + a_2) I_80 ≤ t ≤ 130,
a_3, t = (1+ a_3) I_80 ≤ t ≤ 130,
a_4,t = 0.
Therefore, there are three segments, [0,40), [40, 80), and [80,130]: the target is correlated with only the 1st source in the first segment, only the 2nd source in the second, and both the 2nd and 3rd sources in the third. The remaining sources stay uncorrelated with the target at all times.
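As an illustration, the Case 1 data can be generated roughly as follows. This sketch follows the expressions above literally; the distribution of the offsets a_1, a_2, a_3 is not specified in the text and is assumed here to mirror the phases e_i.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 131, dtype=float)                 # inputs evenly spaced in [1, 130]
noise = lambda: rng.normal(0.0, 0.3, t.size)       # measurement noise epsilon_t
e = rng.normal(0.0, 0.2, 4)                        # random phases e_1..e_4

# Four kinds of source functions (Y2 reproduces the amplitude term literally,
# which grows quickly within each 40-step window)
Y1 = 3 * np.sin(np.pi * t / 20 + e[0]) + noise()
Y2 = 2 * np.sin(2 * np.pi * t / 20 + e[1]) * np.exp(0.5 * (t % 40 - 1)) + noise()
Y3 = 3 * np.sin(4 * np.pi * t / 20 + e[2]) + noise()
Y4 = 2 * np.sin(5 * np.pi * t / 20 + e[3]) + noise()

# Piece-wise constant combination weights (a_i assumed ~ N(0, 0.2^2))
a = rng.normal(0.0, 0.2, 3)
a1_t = (2 + 2 * a[0]) * (t < 40)
a2_t = (2 + 2 * a[1]) * ((t >= 40) & (t < 80)) + (1 + a[1]) * (t >= 80)
a3_t = (1 + a[2]) * (t >= 80)

target = (a1_t * np.sin(np.pi * t / 20) + a2_t * np.sin(2 * np.pi * t / 20)
          + a3_t * np.sin(4 * np.pi * t / 20) + noise())
```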
Case 2.
Compared with Case 1, we only change the coefficients {a_i, t}_i=1^3 into smoothly-changing ones in this case. Specifically, we let them vary along sine-cosine trajectories,
a_1, t = [(2+a_1)cos(π t / 120) + 0.5] I_t<40,
a_2, t = [(2+a_2)sin(π t / 120 - π/6) +0.5] I_40 ≤ t < 130,
a_3, t = [(2+a_3) sin(π t / 120 - π/2) +0.5] I_80 ≤ t < 130.
In all cases, we set k=1 and k=4, generating four and sixteen source outputs for each case, respectively. To compare the prediction performance of the different methods, we randomly remove three short sequences of length 10 from [10, 30], [50, 70], and [90, 110], respectively. These 30 data points are treated as missing data, while the others are used as training data.
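Continuing the sketch above, the held-out windows can be drawn as follows; the exact index conventions (e.g., whether a window may touch the interval boundaries) are assumptions.

```python
# One length-10 window is removed from each of the three intervals
gaps = []
for lo, hi in [(10, 30), (50, 70), (90, 110)]:
    start = rng.integers(lo, hi - 10 + 1)          # window fits inside [lo, hi]
    gaps.append(np.arange(start, start + 10))
test_idx = np.concatenate(gaps)                     # 30 held-out target points
train_mask = np.ones(t.size, dtype=bool)
train_mask[test_idx - 1] = False                    # t starts at 1, arrays at 0
```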
§.§ Simulation results
To begin with, we demonstrate that the proposed DMGP-SS is capable of capturing the dynamic and sparse correlation between the sources and the target.
fig: simulation scale illustrates the estimated {α_im}_i=1^4 for MGP-L1, the {α_im, t}_i=1^4 for DMGP-SS, and the estimated (A_t A_t^T)_1:4, m for DMGP-GP in Cases 1 and 2 with four sources. fig: case1 prediction visualizes the source and target prediction results in Case 1 with four sources. DMGP-SS is implemented with the hard and soft slab priors for Cases 1 and 2, respectively. Note that the values of a_i,t and α_im, t are not identical, since a_i,t is a linear combination weight rather than the correlation parameter in the MGP.
Overall, DMGP-SS successfully recovers the true dynamic structural sparsity of the correlation shown in the first column of fig: simulation scale. First, DMGP-SS closely tracks the periods during which each source is correlated with the target, achieving dynamic source selection. Since the target's characteristics do not change abruptly (as shown in fig: case1 prediction), it is reasonable that the estimated correlation change points fall about ten time steps before or after the designed change times. Second, the hard slab prior encourages a nearly piece-wise constant correlation, while the soft slab prior allows a smoothly changing one. Owing to the appropriate selection of sources at different times in fig: case1 prediction, the proposed DMGP-SS achieves precise prediction with the lowest prediction uncertainty, which greatly improves the confidence of decision making when using the recovered series. The differences in the confidence intervals of the three MGP models arise because the uncorrelated sources 'poison' the correlation structure and dilute the influence of the truly correlated sources, resulting in a lower value of the variance reduction term Σ_t,*^T Σ^-1Σ_t,* in the posterior prediction eq:posterior distribution.
In contrast, the other non-stationary method, DMGP-GP, fails to estimate a sparse structure in the source-target correlation, since the GP prior on its parameters lacks a shrinkage effect. The proposed DMGP-SS addresses this problem by combining the smooth slab prior with the shrinking spike prior.
Regarding MGP-L1, although it can estimate a sparse structure of α_m, the estimated values of the non-zero parameters are constant over time and cannot reflect the changes in correlation. As a result, it even performs worse than GP in recovering the target output in fig: case1 prediction.
Another advantage of DMGP-SS is the adaptive adjustment of the spike-and-slab combination weight E_γγ_i, t. In our settings, ν_0^-1 in the spike prior is much larger than ν_1^-1 in the hard slab prior (or √(ν_1^-1) in the soft slab prior), in order to place a stronger penalty on the correlation sparsity. For example, we set ν_0 = 0.02 and ν_1 = 0.1 in Case 1. However, this sparsity penalty does not cause significant bias in the non-zero α_im, t, because E_γγ_i, t is adjusted automatically in the EM algorithm. Specifically, we initialize it at 0.99, so that parameters are barely shrunk at the beginning; E_γγ_i, t is then updated in the E-step of the EM algorithm. fig: simulation gamma shows its estimated value after five iterations. For the correlated sources (e.g., the first source during t∈[0,50]), the corresponding E_γγ_i, t is very close to 1, so they suffer a negligible shrinkage effect from the spike prior. In contrast, for the uncorrelated sources (e.g., the first source after t = 50), E_γγ_i, t is approximately 0.2, implying a substantial shrinkage effect. The value 0.2 can be derived from eq: gamma update: for consecutive zero elements, p_slab=η (2ν_1)^-1 and p_spike=(1-η) (2ν_0)^-1, resulting in E_γγ_i, t≈ν_1^-1/ν_0^-1.
tab: simulation MAE summarizes the results of 40 repetitions for each case.
DMGP-SS outperforms all the other methods in both cases. Moreover, increasing the number of sources does not affect the prediction accuracy, demonstrating its remarkable robustness in handling a moderate number of outputs.
MGP-L1 exhibits only slightly higher prediction accuracy than GP, because its static covariance structure limits its ability to transfer accurate information at all times. Under some circumstances, this limitation even causes a negative transfer effect on learning the target (for example, the result shown in fig: case1 prediction).
DMGP-GP performs better on target prediction than GP and MGP-L1, owing to its ability to model dynamic correlation. However, it does not achieve the same prediction accuracy as DMGP-SS in these cases. On the one hand, it lacks the ability to exclude the impact of uncorrelated sources. On the other hand, DMGP-GP is an extension of LMC and uses the same function k(x_t, x_t^') to model the auto-covariance of every output, which makes it unsuitable for our cases, where the source covariance functions have four kinds of length-scales. In contrast, the proposed DMGP-SS models each source with separate kernels and latent functions, greatly increasing its flexibility.
tab: computational time lists the computational time of the four methods in Case 1. Between the two non-stationary methods, DMGP-SS requires much less computation time than DMGP-GP. This verifies the analysis in <ref> that our model saves a large amount of time compared with the traditional non-stationary MGP method, owing to its block-sparse covariance matrix. In fact, our model scales to larger datasets, as described in Appendix E.
§ CASE STUDY
In this section, we apply DMGP-SS to two cases: human movement signal modeling and control policy optimization. In the first case, the signals are recorded by sensors attached to different joints, such as hands and feet. As the movements of the joints are dynamically correlated across different gestures <cit.>, it is reasonable to utilize a non-stationary method to capture the varying correlation and leverage information among the joint signals. Regarding control policy optimization, we study a classical reinforcement learning problem, mountain-car, and evaluate the performance of DMGP-SS in leveraging knowledge between different systems when the environment is non-stationary.
§.§ Movement Signal Modeling
§.§.§ Data Description
We use the MSRC-12 gesture dataset <cit.> consisting of sequences of human skeletal body movement. Twenty sensors are distributed across the body, and each sensor can record three-dimensional coordinates of joint positions. The motion signal has a sampling frequency of 30 Hz and an accuracy of ± 2cm. The dataset comprises 12 different gestures performed by 30 participants. The experiment involves each participant starting from a standing position, performing one gesture, and returning to the standing state. This process repeats several times continuously within each sample.
To demonstrate the effectiveness of DMGP-SS, we connect the instances of three gestures (“Goggles”, “Shoot”, and “Throw”) performed by the same individual. fig: gesture skl shows the snapshots of the standing position and the selected gestures.
In the first two gestures, the participant stretches both arms in front of him to perform searching or shooting motions. In the third gesture, the participant only uses his left arm to make an overarm throwing movement. In these gestures, the main acting joints are hands, wrists, and elbows, where the trajectories of hands and wrists are almost identical. Therefore, we select the movement signals of four joints (left wrist, left elbow, right wrist and right elbow) as twelve outputs. We choose the z-coordinate movement of the left elbow as a target output, while using the remaining eleven movement signals as sources.
We select two 120-frame-long instances for each gesture and down-sample each instance to 30 frames. Therefore, there are 180 points for each output. To eliminate the difference in initial coordinate values, we reset the initial three-dimensional coordinate value to (0, 0, 2) across different recordings. Additionally, we re-scale all outputs to [-2, 2]. fig: real data displays the 12 outputs and the change of joints' positions.
§.§.§ Results
Intuitively, the cross-correlation between the source and target signals should remain constant within a single gesture, so a hard slab prior is used in DMGP-SS. All other settings for this case are identical to those used in the simulation studies. We still simulate consecutive missing data for the target: from the 60-point-long time series of each gesture, we randomly remove a sequence of length ten as missing data.
First of all, fig: real correlation shows the estimated correlation between the sources and the target. MGP-L1 selects six sources (the third 'R-W-z', the fourth 'R-E-x', the sixth 'R-E-z', the seventh 'L-W-x', the ninth 'L-W-z', and the eleventh 'L-E-y') as correlated signals for the whole time period, among which the ninth source has the strongest correlation. DMGP-SS selects the sixth source ('R-E-z') and the ninth source ('L-W-z') as correlated sources when 0 ≤ t ≤ 120 and 120 ≤ t ≤ 180, respectively. DMGP-GP does not provide a sparse estimation of the cross-correlation. Among the three, the proposed DMGP-SS accurately captures the underlying physical movements of each gesture: in the “Goggles” and “Shoot” gestures, both elbows follow almost the same trajectory, while in the “Throw” gesture, the left elbow's movement is highly correlated with that of the left wrist. These findings align well with the signals shown in fig: real data. On the contrary, limited by its static correlation structure, MGP-L1 can only select some sources and force them to be correlated with the target at all times, but no such always-correlated signals exist in this dynamic environment. For DMGP-GP, the results cannot provide an intuitive understanding of the joints' relationships.
fig: real prediction displays the recovered target signal in one experiment. Notably, DMGP-SS recovers the missing data with high precision. It also has the lowest uncertainty, because it selects the most correlated sources at each time, and such a high correlation improves the confidence in the prediction results. Conversely, the predictions of MGP-L1 and DMGP-GP display an obvious bias in the first gap and higher uncertainty across all gaps. As in the numerical studies, the low predictive uncertainty of DMGP-SS significantly improves the confidence of decision making.
We further repeat the experiments 36 times. <ref> compares the prediction accuracy of the four methods in terms of both the MAE and the continuous ranked probability score (CRPS). The CRPS is a widely used metric for evaluating probabilistic forecasts of a real-valued outcome <cit.>:
CRPS = n_test^-1∑_i=1^n_test∫[Φ( ŷ_i ) - 1_{ŷ_i ≥ y_i}]^2 dŷ_i
where ŷ_i denotes the outcome variable over which the integral is taken and Φ is the cumulative distribution function (CDF) of the predictive distribution. A low CRPS value indicates that the predictive posterior is concentrated around the true value, i.e., good probabilistic forecast performance. As expected, DMGP-SS outperforms the other methods due to its ability to capture the dynamic and sparse correlation accurately. MGP-L1 performs better than GP, benefiting from the information borrowed from the other sources; however, it cannot model a dynamic correlation, resulting in lower prediction accuracy than DMGP-SS. Regarding DMGP-GP, although it captures the change of correlations between the target and some sources, its prediction accuracy is even lower than that of GP. This result may be attributed to the fact that non-sparse correlations lead to potential negative transfer effects. Besides, since the smoothness of the sources is heterogeneous, it is inappropriate to use the same auto-covariance function to model all outputs.
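Because the GP-based models produce Gaussian predictive distributions, the CRPS integral above admits a well-known closed form; the snippet below is an illustrative way to compute it (the paper does not state which evaluation route was used).

```python
import numpy as np
from scipy.stats import norm

def gaussian_crps(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated at the realized value y; lower is better."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Example: average CRPS over held-out points, given predictive means/stds
# crps = np.mean(gaussian_crps(pred_mean, pred_std, y_true))
```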
§.§ Control Policy Optimization
In reinforcement learning problems, there are five important elements for the agent: state s, action a, reward r, value V and policy W. Starting from one state s, the agent takes an action a, transitions into a new state u and gets an immediate reward r(u) from the environment. In general, a reinforcement learning framework includes two primary procedures:
* Model estimation: estimating the state transition function U(s, a) based on the observed transition samples [(s, a), u].
* Policy iteration: iteratively estimating the state value V(s) (the long-term expected reward) for each state and improving the control policy W(s).
More details can be found in the works on GP-based reinforcement learning <cit.>.
In this study, we employ the well-known reinforcement learning problem, mountain-car, to demonstrate the application of our model in decision making. fig: mountain car illustrates the problem. A car begins in a valley and aims to reach a goal position on the right-most hill. Due to the steep slope of the hill, the car cannot accelerate directly to the goal position; instead, it needs to drive up the opposite side, turn back, and accelerate to reach the goal. In this system, the agent state is described by the position s^pos∈[-1.2, 1.0] and the velocity s^vel∈[-0.07, 0.07] of the car, i.e., s=(s^pos, s^vel). The agent action is a horizontal force a ∈[-1, 1]. The car starts at the state s_init=(s^pos_init, s^vel_init) with s^pos_init∈ [-0.6, -0.5] and s^vel_init=0, aiming to reach the goal state s_goal=(0.45, 0). The reward function is the probability density function of N(s_goal, diag{0.05^2, 0.0035^2}). The dynamic equation of this system is approximated by:
s^vel_t = s^vel_t-1 + P · a_t-1 - G ·cos(3 · s^pos_t-1)
s^pos_t = s^pos_t-1 + s^vel_t
where P is the horizontal power unit and G is the vertical force unit.
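A minimal sketch of these dynamics is given below. Clipping the state to the ranges quoted above and evaluating the reward as the stated Gaussian density are reasonable readings of the setup, not details confirmed by the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

POS_RANGE, VEL_RANGE = (-1.2, 1.0), (-0.07, 0.07)
GOAL = np.array([0.45, 0.0])
REWARD = multivariate_normal(mean=GOAL, cov=np.diag([0.05**2, 0.0035**2]))

def step(pos, vel, action, P, G):
    """One transition of the approximate mountain-car dynamics."""
    vel = np.clip(vel + P * action - G * np.cos(3 * pos), *VEL_RANGE)
    pos = np.clip(pos + vel, *POS_RANGE)
    return pos, vel, REWARD.pdf([pos, vel])

# Example: target environment before the change, (P, G) = (0.009, 0.0015)
# pos, vel, r = step(-0.55, 0.0, action=1.0, P=0.009, G=0.0015)
```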
We consider the control problem of a car in a non-stationary environment with limited transition samples. During t∈[0,20), the target car runs in an environment with (P,G)=(0.009, 0.0015), and we collect 20 random samples. At t=20, however, an unknown change occurs, altering the environment factors to (P, G)=(0.0011, 0.0026), and we collect another 20 random samples during t∈ [20,40). After t=40, we start to control the car to reach the goal position, and the environment no longer changes. Given that these samples are too limited to build an accurate transition model, we transfer information from two historical source datasets, each consisting of 200 samples from stationary environments with (P,G)=(0.01, 0.0015) and (P, G)=(0.001, 0.0025), respectively. The target environment is close to the first source environment before t=20 and close to the second one after that.
Algorithm <ref> summarizes the workflow of reinforcement learning, where we employ the DMGP-SS as a transition model. For simplicity, we model the source transition data with stationary GPs in DMGP-SS. The sources and the target are expressed as:
u_i (s, a) =α_ii g_ii(s, a)∗ z_i(s, a) + ϵ_i , i ∈ℐ^S,
u_m (s, a) = ∑_j ∈ℐα_jm, tg_jm, t(s, a) ∗ z_j(s, a) + ϵ_m.
Here, u is the position or velocity state to which the agent transitions from a state s after taking an action a. After fitting this model to both the target and source samples, we train a GP for the value model V(s) with evenly spaced support points S_supp and rewards r(S_supp). Based on the transition model U(s, a), we iteratively improve the policy W(s) and the value model V(s) until the prediction of V(s) converges. Details on the policy improvement procedure can be found in <cit.>. While we focus on the offline setting in this case, it is worth noting that this framework can be readily extended to an online setting, in which the training sample consists of the visited states and the transition models are updated every few steps.
We choose two reinforcement learning benchmark methods <cit.> with the stationary GP and MGP as the state transition model, respectively. The maximum execution steps are 600 for each method.
In tab: RL MAE and Distance, we report the mean absolute distance to the goal state for the three methods. The DMGP-SS-based control policy has the shortest average distance to the goal state over 600 steps. Specifically, fig: rl case path compares the position of the car controlled by the three policies. With DMGP-SS as the transition model, the car reaches the goal position and stays there after about 250 time steps. The other methods, however, cannot find a good policy to reach the goal position within 600 moves, since their stationary transition models cannot account for the environment change and make the policy iteration hard to converge.
In tab: RL MAE and Distance, we further compare the predictive MAE on state transitions for the three methods. The proposed method has a significantly lower prediction error on the velocity state transition than the other methods, since it can capture the change of environment and transfer information from the related sources. fig: rl correlation illustrates the estimated α_m from DMGP-SS trained on the velocity transition data. The proposed method successfully detects that the correlation between the sources and the target changes at t=20. Therefore, during the policy improvement stage, it can leverage information from the similar source (the second one) and avoid negative transfer from the uncorrelated source (the first one). Regarding the prediction error on the position transition, although RL-DMGP-SS has a higher MAE than the other benchmarks, the difference is minor considering the position range [-1.2, 0.6]. Therefore, our method provides the best control policy.
§ CONCLUSION
This paper proposes a flexible non-stationary multi-output Gaussian process for modeling multivariate data in transfer learning. The novelty of our approach lies in its ability to capture dynamic and sparse cross-correlations between sources and targets. We achieve this by allowing correlation parameters to follow a spike-and-slab prior, where the slab prior ensures correlation variation over time, and the spike prior encourages parameters to shrink to zero, eliminating negative transfer effects from uncorrelated sources. The ratio of these two priors is automatically adjusted in the proposed EM algorithm, preventing the shrinkage effect on non-zero correlation parameters. Through the experiments on simulation and human gesture dataset, we demonstrate that our method is well-suited for both capturing non-stationary characteristics and mitigating negative transfer.
The proposed data-driven method provides a powerful tool for researchers and engineers to select the most informative sources for knowledge transfer. Beyond high-dimensional time-series modeling, our approach can also find applications in sequential decision making and change-point detection. For instance, transfer learning has arisen to handle the critical challenge of sparse feedback in reinforcement learning (RL), a popular framework for solving sequential decision-making problems. However, negative transfer remains a notable challenge for multi-task reinforcement learning, and it is risky to naively share information across all tasks <cit.>. Therefore, we embedded our model within the offline RL transfer framework to automatically select correlated sources for knowledge sharing in a non-stationary environment <cit.>. In the future, we can further extend our method to online RL tasks. In addition, the estimated dynamic correlation among outputs can help us understand the structure of time series even with missing data, as well as detect structural change points for subsequent decision making.
Many extensions are possible for our model. Although the proposed methodology is flexible enough to capture complex and dynamic correlation, scaling it up to large datasets is computationally challenging. Possible solutions include utilizing a sparse approximation for the covariance matrix or developing a more efficient optimizing algorithm.
In addition, the proposed EM optimization algorithm provides only point estimates of the model parameters without any uncertainty quantification. To address this issue, we can use variational inference to approximate the true posterior distribution of the parameters, thereby capturing the inherent uncertainty associated with them. The approximated uncertainty would then be propagated into the prediction at new points.
§ DMGP-SS WITH MISSING DATA
In general, observations may be missing at some time points within the range 1 ≤ t ≤ n. In this circumstance, the number of observations for each output is n_i ≤ n. To account for the missing data, we re-denote the data for the ith output as X_i = (x_i, t_1, ..., x_i, t_i, ..., x_i, t_n_i)^T and y_i = (y_i, t_1, ..., y_i, t_i, ..., y_i, t_n_i)^T, where t_i represents the observation time index.
Compared with the model presented in the main text, the general DMGP-SS requires only adjustments on the parameter priors. The general model can be expressed as follows,
y_(s), y_m | Φ_(s), Φ_m∼𝒩 (0, K)
α_ii, t_i| α_ii, t_i-1∼p_slab(α_ii, t_i| α_ii, t_i-1),
θ_ii, t_i| θ_ii, t_i-1∼p_slab(θ_ii, t_i| θ_ii, t_i-1),
α_im, t_i|γ_i, t_i, α_im, t_i-1∼ (1-γ_i, t_i) p_spike(α_im, t_i) + γ_i, t_i p_slab(α_im, t_i| α_im, t_i-1),
γ_i, t_i|η∼ Bern(η),
θ_im, t_i| θ_im, t_i-1∼p_slab(θ_im, t_i| θ_im, t_i-1).
The spike prior is the same as that in the main text. The slab priors are re-defined as,
p_slab^hard(α_im, t_i| α_im, t_i-1) = 1/2ν_1exp(-| α_im, t_i - α_im, t_i-1|/ν_1),
p_slab^soft(α_im, t_i| α_im, t_i-1) = 1-ρ/(1-ρ^t_i - t_i-1)√(2 πν_1)exp(-( α_im, t_i - ρ^t_i - t_i-1α_im, t_i-1)^2 (1-ρ^2)/2ν_1 (1-ρ^2(t_i - t_i-1))),
where the soft prior is derived based on the property of an auto-regressive process.
§ PROOF OF PROPOSITION 1
The proposed non-stationary MGP covariance matrix Eq. (9) is positive-definite, i.e., ∀y≠0,
y^T Ky > 0.
Proof of Proposition 1
Recall that the covariance functions are generated by the convolution of kernel functions:
cov_ii^f(x_t, x_t^') = α_ii,tα_ii,t'∫ g_ii,t(x_t - u) g_ii, t'(x_t' - u) du
cov_im^f(x_t, x_t^') = α_ii,tα_im,t'∫ g_ii,t(x_t - u) g_im, t'(x_t' - u) du
cov_mm^f(x_t, x_t^') = ∑_j=1^m α_jm,tα_jm,t'∫ g_jm,t(x_t - u) g_jm, t'(x_t' - u) du
Decompose this quadratic form as follows,
y^T Ky = ∑_1≤ i ≤ m∑_1≤ j ≤ m∑_t ∑_t' y_i,ty_j,t' [ cov_i,j^f (x_t, x_t^') + σ_i^2 𝕀_i=j, t = t']
= ∑_1≤ i ≤ m-1∑_t ∑_t' y_i,ty_i,t' cov_ii^f (x_t, x_t^') + 2 ∑_1≤ i ≤ m-1∑_t ∑_t' y_i,t y_m, t' cov_im^f (x_t, x_t^')
+ ∑_t∑_t' y_m,t y_m, t' cov_mm^f (x_t, x_t^') + ∑_i ∑_t y_i,t^2 σ_i^2
= ∑_1 ≤ i ≤ m-1{∑_t ∑_t' y_i,ty_i,t'α_ii,tα_ii,t'∫ g_ii,t(x_t - u) g_ii, t'(x_t^' - u) du.
+ ∑_t ∑_t' y_m,ty_m,t'α_im,tα_im,t'∫ g_im,t(x_t - u) g_im, t'(x_t^' - u) du
. + 2∑_t ∑_t' y_i,ty_m,t'α_ii,tα_im,t'∫ g_ii,t(x_t - u) g_im, t'(x_t^' - u) du}
+ ∑_t ∑_t' y_m,ty_m,t'α_mm,tα_mm,t'∫ g_mm,t(x_t - u) g_mm, t'(x_t^' - u) du + ∑_i ∑_t y_i,t^2 σ_i^2
= ∑_1 ≤ i ≤ m-1{∫[ ∑_t y_i,tα_ii,t g_ii,t(x_t - u) ∑_t' y_i,t'α_ii,t' g_ii, t'(x_t^' - u) . .
+ ∑_t y_m,tα_im,t g_im,t(x_t - u) ∑_t' y_m,t'α_im,t' g_im, t'(x_t^' - u)
. . + 2 ∑_t y_i,tα_ii,t g_ii,t(x_t - u) ∑_t' y_m,t'α_im,t' g_im, t'(x_t^' - u) ] du}
+ ∫∑_t y_m,tα_mm,t g_mm,t(x_t - u) ∑_t'y_m,t'α_mm,t' g_mm, t'(x_t^' - u) du + ∑_i ∑_t y_i,t^2 σ_i^2
= ∑_1 ≤ i ≤ m-1{∫[ ∑_t y_i,tα_ii,t g_ii,t(x_t - u) + ∑_t' y_m,t'α_im,t' g_im, t'(x_t^' - u) ]^2 du}
+ ∫[ ∑_t y_m,tα_mm,t g_mm,t(x_t - u) ]^2 du + ∑_i ∑_t y_i,t^2 σ_i^2 > 0
This completes the proof.
§ DERIVATION OF THE OBJECTIVE FUNCTION IN M-STEP
Based on Bayes' theorem, the parameter posterior can be expressed as:
p(Φ, γ | y) ∝ p(y|Φ)p(Φ|γ)p(γ).
Based on the properties of the multivariate Gaussian distribution, we have:
p(y|Φ) = p(y_(s), y_m | Φ_(s), Φ_m)
= 𝒩( [ y_(s); y_m ] | [ 0; 0 ], [ K_(ss), K_(sm); K_(sm)^T, K_mm ] )
= 𝒩( y_(s) | 0, K_(ss)) 𝒩( y_m | μ, Σ),
where μ=K_(sm)^T K_(ss)^-1y_(s) is the conditional mean of target given the sources and Σ=K_mm-K_(sm)^T K_(ss)^-1K_(sm) is the conditional covariance.
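For reference, the conditional mean μ and covariance Σ above can be computed stably with a Cholesky factorization; the following is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def condition_target_on_sources(K_ss, K_sm, K_mm, y_s, jitter=1e-6):
    """Conditional mean and covariance of the target given the sources:
    mu = K_sm^T K_ss^{-1} y_s,  Sigma = K_mm - K_sm^T K_ss^{-1} K_sm."""
    L = np.linalg.cholesky(K_ss + jitter * np.eye(K_ss.shape[0]))
    A = np.linalg.solve(L, K_sm)     # L^{-1} K_sm
    b = np.linalg.solve(L, y_s)      # L^{-1} y_s
    mu = A.T @ b
    Sigma = K_mm - A.T @ A
    return mu, Sigma
```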
Therefore, the objective function can be derived as:
E_γ{log p(Φ, γ | y) }
= E_γ{log p(y | Φ, γ) p(Φ, γ) } + const.
= log p(y_(s)| Φ_(s) ) + log p(y_m | Φ_m, y_(s), Φ_(s))
+ log p(Φ_(s)) + E_γ{log p(Φ_m| γ) + log p(γ) } + const.
= log p(y_(s)| Φ_(s) ) + log p(y_m | Φ_m, y_(s), Φ_(s))
+ log p(θ_(s)) + log p(α_(s)) + log p(θ_m) + E_γ{log p(α_m| γ) } + const.
= -1/2{y_(s)^T K_(ss)^-1y_(s) + log|K_(ss)| + (y_m-μ)^T Σ^-1 (y_m-μ) + log|Σ| }
+ ∑_i=1^m-1∑_t=2^n [ logp_slab(θ_ii, t| θ_ii, t-1) + logp_slab(α_ii, t| α_ii, t-1) + logp_slab(θ_im, t| θ_im, t-1) ]
+ ∑_i=1^m ∑_t=2^n [ (1-E_γγ_i, t) logp_spike(α_im, t) + E_γγ_i, tlogp_slab(α_im, t| α_im, t-1) ] + const..
§ DETAILS OF DMGP-GP
DMGP-GP is a state-of-the-art non-stationary MGP model, which constructs an LMC model for all outputs and assumes the hyper-parameters follow other GPs <cit.>:
y(x_t) = A_t q(x_t) + ϵ
log(A_ii, t) ∼𝒢𝒫(0, k_α(t, t^'))
A_ij, t ∼𝒢𝒫(0, k_α(t, t^')), i ≠ j
q_i (x_t) ∼𝒢𝒫(0, k(x_t, x_t^')),
k(x_t, x_t^') = √(2 θ_tθ_t^'/(θ_t^2 + θ_t'^2))exp[ -(x_t - x_t^')^2/2(θ_t^2 + θ_t^'^2)]
log(θ_t) ∼𝒢𝒫(0, k_θ(t, t^'))
where y(x_t) = [y_1(x_t), ..., y_m(x_t)]^T are m outputs, A_t ∈ R^m × m is the time-varying coefficient matrix, q(x_t) = [q_1(x_t), ..., q_m(x_t)]^T are m i.i.d. latent Gaussian processes with zero mean and the same covariance function k(x_t, x_t^'), and ϵ=(ϵ_1,...ϵ_m) is measurement noise with ϵ_i ∼ N(0, σ_i^2). The covariance for the m outputs is
cov[y(x_t), y(x_t^')] = A_t A_t'^T k(x_t, x_t^') + diag{σ_i},
where A_t A_t'^T ∈ℝ^m × m is the correlation matrix of the m outputs, and diag{σ_i} is the diagonal matrix with elements {σ_i}_i=1^m.
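An illustrative implementation of the time-varying squared-exponential kernel above is given below; it follows the displayed expression (with the decaying exponent) and is not taken from the reference implementation.

```python
import numpy as np

def nonstationary_se_kernel(x, theta):
    """Gibbs-type kernel with an input-dependent length-scale theta_t.

    x     : (n,) input locations
    theta : (n,) positive length-scales, one per location
    """
    xi, xj = np.meshgrid(x, x, indexing="ij")
    ti, tj = np.meshgrid(theta, theta, indexing="ij")
    prefactor = np.sqrt(2.0 * ti * tj / (ti**2 + tj**2))
    return prefactor * np.exp(-(xi - xj) ** 2 / (2.0 * (ti**2 + tj**2)))
```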
§ SCALABILITY OF DMGP-SS
Since our model assumes that the latent processes z_i(x) are independent of each other, the computational complexity is O(mn^3) when all m outputs have an equal length of n. In comparison, the computational complexity of the classical MGP is O(m^3n^3), much larger than that of ours. Therefore, the proposed model can handle a large number of outputs much more easily.
For example, we test our method on one numerical case with up to mn=8580 points. tab: scalable experiments shows the prediction error and model fitting time. The prediction accuracy of the proposed method is better than that of GP in all three experiments. Moreover, although the third experiment (m=33, n=260) has twice as many points as the first one (m=17, n=260), its fitting time is only about twice that of the first, which confirms that the computational complexity of our method is O(mn^3). In addition, the fitting time of the second experiment (m=17, n=520) is only about four times that of the first one (m=17, n=260), which indicates that the second experiment takes fewer gradient descent steps to converge than the first one does.
This work was supported by NSFC under Grants NSFC-72171003, NSFC-71932006.
69
[AlBahar et al.(2022)AlBahar, Kim, Wang, Yue]BOdeepGP2022
AlBahar A, Kim I, Wang X, Yue X (2022) Physics-constrained bayesian optimization for optimal actuators placement in composite structures assembly. IEEE Transactions on Automation Science and Engineering .
[Alvarez Lawrence(2011)]EfficientConvolvedMGP2011
Alvarez MA, Lawrence ND (2011) Computationally efficient convolved multiple output gaussian processes. The Journal of Machine Learning Research 12:1459–1500.
[Bai et al.(2022)Bai, Safikhani, Michailidis]NonStaAR2022
Bai Y, Safikhani A, Michailidis G (2022) Hybrid modeling of regional covid-19 transmission dynamics in the u.s. IEEE Journal of Selected Topics in Signal Processing 16(2):261–275, <http://dx.doi.org/10.1109/JSTSP.2022.3140703>.
[Boyle Frean(2004)]DependentGP2004
Boyle P, Frean M (2004) Dependent gaussian processes. Advances in neural information processing systems 17.
[Cheng et al.(2021)Cheng, Lu, Peng]NonStaSTKNN2021
Cheng S, Lu F, Peng P (2021) Short-term traffic forecasting by mining the non-stationarity of spatiotemporal patterns. IEEE Transactions on Intelligent Transportation Systems 22(10):6365–6383, <http://dx.doi.org/10.1109/TITS.2020.2991781>.
[Choong et al.(2009)Choong, Charbit, Yan]AutoregressiveImpute2009
Choong MK, Charbit M, Yan H (2009) Autoregressive-model-based missing value estimation for dna microarray time series data. IEEE Transactions on information technology in biomedicine 13(1):131–137.
[Christakos(2012)]STRandomFields2012
Christakos G (2012) Random field models in earth sciences (Courier Corporation).
[Damianou Lawrence(2013)]DeepGP2013
Damianou A, Lawrence ND (2013) Deep gaussian processes. Artificial intelligence and statistics, 207–215 (PMLR).
[Dance Paige(2022)]SSGP2022
Dance H, Paige B (2022) Fast and scalable spike and slab variable selection in high-dimensional gaussian processes. International Conference on Artificial Intelligence and Statistics, 7976–8002 (PMLR).
[Fairchild et al.(2017)Fairchild, Lake, Kattwinkel, Moorman, Bateman, Grieve, Isler, Sahni]VitalSign2017
Fairchild KD, Lake DE, Kattwinkel J, Moorman JR, Bateman DA, Grieve PG, Isler JR, Sahni R (2017) Vital signs and their cross-correlation in sepsis and nec: a study of 1,065 very-low-birth-weight infants in two nicus. Pediatric research 81(2):315–321.
[Fothergill et al.(2012)Fothergill, Mentis, Kohli, Nowozin]GestureData2012
Fothergill S, Mentis H, Kohli P, Nowozin S (2012) Instructing people for training gestural interactive systems. Proceedings of the SIGCHI conference on human factors in computing systems, 1737–1746.
[Frazier(2018)]BayesianOptimization2018
Frazier PI (2018) Bayesian optimization. Recent advances in optimization and modeling of contemporary problems, 255–278 (Informs).
[Fricker et al.(2013)Fricker, Oakley, Urban]MGPemulator2013
Fricker TE, Oakley JE, Urban NM (2013) Multivariate gaussian process emulators with nonseparable covariance structures. Technometrics 55(1):47–56.
[Garg et al.(2012)Garg, Singh, Ramos]NonStaSTGP2012
Garg S, Singh A, Ramos F (2012) Learning non-stationary space-time models for environmental monitoring. Proceedings of the AAAI Conference on Artificial Intelligence, volume 26, 288–294.
[Gelfand et al.(2004)Gelfand, Schmidt, Banerjee, Sirmans]VaryingCoregionalization2004
Gelfand AE, Schmidt AM, Banerjee S, Sirmans C (2004) Nonstationary multivariate process modeling through spatially varying coregionalization. Test 13(2):263–312.
[George McCulloch(1993)]SS1993
George EI, McCulloch RE (1993) Variable selection via gibbs sampling. Journal of the American Statistical Association 88(423):881–889.
[Goulard Voltz(1992)]LMC1992
Goulard M, Voltz M (1992) Linear coregionalization model: tools for estimation and choice of cross-variogram matrix. Mathematical Geology 24(3):269–286.
[Gramacy(2020)]Surrogates2020
Gramacy RB (2020) Surrogates: Gaussian process modeling, design, and optimization for the applied sciences (CRC press).
[Gramacy Lee(2008)]TreeGP2008
Gramacy RB, Lee HKH (2008) Bayesian treed gaussian process models with an application to computer modeling. Journal of the American Statistical Association 103(483):1119–1130.
[Heaton et al.(2017)Heaton, Christensen, Terres]ClusterGP2017
Heaton MJ, Christensen WF, Terres MA (2017) Nonstationary gaussian process models using spatial hierarchical clustering from finite differences. Technometrics 59(1):93–101.
[Heinonen et al.(2016)Heinonen, Mannerström, Rousu, Kaski, Lähdesmäki]Non-stationaryGP2016
Heinonen M, Mannerström H, Rousu J, Kaski S, Lähdesmäki H (2016) Non-stationary gaussian process regression with hamiltonian monte carlo. Artificial Intelligence and Statistics, 732–740 (PMLR).
[Hersbach(2000)]CRPS2000
Hersbach H (2000) Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather and Forecasting 15(5):559–570.
[Hu Wang(2021)]OnlineMGP
Hu Z, Wang C (2021) Nonlinear online multioutput gaussian process for multistream data informatics. IEEE transactions on industrial informatics 18(6):3885–3893.
[Huber et al.(2021)Huber, Koop, Onorante]SparseTVP2021
Huber F, Koop G, Onorante L (2021) Inducing sparsity and shrinkage in time-varying parameter models. Journal of Business & Economic Statistics 39(3):669–683.
[Ishwaran Rao(2005)]SS2005
Ishwaran H, Rao JS (2005) Spike and slab variable selection: frequentist and bayesian strategies. The Annals of Statistics 33(2):730–773.
[Kalli Griffin(2014)]SparseTVP2014
Kalli M, Griffin JE (2014) Time-varying sparsity in dynamic regression models. Journal of Econometrics 178(2):779–793.
[Kingma Ba(2015)]ADAM2014
Kingma DP, Ba JL (2015) Adam: A method for stochastic optimization. Proceedings of 3rd International Conference on Learning Representations.
[Ko Kim(2022)]DeepGP2022
Ko J, Kim H (2022) Deep gaussian process models for integrating multifidelity experiments with nonstationary relationships. IISE Transactions 54(7):686–698.
[Kontar et al.(2018)Kontar, Zhou, Sankavaram, Du, Zhang]RULMGP2018
Kontar R, Zhou S, Sankavaram C, Du X, Zhang Y (2018) Nonparametric modeling and prognosis of condition monitoring signals using multivariate gaussian convolution processes. Technometrics 60(4):484–496.
[Kuss Rasmussen(2003)]GPRL
Kuss M, Rasmussen C (2003) Gaussian processes in reinforcement learning. Advances in neural information processing systems 16.
[Lee et al.(2023)Lee, Wang, Wu, Cai, Yue]ActivePartitionGP2023
Lee C, Wang K, Wu J, Cai W, Yue X (2023) Partitioned active learning for heterogeneous systems. Journal of Computing and Information Science in Engineering 23(4):041009.
[Liu et al.(2020)Liu, Gong, Yang, Chen]NonStaRNN2020
Liu Y, Gong C, Yang L, Chen Y (2020) Dstp-rnn: A dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction. Expert Systems with Applications 143:113082.
[Liu et al.(2022)Liu, Wu, Wang, Long]NonStaTransformers2022
Liu Y, Wu H, Wang J, Long M (2022) Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in Neural Information Processing Systems 35:9881–9893.
[Matthews et al.(2017)Matthews, van der Wilk, Nickson, Fujii, Boukouvalas, León-Villagrá, Ghahramani, Hensman]GPflow2017
Matthews AGdG, van der Wilk M, Nickson T, Fujii K, Boukouvalas A, León-Villagrá P, Ghahramani Z, Hensman J (2017) GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research 18(40):1–6.
[Meng et al.(2021)Meng, Soper, Lee, Liu, Greene, Ray]Non-stationaryMGP2021
Meng R, Soper B, Lee HK, Liu VX, Greene JD, Ray P (2021) Nonstationary multivariate gaussian processes for electronic health records. Journal of Biomedical Informatics 117:103698.
[Moore(1990)]MountainCar1990
Moore AW (1990) Efficient memory-based learning for robot control. Technical report, University of Cambridge, Computer Laboratory.
[Paciorek Schervish(2003)]Non-stationaryGP2003
Paciorek C, Schervish M (2003) Nonstationary covariance functions for gaussian process regression. Advances in neural information processing systems 16.
[Padakandla et al.(2020)Padakandla, KJ, Bhatnagar]NonStationaryRL2020
Padakandla S, KJ P, Bhatnagar S (2020) Reinforcement learning algorithm for non-stationary environments. Applied Intelligence 50:3590–3606.
[Pan Yang(2009)]TransferSurvey2009
Pan SJ, Yang Q (2009) A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359.
[Park(2022)]JumpGP2022
Park C (2022) Jump gaussian process model for estimating piecewise continuous regression functions. Journal of Machine Learning Research 23(278):1–37.
[Paun et al.(2023)Paun, Husmeier, Torney]Non-stationaryGP2023
Paun I, Husmeier D, Torney CJ (2023) Stochastic variational inference for scalable non-stationary gaussian process regression. Statistics and Computing 33(2):44.
[Rangapuram et al.(2018)Rangapuram, Seeger, Gasthaus, Stella, Wang, Januschowski]NonStaRNN2018
Rangapuram SS, Seeger MW, Gasthaus J, Stella L, Wang Y, Januschowski T (2018) Deep state space models for time series forecasting. Advances in neural information processing systems 31.
[Ročková George(2014)]EMVS2014
Ročková V, George EI (2014) Emvs: The em approach to bayesian variable selection. Journal of the American Statistical Association 109(506):828–846.
[Rockova McAlinn(2021)]DynamicSS2021
Rockova V, McAlinn K (2021) Dynamic variable selection with spike-and-slab process priors. Bayesian Analysis 16(1):233–269.
[Rodrigues et al.(2019)Rodrigues, Henrickson, Pereira]MGPtraffic2019
Rodrigues F, Henrickson K, Pereira FC (2019) Multi-output gaussian processes for crowdsourced traffic data imputation. IEEE Transactions on Intelligent Transportation Systems 20(2):594–603, <http://dx.doi.org/10.1109/TITS.2018.2817879>.
[Scheipl et al.(2012)Scheipl, Fahrmeir, Kneib]FunctionSelection2012
Scheipl F, Fahrmeir L, Kneib T (2012) Spike-and-slab priors for function selection in structured additive regression models. Journal of the American Statistical Association 107(500):1518–1532.
[Shand Li(2017)]NonStaAugment2017
Shand L, Li B (2017) Modeling nonstationarity in space and time. Biometrics 73(3):759–768.
[Shen et al.(2023)Shen, Gnanasambandam, Wang, Kong]MGPBO2023
Shen B, Gnanasambandam R, Wang R, Kong ZJ (2023) Multi-task gaussian process upper confidence bound for hyperparameter tuning and its application for simulation studies of additive manufacturing. IISE Transactions 55(5):496–508.
[Stathopoulos Karlaftis(2003)]ARIMA2003
Stathopoulos A, Karlaftis MG (2003) A multivariate state space approach for urban traffic flow modeling and prediction. Transportation Research Part C: Emerging Technologies 11(2):121–135.
[Ton et al.(2018)Ton, Flaxman, Sejdinovic, Bhatt]NonStaFourier2018
Ton JF, Flaxman S, Sejdinovic D, Bhatt S (2018) Spatial mapping with gaussian processes and nonstationary fourier features. Spatial statistics 28:59–78.
[Verstraeten et al.(2020)Verstraeten, Libin, Nowé]LMCRL
Verstraeten T, Libin PJ, Nowé A (2020) Fleet control using coregionalized gaussian process policy iteration. ECAI, 1571–1578.
[Wang et al.(2020a)Wang, Hamelijnck, Damoulas, Steel]NonStaRandomFields2020
Wang K, Hamelijnck O, Damoulas T, Steel M (2020a) Non-separable non-stationary random fields. International Conference on Machine Learning, 9887–9897 (PMLR).
[Wang et al.(2020b)Wang, Cao, Philip]STDeep2020
Wang S, Cao J, Philip SY (2020b) Deep learning for spatio-temporal data mining: A survey. IEEE transactions on knowledge and data engineering 34(8):3681–3700.
[Wang et al.(2022)Wang, Wang, Song, Kirby, Wu]RegularizedMGP2022
Wang X, Wang C, Song X, Kirby L, Wu J (2022) Regularized multi-output gaussian convolution process with domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(5):6142–6156.
[Wang et al.(2019)Wang, Zhang, Zhu, Long, Wang, Yu]NonStaLSTM2019
Wang Y, Zhang J, Zhu H, Long M, Wang J, Yu PS (2019) Memory in memory: A predictive neural network for learning higher-order non-stationarity from spatiotemporal dynamics. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9154–9162.
[Wen et al.(2023)Wen, Zhou, Zhang, Chen, Ma, Yan, Sun]TransformersSurvey2023
Wen Q, Zhou T, Zhang C, Chen W, Ma Z, Yan J, Sun L (2023) Transformers in time series: a survey. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 6778–6786.
[Williams Rasmussen(2006)]GP2006
Williams CK, Rasmussen CE (2006) Gaussian processes for machine learning, volume 2 (MIT press Cambridge, MA).
[Xia et al.(2016)Xia, Wang, Li, Li, Zhang]STKNN2016
Xia D, Wang B, Li H, Li Y, Zhang Z (2016) A distributed spatial–temporal weighted model on mapreduce for short-term traffic flow forecasting. Neurocomputing 179:246–263.
[Xu et al.(2022)Xu, Wu, Yue, Li]DynamicSubspace2022
Xu R, Wu J, Yue X, Li Y (2022) Online structural change-point detection of high-dimensional streaming data via dynamic sparse subspace learning. Technometrics 1–14.
[Yang et al.(2020)Yang, Xu, Wu, Wang]MultiTaskRL2020
Yang R, Xu H, Wu Y, Wang X (2020) Multi-task reinforcement learning with soft modularization. Advances in Neural Information Processing Systems 33:4767–4777.
[Ye Dai(2021)]TransferForecasting2021
Ye R, Dai Q (2021) Implementing transfer learning across different datasets for time series forecasting. Pattern Recognition 109:107617.
[Yoon Li(2018)]PositiveTransfer2018
Yoon H, Li J (2018) A novel positive transfer learning approach for telemonitoring of parkinson’s disease. IEEE Transactions on Automation Science and Engineering 16(1):180–191.
[Yun et al.(2022)Yun, Zhang, Li]STRandomFields2022
Yun S, Zhang X, Li B (2022) Detection of local differences in spatial characteristics between two spatiotemporal random fields. Journal of the American Statistical Association 117(537):291–306.
[Zhang et al.(2021)Zhang, Yan, Lee, Shi]DynamicSubspace2021
Zhang C, Yan H, Lee S, Shi J (2021) Dynamic multivariate functional data modeling via sparse subspace learning. Technometrics 63(3):370–383.
[Zhang et al.(2016)Zhang, Wang, Chen]QualityMonitoring2016
Zhang L, Wang K, Chen N (2016) Monitoring wafers’ geometric quality using an additive gaussian process model. IIE Transactions 48(1):1–15.
[Zhang Yang(2021)]Multi-taskSurvey2021
Zhang Y, Yang Q (2021) A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering 34(12):5586–5609.
[Zhu et al.(1997)Zhu, Byrd, Lu, Nocedal]BFGS1997
Zhu C, Byrd RH, Lu P, Nocedal J (1997) Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on mathematical software (TOMS) 23(4):550–560.
[Zhu et al.(2023)Zhu, Lin, Jain, Zhou]TransferRL2023
Zhu Z, Lin K, Jain AK, Zhou J (2023) Transfer learning in deep reinforcement learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence .
[Zou et al.(2023)Zou, Careem, Dutta, Thawdar]NonStaRandomFields2023
Zou Z, Careem M, Dutta A, Thawdar N (2023) Joint spatio-temporal precoding for practical non-stationary wireless channels. IEEE Transactions on Communications 71(4):2396–2409.
A Medical Multimodal Large Language Model for Pediatric Pneumonia
Weiwei Tian, Xinyu Huang, Tianhao Cheng, Wen He, Jinwu Fang, Rui Feng, Daoying Geng, Xiaobo Zhang
Manuscript received September 4, 2024. This work was supported in part by the National Natural Science Foundation of China (No.62172101), and in part by the Science and Technology Commission of Shanghai Municipality (No.22511106003, No.23511100602) and the Municipal Hospital Frontier Joint Research Project (No.SHDC12024136), which studies Evaluation Indicator Construction and Clinical Application Management for a Diagnostic and Treatment Assistant Large-scale Model for Pediatric Severe Pneumonia. (Xinyu Huang and Tianhao Cheng contributed equally. Corresponding author: Xiaobo Zhang.)
Weiwei Tian, Rui Feng, and Daoying Geng are with the Academy for Engineering and Technology, Fudan University, No. 220 Handan Road, Shanghai 200433, China (e-mail: {wwtian20, fengrui}@fudan.edu.cn, [email protected]).
Xinyu Huang, Tianhao Cheng, and Rui Feng are with the School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, No. 2005 Songhu Road, Shanghai 200438, China (e-mail: [email protected], [email protected]).
Wen He, Rui Feng, and Xiaobo Zhang are with the Department of Respiratory Medicine, Children’s Hospital of Fudan University, No. 399 Wanyuan Road, Shanghai 201102, China (e-mail: [email protected], [email protected]).
Jinwu Fang is with the School of Public Health, Fudan University, No. 130 Dongan Road, Shanghai 200032, China (e-mail: [email protected]).
Daoying Geng is also with the Department of Radiology, Huashan Hospital, Fudan University, No. 12 Wulumuqi Rd. Middle, Shanghai 200040, China.
September 9, 2024
§ ABSTRACT
Pediatric pneumonia is the leading cause of death among children under five years worldwide, imposing a substantial burden on affected families. Currently, there are three significant hurdles in diagnosing and treating pediatric pneumonia. Firstly, pediatric pneumonia shares similar symptoms with other respiratory diseases, making rapid and accurate differential diagnosis challenging. Secondly, primary hospitals often lack sufficient medical resources and experienced doctors. Lastly, providing personalized diagnostic reports and treatment recommendations is labor-intensive and time-consuming. To tackle these challenges, we proposed a Medical Multimodal Large Language Model for Pediatric Pneumonia (P2Med-MLLM). This was the first foundation model tailored for patients primarily diagnosed with pediatric pneumonia, capable of handling diverse clinical tasks—such as generating free-text radiology reports and medical records—within a unified framework. Specifically, P2Med-MLLM can process both pure text and image-text data, trained on an extensive and large-scale dataset (P2Med-MD), including real clinical information from 163,999 outpatient and 8,684 inpatient cases. This dataset comprised 2D chest X-ray images, 3D chest Computed Tomography (CT) images, corresponding radiology reports, and outpatient and inpatient records. P2Med-MLLM combined a Large Language Model (LLM) with a vision encoder, fine-tuning them together to handle multiple temporally sequenced and interleaved 2D or 3D images with corresponding radiology reports using a perceiver module. We designed a three-stage training strategy to enable P2Med-MLLM to comprehend medical knowledge and follow instructions for various clinical tasks. To rigorously evaluate P2Med-MLLM’s performance, we developed P2Med-MBench, a benchmark consisting of 642 meticulously verified samples by pediatric pulmonology specialists, covering six clinical decision-support tasks and a balanced variety of diseases. The automated scoring results demonstrated the superiority of P2Med-MLLM. This work plays a crucial role in assisting primary care doctors with prompt disease diagnosis and treatment planning, alleviating patient disease burden, reducing severe symptom mortality rates, and optimizing the allocation of medical resources.
§ INTRODUCTION
In 2021 alone, more than 0.5 million children under the age of five died from Lower Respiratory Infections (LRI) worldwide, accounting for 12% of total deaths <cit.>. Among LRI, pediatric pneumonia, especially when accompanied by severe symptoms and complications, has the highest morbidity and mortality, particularly in developing countries <cit.>. Pediatric pneumonia, bronchitis, and asthma share similar symptoms like coughing and wheezing, making prompt diagnosis upon admission very challenging <cit.>. Limited healthcare resources and a lack of experienced doctors in primary hospitals exacerbate this situation, leading to misdiagnoses and inappropriate treatments.
To meet the growing demands of precision medicine, deep learning-based technologies have emerged for identifying pediatric respiratory diseases <cit.>, performing early triage <cit.>, and predicting clinical outcomes <cit.>. Despite achieving or nearing human expert performance, these models primarily treated clinical tasks as simple classification or regression problems, falling short of providing detailed and reliable diagnostic bases and treatment recommendations.
Recently, Multimodal Large Language Models (MLLMs) have experienced exponential growth in general domains <cit.>, but they were still not fully capable of effectively supporting real-world clinical applications <cit.>. The essential reason was that, to protect patient privacy, these models were mainly trained on medical textbooks and literature from the internet, without exposure to real and comprehensive medical data. Aligning with human doctors has significantly improved MLLMs' performance across various specialties (e.g., radiology, pathology, ophthalmology, and dermatology) and tasks (e.g., disease diagnosis, medical image generation, medical image caption, medical report generation, medical report summarization, rationale diagnosis, survival prediction, medical image-text retrieval, medical report quality assessment, medical question answering, and medical visual question answering) <cit.> in the healthcare field. Inspired by the aforementioned work, we aim to explore the feasibility of MLLMs using real clinical data on pediatric pneumonia. Given the complexity of pediatric pneumonia, we mainly face challenges from three aspects:
* Lack of a large-scale and high-quality multimodal dataset for training: Due to the rapid physical development of children, their medical imaging, laboratory tests, and demographic data significantly differ from those of adults. Currently, there is a shortage of a large-scale pediatric pneumonia dataset that reflects real clinical scenarios. Moreover, real-world data tend to be noisy and have a long-tailed distribution, which can severely impact model training effectiveness.
* Lack of a unified and compatible model architecture: Addressing the diverse clinical needs in the full process of diagnosing and treating pediatric pneumonia requires a model architecture that can efficiently handle different modalities, sequences, and time-series data inputs, and produce outputs for various tasks in a unified manner. Currently, such a compatible model structure is lacking.
* Lack of a comprehensive and objective evaluation benchmark: A comprehensive and objective benchmark is crucial for supervising model training and assessing performance. There is a lack of a high-quality evaluation benchmark that covers a wide range of clinical tasks and disease categories.
To address the obstacles of applying MLLMs to pediatric pneumonia, we developed a Medical Multimodal Large Language Model tailored for Pediatric Pneumonia (P2Med-MLLM, Fig. <ref>), which was trained and deployed on a local server within the hospital environment to ensure data security and privacy.
To effectively train P2Med-MLLM, we constructed the first large-scale Chinese Medical Multimodal Dataset for Pediatric Pneumonia (P2Med-MD), covering real clinical information from 163,999 outpatients and 8,684 inpatients. Specifically, we collected comprehensive medical data for patients with a primary diagnosis of pediatric pneumonia, including 2D chest X-ray and 3D chest Computed Tomography (CT) images, corresponding radiology reports, outpatient records, and three-level inpatient records reflecting disease progression <cit.> (Fig. <ref>). During the three training stages, we ensured data quality by deduplication, task type-based balanced sampling, and disease category-based balanced sampling (Fig. <ref>).
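As an illustration of the category-balanced sampling step, the sketch below caps the number of training examples drawn per disease category; the record structure (a `disease` field) and the per-category cap are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

def balanced_sample(records, n_per_class, key=lambda r: r["disease"], seed=0):
    """Cap the number of records drawn from each disease category so that
    long-tailed categories do not dominate fine-tuning (illustrative only)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for r in records:
        by_class[key(r)].append(r)
    sampled = []
    for cls, items in by_class.items():
        rng.shuffle(items)
        sampled.extend(items[:n_per_class])
    rng.shuffle(sampled)
    return sampled
```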
As for the architecture of P2Med-MLLM, it consisted of three core components: a Large Language Model (LLM, Chinese-LLaMA-2 <cit.>), a CLIP-pretrained vision encoder <cit.>, and a perceiver module <cit.>. This design enabled P2Med-MLLM to retain its original capabilities in understanding and generating pure text (outpatient and inpatient records), while also allowing it to interleave multiple 2D chest X-rays or 3D chest CT images with corresponding radiology reports. This facilitated comparative analysis of a patient's radiological examinations over different time points, aligning more closely with clinical practice.
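To make the image-text interface concrete, the following is a schematic perceiver-style resampler in the spirit of the description above; the layer sizes, names, and exact attention layout are assumptions, not the P2Med-MLLM implementation.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Compress a variable number of visual patch features (from 2D X-ray or
    3D CT slices) into a fixed set of latent tokens for the LLM (illustrative)."""

    def __init__(self, dim=1024, num_latents=32, num_heads=8, depth=2):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "norm": nn.LayerNorm(dim),
                "attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "ff": nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                    nn.GELU(), nn.Linear(4 * dim, dim)),
            }) for _ in range(depth)
        ])

    def forward(self, visual_feats):
        # visual_feats: (batch, num_patches, dim), patches of one or more images
        b = visual_feats.size(0)
        x = self.latents.unsqueeze(0).expand(b, -1, -1)
        for layer in self.layers:
            kv = torch.cat([visual_feats, x], dim=1)   # latents also attend to themselves
            attn_out, _ = layer["attn"](layer["norm"](x), kv, kv)
            x = x + attn_out
            x = x + layer["ff"](x)
        return x  # (batch, num_latents, dim) visual tokens interleaved with text
```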
For evaluation, we initialized a Medical Multimodal Benchmark for Pediatric Pneumonia, termed P2Med-MBench. This benchmark covered various disease categories and valuable clinical tasks, including radiology report generation (X-ray), radiology report generation (CT), outpatient medical record generation, first disease course record generation, attending physician's first ward round record generation, and chief physician's first ward round record generation (Fig. <ref>). All real-world data have been meticulously verified by professional pediatric pulmonology specialists to ensure quality and representativeness. Automatic scoring of results generated by P2Med-MLLM and other open-source LLMs on P2Med-MBench, along with a series of ablation studies, demonstrated the superiority of our approach.
Overall, the main contributions of our work are summarized as follows:
* Construct a large-scale and high-quality multimodal dataset (P2Med-MD): Based on real clinical scenarios, we developed the first Chinese medical multimodal dataset for patients with a primary diagnosis of pediatric pneumonia. This dataset covered various diseases and clinical tasks.
* Propose a unified and compatible multimodal model architecture (P2Med-MLLM): For different clinical tasks, we introduced the first model capable of handling both pure text data (outpatient and inpatient records) and temporally sequenced, interleaved image-text pairs (2D X-rays and 3D CT images, along with corresponding radiology reports).
* Establish a comprehensive and objective multimodal evaluation benchmark (P2Med-MBench): To supervise the training process and objectively evaluate different models, we meticulously designed a multimodal benchmark with balanced distributions of diseases and clinical tasks. Extensive quantitative and qualitative experimental results demonstrated the effectiveness of our approach.
§ RESULTS
In this section, we conducted experiments on various tasks, including radiology report generation (X-ray), radiology report generation (Computed Tomography, CT), outpatient medical record generation, first disease course record generation, attending physician's first ward round record generation, and chief physician's first ward round record generation. We began by describing the evaluation metrics used for the experiments. Then, we presented the quantitative and qualitative results of our framework on the Medical Multimodal Benchmark for Pediatric Pneumonia (P2Med-MBench).
§.§ Evaluation Metrics
To evaluate the professional performance of various Medical Multimodal Large Language Model variants for Pediatric Pneumonia (P2Med-MLLM) and baseline models, we utilized 13B Chinese-LLaMA-2 <cit.> with a one-shot in-context example to automatically score the generated open-ended responses and provide reasons for the given scores. Our pediatric pulmonology specialists meticulously curated a set of examples across tasks and evaluation components based on their clinical expertise. For the most critical evaluation components in a range of tasks, such as impression or diagnosis results, we employed accuracy and comprehensiveness metrics. For other evaluation components, we used accuracy alone. We assessed the quality of the generated answers using a 5-point scale, as depicted in Fig. <ref>. The original Chinese version can be found in Fig. <ref>. The 95% confidence interval for each metric was calculated using the t-distribution.
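For concreteness, the sketch below shows how such 1-5 judge scores could be aggregated and how a 95% confidence interval based on the t-distribution can be computed. The score-parsing logic and function names are illustrative assumptions and do not reproduce the actual evaluation code.

```python
# Sketch: aggregate 1-5 LLM-judge scores and report a 95% t-distribution interval.
import re
import numpy as np
from scipy import stats

def parse_score(judge_reply: str) -> float:
    """Pull the first integer between 1 and 5 out of the judge's free-text reply."""
    match = re.search(r"[1-5]", judge_reply)
    if match is None:
        raise ValueError("no score found in judge reply")
    return float(match.group())

def mean_with_ci(scores, confidence=0.95):
    """Mean score with a t-distribution confidence interval."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    sem = stats.sem(scores)  # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2.0, df=len(scores) - 1)
    return mean, (mean - half_width, mean + half_width)

replies = ["Score: 4. The impression ...", "Score: 5 ...", "Score: 3 ..."]
print(mean_with_ci([parse_score(r) for r in replies]))
```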
§.§ Radiology Report Generation (X-ray)
As shown in Table <ref>, Large Language Models (LLMs) such as Baichuan 2 <cit.> and Chinese-LLaMA-2 <cit.> can only process pure texts. In clinical practice, X-ray images were crucial for screening and diagnosing pediatric pneumonia. By incorporating a perceiver with an LLM, P2Med-MLLM can handle sequential 2D X-ray images and generate corresponding radiology reports. Fig. <ref> demonstrated that P2Med-MLLM was capable of processing X-ray images taken at different times from the same patient in two conversation turns. The model generated different impressions: “bronchopneumonia" (October 29, 2022) and “bronchopneumonia resolved" (November 11, 2022), effectively reflecting the patient's disease progression. The original Chinese results were detailed in Fig. <ref>.
§.§ Radiology Report Generation (CT)
In addition to 2D X-ray images, P2Med-MLLM can also generate radiology reports for 3D CT images. As shown in Fig. <ref> and Fig. <ref>, the model successfully identified critical radiological features in the images and recognized underlying diseases.
§.§ Outpatient Medical Record Generation
Generating outpatient medical records was a challenging and open-ended task that required comprehensive analysis of the outpatient's chief complaint, history of present illness, and physical examination. As indicated in Table <ref>, the 8B P2Med-MLLM demonstrated significant improvements compared to other LLMs (such as the 7B or 13B Baichuan 2 or Chinese-LLaMA-2). For example, P2Med-MLLM increased the accuracy of preliminary diagnosis from 2.96 to 3.37 and improved the comprehensiveness from 3.26 to 4.17. These results indicated that learning to generate radiology reports did not compromise P2Med-MLLM's pure-text capabilities. Fig. <ref> and Fig. <ref> showed that P2Med-MLLM can make accurate diagnoses in free-text format and provide highly relevant treatment recommendations and plans, although it omitted the instruction “follow-up appointments as necessary".
§.§ First Disease Course Record Generation
Generating resident physician's first disease course records for inpatients was more challenging because it required comprehensive analysis of various information, including the history of present illness, physical examination, auxiliary examinations, and clinical history features. As depicted in Table <ref>, compared to the suboptimal models, the 7B or 13B Baichuan 2, P2Med-MLLM showed an improvement in the accuracy and comprehensiveness of admission diagnosis by 0.25 and 0.35, respectively. Qualitatively, as shown in Fig. <ref> and Fig. <ref>, P2Med-MLLM can understand the provided patient information and questions, generating a relatively accurate diagnostic basis, admission diagnosis, and diagnostic and treatment plan in a standardized format. However, some details still needed improvement. For instance, the chest X-ray on October 1, 2023, showed “bronchitis", while the chest X-ray on October 8, 2023, showed “minor inflammation in both lungs and right upper and lower pulmonary emphysema". The most recent examination results should be prioritized. Additionally, there were some errors in the nursing level recommendations and medication guidelines that needed to be addressed.
§.§ Attending Physician's First Ward Round Record Generation
Generating attending physician's first ward round records for inpatients was also a crucial and meaningful task for generative medical foundation models. This task involved using input clinical history features and additional clinical history and signs to produce the patient's diagnostic basis, current diagnosis, and diagnostic and treatment plan. In Table <ref>, the 8B P2Med-MLLM outperformed the 13B Baichuan 2 by 0.07 in accuracy and 0.12 in comprehensiveness for the current diagnosis, highlighting the advantages of our model. Fig. <ref> and Fig. <ref> demonstrated the effectiveness of P2Med-MLLM in generating the three components for this task. However, the generated record still had some shortcomings, such as omitting critical information like abnormal findings in the “physical examination" section of the diagnostic basis and the “neonatal nutritional risk assessment" in the diagnostic and treatment plan.
§.§ Chief Physician's First Ward Round Record Generation
The task of generating chief physician's first ward round records for inpatients was similar to that of generating attending physician's first ward round records. As shown in Table <ref>, although the 8B P2Med-MLLM maintained advantages in most tasks, components, and metrics, it fell behind the 13B Baichuan 2 by 0.08 in accuracy for the current diagnosis in this specific task. This small gap was remarkable, especially considering that, for the baseline LLMs, namely Baichuan 2 and Chinese-LLaMA-2, the 13B models significantly outperformed the 7B models. This not only demonstrated that more model parameters can lead to further performance improvements, but also indicated that P2Med-MLLM could likely be improved further by scaling up the underlying LLM. Due to computational resource constraints, we opted for a total model size of 8B to balance performance and cost. Fig. <ref> and Fig. <ref> showed that our model provided correct answers in most components, except for the incorrect addition of “congenital" in the diagnosis of hemangioma and the omission of the “nasal cannula oxygen" keyword in the components of the diagnostic basis and diagnostic and treatment plan.
§ DISCUSSION
§.§ Impact of Different Stages and Modalities in P2Med-MLLM
To investigate the impact of different stages and modalities, we provided a thorough ablation study of the Medical Multimodal Large Language Model for Pediatric Pneumonia (P2Med-MLLM) by removing single stage or modality. The results were shown in Table <ref> and Fig. <ref>.
First, we investigated the impact of different training stages on the most critical evaluation components of each task, specifically impression or diagnosis results (columns Full and A-C in Table <ref>). We found that each stage contributed to performance improvement, demonstrating the significance of medical knowledge infusion pre-training, task type-based balanced instruction tuning, and disease category-based balanced instruction tuning in multi-task clinical decision support. Specifically, in descending order of importance, the three stages ranked as stage 1, stage 2, and stage 3.
In addition to the three-stage training strategy, we also evaluated the impact of different modalities, that is, plain text and image-text data. By comparing column Full with columns D and E in Table <ref>, respectively, we observed that removing either modality adversely affected the performance of the other (at least 0.6 on average). These results suggested that tasks involving both modalities were mutually beneficial to some extent. Notably, image-text tasks had a more significant influence on plain text tasks.
Next, as shown in Fig. <ref>, we explored the performance of P2Med-MLLM across different stages and modalities for all evaluation components of each task. Compared to incomplete stages and modalities, P2Med-MLLM demonstrated significant advantages. Specifically, P2Med-MLLM outperformed others in 4 out of the 6 most crucial evaluation components and in 9 out of 16 evaluation components overall. These observations suggested that while P2Med-MLLM achieved the best results, some evaluation components, such as diagnostic basis and treatment plan, still showed room for improvement. We believed the reason behind this was that the ground truth for these open-ended evaluation components was inherently diverse, making automatic scoring with language models more challenging. Therefore, we focused primarily on the most crucial and standardized evaluation components of each task.
§.§ Impact of Different Tasks in P2Med-MLLM
Traditional methods typically involved training a network on a subset for a specific medical task. While intuitive, such a training strategy significantly increased computational complexity. To demonstrate the effectiveness of P2Med-MLLM trained jointly on multiple tasks, we compared it with multiple single-task dedicated networks on the most crucial evaluation components using accuracy and comprehensiveness metrics. As shown in Fig. <ref>, joint training by P2Med-MLLM yielded substantial performance improvements, particularly in tasks such as radiology report generation (Computed Tomography, CT), outpatient medical record generation, and chief physician’s first ward round record generation. We utilized a generative network to unify all tasks, and this flexible structure ensured performance while easily extending to new tasks. It was valuable in the real world for assisting clinicians in completing multiple tasks.
§.§ Impact of Different Conversation Forms in P2Med-MLLM
We found that during a single outpatient or inpatient visit for each patient, there may be multiple scans for the same imaging modality, reflecting changes in the patient's condition. Thus, for the radiology report generation task, we constructed a multi-round conversation using all scans of the same imaging modality from a single visit, arranged in chronological order. Using X-ray scans as an example, we compared the performance of models with and without multi-round conversations. As shown in Table <ref>, “w/o MRC" indicated treating each scan as an independent single-round conversation. Although P2Med-MLLM and “w/o MRC" achieved comparable results in the evaluation component of findings, adopting multi-round conversations showed a notable advantage in the most crucial evaluation component of impression, exceeding in both accuracy and comprehensiveness metrics by at least 0.37. This demonstrated that the temporal information was critical to perform radiological diagnosis.
§.§ Impact of Different Large Language Models in P2Med-MLLM
In this subsection, we explored different Large Language Models (LLMs) in P2Med-MLLM using the Medical Multimodal Benchmark for Pediatric Pneumonia (P2Med-MBench). Specifically, we compared models based on Baichuan 2 <cit.> and Chinese-LLaMA-2 <cit.>. The results in Table <ref> showed that the Chinese-LLaMA-2-based model significantly outperformed the Baichuan 2-based model on average and on most tasks, except for the outpatient medical record generation task. Therefore, we chose Chinese-LLaMA-2 as the LLM for P2Med-MLLM.
§ OUTLOOK
Multimodal Large Language Models (MLLMs) have brought substantial advancements in the healthcare field. In this study, we preliminarily explored and demonstrated the feasibility of securely and effectively training and deploying a MLLM on private hospital data, specifically focusing on real clinical scenarios involving patients with a primary diagnosis of pediatric pneumonia. Our work encompassed the entire process, from data collection and cleaning to model construction and evaluation, offering a valuable reference for researchers in the interdiscipline of artificial intelligence for medicine. We built the largest Chinese Medical Multimodal Dataset for Pediatric Pneumonia (P2Med-MD) so far. Different from previous efforts, the Medical Multimodal Large Language Model for Pediatric Pneumonia (P2Med-MLLM) employed a unified framework that supported both pure text data (outpatient and inpatient records) and temporally sequenced, interleaved 2D or 3D medical images alongside radiology reports, aligning more closely with clinical practice. P2Med-MLLM could potentially serve as a clinical assistant, helping doctors enhance diagnostic and treatment efficiency, providing personalized recommendations for pediatric pneumonia patients, and optimizing clinical workflows.
Despite the progress achieved in our research, there are several limitations. Firstly, for complicated and open-ended clinical tasks, such as generating diagnostic bases and treatment plans in medical records, the performance of P2Med-MLLM still falls short of real clinical applications. Additionally, the automatic scoring system lacks robustness, highlighting the need for more objective evaluation metrics. Secondly, this study only includes patients primarily diagnosed with pediatric pneumonia. Future work could extend the objects to cover all respiratory diseases, or even the entire spectrum of general medicine across all age groups. Thirdly, the study is limited to a single-center cohort, and data collection from multiple healthcare institutions and countries would enhance diversity and generalizability. Lastly, as this is a retrospective study, future research could explore prospective studies.
§ METHODS
In this section, we provided a detailed description of our self-built dataset, the medical multimodal Large Language Model (LLM), and the implementation details.
§.§ Medical Multimodal Dataset for Pediatric Pneumonia (P2Med-MD)
Currently, the medical domain faces a significant shortfall in multimodal datasets that accurately reflect real-world clinical scenarios, a crucial element for training a practical medical multimodal LLM. To bridge this gap, we constructed a high-quality, large-scale Chinese Medical Multimodal Dataset for Pediatric Pneumonia (P2Med-MD) through human-machine interaction. P2Med-MD focused on pediatric patients with a primary diagnosis of pneumonia. Here, we started by providing an overview of P2Med-MD in Sec <ref>. It consisted of three sets, namely medical knowledge infusion data, task type-based balanced sampling data, and disease category-based balanced sampling data, corresponding to Sec <ref>, <ref>, and <ref>, respectively. These parts were utilized for the different training stages described in Sec <ref>. In Sec <ref>, we introduced a new Medical Multimodal Benchmark for Pediatric Pneumonia, termed P2Med-MBench, which encompassed six tasks, i.e., radiology report generation (X-ray), radiology report generation (Computed Tomography, CT), outpatient medical record generation, first disease course record generation, attending physician's first ward round record generation, and chief physician's first ward round record generation. These tasks were designed to monitor the development of the medical multimodal LLM.
§.§.§ Overview
The study was approved by the Ethics Committee of Children’s Hospital, Fudan University (2022-307A, approved November 22, 2022). For participants admitted before November 22, 2022, informed consent was waived; for those admitted on or after November 22, 2022, informed consent was obtained. In this retrospective study, we collected the outpatient information of 163,999 patients and the inpatient information of 8,684 patients who were admitted to Children’s Hospital of Fudan University between August 26, 2016 and November 1, 2023. Outpatient information included outpatient medical records, and chest X-ray and CT scans along with corresponding radiology reports. Inpatient information comprised three-level round records formed by first disease course records, attending physician's first ward round records, and chief physician's first ward round records, as well as chest X-ray and CT scans with corresponding radiology reports. The resulting dataset altogether contained 67,616 chest X-ray examinations and 2,321 chest CT examinations along with their respective radiology reports, 684,758 outpatient medical records, 9,180 first disease course records, 9,993 attending physician's first ward round records, and 6,426 chief physician's first ward round records. More details were given in Table <ref>. Fig. <ref> illustrated the distribution of gender, age, and image modalities in P2Med-MD.
§.§.§ Stage 1: Medical Knowledge Infusion Data
Considering the complexity of inpatient records, stage 1 injected medical knowledge into the general model by learning from all radiology image-report pairs (including X-ray and CT) and simple outpatient medical records. Fig. <ref> depicted the disease categories derived from the impression in X-ray radiology reports, with each category comprising over 100 samples during stage 1. Fig. <ref> displayed the disease categories extracted from the impression in CT radiology reports, each with more than 10 samples. And Fig. <ref> outlined the disease categories identified from the preliminary diagnosis in outpatient medical records, each encompassing over 400 samples.
However, we observed a significant amount of repetitive descriptions within the outpatient records. Prior studies <cit.> have demonstrated that repetition in training data can degrade model performance. Thus, it was crucial to perform deduplication to ensure the quality of outpatient records. The data deduplication scheme employed in previous studies <cit.> typically relied on non-whitespace exact text matching, which was suboptimal due to the diverse writing styles of different doctors. <cit.> indicated that near-deduplication could enhance performance. We followed this pipeline, largely inheriting the settings from CodeParrot <cit.>. It involved calculating MinHashes <cit.> of all outpatient records and applying Locality-Sensitive Hashing (LSH) to cluster records based on their MinHash fingerprints. During the LSH phase, similar outpatient records were grouped into the same buckets, thereby identifying them as duplicates. From each group of duplicates, only one record was retained. Fig. <ref> illustrated the disease distribution of these outpatient medical records after deduplication.
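The snippet below sketches such MinHash/LSH near-deduplication using the open-source datasketch library. The n-gram size and number of permutations shown here are illustrative placeholders; the exact settings used for P2Med-MD are listed in the Methods section.

```python
# Sketch of near-deduplication with MinHash + Locality-Sensitive Hashing
# (datasketch library). Parameters are illustrative, not the paper's exact values.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, n: int = 5, num_perm: int = 128) -> MinHash:
    """MinHash fingerprint over character n-grams of one outpatient record."""
    m = MinHash(num_perm=num_perm)
    for i in range(len(text) - n + 1):
        m.update(text[i:i + n].encode("utf-8"))
    return m

def near_deduplicate(records, threshold=0.85, num_perm=128):
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for idx, text in enumerate(records):
        mh = minhash_of(text, num_perm=num_perm)
        if lsh.query(mh):          # a near-duplicate was already stored
            continue
        lsh.insert(str(idx), mh)   # first record of a new duplicate group
        kept.append(text)
    return kept
```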
§.§.§ Stage 2: Task Type-Based Balanced Sampling Data
To ensure the model effectively followed diverse task instructions, we curated a variety of instruction-following data covering six distinct tasks in stage 2. Given the two orders of magnitude difference in sample volumes between outpatient and inpatient medical records, it was essential to balance the number of outpatient and inpatient records, as the generative model was sensitive to data imbalances. By looping through disease categories with sample sizes between 325 and 5,000 in outpatient records of stage 1, a maximum of 500 samples per category were sampled without repetition until the total sample size was balanced with that of inpatient records. Due to the multi-label nature of the preliminary diagnosis in outpatient records, the sampled data included a broader range of disease categories. Fig. <ref>, <ref>, <ref>, <ref>, <ref>, and <ref> depicted the distribution of disease categories for six tasks during stage 2. Specifically, Fig. <ref>, <ref>, and <ref> illustrated the disease categories extracted from the admission diagnosis or current diagnosis in three-level inpatient medical records, each category featuring over 40 samples.
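To make the balancing procedure concrete, the snippet below sketches one plausible reading of this capped, category-wise sampling loop. The data structure `records_by_category` and the per-pass cap are hypothetical and serve only to illustrate the balancing idea, not the exact sampling code.

```python
# Sketch: draw up to `cap` outpatient records per eligible disease category,
# without repetition, until the outpatient total matches the inpatient total.
import random

def balanced_sample(records_by_category, target_total, lo=325, hi=5000, cap=500):
    eligible = {c: list(r) for c, r in records_by_category.items() if lo <= len(r) <= hi}
    for pool in eligible.values():
        random.shuffle(pool)
    sampled = []
    while len(sampled) < target_total and any(eligible.values()):
        for category, pool in eligible.items():
            take = pool[:cap]      # at most `cap` new samples from this category
            del pool[:cap]
            sampled.extend(take)
            if len(sampled) >= target_total:
                break
    return sampled[:target_total]
```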
§.§.§ Stage 3: Disease Category-Based Balanced Sampling Data
We observed considerable differences in the distribution of disease categories per task in stage 2, potentially impairing the performance of the generative model. To mitigate the long-tail problem, it was essential to perform balanced sampling of disease categories in stage 3. For X-ray image-report pairs, we sampled up to 500 samples per category from those with sample sizes ranging from 100 to 2,000. All CT image-report pairs were included due to the relatively smaller sample size. For outpatient medical records, we sampled up to 200 samples per category from those identified in stage 1 with sample sizes between 325 and 5,000. For three-level inpatient medical records, we included all samples from disease categories containing 40 to 500 samples. Fig. <ref>, <ref>, <ref>, <ref>, <ref>, and <ref> illustrated the distribution of disease categories for six tasks in stage 3, which were more balanced compared to stage 2.
§.§.§ Medical Multimodal Benchmark for Pediatric Pneumonia (P2Med-MBench)
Building upon P2Med-MD, we presented P2Med-MBench, a comprehensive evaluation benchmark for pediatric pneumonia. P2Med-MBench contained six distinct tasks, including radiology report generation (X-ray), radiology report generation (CT), outpatient medical record generation, first disease course record generation, attending physician's first ward round record generation, and chief physician's first ward round record generation. A detailed breakdown of each task, including task description, clinical scenario, modality, image dimension, model input, and model output <cit.>, was shown in Table <ref>.
Notably, P2Med-MD was collected over a continuous period. It was diverse and complex, potentially even containing some noisy data. To guarantee the data quality and representativeness for evaluation, two pediatric pulmonology specialists performed meticulous manual verification of the P2Med-MBench samples. Ultimately, we obtained 121 samples for radiology report generation (X-ray), 121 samples for radiology report generation (CT), 100 samples for outpatient medical record generation, 100 samples for first disease course record generation, 100 samples for attending physician's first ward round record generation, and 100 samples for chief physician's first ward round record generation. These samples were excluded from the training set. Detailed descriptions of the six evaluation tasks were provided in the following.
Radiology report generation (X-ray).
This task primarily focused on the automatic generation of radiology reports for X-ray images, encompassing two key sections: findings and impression. The former provided a detailed description of crucial aspects observed in the 2D X-ray images, while the latter summarized the most relevant findings. Given that an outpatient or inpatient might have one or more X-ray images taken from various views and different times, we incorporated time, modality, and corresponding multi-view images in the input to facilitate correlation and comparison with prior radiological data of the same patient, thereby enabling the generation of more objective and comprehensive radiology reports. For a given set of X-ray images, we employed prompt sentences similar to the following as input: “Current radiological data is as follows: \n [Examination time] December 31, 2022 \n [Examination modality] X-ray \n [Image] image...image \n Based on the above information, combined with professional radiological knowledge, generate a report in the format: \n [Findings] {Your findings based on the images} \n [Impression] {Your impression based on the images} \n". The number of image tokens corresponded to the number of views, with one for the anteroposterior view and two for the anteroposterior and lateral views. The impression, as the most critical component, was assessed using two metrics: accuracy and comprehensiveness, while the findings were evaluated solely on accuracy. To ensure the reliability of our evaluation, we have selected 100 sets of X-ray image-report pairs at unique time points and 10 sets at multiple times from the same patient, altogether comprising 121 samples and covering more than 47 distinct diseases.
Radiology report generation (CT).
This task was similar to the radiology report generation (X-ray) task but was specifically designed for 3D CT images, thereby the examination modality was CT. The number of image tokens denoted the number of series, with one representing a non-contrast series, and two indicating both non-contrast and contrast-enhanced series. The final selection of 121 samples encompassed more than eight types of diseases.
Outpatient medical record generation.
This task imitated the clinical process of a physician's outpatient visit, utilizing textual information such as the chief complaint, history of present illness, and physical examination to formulate a preliminary diagnosis, a treatment recommendation for the patient, and a treatment plan for the doctor. Here, we simulated this task as a prompt-based generative dialogue task. For example, we used the following as input: “Current outpatient pediatric information is as follows: \n [Chief complaint] The pediatric patient presented with cough and fever for 4 days ... \n [History of present illness] Maximum temperature of 40^∘C ... \n [Physical examination] The pediatric patient is conscious and responsive ... \n Based on the above information, combined with professional medical knowledge, make a diagnosis in the format: \n [Preliminary diagnosis] {Your preliminary diagnosis} \n [Treatment recommendation] {Your treatment recommendation}\n [Treatment plan] {Your treatment plan} \n". The output was then matched with the ground truth. The preliminary diagnosis, the most crucial element, was assessed on both accuracy and comprehensiveness, while the treatment recommendation and treatment plan were evaluated only on accuracy. We have selected 100 samples covering more than 55 disease types for preliminary diagnosis.
First disease course record generation.
This task simulated the process by which a resident physician recorded the first disease course for a patient within 24 hours of hospitalization, synthesizing textual information such as history of present illness, physical examination, auxiliary examination, and clinical history features to predict the diagnostic basis, admission diagnosis, and diagnostic and treatment plan. The diagnostic basis explained the causes related to the admission diagnosis, reflecting the model's capacity for logical reasoning. We employed prompt sentences like “Current inpatient pediatric information is as follows: \n [History of present illness] The pediatric patient had a fever without obvious inducement five days ago (October 9, 2023) ... \n [Physical examination] The pediatric patient is conscious and responsive ... \n [Auxiliary examination] October 9, 2023: outpatient blood test ... \n [Clinical history features] Male, 13 years old ... \n Based on the above information, combined with professional medical knowledge, make a diagnosis in the format: \n [Diagnostic basis] {Your diagnostic basis} \n [Admission diagnosis] {Your admission diagnosis} \n [Diagnostic and treatment plan] {Your diagnostic and treatment plan} \n" as input. We focused primarily on the admission diagnosis, evaluating it for both accuracy and comprehensiveness, while the diagnostic basis and diagnostic and treatment plan were assessed only for accuracy. Similarly, we have selected 100 samples that include more than 47 types of diseases for admission diagnosis.
Attending physician's first ward round record generation.
This task simulated how an attending physician performed the first ward round within 72 hours of a patient's hospitalization, analyzing textual data such as clinical history features and additional clinical history and signs to predict the diagnostic basis, current diagnosis, and diagnostic and treatment plan. We utilized the following prompt as input: “Current inpatient pediatric information is as follows: \n [Clinical history features] Male, 13 years old ... \n [Additional clinical history and signs] The pediatric patient continues to experience recurrent fever, peaking at 39.7^∘C ... \n Based on the above information, combined with professional medical knowledge, make a diagnosis in the format: \n [Diagnostic basis] {Your diagnostic basis} \n [Current diagnosis] {Your current diagnosis} \n [Diagnostic and treatment plan] {Your diagnostic and treatment plan} \n". Our primary focus was on the current diagnosis, thus we evaluated the prediction using accuracy and comprehensiveness, while the diagnostic basis and the diagnostic and treatment plan were assessed solely on accuracy. Similarly, we have selected 100 samples, covering over 79 types of diseases for current diagnosis.
Chief physician's first ward round record generation.
This task was similar to the attending physician's first ward round record generation task; however, it specifically simulated the chief physician's first ward round record within one week of a patient's hospitalization. The selected 100 samples encompassed more than 74 types of diseases for current diagnosis.
§.§ Medical Multimodal Large Language Model for Pediatric Pneumonia (P2Med-MLLM)
As illustrated in Fig. <ref>, the architecture of the Medical Multimodal Large Language Model for Pediatric Pneumonia (P2Med-MLLM) primarily consisted of three modules: a pre-trained LLM (e.g., Chinese-LLaMA-2 <cit.>) serving as the foundational model, a pre-trained vision encoder (e.g., CLIP <cit.>) responsible for encoding medical images into image embeddings, and an attention-based perceiver <cit.> that transformed these image embeddings into image tokens compatible with the LLM.
For P2Med-MLLM training, we considered a three-stage procedure, shown in Fig. <ref>. During the first stage of medical knowledge infusion pre-training, only the perceiver module was trainable, facilitating the alignment of multimodal features. Subsequently, during the second and third stages of instruction tuning, the LLM employed Low-Rank Adaptation (LoRA) <cit.> for efficient parameter tuning. All medical data were formatted as either a single-round conversation (plain text and image-text paired data) or multi-round conversations (interleaved image-text data) for model training. Next, we provide a detailed introduction to P2Med-MLLM.
§.§.§ Efficient Large Language Model Finetuning
The LLM pre-trained on web datasets lacked the vertical domain knowledge required for pediatric pneumonia, leading to suboptimal performance for corresponding medical tasks. It was essential to update the LLM parameters using medical data. Due to constraints in computational resources, finetuning the full parameters of the LLM, which consisted of 7B parameters, was unfeasible. To address these challenges, we adopted LoRA for efficient parameter tuning.
LoRA introduced low-rank matrices, denoted as A and B, which had a significantly smaller number of parameters than the original model weights. The adaptation was formulated as:
W' = W + Δ W
where W was the original weight matrix, and Δ W was the low-rank update defined as:
Δ W = A B^T
where A ∈ℝ^d × r and B ∈ℝ^d × r, with r being the rank which was much smaller than d, the dimension of W. T represented the transpose operation.
By training only the low-rank matrices A and B while keeping the original LLM parameters frozen, we achieved efficient optimization with significantly reduced computational overhead. The lightweight nature of these low-rank matrices ensured that there was almost no additional inference latency introduced during the inference stage. Ultimately, we efficiently incorporated critical medical knowledge into the LLM, enhancing its performance in this specialized field without the need for extensive computational resources.
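A minimal PyTorch sketch of such a LoRA-adapted linear layer is given below, following the update W' = W + AB^T described above. This is not the paper's implementation (in practice a library such as PEFT is typically used), and the rank and scaling values are placeholders.

```python
# Minimal LoRA-style linear layer: frozen base weight W plus trainable low-rank update A B^T.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained weights W
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(d_out, rank) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(d_in, rank))          # low-rank factor B (zero init, so ΔW = 0 at start)
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + x (A B^T)^T = base(x) + (x B) A^T, with a constant scale
        return self.base(x) + self.scale * (x @ self.B) @ self.A.t()
```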
§.§.§ 2D/3D Medical Image Perception
Traditional approaches typically employed a linear projection <cit.> or a Multi-Layer Perceptron (MLP) <cit.> as the cross-modal connector to convert medical image embeddings into visual tokens for being integrated into LLM. However, these conventional methods encountered significant challenges when processing 3D CT images, which typically consisted of more than 30 slices. The conversion of these images resulted in an excessively large number of visual tokens, far exceeding the LLM's maximum token limit. For instance, a 2D image was encoded as 576 visual tokens, whereas a 3D image with 30 slices was encoded as 30 × 576 = 17,280 visual tokens, which far exceeded the typical LLM maximum length of 4,096 tokens.
To address this challenge, we utilized a lightweight, decoder-only Transformer <cit.> structure based on the attention mechanism, referred to as the perceiver module <cit.>, to process 2D/3D medical image embeddings into a fixed number of visual tokens. Specifically, the perceiver first incorporated learnable temporal and positional embeddings into the image embeddings, which were then flattened before being fed into the attention layer. The attention layer operated as follows:
Q = W^Q h, K = W^K x, V = W^V x
where h represented the learnable latent array, while x corresponded to the flattened visual features. Q, K, and V denoted the query, key, and value vectors used in cross-attention interactions. W^Q, W^K, and W^V were learned weight matrices.
The attention mechanism then computed:
Attention(Q, K, V) = softmax(QK^T/√(d_k))V
where d_k was the dimension of the key vector.
Ultimately, through this unified architecture, the perceiver module efficiently handled both 2D and 3D medical images, ensuring that the resulting visual tokens were compatible in length with the LLM. This approach optimized the integration of complex medical imaging data with the LLM, making it suitable for advanced medical tasks.
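The sketch below illustrates the core idea of such a perceiver resampler in PyTorch: a fixed set of learnable latent queries cross-attends to the flattened image embeddings, so an arbitrary number of 2D slices is compressed to a constant number of visual tokens. Temporal and positional embeddings are omitted for brevity, and all dimensions are illustrative rather than the paper's exact configuration.

```python
# Sketch of a perceiver-style resampler compressing 2D/3D image embeddings
# into a fixed number of visual tokens for the LLM.
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    def __init__(self, dim=1024, out_dim=4096, num_latents=32, num_heads=8, depth=6):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)  # learnable latent array h
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "ff": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)),
                "norm1": nn.LayerNorm(dim),
                "norm2": nn.LayerNorm(dim),
            }) for _ in range(depth)
        ])
        self.proj = nn.Linear(dim, out_dim)  # map to the LLM embedding size

    def forward(self, image_embeds):
        # image_embeds: (batch, n_slices * n_patches, dim), already flattened
        b = image_embeds.shape[0]
        h = self.latents.unsqueeze(0).expand(b, -1, -1)
        for layer in self.layers:
            q = layer["norm1"](h)
            attn_out, _ = layer["attn"](q, image_embeds, image_embeds)  # latents attend to image features
            h = h + attn_out
            h = h + layer["ff"](layer["norm2"](h))
        return self.proj(h)  # (batch, num_latents, out_dim) visual tokens
```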
§.§.§ Multimodal Medical Data Formats
The medical data collected for training were categorized into plain text data and multimodal data. Each data instance input X_i and output X_o were reformatted into an instruction-following structure:
𝑋_p 𝑋_i^1 ⟨⟩ 𝑋_o^1 ⟨⟩
where X_p referred to predefined instructional prompts for different tasks. Please see Sec <ref> for illustrations of the different prompts. The model was designed to predict the assistant's responses and where to stop. Therefore, only the assistant response tokens and the subsequent stop tokens were used to calculate the training loss.
For the plain text data, which primarily concerned record generation, X_i referred to patient-specific information, while X_o consisted of the resultant medical records. In the context of multimodal medical image-report data, the input X_i referred to visual data such as X-ray or CT images, and X_o consisted of the findings and impressions, interpreting and summarizing the visual observations. Qualitative examples illustrating our data formats were shown in Fig. <ref>. For the original Chinese version, please refer to Fig. <ref>.
The analysis of the collected data revealed that a patient sometimes underwent multiple radiological examinations, resulting in correlated medical image-report pairs. For instance, sequential reports may contain comparisons such as “Compared to the scan from October 29, 2022, both lungs show ...". Treating each medical image-report pair as an independent instruction instance can hinder the model's ability to recognize relationships across a patient's sequential image-report data. To mitigate this limitation, we converted multiple related image-report pairs of a patient into an interleaved data format:
𝑋_p 𝑋_i^1 ⟨⟩ 𝑋_o^1 ⟨⟩,
𝑋_p 𝑋_i^2 ⟨⟩ 𝑋_o^2 ⟨⟩…
This approach enabled our model to consider all previously associated image-report data when generating new reports.
§.§ Training Details
§.§.§ Data Preprocessing
In the analysis of medical imaging and textual data for pediatric pneumonia, we initially de-identified all patient-related information. For the preprocessing of 2D chest X-ray images, each chest X-ray examination retained either an anteroposterior view or both anteroposterior and lateral views, and the x-axis and y-axis were resized to 336 pixels. For the preprocessing of 3D chest CT images, we selected lung reconstruction series with a slice thickness of 5.0 mm and normalized them based on a window level of -500 HU and a window width of 1,200 HU. Each chest CT examination retained either a non-contrast series or both non-contrast and contrast-enhanced series, and the x-axis and y-axis were resized to 336 pixels. When the z-axis dimensions of non-contrast and contrast-enhanced series differed, the shorter one was padded with zeros to match. For the preprocessing of textual data including medical reports, outpatient, and inpatient records, we removed records with any missing element in the model's ground truth output. Additionally, we excluded records containing over 4,000 tokens, as their excessive lengths hindered the effective learning of the LLM. Specifically, for deduplication of outpatient records, we utilized 5-grams with a Jaccard similarity threshold of 0.85, 16 rows, and 256 bands. Meanwhile, we filtered out outpatient records with n-grams less than 5.
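The following snippet sketches the CT intensity windowing and z-axis padding steps described above (window level -500 HU, window width 1,200 HU); the function names are illustrative.

```python
# Sketch of CT windowing: level -500 HU, width 1,200 HU maps [-1100, 100] HU to [0, 1].
import numpy as np

def window_ct(volume_hu: np.ndarray, level: float = -500.0, width: float = 1200.0) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0   # -1100 HU .. 100 HU
    vol = np.clip(volume_hu, lo, hi)
    return (vol - lo) / (hi - lo)

def pad_to_depth(volume: np.ndarray, depth: int) -> np.ndarray:
    """Zero-pad the z-axis so non-contrast and contrast-enhanced series match in length."""
    pad = depth - volume.shape[0]
    if pad <= 0:
        return volume
    return np.concatenate([volume, np.zeros((pad, *volume.shape[1:]), volume.dtype)], axis=0)
```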
§.§.§ Implementation
We utilized a 24-layer, 2D ViT-L/14 with 1,024 embedding dimensions as the vision encoder, initialized with CLIP weights. The perceiver was a 6-layer transformer decoder with a learnable latent array of 32 × 4,096 dimensions. For the LLM, we employed the 32-layer, 7B Chinese-LLaMA-2. Our final model comprised 8B parameters. During the three training stages, we froze the vision encoder and LLM, updating only the perceiver and LoRA parameters. All models were implemented in PyTorch and trained on 8 NVIDIA A6000 GPUs with 48 GB memory each. To prevent gradient errors during backpropagation, each training batch contained either image-text pairs or plain-text data, but never a mixture of both. For optimization, we used the Adam optimizer with a cosine decay scheduler and a warmup ratio of 0.03. Detailed hyperparameters were provided in Table <ref>.
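As a rough illustration of this optimization setup, the sketch below builds an Adam optimizer with linear warmup (ratio 0.03) followed by cosine decay; the learning rate and step counts are placeholders, not the values from Table <ref>.

```python
# Sketch: Adam optimizer with linear warmup and cosine decay learning-rate schedule.
import math
import torch

def make_optimizer(params, total_steps, lr=1e-4, warmup_ratio=0.03):
    optimizer = torch.optim.Adam(params, lr=lr)
    warmup_steps = int(warmup_ratio * total_steps)

    def schedule(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)          # linear warmup
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to zero

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=schedule)
    return optimizer, scheduler
```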
§ SUPPLEMENTARY MATERIALS
|
http://arxiv.org/abs/2409.03033v1 | 20240904190121 | Machine-aided guessing and gluing of unstable periodic orbits | [
"Pierre Beck",
"Jeremy P. Parker",
"Tobias M. Schneider"
] | nlin.CD | [
"nlin.CD"
] |
[email protected]
Emergent Complexity in Physical Systems Laboratory (ECPS), École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
[email protected]
Division of Mathematics, University of Dundee, Dundee DD1 4HN, United Kingdom
[email protected]
Emergent Complexity in Physical Systems Laboratory (ECPS), École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
§ ABSTRACT
Unstable periodic orbits (UPOs) are believed to be the underlying dynamical structures of spatio-temporal chaos and turbulence. Finding these UPOs is, however, notoriously difficult. Matrix-free loop convergence algorithms deform entire space-time fields (loops) until they satisfy the evolution equations. Initial guesses for these robust variational convergence algorithms are thus periodic space-time fields in a high-dimensional state space, rendering their generation highly challenging. Usually guesses are generated with recurrency methods, which are most suited to shorter and more stable periodic orbits. Here we propose an alternative, data-driven method for generating initial guesses: while the dimension of the space used to discretize fluid flows is prohibitively large to construct suitable initial guesses, the dissipative dynamics will collapse onto a chaotic attractor of far lower dimension. We use an autoencoder to obtain a low-dimensional representation of the discretized physical space for the one-dimensional Kuramoto-Sivashinsky equation, in chaotic and hyperchaotic regimes. In this low-dimensional latent space, we construct loops based on the latent POD modes with random periodic coefficients, which are then decoded to physical space and used as initial guesses. These loops are found to be realistic initial guesses and, together with variational convergence algorithms, they allow us to quickly converge to UPOs. We further attempt to `glue' known UPOs in the latent space to create guesses for longer ones. This gluing procedure is successful and points towards a hierarchy of UPOs where longer UPOs shadow sequences of shorter ones.
Machine-aided guessing and gluing of unstable periodic orbits
Tobias M. Schneider
September 9, 2024
=============================================================
§ INTRODUCTION
It is widely accepted that unstable periodic orbits (UPOs) play an important role in supporting chaotic dynamics in many driven dissipative nonlinear systems.
Periodic orbits are believed to be dense in the chaotic attractor of such systems and are organized in a hierarchical fashion, where longer orbits shadow a sequence of shorter ones.
A well-known example displaying this hierarchical organization structure is the chaotic Lorenz ODE (ordinary differential equation) system with its famous chaotic attractor, which resembles the shape of a butterfly. Periodic orbits and chaotic trajectories shadowing them can be encoded by their sequential passage from one wing to the other. This leads to a description in terms of symbolic dynamics and reveals that periodic orbits are related to each other, with longer ones shadowing shorter ones <cit.>. The symbolic encoding, moreover, allows one to enumerate all UPOs.
For chaotic PDEs, including the Navier-Stokes equations in the turbulent regime, we expect the same hierarchical organization of UPOs as in low-dimensional ODEs; and formal periodic orbit theory aimed at describing ergodic averages in terms of expansions over UPOs or, alternatively, over prime cycle sequences characterizing those UPOs, assumes it <cit.>. However, at least in the context of fluid flows, we are not aware of any direct demonstration of the hierarchical UPO organization. This fact highlights the algorithmic and numerical challenges inherent in identifying UPOs in high-dimensional chaotic systems. These computational challenges are especially severe for long periodic orbits that may be shadowing several shorter ones.
Physically relevant nonlinear dissipative chaotic PDEs, including model equations and the full Navier-Stokes equations, are usually formulated in 1-3 spatial dimensions, but their solution space is a formally infinite-dimensional function space. The spatial and temporal dimensions are typically discretized with many points, yielding a high-dimensional set of coupled ODEs. The identification of UPOs of the Navier-Stokes equations <cit.> suggests that a similar dynamical systems approach as in ODEs can be applied to PDEs, with spatiotemporal chaos being viewed as a chaotic walk between invariant solutions, such as equilibria, UPOs and invariant tori <cit.>.
However, for 3D fluid flows, only very few UPOs have been identified due to the difficulties in computing them efficiently. Specifically, an envisioned hierarchical organization of UPOs has not been directly and conclusively demonstrated thus far.
Much research has been conducted on the identification of UPOs, which is usually done in two steps: first by defining an adequate guess for a UPO and secondly by converging this guess to a solution of the system. The obvious and most common methods for finding UPOs are Newton shooting methods <cit.>. Here, the initial guess for the UPO is represented by a point in state space, namely the initial condition, and a period T. These two are then varied until the time-integrated trajectory closes in on itself. Typically, the optimization step is solved with a Newton algorithm. However, the exponential error amplification encountered when time-integrating a chaotic dynamical system leads to convergence issues, particularly when searching for long UPOs.
More recently, loop convergence algorithms <cit.> and their matrix-free variations <cit.> have shown to be effective in finding UPOs <cit.>. The guess is now a space-time field that is already periodic (a loop) with time T but it does not satisfy the flow equations. The matrix-free variational methods from <cit.> deform the loop until its tangent vectors align everywhere with the flow vectors prescribed by the equations. This removes the time-integration aspect and consequently the challenges associated with an exponential error amplification characteristic of chaotic systems.
In order to converge to UPOs (and in particular many distinct and long ones) we require a method to construct good guesses. Conventionally, guesses are extracted from recurrency methods <cit.>, where one looks for sub-trajectories in a long DNS of the system that almost close in on themselves. The downside of this method is that a trajectory is required to follow a UPO for an entire period, which is unlikely due to their unstable nature. As a result, this method is biased towards the same few frequently visited UPOs (usually short and less unstable ones). Although short periodic orbits are expected to have larger contributions in periodic orbit theory <cit.>, longer and more unstable periodic orbits are still necessary to obtain more accurate statistics <cit.>. They are also interesting from a control-theoretic point of view as they capture the dynamics and can be tracked for varying control parameter values <cit.>. Moreover, some dynamics appear to only be captured by long periodic orbits. <cit.> study the Kuramoto-Sivashinsky PDE, and for their control parameter of choice, the shortest UPO they find has period 12.08, while orbits that connect dynamically different parts of the chaotic attractor have periods of around 355.34 or more. Identifying guesses for such long UPOs that converge in the context of shooting methods is extremely difficult and requires unrealistically precise and extremely rarely observed recurrences. However, loop-based convergence algorithms that formulate a guess as a loop representing an entire space-time field have much more robust convergence properties than shooting approaches. This may allow to extract guesses within the convergence algorithm's convergence radius using alternative methods instead of a recurrency analysis.
While the solution space is formally infinite-dimensional, trajectories in nonlinear, chaotic, driven, dissipative systems (such as the Navier-Stokes equations) collapse on a chaotic attractor once transients have died down. This attractor can be embedded in a curved manifold of far lower dimension - often termed the inertial manifold. Consequently, the high-dimensionality of the system's state space that renders the identification of UPOs so challenging, may be interpreted as an artifact of not knowing the most appropriate coordinates for describing the lower-dimensional intrinsic dynamics within the inertial manifold. If one had access to coordinates that approximately parametrized the lower dimensional (as compared to the discretization dimension) manifold that the attractor is embedded in, one could use these reduced coordinates to construct initial guesses for UPOs. In combination with the robust loop-convergence algorithms, even randomly drawn closed curves that lie in the inertial manifold and match the statistical properties of the attractor may be sufficient to define realistic guesses and identify UPOs. In analogy to the analysis of low-dimensional ODEs such as the Lorenz system, concatenating sequences of short UPOs to formulate guesses for long UPOs within such reduced coordinates may further allow to construct the hierachical sets of UPOs for PDEs that are expected to exist in theory, but so far have not been demonstrated directly.
Linear model order reduction methods such as Dynamic Mode Decomposition (DMD) <cit.> and Principal Component Analysis (PCA) <cit.> (more commonly known as Proper Orthogonal Decomposition (POD) in the fluid dynamics community) are very popular for dimensionality reduction. While they capture a great deal of information, they are known to generalise less well to highly nonlinear systems and are outperformed by nonlinear deep learning methods such as autoencoders <cit.> <cit.>. The fact that autoencoders in particular seem efficient in giving a low-dimensional representation of spatiotemporal chaos is demonstrated by <cit.>. They train an autoencoder to identify low-dimensional embeddings of monochromatically forced Kolmogorov flow. They find that even for very low latent dimensions, such as N_h = 3, they obtain small losses, and come to the conclusion that much of the dynamics, such as low-dissipation events, live in a low-dimensional space. Even high-dissipation events are captured by only slightly larger latent dimensions, such as N_h = 32. Within the low-dimensional latent space, <cit.> re-define a recurrence function to obtain guesses for periodic orbits. <cit.> also explore autoencoders and combine them with a neural network in the latent space for time-series prediction. In particular, they study how the quality of the autoencoder improves as the latent dimension approaches the manifold dimension of the chaotic attractor.
We propose to use an autoencoder to obtain an approximation of the low-dimensional manifold coordinates. Inside the latent space defined by the autoencoder, we randomly define loops that are in statistical agreement with the attractor. To this end, we define the loops as linear combinations of the latent POD modes with random periodic coefficients chosen to match the moments of the latent flow. We can arbitrarily adjust the complexity and length of these loop guesses and target longer UPOs by increasing the number of `twists' or `turns' in the loop. Moreover, the low-dimensionality of the latent space allows us to concatenate or `glue' orbits together. By gluing orbits, we define longer and more accurate guesses in a hierarchical fashion as has been observed for ODEs but not for PDEs, to the best of our knowledge.
The structure of this paper is as follows: in section <ref> we give a brief reminder of loop-based convergence methods and introduce our setup for illustrating the methods in the Kuramoto-Sivashinsky PDE in chaotic and hyperchaotic regimes. In section <ref> we describe our methods used for defining initial guesses and describe the complete convergence setup for finding UPOs. We also define the notion of latent gluing and explain how we generate new, longer guesses by concatenating shorter UPOs. In section <ref> we first apply these methods to Kuramoto-Sivashinsky for parameter value L = 39 (low-dimensional chaos), and then in the hyperchaotic case at L = 100. We discuss and conclude in section <ref>.
§ BACKGROUND
§.§ Loop convergence methods
<cit.> treat the general PDE ∂_tu = F(u) for a real field u(x,t) on an n-dimensional spatial domain 𝒳⊂ℝ^n with initial condition u_0. The flow function f^t advances the dynamical system in time, u(t) = f^t(u_0) = u_0 + ∫_0^tFdt', where t is the time. A fixed point u^* is a solution that satisfies F(u^*) = 0. A periodic orbit is characterised by an instantaneous field u (the initial condition) and a period T>0 that satisfy f^T(u) - u = 0 such that for any 0 < T^* < T this equation is not satisfied.
In shooting methods, a guess for a periodic orbit consists of an initial condition u_0 and a period T. The pair (u_0, T) are then varied until the time-integrated curve closes in on itself. This is done by solving the equation f^T(u_0) - u_0 = 0, typically via Krylov subspace methods <cit.> including Newton GMRES hook-step methods <cit.> and variations <cit.>. More recently, <cit.> define a cost function that is the norm of this equation and use gradient-based optimization to minimize it until a root is found.
In the loop convergence algorithm of <cit.>, the guess consists of an entire space-time field u(x, t), defined on 𝒳× [0,T)_periodic that is time-periodic with a guess period T, but does not necessarily satisfy the evolution equations of the system. A priori, the period T is unknown, and hence the field is re-scaled such that s = t / T and ũ(x, s) := u(x, sT). Hence ũ is a function defined on 𝒳× [0,1)_periodic mapping to ℝ^n. A solution ṽ(x, s) of the system then satisfies the re-scaled equation
1/T∂ṽ/∂ s = F(ṽ)
Defining the residual vector r for a loop ũ(x, s) by
r = F(ũ) - 1/T∂ũ/∂ s
we obtain the cost function J of a loop:
J := ∫_0^1∫_𝒳r·r dx ds
A loop with J = 0 satisfies the flow equations and is a closed curve, and hence a periodic orbit. Conceptually, the loop is deformed until it satisfies the flow equations. Geometrically, the tangent vectors of the loop are aligned with the velocity vectors of the flow.
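As an illustration, the sketch below evaluates the residual r and the cost J for a loop discretized on Ns equispaced points in s ∈ [0,1) and Nx spatial points, using a spectral derivative in the periodic loop parameter; F is any callable returning the right-hand side of the PDE. This is a simplified numpy sketch, not the adjoint-based solver of <cit.>.

```python
# Sketch: evaluate the loop residual r = F(u) - (1/T) du/ds and the cost J.
import numpy as np

def loop_cost(U, T, F, dx):
    """U: (Ns, Nx) loop samples; T: period guess; F: callable mapping u -> (Nx,) RHS."""
    Ns = U.shape[0]
    # spectral derivative with respect to the periodic loop parameter s on [0, 1)
    k = 2.0 * np.pi * np.fft.fftfreq(Ns, d=1.0 / Ns)
    dU_ds = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(U, axis=0), axis=0))
    R = np.array([F(u) for u in U]) - dU_ds / T   # residual field r on the loop
    ds = 1.0 / Ns
    return np.sum(R * R) * dx * ds                # J ≈ ∫∫ r·r dx ds
```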
<cit.> minimize the cost function with a variational approach. While the method is very robust <cit.>, it involves a large Jacobian and therefore does not scale to high-dimensional systems, a challenge laid out explicitly by <cit.> and <cit.>. Inspired by a similar approach on equilibria by <cit.>, <cit.> formulated a matrix-free adjoint-based method (which we will use in this paper) for minimizing J over the space of loops which scales to high-dimensional systems, like the Navier-Stokes equations. They use the Kuramoto-Sivashinsky equation as a test-bed, while <cit.> apply a similar method to 2D Kolmogorov flow and explicitly address incompressibility in different ways. <cit.> deal with the challenge of computing pressure in the presence of solid walls in the 3D Navier-Stokes equations and introduce a method to accelerate the adjoint-based variational method with DMD. <cit.> also show that a similar approach can be used to compute connecting orbits through the construction of an analogous cost-function that undergoes an adjoint-based minimization process.
§.§ The Kuramoto-Sivashinsky equation
The 1D Kuramoto-Sivashinsky equation (KSE) is a nonlinear PDE which arises in the modelling of the evolution of viscous liquid films down a vertical plane <cit.>, reaction-diffusion systems <cit.>, and flame-fronts <cit.> and exhibits chaotic behaviour for certain parameter values. In this paper, we use the following non-dimensional formulation of the KSE:
u_t + uu_x + u_xx +u_xxxx = 0
where u is the velocity field and we assume that the spatial domain is L-periodic, such that u(x,t) = u(x+L,t). The domain length L is the control parameter. The system is invariant under spatial and temporal translation, as well as under reflection x→-x, u→-u. We will work in the anti-symmetric subspace u(x) = -u(-x) (denoted by 𝕌^+ in <cit.>), as was done for example in <cit.> and <cit.>. This discretizes the spatial translation invariance, and reduces it to x → x + L/2. The system is also invariant under Galilean transformations, however this is filtered out by the imposed anti-symmetry condition.
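For reference, a naive pseudo-spectral evaluation of the KSE right-hand side F(u) = -(u u_x + u_xx + u_xxxx) on an L-periodic grid might look as follows; production time-steppers such as ETDRK4 treat the stiff linear terms exactly and apply dealiasing, which is omitted here.

```python
# Sketch: pseudo-spectral Kuramoto-Sivashinsky right-hand side on an L-periodic grid.
import numpy as np

def kse_rhs(u: np.ndarray, L: float) -> np.ndarray:
    Nx = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)   # wavenumbers k = 2*pi*n/L
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    u_xxxx = np.real(np.fft.ifft((k ** 4) * u_hat))
    return -(u * u_x + u_xx + u_xxxx)
```

Combined with the loop_cost sketch above, this right-hand side could be passed as F to evaluate the cost of a candidate loop.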
§.§.§ Low-dimensional chaos : L = 39
Initially, we set L = 39, as in <cit.>, for which low-dimensional chaos is observed. Although this is not the simplest chaos observed for the KSE <cit.>, it is simple enough for us to show-case our methods before we move to a more complicated system. In this case, we discretize the spatial dimension with N_x = 64 points, turning the scalar function u into a 64-dimensional state vector u (not to be confused with the n-dimensional continuous field u in section <ref>). The data of a long trajectory (dt = 0.1, T_max = 155,000) is generated using the ETDRK4 scheme <cit.>. To ensure that the data is indeed just from the chaotic attractor, we cut off the first 50,000 time-steps. A space-time plot of an example trajectory generated in this setup is shown at the top of figure <ref>.
§.§.§ Hyperchaos : L = 100
In the second instance, we set L = 100, when the system is hyperchaotic with 5 positive Lyapunov exponents according to <cit.>. At L = 39, the dynamics are still relatively simple (albeit chaotic) due to the restricted symmetry and the narrow spatial domain. The more complex, hyperchaotic system resembles true spatiotemporal chaos, which is more akin to turbulence in fluids. In this case, we discretize the spatial dimension with N_x = 170 points, for which we observe a sufficient drop between the largest and smallest frequency of the time-averaged energy spectrum, while not over-resolving the system and making computations too slow. A typical trajectory of the system is presented at the bottom of figure <ref>, showing the increased complexity. Again, we generate one long trajectory where we cut off the first 50,000 time-steps, in order to make sure that all our data is part of one chaotic attractor.
§ METHODS
In this section we first introduce the data-driven dimensionality reduction technique we use, namely autoencoders. We then explain how we generate loop guesses inside the autoencoder's low-dimensional latent space and describe the algorithmic procedure which we employ to converge to periodic orbits. Finally, we set out the gluing procedure to connect two existing periodic orbits, which serves as a guess for longer UPOs and helps us identify a hierarchy of UPOs.
§.§ Data-driven dimensionality reduction
§.§.§ Architecture
We apply an autoencoder to reduce the physical discretization dimension N_x to a latent dimension N_h. Autoencoders are neural networks that consist of two parts, namely the encoder ℰ: ℝ^N_in→ℝ^N_h and the decoder 𝒟: ℝ^N_h→ℝ^N_in. Given input data y∈ℝ^N_in, we would like to train the network so that approximately 𝒟∘ℰ (y) ≈y with N_h ≪ N_in.
The architecture of the autoencoder we employ in this paper varies slightly with the control parameter L, but the general architecture is similar: the encoder ℰ consists of three dense layers, with the third one having N_h nodes. The decoder has the same setup as the encoder, just in reverse. For the L = 100 case, we add four linear layers after the third dense layer of ℰ in order to minimize the rank of the latent space as described by <cit.> and explored by <cit.>. Illustrations of the networks are shown in figure <ref>. Note that we chose these architectures because they worked well for our purposes, but we do not claim that they are perfectly optimized. In both variations of the KSE explored later, we use ReLU activation functions on all layers except the linear layers: ReLU(x) = max{0,x}.
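A minimal Keras sketch of this family of architectures is given below. The layer widths correspond to the training specifications listed in the appendix, the ℓ_2 regularization weight is an assumed value, and setting n_linear > 0 adds the extra linear layers used for rank minimization in the L = 100 case.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_autoencoder(n_in=31, n_h=3, widths=(32, 32), n_linear=0, l2=1e-6):
    """Dense autoencoder; (n_in=31, widths=(32,32), n_linear=0) for L=39,
    (n_in=84, widths=(256,128), n_linear=4) for L=100."""
    reg = regularizers.l2(l2)
    enc = [layers.InputLayer(input_shape=(n_in,))]
    enc += [layers.Dense(w, activation="relu", kernel_regularizer=reg) for w in widths]
    enc += [layers.Dense(n_h, activation="relu", kernel_regularizer=reg)]
    # Optional linear layers for implicit rank minimization (L = 100 case)
    enc += [layers.Dense(n_h, activation=None, kernel_regularizer=reg) for _ in range(n_linear)]
    encoder = models.Sequential(enc, name="encoder")
    dec = [layers.InputLayer(input_shape=(n_h,))]
    dec += [layers.Dense(w, activation="relu", kernel_regularizer=reg) for w in reversed(widths)]
    dec += [layers.Dense(n_in, activation="relu", kernel_regularizer=reg)]
    decoder = models.Sequential(dec, name="decoder")
    return encoder, decoder, models.Sequential([encoder, decoder], name="autoencoder")
```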
§.§.§ Training
The numerical data obtained from direct numerical simulation (as described in <ref>) is re-scaled before it is used for training. We subtract the mean flow and then use a min-max re-scaling:
u^* = [(u - u_mean) - u_min] / (u_max - u_min + ϵ) ∈ [0,1)^N_x
where division is applied component-wise and ϵ = 10^-8 is a small constant to avoid division by 0. The minimum and maximum are vectors and also taken component-wise over u - u_mean. We drop the ^* for what follows for convenience. Due to the imposed anti-symmetry in the KSE and one component always being zero, we only use N_in = N_x / 2 - 1 components of u as inputs y. As loss function ℒ, we use the mean relative difference between the input y and the output 𝒟∘ℰ(y) rather than the standard mean-squared error, as dividing by the norm of y scales the loss in an interpretable manner. For N data points {y_n}_n=1^N, the loss is:
ℒ = 1/N∑_n=1^N ||𝒟∘ℰ(y_n) - y_n||^2 / (||y_n||^2 + ϵ)
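In code, the re-scaling and the relative reconstruction loss can be written as follows (a sketch; ϵ is placed in the denominators, consistent with its stated role of avoiding division by zero).

```python
import numpy as np
import tensorflow as tf

def rescale(U, eps=1e-8):
    """Component-wise min-max re-scaling of the mean-subtracted snapshots (rows of U)."""
    dU = U - U.mean(axis=0)
    return (dU - dU.min(axis=0)) / (dU.max(axis=0) - dU.min(axis=0) + eps)

def relative_loss(y_true, y_pred, eps=1e-8):
    """Mean relative difference between inputs y and reconstructions D(E(y))."""
    num = tf.reduce_sum(tf.square(y_pred - y_true), axis=-1)
    den = tf.reduce_sum(tf.square(y_true), axis=-1) + eps
    return tf.reduce_mean(num / den)
```

The loss can then be passed to the Keras compile step together with an optimizer of choice; the appendix lists AdamW with an exponentially decaying learning rate.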
The best choice for N_h is a priori not known other than that we would like N_h ≪ N_in. We take topological quantities such as the Kaplan-Yorke dimension D_KY as guidance <cit.>. Note, however, that D_KY is a global average dimension of the attractor, and locally the topology might be more complicated. Since we are not looking for perfect guesses, D_KY is a good starting point. One way to identify an appropriate N_h is to train the autoencoder for multiple values of N_h and compare how the loss of the network evolves <cit.>. For each parameter choice of L and a selection of values for N_h, we train 20 autoencoders with different initializations for each value of N_h. Of the 20 networks, we pick the one with the best test loss. We then decide on the value of N_h by considering D_KY and by observing for which value of N_h we have considerable drops in the test loss.
In this section, we introduced the data-driven dimensionality reduction techniques that we will use to obtain an approximate low-dimensional representation of the high-dimensional discretized system. In the next section, we explain how the autoencoder's latent space gives us effective low-dimensional coordinates appropriate for defining loop guesses that are time-periodic space-time fields, lie on the chaotic attractor by matching the statistics of the flow and can be adjusted in length.
§.§ Loops based on POD modes
We construct the guesses for UPOs based on linear combinations of the proper orthogonal decomposition (POD) modes <cit.> in the latent space with periodic coefficients. In general, consider a long time-series stacked in a matrix U∈ℝ^p× N, where the rows are p time-steps {u_i}_i = 1^p, and u_i∈ℝ^N. To calculate the POD modes, we compute the covariance matrix of the zero-mean time-series: let ũ_i = u_i - u, where u is the mean flow, and let Ũ∈ℝ^p× N be the corresponding zero-mean time-series. The unbiased estimator C for the covariance matrix is then given by
C = 1/p-1Ũ^TŨ∈ℝ^N× N
The POD modes ϕ_1, ..., ϕ_N are the eigenvectors of C
Cϕ_k = λ_kϕ_k
with corresponding eigenvalues λ_1, ...,λ_N. Without loss of generality, the modes ϕ_1, ..., ϕ_N are ordered such that the eigenvalues are in decreasing order λ_1 ≥ ... ≥λ_N ≥ 0 (note that since the covariance matrix C is symmetric and positive semi-definite, its eigenvalues are real and non-negative).
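In practice, the modes and eigenvalues follow directly from an eigendecomposition of the estimated covariance matrix, e.g. (a sketch):

```python
import numpy as np

def pod_modes(U):
    """POD modes of a time-series U (rows = snapshots u_i), sorted by decreasing eigenvalue."""
    u_mean = U.mean(axis=0)
    Ut = U - u_mean                          # zero-mean fluctuations
    C = Ut.T @ Ut / (Ut.shape[0] - 1)        # unbiased covariance estimator
    lam, phi = np.linalg.eigh(C)             # real eigenpairs of the symmetric matrix C
    order = np.argsort(lam)[::-1]
    return lam[order], phi[:, order], u_mean
```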
The POD modes can be interpreted as fluctuations around the mean flow. Thus, we define a loop L(x, s) via a linear combination of the ϕ_k (x) with periodic coefficients a_k(s,{X_m,k}_m = 0^M)
L(x, s) = u + ∑_k = 1^N a_k(s, {X_m,k}) ϕ_k(x)
where s is a periodic parameter and the {X_m,k}_m = 0^M are a sequence of independent and identically distributed (iid) random variables with a distribution X_m,k∼ X to be determined.
We want the distribution of X to be such that loops on average match the first and second moments of the flow. Matching the first moment then means that the loop-average over the distribution of loops should agree with the mean flow. We denote this averaging by 𝔼_X,s[...], meaning 𝔼_X[⟨ ... ⟩_s], where ⟨ ... ⟩_s = 1/2π∫_0^2π... ds. This gives the following two moment matching conditions:
𝔼_X,s[L] = u
cov_X,s(L) ≡ C^(L) = C
For our purposes, we assume that the X_m,k do not depend on s, and thus the order of integration between 𝔼_X and ⟨ ... ⟩_s does not matter.
We match the first (equation <ref>) and second moment (equation <ref>) by setting
𝔼_X,s[a_k] = 0
var_X,s(a_k) = λ_k
for k = 1, ..., N. The detailed derivation of equations <ref> and <ref> is given in the appendix.
We now define the coefficients a_k. Since we want them to be time-periodic, we write the coefficients as a sum of sines and cosines
a_k(s, A_:,k, B_:,k) = ∑_m = 0^M α_m [A_m, kcos(ms) - B_m, ksin(ms)]
where s∈[0,2π), M is the number of sine/cosine modes to be included in the sum, and the coefficients A_m, k, B_m, k∼ X are iid. The α_m are constants that give different weights of choice to higher frequency terms, for example α_m = (m + 1) / (M + 1).
By substituting equation <ref> into equations <ref> and <ref>, we find that
𝔼_X[A_m,k] = 𝔼_X[B_m,k] = 0
var_X(A_:,k) = var_X(B_:,k) = λ_k ( ∑_m = 0^M α_m^2)^-1
Thus, setting A_:,k, B_:,k∼𝒩(0, λ_k ( ∑_m = 0^M α_m^2)^-1) fulfills our requirements. Again, the detailed derivation is given in the appendix.
The parameter M allows us to define longer guesses. Geometrically, adding higher modes to the sum in equation <ref> introduces extra `twists' or `turns' in our loop. This can be seen intuitively by thinking of Poincaré sections: the shortest UPOs are expected to have only p=1 intersections with an adequately chosen Poincaré section. UPOs with p = 2 intersections would require an extra twist in their geometric loop representation to intersect twice with the hyperplane. Thus, if we want to target UPOs with p Poincaré intersections, we set M = p. We will verify this intuition in section <ref>.
Defining loop guesses via equation <ref> allows us to generate random guesses by ad-hoc loops that are time-periodic and statistically lie on the attractor by matching moments up to second order. Moreover, the number of sine/cosine modes M allows us to adjust the length of the guess and therefore to directly target long UPOs. Nevertheless, the guesses themselves do not know anything about the dynamics, making it a crude approach for loop definition. While we will see that we can easily generate promising guesses this way and converge to UPOs, we will also present the method's current limitations.
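The construction above translates into a short routine, sketched below; the number of points n_s used to discretize the loop parameter s is an assumed choice, and the default weights implement α_m = (m+1)/(M+1).

```python
import numpy as np

def random_loop(u_mean, phi, lam, M=1, n_s=128, rng=None):
    """Random loop guess L(x, s) = u_mean + sum_k a_k(s) phi_k with periodic coefficients."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.linspace(0.0, 2 * np.pi, n_s, endpoint=False)
    alpha = (np.arange(M + 1) + 1) / (M + 1)              # weights alpha_m
    norm = np.sum(alpha**2)
    loop = np.tile(u_mean, (n_s, 1))
    for k in range(len(lam)):
        std = np.sqrt(max(lam[k], 0.0) / norm)            # var(A) = var(B) = lam_k / sum(alpha_m^2)
        A = rng.normal(0.0, std, size=M + 1)
        B = rng.normal(0.0, std, size=M + 1)
        a_k = sum(alpha[m] * (A[m] * np.cos(m * s) - B[m] * np.sin(m * s)) for m in range(M + 1))
        loop += np.outer(a_k, phi[:, k])
    return loop                                           # shape (n_s, N): one state per value of s
```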
§.§ Algorithm for guessing and converging loops to UPOs
In sections <ref> and <ref>, we laid out the methods for obtaining a low-dimensional representation of the high-dimensional discretized system and for generating guesses that are time-periodic space-time fields that match the flow statistics up to second order. Together with a loop convergence algorithm, we can now devise a general scheme for guessing loops and converging them to periodic orbits:
1. Obtain data {u_m}_m=1^M of the PDE of interest via direct numerical simulation.
2. Train an autoencoder for an adequate choice of N_h to get ℰ and 𝒟.
3. Obtain the latent POD modes ξ_1, ... ,ξ_K with eigenvalues γ_1, ..., γ_K based on a timeseries {h_i}_i = 1^p, where h_i∈ℝ^N_h, and K ≤ N_h is such that ∀ i = 1, ..., K we have γ_i > 0.
4. Define a loop L in the latent space following the approach from section <ref>:
L(s) = h + ∑_k = 1^K a_k(s)ξ_k
where
a_k(s) = ∑_m = 0^M α_m [A_m, kcos(ms) - B_m, ksin(ms)]
and s∈ [0,2π). The coefficients A_m,k, B_m,k are randomly drawn from a normal distribution 𝒩(0, γ_k ( ∑_m = 0^M α_m^2)^-1).
5. Decode the loop to physical space 𝒟(L).
6. Use the adjoint solver from Azimi et al. <cit.> to converge the cost function J of the loop to J≈10^-4. Small weight adjustments are made to the gradient of the period T for stability.
7. Use a Newton solver to converge the loop to machine precision.
As a post-processing step, every UPO's time-resolution is increased to 256 time-steps and re-converged.
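The loop-definition part of this procedure (steps 3 to 5) can be assembled from the helpers sketched in the previous sections, as illustrated below; encoder and decoder denote the trained networks, pod_modes and random_loop are the earlier sketches, and the adjoint-looping and Newton stages (steps 6 and 7) depend on external solvers and are therefore not reproduced here.

```python
import numpy as np

def make_physical_guess(encoder, decoder, U, M=1, n_s=128, rng=None):
    """Steps 3-5: latent POD modes -> random latent loop -> decoded physical-space guess."""
    H = encoder.predict(U, verbose=0)          # latent time-series h_i
    gam, xi, h_mean = pod_modes(H)             # latent POD modes xi_k and eigenvalues gamma_k
    keep = gam > 1e-12                         # retain only modes with gamma_k > 0
    L = random_loop(h_mean, xi[:, keep], gam[keep], M=M, n_s=n_s, rng=rng)
    return decoder.predict(L, verbose=0)       # guess loop to be passed to the adjoint solver
```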
§.§ Latent gluing of UPOs
Given two periodic orbits 𝒫_1 and 𝒫_2 with respective periods T_1, T_2, we will explore the possibility of concatenating or `gluing' them together in latent space and using this as an initial guess for a longer periodic orbit. The motivation behind this is symbolic dynamics, where a trajectory is described symbolically by its sequential passage through different parts of state space <cit.>. This appeals to a hierarchy of periodic orbits, where long UPOs shadow shorter ones <cit.>. Since this hierarchy appears to exist in ODE systems, we expect that this also applies in the KSE's physical and latent spaces.
Given discretizations P_1∈ℝ^N_t_1× N_x and P_2∈ℝ^N_t_2× N_x of these orbits and latent representations L_i = ℰ(P_i)∈ℝ^N_t_i× N_h, we define the latent glued orbit G∈ℝ^(N_t_1 + N_t_2) × N_h by first finding the indices I, J that minimize the distance between the two orbits in the latent space
I,J = _i,j ||L_1^(i) - L_2^(j)||_2
where L_1^(i) and L_2^(j) are the i-th and j-th rows (or time-steps) of L_1, L_2 respectively. We define this minimal distance to be
ℓ_2 = ||L_1^(I) - L_2^(J)||_2
The number of time-steps of each discretization is adequately chosen such that N_t_1/N_t_2≈ T_1/T_2. Define the naively glued orbit G_0 by vertically stacking L_1 and L_2 at the points of closest passage
G_0 = [ L_1^(1:I); L_2^((J+1):end); L_2^(1:J); L_1^((I+1):end) ]
Since this introduces a jump discontinuity, we smooth G_0 in the latent space to get the new guess G: We set the high-frequency temporal modes in Fourier space to zero and keep only the lowest 1/6 positive (and lowest 1/6 negative) Fourier modes. The guess is then defined as 𝒟(G) with guess period T = T_1 + T_2, and then follows steps 6 and 7 of algorithm <ref>.
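A sketch of the gluing and smoothing step is given below; the split of the orbits at the closest-passage indices follows the stacking above, and the Fourier cutoff (roughly the lowest sixth of the temporal modes, via a real FFT that treats positive and negative frequencies together) is an assumed reading of the smoothing rule.

```python
import numpy as np

def glue_latent(L1, L2, keep_frac=1.0 / 6.0):
    """Glue two latent orbit discretizations (rows = time-steps) at their closest passage."""
    d = np.linalg.norm(L1[:, None, :] - L2[None, :, :], axis=-1)
    I, J = np.unravel_index(np.argmin(d), d.shape)     # indices of closest passage
    ell2 = d[I, J]                                     # minimal latent distance
    # Naive gluing G_0: switch between the two orbits at the points of closest passage
    G0 = np.vstack([L1[: I + 1], L2[J + 1 :], L2[: J + 1], L1[I + 1 :]])
    # Smooth the jump discontinuities by zeroing high-frequency temporal Fourier modes
    Nt = G0.shape[0]
    Ghat = np.fft.rfft(G0, axis=0)
    n_keep = max(1, int(Nt * keep_frac))
    Ghat[n_keep:] = 0.0
    G = np.fft.irfft(Ghat, n=Nt, axis=0)
    return G, ell2
```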
§ RESULTS
We apply the methods described in section <ref> to the KSE for two different parameter regimes: first L=39, for which low-dimensional chaos is observed, and then for the hyperchaotic case at L = 100.
§.§ Low-dimensional chaos L = 39
Due to the imposed anti-symmetry in the system, half of the N_x = 64 discretization components are redundant. One of the remaining components is always zero. Thus the input vectors for the autoencoder have N_in = 31 dimensions. We train 20 autoencoders for each of N_h = 1,...,5 and find the final test losses shown in table <ref> and figure <ref>. As expected, the loss decreases when we increase N_h. The most significant drop is observed up to N_h = 3, which is in agreement with the Kaplan-Yorke dimension D_KY≈ 2.3 <cit.>. One might interpret this as measuring the dimension of the attractor <cit.>.
We want to be able to visualize our methods in 3D, avoid overfitting and also test the effectiveness of the method. Although it might be too small to obtain an exact representation of the system, we decide to continue from here on with N_h = 3. Figure <ref> shows the performance of the autoencoder on test data. This trajectory was not part of the training data and is hence entirely new to the network. We observe that the network is able to identify the key structures of the trajectory.
Figure <ref> visualises the attractor in the autoencoder's latent space together with a UPO in 3D (top) and in 2D projections (bottom). The points plotted in the figure are part of the data-sets used to train and test the autoencoder.
§.§.§ Periodic orbit searches
We generate multiple loops for different ranges of periods. We use all 3 latent POD modes to generate the loops as all 3 eigenvalues γ_1, γ_2, γ_3 are non-zero. As defined in section <ref>, we set the number of sine/cosine modes M to be equal to the targeted number of intersections p with an adequate Poincaré section. <cit.> uses û_1 = 0, (û_1)_t > 0 (where û_1 is the first component of the Fourier transform of u), in which case the dynamics appear to have a return time (the time between two consecutive intersections with the Poincaré section) of approximately 25 time units. Therefore, when we target short orbits with p = 1, we let M = 1 and choose guess period T = 25. For orbits with p = 2, we let M = 2 to introduce a `twist' in the loop and set the guess period to T = 50. For p = 3, we introduce two twists by choosing M = 3, and so on. For longer UPO searches (M≥ 3), we pick a range of periods between 25M and 25(M+1). Table <ref> in the appendix shows nicely how the number of Poincaré intersections of the final UPOs scales with M.
Figure <ref> compares decoded guess loops (of various lengths) to the UPOs and periods they converged to. We note that the guesses are realistic as they look like they could be trajectories of the KSE. This is emphasized when comparing the guesses to the final orbit they converge to: the guesses have similar sequences of patterns as the UPOs, and thus they look alike. In general, we find that many of our loops converge to periodic orbits, confirming that they are good initial guesses for loop convergence algorithms. The guesses look realistic and are already close to the UPOs that they eventually converge to. The detailed outputs of the runs for M = 1,2 and 3 are given in the tables <ref>-<ref>, where we generated 200, 500 and 700 loops respectively. We also indicate the number of times we converge to fixed points and when the algorithm does not converge. The latter happens either if we stopped the convergence algorithm too early or when the minimization of J gets caught in a local minimum, where J > 0 and ∇ J =0, instead of converging to a global minimum with J = 0. In cases where we converge to multiples of a short orbit (for example twice the 25.37 UPO) in higher-period runs, we cut the orbit into its shortest periodic component and re-converge. We conduct such searches for orbits with Poincaré intersections up to p = 4. Every UPO that we find is verified to still exist at temporal resolution N_t = 256. We note that for short orbits, we converge very often, with around 70% or 76% of guesses converging for targeted Poincaré intersections p = 1 and p = 2 respectively. As orbits get longer, the success rate naturally drops. While for p = 3, over half of the guesses still converge, when we target p = 4 this falls to around one quarter. These drops are to be expected as longer UPOs are generally harder to converge to, however we note that when we do latent gluing in section <ref>, the success rate for guesses with large periods shoots up significantly. A detailed overview of all runs is given in table <ref> in the appendix.
§.§.§ Latent gluing
We create multiple new guesses using the methodology laid out in section <ref> by gluing the orbits 𝒫_i found above. We limit ourselves to gluing orbits with period T_i < 100 for computational efficiency reasons. We run through all possible combinations of gluing two orbits together. Every 𝒫_i has a symmetric counter-part 𝒫_i^s obtained by the shift x↦ x + L/2. Thus, we glue 𝒫_i with both 𝒫_j and 𝒫_j^s. Since we find 18 orbits with T<100, this gives a total of 306 combinations. Figure <ref> compares the distribution of random distances of a long time-series in latent space to the distribution of distances of closest passage ℓ_2 between two UPOs in latent space and confirms that the points of gluing of two UPOs are indeed close together.
For illustration purposes, figure <ref> shows a 2D projection of the gluing process in the latent space between two short orbits with periods T_1 ≈ 24.908 and T_2 ≈ 25.371. In this case, the gluing is easy to follow visually, and the glued guess with initial period T_1 + T_2 ≈ 50.279 converges quickly to a periodic orbit with T ≈ 50.368. One can easily see that the converged orbit shadows the initial two. Figure <ref> shows the same process for two longer initial periodic orbits with periods T_1 ≈ 83.804 and T_2 ≈ 85.559. The glued guess with initial period T_1 + T_2 ≈ 169.363 looks very similar to the converged orbit with period T ≈ 169.467. This is even more apparent in figure <ref>, which shows the decoded, physical plots of these orbits, comparing the glued loop to the final periodic orbit. Again, the converged long orbit appears to shadow the initial two short ones. Figure <ref> confirms that for the glued guesses that converged, the final period is approximately equal to the sum of the initial periods.
Out of the 306 total guesses, 227 converge to UPOs, 160 of which are distinct, and 79 do not converge (either the optimization of J gets stuck in local minima or we need to run the convergence for longer). The largest period found this way is T ≈ 171.096 and we find a general success rate of 74.2%. The success rate for long UPO guesses is also impressive: loops with guessed periods T_1 + T_2 > 100 converge in 210 / 284 ≈ 73.9% of cases, compared to the approximately 25% from purely random guesses (with M = 4) observed in the previous section. We note that some UPOs struggle with being glued to other orbits, such as those with periods 53.135, 57.227, and 57.627 (see table <ref> in the appendix).
Since we expect this hierarchy of periodic orbits, where long orbits shadow shorter ones, we indeed also expect the glued guesses to perform much better than the random ones. Doing the gluing process directly in physical space might also work just as well. The key take-away is that the expected hierarchy of UPOs appears to be present in this PDE system, and also in the latent attractor. This is a good confirmation that the autoencoder is able to capture a coherent, low-dimensional representation of the chaotic attractor of the KSE obtained through a nonlinear dimensionality reduction technique. Moreover, when the ℓ_2 distance between the two points of closest passage of two UPOs in latent space is small, then the convergence rate is larger. We obtain a convergence rate of 82.2% for guesses with ℓ_2 < 0.07. These cover over 50% of the guesses we attempted. Guesses with ℓ_2 ≥ 0.07 converge in only 65.8% of cases, showing that this ℓ_2 is a good indicator of whether orbits are gluable. Figure <ref> shows the cumulative convergence rate against ℓ_2.
Out of the total 153 glued symmetry pairs, in 14 cases neither glued guess converges. In 51 cases, we find that while one glued guess does not converge the other converges to a periodic orbit. We also observe that while 39 pairs converge to the same orbit, 49 pairs converge to two distinct ones. For the full output details, see table <ref> in the appendix.
In summary, in this section we applied the methods described in section <ref> to the KSE in the case of low-dimensional chaos. First, we generated guesses for UPOs by sampling random closed curves in the latent space that on average match the latent flow statistics up to second moments. By varying the number of sine/cosine modes in the linear combination of POD modes, namely the parameter M, we observed that the resulting UPOs usually had p = M intersections with the Poincaré section. This allows us to directly target longer UPOs by increasing M. Next, we glued UPOs together in the latent space at their closest points of passage. The resulting new guesses had very high convergence rates, indicating that the hierarchy of UPOs observed in ODEs is also present here. This also gives a method to search for longer UPOs. In the next section, we will apply these methods to the hyperchaotic case with L = 100.
§.§ Hyperchaos L = 100
Taking the Kaplan-Yorke dimension D_KY≈ 9.2 <cit.> of the system as guidance, we train 20 autoencoders for each of the latent dimensions N_h = 8, ..., 14. Figure <ref> shows the best test losses for each value of N_h. We note that there is a strong drop until N_h = 11, followed by another strong drop at N_h = 13. It is likely that, when training more N_h = 12 models, we would find a test loss that continues the exponential trend observed for N_h = 9, 10, 11, 13. By considering the test losses, as well as the D_KY, we decided to continue with the more parsimonious N_h = 11. Figure <ref> shows the performance of the autoencoder applied to a physical trajectory part of the test set. We note that the yellow spots (where the relative error is ≥ 2) are more frequent and larger, indicating that the autoencoder struggles more with the hyperchaotic system. Nevertheless, the autoencoder is able to reconstruct the general shapes and structures of the system, and thus shows satisfactory results. Moreover, the latent POD decomposition only has 9 non-zero eigenvalues, resulting in an effective 9 latent dimensions that we use to define our guesses.
§.§.§ Guesses for the period
In the L = 39 case, we related the period guesses to the system's approximate average return time of the Poincaré section. This becomes less straight-forward in higher-dimensional systems, such as the L = 100 case, as a Poincaré section of an n-dimensional system should be an (n-1)-dimensional subspace. Defining a guess period in this case is non-trivial; however, we require one in order to apply the convergence algorithm from <cit.>.
For a UPO of the system, it is possible to integrate around the UPO and obtain the time taken to traverse it. While a guess for a UPO is not a solution of the KSE, we do have access to ∂_t u at each point of the loop guess by evaluating the right-hand side of the KSE. Therefore, we decided to estimate the period guess of the loop by treating it as if it was a solution of the system and integrating around the loop the time taken to traverse it. More concretely, for a discretized loop time-series {u_t}_t = 1^N_t with N_t time-steps, the guess for the period is
T_guess = ∑_i = 1^N_t || Δu_i|| / ( 1/N_t∑_i = 1^N_t || ∂_t u_i|| )
where we assume that the time-gap dt between time-steps is constant. The numerator can be interpreted as the length of the loop, and the denominator as the average velocity around the loop. Figure <ref> shows the distribution of T_guess for different values of M. As in the L = 39 system, we can aim for longer or shorter values of T_guess by introducing more "twists" in our loop, i.e. by changing M.
Note that it would also have been a reasonable choice to use ∂_t u_i ·t̂ in the denominator of equation <ref>, where t̂ is the normalized loop tangent at u_i, instead of ||∂_t u_i||. The sign of this dot product would tell us whether the guess loop is going in the correct direction at u_i. However, since random vectors in high-dimensional spaces are likely to be orthogonal <cit.>, this results in T_guess blowing up and over-estimating the period.
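A sketch of this estimate is given below; rhs(u) stands for an assumed user-supplied evaluation of the KSE right-hand side ∂_t u at state u (for example via the pseudo-spectral operators of the integrator sketched earlier).

```python
import numpy as np

def guess_period(loop, rhs):
    """Estimate T_guess of a discretized loop {u_i} by treating it as if it were a trajectory."""
    du = np.diff(loop, axis=0, append=loop[:1])                  # increments Delta u_i (closed loop)
    length = np.linalg.norm(du, axis=1).sum()                    # numerator: total loop length
    speed = np.mean([np.linalg.norm(rhs(u)) for u in loop])      # mean of ||du/dt|| around the loop
    return length / speed
```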
§.§.§ Periodic orbit searches
For the initial test searches that we conducted, we observed a sharp increase in T during the adjoint looping procedure, related to the increase in the parameter L. To counteract this and to better target the guess period, we reduce the weight of the period gradient and gradually increase it again as the loss approaches 0. Moreover, we looked at different values for α_m = [(m+1)/(M+1)]^β by varying the exponent β that distributes the weighting between smaller and higher modes. We tried β = 1/2, 1, 2 and M, and found the best performance again with β = 1.
We conduct searches for UPOs based on guesses generated with M = 1,2 and 3. For each value of M, we generate 1,000 guesses. A summary of these three searches and their results is given in table <ref>. Examples of guesses generated from the autoencoder and the periodic orbits they converge to are shown in figure <ref>. As for the L = 39 case, we observe that the guesses and the periodic orbits that they converge to share similar shapes. A noticeable difference to L = 39 case is that during the convergence, the loops undergo a stretching and squeezing process with certain parts of the orbit being traversed faster or slower than for the guess, and the initial period often being an underestimate. This highlights the challenge of correctly parametrizing the initial guess inside the latent space. T_guess appears to underestimate the period, which may be due to a bias from our Riemann-sum approach in the definition of T_guess.
In general, we observe good convergence rates for short orbits, with 153 orbits converged for M = 1, of which 130 are distinct. We note that the converged periods create more of a continuous spectrum (rather than multiples of an average return time as in the L = 39 system), again reflecting the increased complexity of this system. The success rate naturally drops for M = 2 and 3, going to 49 and 11 orbits respectively (all of which are distinct). We attribute this drop to various factors, namely the much increased complexity of the system at L = 100, the ad-hoc definition for guesses based on sines and cosines without taking into account the dynamics apart from the moment matching, the difficulty in correctly parametrizing the speed of traversing the guess in the latent space, and the efficiency of the algorithm that we used at converging long UPOs. Indeed, since long UPOs require more time-steps for accurate convergence, it is likely that more of our guesses would converge if we let the convergence run for longer. Nevertheless, the ability to find many short orbits with such ease is a success in itself: the generation of these guesses only requires a one-time up-front cost in training the network. Once this is done, guesses can be generated cheaply and instantaneously.
From the three main searches we obtained 213 UPOs, 190 of which are distinct, with periods ranging from 10.03 to 110.11. As mentioned earlier, we also conducted other searches to examine different values of α_m, different discretization methods, and also initial searches without an adequately weighted period gradient. During these less successful searches, we also found other periodic orbits, giving us a total of 492 distinct orbits, with periods ranging from 9.96 to 110.11. All UPOs that we found were verified to still exist at temporal resolution N_t = 256 and spatial resolution N_x = 256. Figure <ref> shows the number of periodic orbits that we have found up to a given period. Based on <cit.> we expect the number of UPOs to increase exponentially for large periods. On figure <ref> we plot the exponential trend for orbits with periods T∈ [20,30]. For T > 30 we have not found enough UPOs to continue the trend. For such long UPOs, we expect the structure of the loop state space to be much more complex, implying the existence of many more local minima and thus less successful convergence rates (as also observed in <cit.>).
§.§.§ Latent gluing
Since we have a set of 492 UPOs available to us, an exhaustive gluing approach would give us a total of over 241,000 glued guesses. We shall not attempt all of these for computational efficiency reasons. Using the results from section <ref>, we will restrict ourselves to the following two searches:
* Search A: Guesses where the initial two orbits are close in the latent space, with ℓ_2 < 0.1, and where T_1 + T_2 < 125. This gives a total of 877 guesses.
* Search B: A random selection of 1,000 glued guesses among those with T_1 + T_2 < 125.
We restrict ourselves to T_1 + T_2 < 125 for computational efficiency again, since the larger the expected period T, the higher the time-resolution should be and the longer it takes to compute these orbits. Physical plots of a typical example of a successful gluing between two UPOs are shown in figure <ref>. Figure <ref> shows the 2D projections of the initial orbits and the final one onto the first three latent POD modes. We note that the initial UPOs appear embedded within the glued UPO, in the sense that the long glued UPO shadows the shorter ones.
Search A results in a success rate of approximately 9.8%, with the cumulative convergence rates shown in figure <ref>. In particular, looking at the 88 closest orbits (the closest 10%), the convergence rate is 18.2%. Search B, which consists of 1,000 random glued guesses, only has a success rate of 3.8%, clearly showing that a small ℓ_2 increases the likelihood of two orbits being able to be glued together. This indicates that also for the hyperchaotic system there seems to be a hierarchy of UPOs, where long UPOs shadow shorter ones, both in the physical space and in the autoencoder's latent space.
The success rate for L=100 is noticeably smaller than for L = 39. While this may be partly attributed to the higher complexity of the system, it also indicates that there might be potential for optimizing such gluing procedures. Of course, since glued orbits have a larger expected period, attempting to converge these orbits for longer might also increase the success rate.
§ CONCLUSION & DISCUSSION
In this paper we have introduced a new method for generating initial guesses for periodic orbits by randomly drawing loops in the low-dimensional latent space defined by an autoencoder. The autoencoder's latent dimension was chosen in a process where multiple networks were trained for various latent dimensions. We picked those networks for which we observed a strong drop with respect to the latent dimension (one can interpret this as autoencoders approximating the intrinsic coordinates of the manifolds that describe the chaotic attractor) and that performed the best among those. The loop guesses are constructed by linear combinations of the latent POD modes (that have non-zero eigenvalues) with periodic coefficients drawn from a random distribution that match the latent flow statistics. These loops are then decoded back to the physical space and together with a guess period serve as an initial guess for a loop convergence algorithm. We apply this method to the Kuramoto-Sivashinsky PDE in regimes of low-dimensional chaos and hyperchaos. The decoded loops lie close to the chaotic attractor, look realistic and prove to be good guesses for periodic orbits in loop convergence algorithms, with many of them converging to periodic orbits. This provides an alternative to recurrence methods, where guesses are based on near recurrences in a long DNS, and are thus rare and expensive to generate, while also being biased towards short and less unstable periodic orbits. We note that the derivation for loop guesses based on POD modes holds for both the physical and the latent space. For systems exhibiting low-dimensional chaos, defining guesses based on the physical POD modes is a valid approach and gives acceptable guesses. However, we observed that with guesses purely based on the physical POD modes we tend to converge to the same few UPOs. As the system becomes more complicated, it appears necessary to rely on a nonlinear order reduction method, like autoencoders, and define loops in the latent space via the latent POD modes to enforce a larger variation of loop guesses that approximately lie on the latent attractor.
Motivated by a hierarchy of UPOs that is present in ODE systems, such as the Lorenz equations, we explored the method of latent gluing, where we concatenate two periodic orbits based on where they are closest to each other in the latent space. After smoothing this new loop inside the latent space and decoding it back to the physical space, we use this as an initial guess for longer periodic orbits, as we expect the hierarchy of UPOs to carry over from ODEs to PDEs. Many of these guesses converge to UPOs, indicating that this hierarchy does indeed hold in the specific PDE system, with long UPOs shadowing shorter ones. This motivates further research into whether this hierarchy is also present in more complicated spatiotemporally chaotic PDE systems, such as the Navier-Stokes equations. Additionally, the gluing provides a method for generating new, longer UPOs from known shorter ones. Importantly, in both regimes of low-dimensional chaos and hyperchaos, the gluing is much more successful if the distance of closest passage between the two initial UPOs is smaller. This indicates that the hierarchy is also present in the autoencoder's latent space and that the autoencoder is able to capture a coherent low-dimensional representation of the system. For both the chaotic and hyperchaotic systems we conclude that a small distance between the points of closest passage of two periodic orbits in latent space significantly increases the likelihood of whether two UPOs are `gluable'.
These results are a step forward for using loop convergence methods to find unstable periodic orbits, as we have shown that we can easily and cheaply generate good initial guesses. As expected, there are limitations when moving to long UPOs in hyperchaos. However, this also constitutes the outlook for the future: we believe that optimizing the autoencoder, as well as the guesses themselves so that they are built on knowledge of the dynamics will significantly improve this method.
Having identified the hierarchical organization of UPOs, with long orbits shadowing a sequence of shorter ones, moreover fuels the hope of generating symbolic encodings of those UPOs not only for simple ODEs but also for formally infinite-dimensional PDE systems, and thereby, at least approximately, enumerating UPOs, which is of key relevance when trying to transfer ergodic theory concepts from chaotic ODEs to dissipative nonlinear chaotic PDEs underlying physically relevant phenomena such as fluid turbulence.
§ ACKNOWLEDGEMENTS
The authors thank Omid Ashtari for his invaluable insights on loop convergence methods as well as his code. This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant no. 865677).
§ APPENDIX
tocsectionAppendix
§.§ Training specifications
The networks were created with TensorFlow 2.11.0 and trained on an NVIDIA RTX A4000 GPU. The L = 39 networks had inputs of size 31 and the encoder part consisted of three dense layers with 32, 32 and N_h nodes respectively. The decoder has the inverse setup and consists of three dense layers of sizes 32, 32 and 31. Each layer has ReLU activation functions. We trained 20 networks for each N_h∈{1,2,3,4,5} for 500 epochs.
The L = 100 networks had inputs of size 84 and the encoder part consisted of three dense layers with 256, 128 and N_h nodes respectively with ReLU activation functions, followed by 4 linear layers with N_h nodes for implicit rank minimization. The decoder consists of three dense layers of sizes 128, 256 and 84, each followed by a ReLU activation function. We trained 20 networks for each N_h∈{8,9,10,11,12,13,14} for 1,000 epochs.
For all networks we used the AdamW optimizer and an initial learning rate of 7*10^-4 with exponential decay. Each network also had l_2 regularization.
§.§ Loops based on POD modes
§.§.§ Matching moments
The a_k are independent as we assume the X_m,k to be iid. Since 𝔼_X,s is linear and the POD modes form an orthonormal basis, equation <ref> is satisfied if and only if 𝔼_X,s[a_k] = 0. Using this in equation <ref>, we find that
C^(L)_ij = cov_X,s(L_i, L_j)
= cov_X,s(u_i + ∑_k = 1^N a_k(ϕ_k)_i, u_j + ∑_k = 1^N a_k(ϕ_k)_j)
= cov_X,s(∑_k = 1^N a_k(ϕ_k)_i, ∑_k = 1^N a_k(ϕ_k)_j)
= 𝔼_X,s[ ∑_p = 1^N a_p(ϕ_p)_i ∑_q = 1^N a_q(ϕ_q)_j]
= 𝔼_X,s[ ∑_p, q = 1^N a_p a_q(ϕ_p)_i (ϕ_q)_j]
= ∑_p, q = 1^N 𝔼_X,s[a_p a_q] (ϕ_p)_i (ϕ_q)_j
C^(L) = ∑_p, q = 1^N cov_X,s(a_p, a_q) ϕ_p ϕ_q^T
Since the a_k are independent from each other, then cov_X,s(a_p, a_q) = 0 when p≠ q and cov_X,s(a_p, a_q) = var_X,s(a_p) if p = q, giving
C^(L)_ij = ∑_p = 1^N var_X,s(a_p) (ϕ_p)_i (ϕ_p)_j
Now note that C = VDV^T, where V has columns ϕ_1, ..., ϕ_N and D = diag(λ_1, ..., λ_N). Hence
C_ij = ∑_p,q=1^N V_ip D_pq V_qj^T
= ∑_p=1^N λ_p V_ip V_jp
C_ij = ∑_p=1^N λ_p(ϕ_p)_i (ϕ_p)_j
By comparing this expression to equation <ref>, we can match second moments by setting
var_X,s(a_k) = λ_k
for k = 1, ..., N.
§.§.§ Deriving coefficients
We define the coefficients to be a sum of sines and cosines
a_k(s, A_:,k, B_:,k) = ∑_m = 0^M α_m [A_m, kcos(ms) - B_m, ksin(ms)]
where M is the number of sine/cosine modes to be included in the sum, and the coefficients A_m, k, B_m, k∼ X are iid. The α_m are constants that give different weights of choice to higher frequency terms, for example α_m = (m + 1) / (M + 1). Then
𝔼_X,s[a_k] = ∑_m = 0^M α_m{𝔼_X[A_m, k] ⟨cos(ms)⟩_s
- 𝔼_X[B_m, k] ⟨sin(ms)⟩_s}
Since the s-integrals are 0 for m>0, this only requires 𝔼_X[A_0,k] = 0. To simplify the variance calculation, we will set 𝔼_X[A_m,k] = 𝔼_X[B_m,k] = 0 for all m. The variance is given by
var_X,s(a_k) = 𝔼_X,s[(∑_m = 0^M α_m(A_m, kcos(ms)
- B_m, ksin(ms) ))^2]
= ∑_m,n = 0^M α_mα_n( 𝔼_X[A_m, kA_n, k] ⟨cos(ms)cos(ns)⟩_s
- 𝔼_X[A_m, kB_n, k] ⟨cos(ms)sin(ns)⟩_s
- 𝔼_X[B_m, kA_n, k] ⟨sin(ms)cos(ns)⟩_s
+ 𝔼_X[B_m, kB_n, k] ⟨sin(ms)sin(ns)⟩_s)
= α_0^2𝔼_X[A_0,k^2] + ∑_m=1^M1/2α_m^2[𝔼_X[A_m,k^2] + 𝔼_X[B_m,k^2]]
= var_X(A_:,k)∑_m = 0^Mα_m^2
var_X(A_:,k) = λ_k ( ∑_m = 0^M α_m^2)^-1
Thus, letting A_:,k, B_:,k∼𝒩(0, λ_k ( ∑_m = 0^M α_m^2)^-1), the loops are random, periodic and on average match the first and second moments of the flow.
§.§ Supplementary tables
1/f Noise in the Heliosphere: A Target for PUNCH Science

Jiaming Wang (王嘉明), William H. Matthaeus, Rohit Chhiber, Sohom Roy, Rayta A. Pradata, Francesco Pecora, and Yan Yang

Department of Physics and Astronomy, University of Delaware

Rohit Chhiber: also at Heliophysics Science Division, NASA Goddard Space Flight Center

(arXiv:2409.02255, astro-ph.SR, September 3, 2024)
§ ABSTRACT
We present a broad review of 1/f noise observations in the heliosphere, and discuss and complement the theoretical background of generic 1/f models as relevant to NASA's Polarimeter to Unify the Corona and Heliosphere (PUNCH) mission. First observed in the voltage fluctuations of vacuum tubes, the scale-invariant 1/f spectrum has since been identified across a wide array of natural and artificial systems, including heart rate fluctuations and loudness patterns in musical compositions. In the solar wind, the interplanetary magnetic field trace spectrum exhibits 1/f scaling within the frequency range from around [2 × 10^-6]Hz to [10^-4]Hz at 1 au. One compelling mechanism for the generation of 1/f noise is the superposition principle, where a composite 1/f spectrum arises from the superposition of a collection of individual power-law spectra characterized by a scale-invariant distribution of correlation times. In the context of the solar wind, such a superposition could originate from scale-invariant reconnection processes in the corona. Further observations have detected 1/f signatures in the photosphere and corona at frequency ranges compatible with those observed at 1 au, suggesting an even lower altitude origin of 1/f spectrum in the solar dynamo itself. This hypothesis is bolstered by dynamo experiments and simulations that indicate inverse cascade activities, which can be linked to successive flux tube reconnections beneath the corona, and are known to generate 1/f noise possibly through nonlocal interactions at the largest scales. Conversely, models positing in situ generation of 1/f signals face causality issues in explaining the low-frequency portion of the 1/f spectrum. Understanding 1/f noise in the solar wind may inform central problems in heliospheric physics, such as the solar dynamo, coronal heating, the origin of the solar wind, and the nature of interplanetary turbulence.
§ INTRODUCTION
1/f noise, otherwise known as “flicker noise”, refers to a signal in which the amplitude of the spectral density P(f) inversely scales with the frequency f. Such a spectrum has the unique property that the integrated power per octave remains constant across frequencies. Mathematically, the integration of the spectrum over a frequency range f_1 to f_2, i.e. ∫ P(f) df ∼∫ df/f ∼logf_2/f_1, depends solely on the ratio f_2/f_1. This total power is insensitive to rescaling of the frequency by an arbitrary factor, a reflection of the scale invariance of the power df/f. The distribution P(f) ∼ 1/f is often referred to as a scale-invariant distribution and is observed across a wide array of physical systems <cit.>, including heliospheric plasmas such as the interplanetary magnetic field <cit.> and elsewhere <cit.>. When the scale-invariant 1/f noise spectrum is observed, it is often fruitful to search for a scale-invariant physical process that produces the observation. In this way, the study of 1/f noise can lead to new physical insights. Some examples of such efforts are given in Section <ref>.
Understanding the origin of the interplanetary 1/f observations was counted among the scientific motivations for design of the Parker Solar Probe (PSP) mission <cit.>. As will be discussed in Section <ref>, the interplanetary structures associated with the 1/f observations are necessarily very large – relevant 1/f spectrum begins to emerge at around [10^-5]Hz, corresponding to about 1/5 of the typical reciprocal spacecraft-frame correlation time at 1 au <cit.>. Note that this range of frequencies spans implied length scales ranging from a few correlation scales to 1 au.
Observations from NASA's Polarimeter to Unify the Corona and Heliosphere (PUNCH) mission will likely, in principle, contain information relevant to the nature of the 1/f noise observed in situ. But will PUNCH be able to detect and characterize its source? At present this is unclear, as the methods for unambiguously translating the PUNCH imagery into spectral information are still subjects of ongoing research (see paper by Pecora et al., this volume). In anticipation of such developments, we point out this opportunity to use PUNCH to understand at some level the origin of the interplanetary 1/f noise. In this paper we review a history of ideas for how 1/f emerges generically across various physical processes (Section <ref>), and make some existing and potential connections to solar wind and coronal observations. Our aim is to provide a foundational background for researchers who will mine PUNCH data for evidence concerning the origins of interplanetary 1/f noise.
§ BACKGROUND AND EXAMPLES
Low-frequency 1/f spectrum was first observed by <cit.> when studying voltage fluctuation noise in vacuum tubes. The spectrum, such as the one shown in Fig. <ref>, was subsequently analyzed by <cit.> and termed “flicker noise”. The spectrum was postulated to originate from events of foreign molecules incident upon and disrupting the electron-emitting cathodes, consequently generating voltage fluctuations with Lorentzian spectral profiles. The superposition of these Lorentzian profiles was later shown in many works <cit.> to lead to 1/f spectral behavior.
Since then, 1/f noise has been identified and analyzed in numerous systems, including semiconductors <cit.>, music and speech <cit.>, human heartbeats and cognition <cit.>, and more. For instance, the distinctive 1/f spectrum is observed in loudness fluctuations and pitch fluctuations, such as in Bach's 1st Brandenburg Concerto <cit.>. Other diverse examples include fluctuations in turbulent He II <cit.>, and in cellular automaton and other systems exhibiting self organized criticality <cit.>.
In the solar wind, 1/f spectrum is often observed to span several orders of magnitude in frequency near 1 au. It extends from approximately the reciprocal of the transit time to 1 au (around 100 hours) to nearly two decades higher in frequency, around the reciprocal of the correlation timescale <cit.>. Beyond this point, the spectrum gradually transitions to the more negative “-5/3” or “-3/2” power-law indices associated with the classical descriptions of the inertial range of plasma turbulence <cit.>. These classical power-law spectra are theoretically grounded in the principle of scale invariance of energy flux across scales within the inertial range, which is defined as a range of scales much smaller than a (fixed) correlation scale and much larger than a typical scale where dissipation comes into play.
The argument for the generation of the 1/f part of the spectrum relies on a generic superposition principle. The principle posits that an ensemble of the so-called “purely” random signals with a scale-invariant distribution of correlation times collectively exhibits a 1/f spectrum <cit.> (see Section <ref> for details). Therefore the presence of 1/f signals in the solar wind can be attributed to a scenario in which the correlation scale itself is distributed in a scale-invariant fashion, over a sufficiently large range. The possibility of a distribution of correlation scales, as opposed to a single scale, becomes relevant for complex systems, apparently including the solar wind <cit.>, in which the observed fluctuations emerge from many distinct solar sources.
§.§ 1/f in the solar wind
One of the earliest observations of 1/f spectrum in the solar wind dates back to <cit.>, where it appears in the trace of the magnetic field spectral tensor across more than an order of magnitude in frequency before the correlation scales, as observed at 1 au by IMP 8 and ISEE 3, as well as near 4 to 5 au by Voyager 1 (Fig. <ref>).
<cit.> observe 1/f noise within the frequency range of 2.7 × 10^-6 to [8.5 × 10^-5]Hz in the 1 au magnetic field spectrum (see Fig. <ref>). The authors attribute it to the superposition of signals from uncorrelated magnetic reconnection events occurring in the corona near the solar surface, with their respective correlation times collectively following a log-normal distribution. This explanation relies on the general superposition principle in which the detailed properties of the individual reconnection events are not crucial. We should note here that the explanation of 1/f provided in <cit.> is easily adapted to processes other than successive coronal reconnection events, or even processes occurring beneath the photosphere.
In a more recent study, <cit.> analyze magnetic field fluctuation measurements in the fast solar wind streams using PSP Encounter 10 measurements, and observe a spectral index of -1 within the energy-containing range at distances beyond around 25 solar radii. The spectral index, however, flattens to -1/2 closer to the Sun (see Fig. <ref>). Moreover, the “break frequency”, which serves as the approximate boundary between energy-containing and inertial scales, is found to decrease with increasing heliocentric distance. These findings suggest a radial and dynamical evolution of the observed 1/f noise in the solar wind, which favors in situ generation mechanisms within the local wind. This type of mechanism, as opposed to the non-local superposition principle, could involve a linear instability or a modification to the reasoning that leads to a Kolmogorov-like cascade <cit.>. We elaborate such ideas in Section <ref>.
The puzzle of the origin of interplanetary 1/f noise presents a clear but fundamental dichotomy that is not yet unambiguously resolved – does this signal emerge from a collective statistical principle <cit.>, or is it a consequence of a particular dynamical process emerging from local dynamics alone <cit.>? In the following section, we review theories relevant to the former argument. Then in Section <ref>, we provide an in-depth examination of solar wind observations as well as related simulation and experimental results that may shed light on this ongoing debate.
§ A ROBUST SUPERPOSITION PRINCIPLE
Theories have been established concerning a generic generation of 1/f noise through a superposition principle. <cit.> proposes that if samples of “purely” random processes (those with autocorrelations of the form e^-t/τ where τ is the correlation time of the sample) exist with a minimum correlation time τ_1 and a maximum τ_2, then a scale-invariant distribution of the correlation times would lead to an averaged power spectrum exhibiting 1/f scaling over the frequency range 1/τ_2 ≪ω≪ 1/τ_1. The resulting 1/f spectrum can span several orders of magnitude if τ_2 is much greater than τ_1. However, in the context of the solar wind, two notable deviations from Machlup's model apply: (1) correlation times tend to follow log-normal distributions instead of inverse distributions <cit.>, and (2) measurements typically display a Kolmogorov power-law spectrum with an index of -5/3 in the inertial range (on scales smaller than the correlation scales) under the regime of incompressible, isotropic magnetohydrodynamic (MHD) turbulence with high Reynolds numbers <cit.>. <cit.> show that a log-normal distribution with a sufficiently large variance has an extended, inverse-like portion. We show in Appendix <ref> that an ensemble of datasets with an arbitrary power-law index -α < -1 can collectively produce a 1/f spectrum through the same sense of superposition.
§.§ Machlup's superposition principle
In examining the relationship between 1/f noise and its mechanism of generation, necessarily involving occurrences of unexpected or rare events,[Events with scale-invariant occurrences are often considered unexpected because the tail of the distribution is asymptotic and non-integrable. So tail events are inherently rare but always possible, making them unexpected.] <cit.> hypothesizes that if the spectrum has no characteristic time – wherein the same amount of energy resides between any two frequencies separated by a fixed factor – then so might the distribution of the generating events. Suppose there exists an ensemble of nonlinear, chaotic processes naturally displaying exponentially decaying autocorrelations, and their characteristic correlation times τ follow the scale-invariant (or inverse) distribution
ρ(τ) d τ∝d τ/τ.
Then the spectrum of an individual event, as written below in angular frequency (ω = 2π f) domain, is of the form of a Lorentzian:
S(τ, ω) ∝∫_-∞^∞ e^-iω t e^-|t|/τ dt ∝τ/(1+ω^2 τ^2).
And the overall spectrum of this ensemble of events of equal integrated power[Derivation of <cit.> assumes that all samples in the ensemble share the same variance, or integrated power across scales. However, this condition is not necessary – 1/f spectrum can still emerge following the logic of Eq. <ref> if the variances follow a scale-invariant distribution.] is
S(ω) = ∫_τ_1^τ_2 S(τ, ω) ρ(τ) dτ∝tan^-1(τω)/ω|_τ_1^τ_2,
where τ_1 and τ_2 are the minimum and maximum correlation times of the generating events, respectively. If the correlation times span several orders of magnitudes, i.e., τ_1 ≪τ_2, then within the frequency region satisfying τ_1 ≪ 1/ω≪τ_2,
S (ω) ∝[ π/2 + 𝒪((τ_2 ω)^-1) + 𝒪(τ_1 ω) ] 1/ω.
To a zeroth-order approximation, S (ω) ∝ 1/ω.
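This superposition is straightforward to verify numerically; the short sketch below draws correlation times from an inverse distribution between assumed cutoffs τ_1 and τ_2, superposes the corresponding Lorentzians, and checks that the local spectral slope is close to -1 in the intermediate frequency range.

```python
import numpy as np

rng = np.random.default_rng(0)
tau1, tau2 = 1e-2, 1e2                                    # assumed cutoffs of the correlation times
# rho(tau) ~ 1/tau on [tau1, tau2]: sample via tau = tau1 * (tau2/tau1)**U with U ~ Uniform(0, 1)
tau = tau1 * (tau2 / tau1) ** rng.uniform(size=20000)
omega = np.logspace(-3, 3, 200)
# Superposed Lorentzian spectra S(tau, omega) ~ tau / (1 + omega^2 tau^2)
S = np.mean(tau[None, :] / (1.0 + (omega[:, None] * tau[None, :]) ** 2), axis=1)
slope = np.gradient(np.log(S), np.log(omega))             # local spectral index d log S / d log omega
print(slope[(omega > 10 / tau2) & (omega < 0.1 / tau1)].mean())   # approximately -1
```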
§.§ Connection between inverse and log-normal distributions
We note that the correlation time distribution as given in Eq. <ref> is not integrable. Neither are the sharp boundaries of τ_1 and τ_2 assumed in the last section physical. So what exactly happens in the limits of τ→ 0 and τ→∞? <cit.> propose the log-normal distribution that behaves like 1/τ in the intermediate range, and naturally governs multiplicative processes in which the total probability is the product of probabilities of several independent random variables.
A log-normal distribution describes a random variable X whose logarithm is normally distributed. Suppose Z is a standard normal variable. Then a log-normal distribution is defined for variable X = e^μ + σ Z as
f(x) = 1/(x σ√(2π)) exp[ - (logx-μ)^2/(2σ^2)],
where μ and σ are the mean and standard deviation of the variable logX, respectively. From Eq. <ref>, an inverse scaling f(x) ∼ 1/x under certain conditions becomes apparent. If we define x̄≡ e^μ, then f(x) ∝ 1/x when
[ log(x/x̄) ]^2/(2σ^2) ≪ 1.
Following conventions from <cit.>, given a fraction of tolerable deviation θ, a log-normal distribution is scale-invariant within a fraction of θ inside the domain of random variable x satisfying[The condition in <cit.> is |log(x/x̄)| ≤ 2 θσ^2, which is established in log scale. The condition as in Eq. <ref> is established in linear scale.]
| log( x/x̄)| ≤√(2θσ^2).
It is known that many solar wind variables, such as the correlation times, solar wind speed, density, temperature, magnetic field strength and so on, follow log-normal distributions <cit.>. The abundance of log-normal distributions may be attributed to the multiplicative nature of physical processes occurring within the solar wind. Suppose the value of a random variable X is based on the product of N independent variables X_1, ⋯, X_N, i.e.
X = X_1 ×⋯× X_N.
Then evoking the Lyapunov Central Limit Theorem, the distribution of the logarithm, logX = logX_1 + ⋯ + logX_N, follows a normal distribution if certain regular conditions are satisfied. In <cit.>, X may represent the productivity of a researcher, while X_1, ⋯, N represent research merits, such as the ability to recognize a research topic, the ability to evaluate results, and so on.
In the case of the solar wind, X_1, ⋯, N may represent successive reconnection events or successive foldings in a dynamo <cit.>. In particular, magnetic structures of initial lengthscale λ_0 may experience N successive reconnections, each modifying λ_0 by a factor of (1+ϵ), so that the ensemble lengthscale λ = λ_0 (1+ϵ)^N mimics Eq. <ref> and is log-normally distributed <cit.>. In such cases where the successive events X_1, ⋯, N are identical, the Lindeberg–Lévy Central Limit Theorem dictates that logX is normally distributed for sufficiently large N if the second moment of logX_1, ⋯, N exists.
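The same numerical check as before works with log-normally distributed correlation times; in the sketch below (with an assumed, sufficiently large σ), the superposed spectrum again develops an extended range with spectral index close to -1, illustrating the inverse-like portion of the log-normal discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 3.0                                               # assumed (large) width of log(tau)
tau = rng.lognormal(mean=0.0, sigma=sigma, size=20000)    # log-normal correlation times
omega = np.logspace(-3, 3, 200)
# Superposed Lorentzian spectra, as in the previous sketch
S = np.mean(tau[None, :] / (1.0 + (omega[:, None] * tau[None, :]) ** 2), axis=1)
slope = np.gradient(np.log(S), np.log(omega))
print(slope[(omega > 1e-1) & (omega < 1e1)].mean())       # close to -1 over the intermediate range
```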
§ THEORETICAL ISSUES AND FURTHER OBSERVATIONS
The superposition principle elaborated upon in Section <ref> provides a compact pathway to explain the generation of 1/f signals, but it is not specific regarding the origin of the underlying processes that are superposed. Therefore, any hypothesis concerning the nature of these processes must be assessed for consistency based on principles beyond the superposition mechanism itself. On this basis, the suggestion by <cit.> that the superposition involves successive reconnections occurring in the deep corona avoids problems related to the available time scales. Specifically, sub-Alfvénic coronal dynamics (including reconnections) are not strongly limited by convection time or expansion time (see Section <ref>), since in this region the MHD characteristics travel both towards and away from the Sun.
The original observations of 1/f-like solar wind magnetic field spectra <cit.> revealed a tendency toward spectral indices shallower than the Kolmogorov value of -5/3 at lower frequencies, typically below [10^-4]Hz to a few times [10^-5]Hz. However, the significance of 1/f spectrum was not fully recognized in these early reports. It was <cit.> who first placed an emphasis on the form f^-1, and in addition employed data records long enough to probe frequencies down to [10^-6]Hz and lower. The need for extended data records is a recurrent theme in identifying 1/f power spectrum, as discussed further in this section.
While there has been a variety of mechanisms proposed to explain shallow 1/f solar wind spectra, especially in the trace magnetic field component spectra, these mechanisms are usefully categorized into those that operate locally in the interplanetary medium, and those that originate in the lower corona or even in the solar interior. From a practical and physical point of view, the region for local processes can be viewed as the super-Alfvénic solar wind, while coronal and solar processes operate at lower altitudes. We distinguish local and solar mechanisms for generating the 1/f signal in this simple and perhaps ad hoc way.
§.§ Local origins of 1/f.
The pioneering observations by <cit.> set the stage for wide ranging investigations into the origins of the interplanetary 1/f signal. It is noteworthy that their Fig. 2 (Fig. <ref> here) is labeled by both frequency and wavenumber, where the wavenumber spectrum assumes the form k^-1, in accordance with the Taylor frozen-in hypothesis <cit.>.
Associated with this spectral law, <cit.> offered an inverse cascade of magnetic helicity as a possible explanation for the observed 1/f. We evaluate this first as a suggestion of a local process. (We revisit it later as a possible solar process.) The inverse cascade process operates in the regime of incompressible magnetohydrodynamics <cit.> under conditions where helicity is an ideal invariant. In freely decaying turbulence with appropriate boundary conditions, magnetic helicity is known to be important and can be responsible for effects such as Taylor relaxation <cit.> or selective decay <cit.>. While these mechanisms are intriguing and physically appealing, their applicability to the solar wind is questionable, as the magnetic helicity of
fluctuations is not an ideal invariant in magnetohydrodynamics (MHD) when a mean field is present – such as the Parker spiral field that threads the interplanetary medium. In addition, inverse cascade processes are generally very slow compared to direct cascade processes, and the observed 1/f range at 1 au has only a few nonlinear times to develop during transit from the Sun <cit.>. In any case, the inverse cascade scenario involves local MHD scale processes that would need to occur in transit from the corona if the process is to be considered local.
On the other hand, the case where the 1/f signal is already present below the Alfvén critical region is unrelated to in situ solar wind dynamics and avoids transit time issues. <cit.> mention the possibility of an alternative explanation that would involve “some appropriate superposition of streams”. This suggestion was subsequently developed into the <cit.> model that employed the superposition principle described in Section <ref>.
Another theoretical approach to explaining the solar wind's 1/f spectrum as a local process was proposed by <cit.> and <cit.>. Their model differs from the superposition principle in that the physical processes producing the 1/f signal occur in situ in the evolving solar wind due to the interaction of Kolmogorov-like nonlinear effects and the influence of expansion. Ideas along these lines have been examined in recent Parker Solar Probe (PSP) observations <cit.>, though remote generation in the corona is not ruled out by these authors. A variant of the above ideas has appeared in a recent preprint <cit.>.
These recent studies based on PSP observations discuss their results in the context of the works by <cit.> and <cit.>. <cit.> use weak-turbulence theory to propose that the 1/f behavior can be produced by the parametric decay of Alfvén waves that are initially highly imbalanced (with a dominant outward propagating mode), leading to an inverse cascade wherein the dominant Alfvén mode acquires a 1/f scaling over time. A narrowing of the range of frequencies displaying 1/f is then predicted for heliocentric distances smaller than 0.3 au. A similar prediction was made by <cit.> based on arguments relating to the low magnetic compressibility of the solar wind. The authors conjecture that 1/f is the steepest possible spectrum at (large) scales where δ B/B∼ 1, a limit that is imposed by the saturation of magnetic fluctuations δ B that are bounded on a sphere of radius equal to the background field B. Consequently, near the Sun, where δ B/B < 1 <cit.>, the 1/f range is predicted to disappear. See <cit.> for a detailed discussion of the implications of their observations for the <cit.> and <cit.> models.
§.§ Causality and range of influence
A feature of all the local mechanisms presented so far is that they refer only to observed frequencies above around 10^-4 Hz <cit.>. However, as emphasized above, the entirety of the observed interplanetary 1/f signal extends to frequencies as low as 2 × 10^-6 Hz.[In fact it is possible that the signal extends to even lower frequencies. But the signal tends to merge with harmonics of the solar rotation period, as seen in Fig. <ref>.] <cit.> and <cit.> study the tendency towards slopes shallower than the inertial range values when examining frequencies between 10^-2 and 10^-4 Hz, a phenomenon noted in similar frequency ranges in early works <cit.> but without clear explanation. Without prejudice to the applicability of the local mechanisms down to 10^-4 Hz, one may ask if such processes can be extended to much lower frequencies. Nearly two more decades of frequency must be accessed to include the full observed 1/f range.
The concept of causality or “range of influence” becomes critical at this juncture <cit.>. We ask the question: Over what distance can MHD processes exert influence during passage to 1 au? An estimation of this distance amounts to a
position-dependent estimate of an MHD causality limit using a few key time and space scales. Start by assessing the maximum distance an MHD signal can travel in transit to 1 au = 1.5 × 10^13 cm. The transit time for the wind
at 400 km/s is then T_tr∼ 3.8 × 10^5 s. An upper bound estimate of the range of influence using the Alfvén speed (∼ 50 km/s) is L_roi = V_A T_tr∼ 0.1 au. This corresponds to a time of passage T_roi = L_roi /V_SW≈ 3.5 × 10^4 s, and a frequency f_roi = 1/T_roi≈ 2.8 × 10^-5 Hz. This is the lowest frequency that can be influenced by Alfvén wave propagation at 1 au. For turbulent motions propagating at speed δ V < V_A, the range of influence will decrease and the corresponding frequency will be higher than f_roi. The fiducial line drawn in Fig. <ref> is around 8 × 10^-5 Hz, and in the present estimation would correspond to a turbulence amplitude a factor of 2 or 3 smaller than V_A.
The range of influence can be meaningfully compared with other time and length scales. First, the correlation scale L_c at 1 au is on average around 10^6 km, with broad variation <cit.>. At V_SW = 400 km/s, a structure of size L_c passes an observation point in 2.5 × 10^3 s, corresponding to 4 × 10^-4 Hz, again with substantial variation. This is comfortably greater than the frequency associated with range of influence, as it must be.
Next, we note that solar wind plasma streaming transits 1 au at 400 km/s in time T_tr∼ 100 hr, or in frequency, f_tr = 1/T_tr∼ 2.8 × 10^-6 Hz. Comparing these with observations (e.g., Fig. <ref>), we see that
the 1/f noise range rather neatly spans (and extends somewhat beyond) the frequency interval from f_tr to f_roi.
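The short Python sketch below simply re-traces the order-of-magnitude arithmetic above; all inputs are the rounded fiducial values quoted in the text (1 au, 400 km/s, 50 km/s), not measurements, and the text's rounding of L_roi to 0.1 au is reproduced.

```python
# Re-tracing the order-of-magnitude estimates above with the rounded fiducial values.
au   = 1.5e13      # 1 au in cm
V_SW = 400e5       # solar wind speed in cm/s
V_A  = 50e5        # Alfven speed in cm/s

T_tr  = au / V_SW                 # transit time to 1 au: ~3.8e5 s (~100 hr)
L_roi = V_A * T_tr                # Alfvenic range of influence: ~0.1 au
T_roi = (0.1 * au) / V_SW         # passage time of L_roi (rounded to 0.1 au): ~3.5e4 s
f_roi = 1.0 / T_roi               # ~2.8e-5 Hz, lowest Alfven-influenced frequency at 1 au
f_tr  = 1.0 / T_tr                # ~2.8e-6 Hz, transit frequency
f_fid = 3.0 / T_roi               # delta V ~ V_A/3 shrinks the range of influence: ~8e-5 Hz

print(f"T_tr = {T_tr:.2e} s ({T_tr/3600.0:.0f} hr), f_tr = {f_tr:.1e} Hz")
print(f"L_roi = {L_roi/au:.2f} au, T_roi = {T_roi:.2e} s, f_roi = {f_roi:.1e} Hz")
print(f"fiducial frequency for delta V = V_A/3: {f_fid:.1e} Hz")
print("observed 1/f band spans roughly f_tr to f_roi")
```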
The fact that the nominal correlation frequency is somewhat higher than the range of influence frequency is reasonable and expected. The spectrum is well known to roll over into ∼ f^-5/3 at frequencies above f_cor. But since the correlation lengths in individual samples are known to be log-normally distributed <cit.>, this rollover is not expected to be sharp. Moreover, as stated above,
the range of influence differs in turbulence
with different amplitudes δ V and this amplitude itself is broadly distributed, perhaps again in a log-normal distribution <cit.>. Finally the approximation that the log-normal distribution of correlation times is nearly scale-invariant may break down at the extremes of the 1/f spectral range. Given these variabilities, it seems inevitable that the transition from the 1/f range to the turbulence f^-5/3 range should take place gradually. In observations such as Figs. <ref>, <ref>, and <ref>, this transition takes place over about a decade of frequency.
Consideration of the above timescale inequalities implies that a Kolmogorov-like direct cascade, or any process requiring standard nonlinear turbulence time scales, will not be able to operate over the full range of the observed 1/f spectrum in the available transit time to 1 au. Analogous estimates can be readily made for other heliocentric distances or other values of turbulence timescales. This casts doubt on purely local theoretical explanations for the 1/f signal. Minimally it indicates that local in situ explanations must be supplemented by some other non-local process that fortuitously produces a spectrum that smoothly matches the signal extending to much lower frequencies.
§.§ Origin in coronal and solar processes
In conjunction with 1/f observations from long data records at 1 au, <cit.> offered a theoretical explanation based on a particular scenario in which the Montroll-Shlesinger-Machlup superposition principle (see Section <ref>) is invoked. It is based on the elementary idea that a collision between two flux tubes can lead to reconnection and merging, producing a plasmoid with a larger cross section, potentially doubling in size if the colliding flux tubes are of equal dimensions. This process is repeatable given a certain probability for reconnection and merger. Several stages of such merger lead to a multiplicative process and hierarchy
that may be described using a log-normal distribution. Then, invoking the developments of <cit.>, a scale-invariant distribution can be achieved over some range of scale sizes. As these structures are accelerated into the solar wind, they plausibly represent the 1/f signal that is observed without further dynamical evolution. The model of <cit.> based on photospheric and coronal observations added considerable detail to the successive, scale-invariant coronal reconnection model. A cartoon taken from <cit.> is shown here as Fig. <ref>.
The concept that coronal reconnection is a multiscale process capable of supporting numerous successive reconnections is well founded in observations, based on the detection of the low-lying mixed polarity magnetic carpet <cit.>. Simulations have shown that prolific flux tube collisions and resulting current sheet formation are to be expected in this highly dynamic, anisotropic turbulent medium <cit.>.
The successive reconnection scenario readily lends itself to a description of a multiplicative process in the sense of Eq. <ref>. The expectation of
a log-normal distribution of the resulting correlation scales follows immediately. In fact such a distribution is observed <cit.>.[We note that further analysis of the distributions of the magnetic fluctuations is found in <cit.>, who describe potential departures from log-normality in the magnetic field itself. However, these authors did not examine the correlation scales in particular.]
§.§ Spectral evolution and generation of correlation
The above scenario is essentially a coronal process that does not require the participation of in situ dynamics in the super-Alfvénic wind. However, as the solar wind expands, local turbulence can generate spatial correlations in the form of the Kolmogorov spectrum and its hierarchy of locally produced magnetic field and vorticity structures <cit.>. A familiar and often quoted net product of such nonlinear couplings is a net flux of energy through the inertial range towards smaller scales. However this cascade is the average result of a vast number of triadic interactions <cit.>, which transfer energy almost equally toward larger and smaller scales, with the net being towards the latter.
This property has numerous influences on a turbulent system. But relevant here is the expectation that freely evolving turbulence will generate correlations at increasing scales. This is a fundamental feature of the <cit.> analysis of homogeneous turbulence, and it implies that the similarity scale that defines the long-wavelength bound on the inertial range must increase in time. In the present context this leads to the observed increase in length scale of the solar wind correlations on average with increasing radial distance <cit.>. This gradually converts the
observed high frequency part of the 1/f range into the correlated Kolmogorov-like f^-5/3 (or k^-5/3 using the frozen-in property) range. Indeed, this is observed and well documented <cit.> as the upper end of the 1/f range evolves towards lower frequency at increasing radial distance. This has been often described as the migration towards lower frequency of the “break point” between the Kolmogorov and the 1/f spectral ranges <cit.>, even if this transition is often gradual rather than sharp, as illustrated, e.g., in Fig. <ref>.
§ CONNECTIONS TO INVERSE CASCADE AND DYNAMO
The possibility that the observed 1/f signal is related to inverse cascade <cit.> activity crucially depends on where the process is purported to occur. For in situ inverse cascade-related activity in the super-Alfvénic solar wind, the issue of available time immediately enters. The problem with establishing the observed 1/f spectrum in the interplanetary medium due to direct cascade processes was discussed in Section <ref>. But it is well known that inverse cascade processes are significantly slower than their direct cascade counterparts, and have been shown in various circumstances to require many, even hundreds, of nonlinear eddy turnover times to have significant effects <cit.> on low frequency time variations. On this basis, it seems that accounting for the full frequency range of the observed 1/f solar wind noise through inverse cascade in the super-Alfvénic solar wind is essentially ruled out.
In the sub-Alfvénic corona and in the solar dynamo, the situation regarding inverse cascade activity is markedly different. In the low plasma-beta corona, plasma turbulence is likely characterized by a high degree of quasi-two dimensional anisotropy, often described by Reduced Magnetohydrodynamics <cit.>. Such systems asymptotically approach two dimensionality, a limit in which a quasi-conserved mean square magnetic potential can apparently support inverse cascade activity <cit.>. This process is indeed associated with successive reconnections of a sea of magnetic flux tubes <cit.>. It is also associated with the more rapid decay of energy relative to mean square potential <cit.>, a feature known as selective decay. Therefore, it is possible to view the model for 1/f driven by scale-invariant coronal reconnections <cit.> as supported by inverse cascade processes.
Likewise, for the solar dynamo, connections between inverse cascade and 1/f noise generation are equally clear. Observations provide “smoking gun” evidence for the involvement of sub-photospheric dynamics in producing observable spectral signatures. Particularly suggestive are observed azimuthal wavenumber spectra from photospheric line-of-sight magnetic fields reported by <cit.> using Kitt Peak magnetograms, and by <cit.>, employing data from the SOHO/MDI instrument. In each case there is evidence of a 1/k dependence in the photospheric spatial structure, which may be regarded as a signature of MHD inverse cascade <cit.>. Elementary models provide a simple relationship between spatial structures in the photosphere and observed spectra in the solar wind <cit.>.
On the theoretical side, spherical MHD dynamo simulations carried out for very long time scales indicate spectral transfer consistent with inverse cascade while also producing 1/f noise in the time domain. Such simulations employ highly idealized boundary conditions and ideal MHD equations <cit.>, thus enabling long timescale runs. The 1/f signals appear when runs are initialized with significant magnetic helicity or with significant rotation. In the same instances, the flows experience strong condensation of energy into the largest scale degrees of freedom in the sphere, a signature of the possibility of inverse cascade <cit.>. There is also support for dynamo generation of 1/f in laboratory experiments <cit.> and supporting simulations <cit.>.
§ OBSERVABILITY BY PUNCH
Remarkably, and perhaps fortuitously, the PUNCH mission <cit.> will provide images covering a range of scales arguably of direct relevance to the observed 1/f spectra. PUNCH's high-resolution imaging data is designed to have at least 10× better resolution than previous imagers such as STEREO <cit.>, thus in principle resolving structures at scales within the turbulence inertial range. Roughly speaking, PUNCH images will span scales up to a few au, with resolutions as fine as 10^6 km, comparable to the expected correlation scale. Therefore PUNCH will in principle capture images of the plasma responsible for the 1/f signal. The challenge lies in the interpretation of the images.
One issue is that PUNCH will detect density variations, not magnetic field, except perhaps by inference with regard to coronal structures <cit.>. However, there are some in situ observations of 1/f signal in density <cit.>, as well as inferred results from the SOHO UVCS instrument in the deep corona <cit.>. PUNCH will also observe the inner solar wind and corona in a different orientation than either STEREO imaging, or in situ measurements such as those from the ACE or Wind missions at a fixed position using the Taylor hypothesis. Fig. <ref> illustrates the essence of these differences in a highly idealized format. The most significant issue in quantifying the correspondence between PUNCH images and spectral characteristics is the averaging over depth of field that is intrinsic in PUNCH images. This greatly complicates relating image spatial scales to the in situ observed range of 1/f frequencies. Preliminary studies of this problem have begun in anticipation of the PUNCH launch (see Pecora et al., this volume). Provided that ongoing studies can establish useful connections between PUNCH images and spectral distributions, this mission will have the potential to reveal the origin and evolution of the elusive interplanetary 1/f signal.
§ DISCUSSION
1/f noise appears in such a wide variety of physical systems <cit.> that it is difficult to fully review its occurrences in a limited space. Nevertheless we attempt here to provide a broad but incomplete survey to suggest
its generic nature. There are also a variety of detailed mechanisms that are suggested to explain its emergence. We have not delved deeply into these in systems other than the heliosphere, favoring instead a class of models based on the classical developments due to <cit.> and extended by <cit.>. The basic idea is that an ensemble with a scale-invariant distribution of relaxation times, when superposed and sampled, gives rise to a range of 1/f behavior. The applicability of this reasoning is substantially extended upon the realization <cit.> that a log-normal distribution of relaxation times can readily produce the range of scale-invariant relaxation times required to obtain a 1/f spectrum. We presented arguments and reviewed observations that lead us to favor the above scale-invariant superposition explanation. We should note, once again, that our review is not exhaustive, and that statistical explanations such as self-organized criticality have been offered as a variant explanation <cit.>.
For heliospheric 1/f noise, we reviewed two distinct models: a coronal model based on successive reconnections manifestly related to the superposition principle, and another model that relies on the generation of 1/f due to inverse cascade in the solar dynamo. The latter model may also fit the superposition class if the underlying mechanism (not reviewed here) involves successive stretch-and-fold dynamics <cit.>.
Both of the above models are intrinsically nonlinear and neither appears to be limited by available time for developing dynamics. This is not the case for models originating in the super-Alfvénic wind, where available time is limited by convection time to the position of observation. We have explained how this limited range of influence disallows inverse cascade to very long wavelengths, and also limits the ability of any MHD process to access the low frequencies (∼ 10^-6 Hz) at which 1/f is observed at 1 au. This is not to say that local mechanisms proposed to explain 1/f-like spectra near 10^-4 Hz need to be rejected <cit.>. But it does provide a challenge to explain how the local 1/f spectrum matches smoothly onto the 1/f signal that extends to near 10^-6 Hz. In this regard, it is useful to recall the adage from <cit.>: “If you have not found the 1/f spectrum, it is because you have not waited long enough.”
Because long data records are needed to study the
1/f signal, there have been relatively few in-depth studies of its origins and connections to solar or coronal phenomena. The purpose of the present paper has been to assemble an overview of current knowledge about 1/f in the heliosphere, and to point towards what we believe are the likely close connections between the solar dynamo, coronal dynamics, and effects observed in the super-Alfvénic wind, including 1 au and beyond. As the space physics community delves into these
likely connections, the possibility may emerge that deeper knowledge of 1/f noise and relevant statistics might translate into quantitative connections to solar terrestrial relations, and perhaps eventually, connections to space weather prediction. At present this is a speculative remark,
and one might even imagine that modulation of 1/f noise might be a trigger for rare events such as large flares or CMEs. If such connections are established it could represent an entirely new and statistical approach to this complex coupling between solar dynamics and geospace responses.
While observations and simulations are continuing to reveal aspects of 1/f noise in the heliosphere, it is fair to say that there remain aspects and many details that are incompletely understood. We anticipate that advanced
instrumentation on upcoming missions such as PUNCH will provide valuable information to further reveal the mysteries of phenomena such as 1/f noise in the solar wind.
This research is partially supported by the NASA LWS grants 80NSSC20K0377 (subcontract 655-001) and 80NSSC22K1020, by the NASA IMAP project at UD under subcontract SUB0000317 from Princeton University, by the NASA/SWRI PUNCH subcontract N99054DS at the University of Delaware, by the NASA HSR grant 80NSSC18K1648, and by National Science Foundation grant AGS-2108834.
§ 1/F FROM ARBITRARY SPECTRAL INDEX
In this appendix, we build upon the discussion in Section <ref> to demonstrate that the superposition-based generation mechanism for 1/f noise is applicable to a time series ensemble with any index -α, where α > 1. A straightforward approach <cit.> is to assume, in place of the Lorentzian profile in Eq. <ref>, the following form for the spectrum of an individual time series:
S(τ, ω) ∝τ/(1+ω^2 τ^2)^α/2,
where S(τ, ω) is flat at small frequencies (ωτ≪ 1) and has a power-law index of -α at large frequencies (ωτ≫ 1). With inversely distributed correlation times as in Eq. <ref>, the superposed spectrum becomes
S(ω) ∝1/ω∫_x_1^x_2dx/(1+x^2)^α/2 = 1/ω[ ∫_0^∞dx/(1+x^2)^α/2 - ∫_0^x_1dx/(1+x^2)^α/2 - ∫_x_2^∞dx/(1+x^2)^α/2],
where x ≡ωτ, and in the frequency range of interest, x_1 ≡ωτ_1 ≪ 1 and x_2 ≡ωτ_2 ≫ 1. The first integral in Eq. <ref> evaluates to √(π)Γ[(α-1)/2]/2 Γ(α/2), where Γ denotes the gamma function. The third integral can be approximated as ∫_x_2^∞ dx/x^α = x_2^1-α/(α-1) for x_2 ≫ 1. The second integral is given by the hypergeometric expression x_1 _2F_1 (1/2, α/2; 3/2; -x_1^2 ) = x_1 - α x_1^3 / 6 + 𝒪(x_1^5). Thus
S(ω) ∝1/ω[ √(π)Γ((α-1)/2)/2 Γ(α/2) - 𝒪(x_2^1-α) - x_1 + 𝒪(x_1^3) ].
To leading order, S(ω) ∝ 1/ω only if α > 1, which ensures that the second term in the bracket is of lower order than the first. The constraint α > 1 also ensures that Γ[(α-1)/2] is positive.
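As a numerical illustration of this result, the sketch below superposes spectra of the assumed form with correlation times distributed inversely (i.e., uniformly in logτ) and fits the log-log slope in the intermediate frequency band; the values of α, τ_1, τ_2, and the grid sizes are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the superposition integral: correlation times tau distributed
# as 1/tau on [tau_1, tau_2], individual spectra tau/(1 + w^2 tau^2)^(alpha/2).
alpha, tau1, tau2 = 1.7, 1.0, 1.0e6
log_tau = np.linspace(np.log(tau1), np.log(tau2), 4000)
tau = np.exp(log_tau)                          # d(tau)/tau = d(log tau): uniform weights below

omega = np.logspace(-4, -2, 40)                # well inside 1/tau_2 << omega << 1/tau_1
integrand = tau / (1.0 + (omega[:, None] * tau[None, :])**2)**(alpha / 2.0)
S = np.trapz(integrand, log_tau, axis=1)       # superposed spectrum, up to a constant

slope = np.polyfit(np.log(omega), np.log(S), 1)[0]
print(f"log-log slope of the superposed spectrum: {slope:.3f}  (expected: -1)")
```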
We now propose an alternative derivation of 1/f noise that avoids the hypergeometric function (as well as other complicated mathematics), and avoids assuming a specific spectral form, as in Eq. <ref>. Instead, we assume that the spectrum is flat below a certain break frequency, and transitions to a power law at higher frequencies. The main idea is to evaluate the expected slope of log S as a function of logω by assigning a weighting function to the slopes at any given ω. To maintain clarity in notation, we now use τ_c to represent the correlation time of an individual time series. The associated frequency, at which the power spectrum transitions from a flat profile to a power-law decay with an index of -α, is denoted as ω_c ≡ 1/τ_c. Meanwhile, ω denotes a frequency within the domain of interest.
The reciprocal of an inversely distributed random variable is also inversely distributed. Indeed ω_c follows the distribution
ρ(ω_c) dω_c = d ω_c/ω_c log(ω_1/ω_2),
where ω_1 ≡ 1/τ_1 > ω_2 ≡ 1/τ_2, and ω_c ∈ [ω_2, ω_1]. For each power spectral density normalized to an equal total power (of unity, for example), the height of the power spectral density at ω_c is denoted as S_c, and is constant at lower frequencies. The values of S_c are inversely proportional to ω_c, and are inversely distributed as
ρ(S_c) dS_c = dS_c/S_c log(ω_1/ω_2).
Above the break frequency, a power spectrum follows the form S(ω) = S_c(ω/ω_c)^-α assuming continuity. Therefore, at any given frequency ω, the power-law index, denoted as β, is either 0 if the chosen spectrum has ω_c > ω, or -α if ω_c < ω. Intuitively, the expected value of β is
β (ω) = ∫_S_c(ω_1)^S_c(ω_2)β(ω) S(ω) ρ(S_c) dS_c /∫_S_c(ω_1)^S_c(ω_2) S(ω) ρ(S_c) dS_c ,
where S_c(ω) represents the magnitude S_c given the spectrum has a break frequency at ω. Here, ρ(S_c) can be considered as the probability density of S(ω), and S(ω) is the weighting function of β at frequency ω.
Why is S(ω) the weighting function of β (ω) in the context of spectrum superposition? To show this, consider two spectra S_μ (ω) ∝ (ω/ω_μ)^μ and S_ν (ω) ∝ (ω/ω_ν)^ν of power-law indices μ and ν, respectively, where ω_μ and ω_ν are arbitrary positive constants. The slope of S_μ + S_ν on a log-log scale can be directly calculated as
∂/∂log(ω)log(S_μ + S_ν) = d ω/d log(ω)∂/∂ωlog(S_μ + S_ν) = μ S_μ + ν S_ν/S_μ + S_ν.
We have shown that on a log-log scale, the power-law index of a superposition of power spectra is the weighted average of the individual indices, with the spectra themselves serving as weights.
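A direct numerical check of this weighted-average rule, for two arbitrary power laws, is sketched below; the exponents and pivot frequencies are illustrative choices only.

```python
import numpy as np

# Compare the numerically differentiated log-log slope of S_mu + S_nu with the
# weighted-average formula (mu*S_mu + nu*S_nu)/(S_mu + S_nu).
mu, nu, w_mu, w_nu = -5.0 / 3.0, 0.0, 1.0e-3, 3.0e-4
omega = np.logspace(-6, -1, 2001)
S_mu, S_nu = (omega / w_mu)**mu, (omega / w_nu)**nu

numerical = np.gradient(np.log(S_mu + S_nu), np.log(omega))
predicted = (mu * S_mu + nu * S_nu) / (S_mu + S_nu)
print(f"max |numerical - predicted| slope: {np.max(np.abs(numerical - predicted)):.2e}")
```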
The expectation β (ω) can now be confidently computed using Eq. <ref>, keeping in mind that β = -α when S_c>S(ω), and β = 0 otherwise:
β (ω) = -α[∫_S_c(ω)^S_c(ω_2) S_c (ω/ω_c)^-αρ(S_c) dS_c] / [∫_S_c(ω_1)^S_c(ω) S_c ρ(S_c) dS_c + ∫_S_c(ω)^S_c(ω_2) S_c (ω/ω_c)^-αρ(S_c) dS_c] = -α[ 1 + (1-α) (1-(ω/ω_1))/((ω_2/ω)^(α-1)-1)]^-1.
Within the frequency region where ω_2 ≪ω≪ω_1, and assuming α > 1, we arrive at the desired result of β (ω) = -1.
[Bak et al.(1987)Bak, Tang, & Wiesenfeld]Bak87
Bak, P., Tang, C., & Wiesenfeld, K. 1987, , 59, 381, 10.1103/PhysRevLett.59.381
[Batchelor(1970)]Batchelor70
Batchelor, G. K. 1970, The Theory of Homogeneous Turbulence (Cambridge, UK: Cambridge University Press)
[Bavassano et al.(1982)Bavassano, Dobrowolny, Mariani, & Ness]Bavassano82
Bavassano, B., Dobrowolny, M., Mariani, F., & Ness, N. F. 1982, , 87, 3617, 10.1029/JA087iA05p03617
[Behannon(1978)]Behannon78
Behannon, K. W. 1978, Reviews of Geophysics and Space Physics, 16, 125, 10.1029/RG016i001p00125
[Bemporad et al.(2008)Bemporad, Matthaeus, & Poletto]Bemporad08
Bemporad, A., Matthaeus, W. H., & Poletto, G. 2008, , 677, L137, 10.1086/588093
[Bernamont(1937)]Bernamont37
Bernamont, J. 1937, Proceedings of the Physical Society, 49, 138, 10.1088/0959-5309/49/4S/316
[Bourgoin et al.(2002)Bourgoin, Marié, Pétrélis, Gasquet, Guigon, Luciani, Moulin, Namer, Burguete, Chiffaudel, Daviaud, Fauve, Odier, & Pinton]Bourgoin02
Bourgoin, M., Marié, L., Pétrélis, F., et al. 2002, Physics of Fluids, 14, 3046, 10.1063/1.1497376
[Bruno & Carbone(2013)]Bruno13
Bruno, R., & Carbone, V. 2013, Living Reviews in Solar Physics, 10, 2, 10.12942/lrsp-2013-2
[Bruno et al.(2019)Bruno, Telloni, Sorriso-Valvo, Marino, De Marco, & D'Amicis]Bruno19
Bruno, R., Telloni, D., Sorriso-Valvo, L., et al. 2019, , 627, A96, 10.1051/0004-6361/201935841
[Burlaga & Goldstein(1984)]Burlaga84
Burlaga, L. F., & Goldstein, M. L. 1984, , 89, 6813, 10.1029/JA089iA08p06813
[Burlaga & Szabo(1999)]Burlaga99
Burlaga, L. F., & Szabo, A. 1999, , 87, 137, 10.1023/A:1005186720589
[Caloyannides(1974)]Caloyannides74
Caloyannides, M. A. 1974, Journal of Applied Physics, 45, 307, 10.1063/1.1662977
[Chandran(2018)]Chandran18
Chandran, B. D. G. 2018, Journal of Plasma Physics, 84, 905840106, 10.1017/S0022377818000016
[Chhiber(2018)]Chhiber2018thesis
Chhiber, R. 2018, PhD thesis, University of Delaware
[Chhiber(2022)]Chhiber22
—. 2022, , 939, 33, 10.3847/1538-4357/ac9386
[Coleman(1968)]Coleman68
Coleman, Paul J., J. 1968, , 153, 371, 10.1086/149674
[Davis et al.(2023)Davis, Chandran, Bowen, Badman, de Wit, Chen, Bale, Huang, Sioulas, & Velli]Davis23
Davis, N., Chandran, B. D. G., Bowen, T. A., et al. 2023, , 950, 154, 10.3847/1538-4357/acd177
[de Karman & Howarth(1938)]deKarman38
de Karman, T., & Howarth, L. 1938, Proceedings of the Royal Society of London Series A, 164, 192, 10.1098/rspa.1938.0013
[Deforest et al.(2022)Deforest, Killough, Gibson, Henry, Case, Beasley, Laurent, Colaninno, Waltham, & Punch Science Team]DeForest22
Deforest, C., Killough, R., Gibson, S., et al. 2022, in 2022 IEEE Aerospace Conference, 1–11, 10.1109/AERO53065.2022.9843340
[DeForest et al.(2018)DeForest, Howard, Velli, Viall, & Vourlidas]DeForest18
DeForest, C. E., Howard, R. A., Velli, M., Viall, N., & Vourlidas, A. 2018, , 862, 18, 10.3847/1538-4357/aac8e3
[DeForest et al.(2016)DeForest, Matthaeus, Viall, & Cranmer]DeForest16
DeForest, C. E., Matthaeus, W. H., Viall, N. M., & Cranmer, S. R. 2016, , 828, 66, 10.3847/0004-637X/828/2/66
[Denskat & Neubauer(1982)]Denskat82
Denskat, K. U., & Neubauer, F. M. 1982, , 87, 2215, 10.1029/JA087iA04p02215
[Dmitruk & Matthaeus(2007)]Dmitruk07
Dmitruk, P., & Matthaeus, W. H. 2007, , 76, 036305, 10.1103/PhysRevE.76.036305
[Dmitruk et al.(2011)Dmitruk, Mininni, Pouquet, Servidio, & Matthaeus]Dmitruk11
Dmitruk, P., Mininni, P. D., Pouquet, A., Servidio, S., & Matthaeus, W. H. 2011, , 83, 066318, 10.1103/PhysRevE.83.066318
[Dmitruk et al.(2014)Dmitruk, Mininni, Pouquet, Servidio, & Matthaeus]Dmitruk14
—. 2014, , 90, 043010, 10.1103/PhysRevE.90.043010
[Dutta & Horn(1981)]Dutta81
Dutta, P., & Horn, P. M. 1981, Reviews of Modern Physics, 53, 497, 10.1103/RevModPhys.53.497
[Einaudi & Velli(1994)]Einaudi94
Einaudi, G., & Velli, M. 1994, , 68, 97, 10.1007/BF00749122
[Feynman & Ruzmaikin(1994)]Feynman94
Feynman, J., & Ruzmaikin, A. 1994, , 99, 17645, 10.1029/94JA01098
[Fox et al.(2016)Fox, Velli, Bale, Decker, Driesman, Howard, Kasper, Kinnison, Kusterer, Lario, Lockwood, McComas, Raouafi, & Szabo]Fox16
Fox, N. J., Velli, M. C., Bale, S. D., et al. 2016, , 204, 7, 10.1007/s11214-015-0211-6
[Frisch et al.(1975)Frisch, Pouquet, Leorat, & Mazure]Frisch75
Frisch, U., Pouquet, A., Leorat, J., & Mazure, A. 1975, Journal of Fluid Mechanics, 68, 769, 10.1017/S002211207500122X
[Fyfe et al.(1977)Fyfe, Montgomery, & Joyce]Fyfe77
Fyfe, D., Montgomery, D., & Joyce, G. 1977, Journal of Plasma Physics, 17, 369, 10.1017/S0022377800020687
[Gailitis et al.(2004)Gailitis, Lielausis, Platacis, Gerbeth, & Stefani]Gailitis04
Gailitis, A., Lielausis, O., Platacis, E., Gerbeth, G., & Stefani, F. 2004, Physics of Plasmas, 11, 2838, 10.1063/1.1666361
[Giacalone et al.(2006)Giacalone, Jokipii, & Matthaeus]Giacalone06
Giacalone, J., Jokipii, J. R., & Matthaeus, W. H. 2006, , 641, L61, 10.1086/503770
[Gilden et al.(1995)Gilden, Thornton, & Mallon]Gilden95
Gilden, D. L., Thornton, T., & Mallon, M. W. 1995, Science, 267, 1837, 10.1126/science.7892611
[Gómez et al.(2013)Gómez, Martín, & Dmitruk]Gomez13
Gómez, D., Martín, L. N., & Dmitruk, P. 2013, Advances in Space Research, 51, 1916, 10.1016/j.asr.2012.09.016
[Hoch et al.(1975)Hoch, Busse, & Moss]Hoch75
Hoch, H., Busse, L., & Moss, F. 1975, , 34, 384, 10.1103/PhysRevLett.34.384
[Huang et al.(2023)Huang, Sioulas, Shi, Velli, Bowen, Davis, Chandran, Matteini, Kang, Shi, Huang, Bale, Kasper, Larson, Livi, Whittlesey, Rahmati, Paulson, Stevens, Case, de Wit, Malaspina, Bonnell, Goetz, Harvey, & MacDowall]Huang23
Huang, Z., Sioulas, N., Shi, C., et al. 2023, , 950, L8, 10.3847/2041-8213/acd7f2
[Huang et al.(2024)Huang, Velli, Shi, Zhu, Chandran, Réville, Bowen, Sioulas, Pulupa, Huang, & Huang]Huang24arXiv
Huang, Z., Velli, M., Shi, C., et al. 2024, arXiv e-prints, arXiv:2405.15967, 10.48550/arXiv.2405.15967
[Isaacs et al.(2015)Isaacs, Tessein, & Matthaeus]Isaacs15
Isaacs, J. J., Tessein, J. A., & Matthaeus, W. H. 2015, Journal of Geophysical Research (Space Physics), 120, 868, 10.1002/2014JA020661
[Jensen(1990)]Jensen90
Jensen, H. J. 1990, , 64, 3103, 10.1103/PhysRevLett.64.3103
[Johnson(1925)]Johnson25
Johnson, J. B. 1925, Physical Review, 26, 71, 10.1103/PhysRev.26.71
[Kadomtsev & Pogutse(1974)]Kadomtsev74
Kadomtsev, B. B., & Pogutse, O. P. 1974, Soviet Journal of Experimental and Theoretical Physics, 38, 283
[Klein et al.(1992)Klein, Matthaeus, Roberts, & Goldstein]Klein92
Klein, L. W., Matthaeus, W. H., Roberts, D. A., & Goldstein, M. L. 1992, in Solar Wind Seven Colloquium, ed. E. Marsch & R. Schwenn, 197–200
[Kobayashi & Musha(1982)]Musha82
Kobayashi, M., & Musha, T. 1982, IEEE Transactions on Biomedical Engineering, BME-29, 456.
<https://api.semanticscholar.org/CorpusID:31743603>
[Kraichnan(1965)]Kraichnan65
Kraichnan, R. H. 1965, Physics of Fluids, 8, 1385, 10.1063/1.1761412
[Levitin et al.(2012)Levitin, Chordia, & Menon]Levitin12
Levitin, D. J., Chordia, P., & Menon, V. 2012, Proceedings of the National Academy of Science, 109, 3716, 10.1073/pnas.1113828109
[Machlup(1981)]Machlup81
Machlup, S. 1981, in Sixth International Conference on Noise in Physical Systems (National Bureau of Standards, Wash. DC), 157–160
[Matteini et al.(2018)Matteini, Stansby, Horbury, & Chen]Matteini18
Matteini, L., Stansby, D., Horbury, T. S., & Chen, C. H. K. 2018, , 869, L32, 10.3847/2041-8213/aaf573
[Matthaeus et al.(2007)Matthaeus, Breech, Dmitruk, Bemporad, Poletto, Velli, & Romoli]Matthaeus07
Matthaeus, W. H., Breech, B., Dmitruk, P., et al. 2007, , 657, L121, 10.1086/513075
[Matthaeus et al.(2018)Matthaeus, Chhiber, Usmanov, Parashar, Goldstein, & Oughton]Matthaeus2018AGU
Matthaeus, W. H., Chhiber, R., Usmanov, A. V., et al. 2018, in AGU Fall Meeting Abstracts, Vol. 2018, SH54A–02
[Matthaeus & Goldstein(1982)]Matthaeus82
Matthaeus, W. H., & Goldstein, M. L. 1982, , 87, 6011, 10.1029/JA087iA08p06011
[Matthaeus & Goldstein(1986)]Matthaeus86
—. 1986, , 57, 495, 10.1103/PhysRevLett.57.495
[Matthaeus & Montgomery(1980)]Matthaeus80
Matthaeus, W. H., & Montgomery, D. 1980, Annals of the New York Academy of Sciences, 357, 203, 10.1111/j.1749-6632.1980.tb29687.x
[Montgomery et al.(1978)Montgomery, Turner, & Vahala]Montgomery78
Montgomery, D., Turner, L., & Vahala, G. 1978, Physics of Fluids, 21, 757, 10.1063/1.862295
[Montgomery et al.(1979)Montgomery, Turner, & Vahala]Montgomery79
—. 1979, Journal of Plasma Physics, 21, 239, 10.1017/S0022377800021802
[Montroll & Shlesinger(1982)]Montroll82
Montroll, E. W., & Shlesinger, M. F. 1982, Proceedings of the National Academy of Science, 79, 3380, 10.1073/pnas.79.10.3380
[Mullan(1990)]Mullan90
Mullan, D. J. 1990, , 232, 520
[Nakagawa & Levine(1974)]Nakagawa74
Nakagawa, Y., & Levine, R. H. 1974, , 190, 441, 10.1086/152896
[Padhye et al.(2001)Padhye, Smith, & Matthaeus]Padhye01
Padhye, N. S., Smith, C. W., & Matthaeus, W. H. 2001, , 106, 18635, 10.1029/2000JA000293
[Perez & Chandran(2013)]Perez13
Perez, J. C., & Chandran, B. D. G. 2013, , 776, 124, 10.1088/0004-637X/776/2/124
[Ponty et al.(2005)Ponty, Mininni, Montgomery, Pinton, Politano, & Pouquet]Ponty05
Ponty, Y., Mininni, P. D., Montgomery, D. C., et al. 2005, , 94, 164502, 10.1103/PhysRevLett.94.164502
[Ponty et al.(2004)Ponty, Politano, & Pinton]Ponty04
Ponty, Y., Politano, H., & Pinton, J.-F. 2004, , 92, 144503, 10.1103/PhysRevLett.92.144503
[Rappazzo et al.(2008)Rappazzo, Velli, Einaudi, & Dahlburg]Rappazzo08
Rappazzo, A. F., Velli, M., Einaudi, G., & Dahlburg, R. B. 2008, , 677, 1348, 10.1086/528786
[Rincon(2019)]Rincon19
Rincon, F. 2019, Journal of Plasma Physics, 85, 205850401, 10.1017/S0022377819000539
[Ruffolo et al.(2020)Ruffolo, Matthaeus, Chhiber, Usmanov, Yang, Bandyopadhyay, Parashar, Goldstein, DeForest, Wan, Chasapis, Maruca, Velli, & Kasper]Ruffolo20
Ruffolo, D., Matthaeus, W. H., Chhiber, R., et al. 2020, , 902, 94, 10.3847/1538-4357/abb594
[Ruiz et al.(2014)Ruiz, Dasso, Matthaeus, & Weygand]Ruiz14
Ruiz, M. E., Dasso, S., Matthaeus, W. H., & Weygand, J. M. 2014, , 289, 3917, 10.1007/s11207-014-0531-9
[Schottky(1926)]Schottky26
Schottky, W. 1926, Physical Review, 28, 74, 10.1103/PhysRev.28.74
[Schrijver & Title(2003)]Schrijver03
Schrijver, C. J., & Title, A. M. 2003, , 597, L165, 10.1086/379870
[Servidio et al.(2009)Servidio, Matthaeus, Shay, Cassak, & Dmitruk]Servidio09
Servidio, S., Matthaeus, W. H., Shay, M. A., Cassak, P. A., & Dmitruk, P. 2009, , 102, 115003, 10.1103/PhysRevLett.102.115003
[Servidio et al.(2010)Servidio, Wan, Matthaeus, & Carbone]Servidio10
Servidio, S., Wan, M., Matthaeus, W. H., & Carbone, V. 2010, Physics of Fluids, 22, 125107, 10.1063/1.3526760
[Shockley(1957)]Shockley57
Shockley, W. 1957, Proceedings of the IRE, 45, 279, 10.1109/JRPROC.1957.278364
[Taylor(1938)]Taylor38
Taylor, G. I. 1938, Proceedings of the Royal Society of London Series A, 164, 476, 10.1098/rspa.1938.0032
[Taylor(1974)]Taylor74
Taylor, J. B. 1974, , 33, 1139, 10.1103/PhysRevLett.33.1139
[Tu & Marsch(1995)]Tu95
Tu, C. Y., & Marsch, E. 1995, , 73, 1, 10.1007/BF00748891
[Van Der Ziel(1950)]vandeZiel50
Van Der Ziel, A. 1950, Physica, 16, 359, 10.1016/0031-8914(50)90078-4
[Vaĭnshteĭn & Zel'dovich(1972)]Vainshtein72
Vaĭnshteĭn, S. I., & Zel'dovich, Y. B. 1972, Soviet Physics Uspekhi, 15, 159, 10.1070/PU1972v015n02ABEH004960
[Velli et al.(1989)Velli, Grappin, & Mangeney]Velli89
Velli, M., Grappin, R., & Mangeney, A. 1989, , 63, 1807, 10.1103/PhysRevLett.63.1807
[Verdini et al.(2012)Verdini, Grappin, Pinto, & Velli]Verdini12
Verdini, A., Grappin, R., Pinto, R., & Velli, M. 2012, , 750, L33, 10.1088/2041-8205/750/2/L33
[Verma(2019)]Verma19
Verma, M. K. 2019, Energy Transfers in Fluid Flows: Multiscale and Spectral Perspectives (Cambridge University Press), 10.1017/9781316810019
[Voss & Clarke(1975)]Voss75
Voss, R. F., & Clarke, J. 1975, , 258, 317, 10.1038/258317a0
[Zhou et al.(1990)Zhou, Matthaeus, Roberts, & Goldstein]Zhou90
Zhou, Y., Matthaeus, W. H., Roberts, D. A., & Goldstein, M. L. 1990, , 64, 2591, 10.1103/PhysRevLett.64.2591
|
http://arxiv.org/abs/2409.03015v1 | 20240904181124 | Glauber-Sudarshan States, Wave Functional of the Universe and the Wheeler-De Witt equation | [
"Suddhasattwa Brahma",
"Keshav Dasgupta",
"Fangyi Guo",
"Bohdan Kulinich"
] | hep-th | [
"hep-th",
"gr-qc"
] |
|
http://arxiv.org/abs/2409.02805v1 | 20240904152453 | Global Solution of a Functional Hamilton-Jacobi Equation associated with a Hard Sphere Gas | [
"Chenjiayue Qi"
] | math.AP | [
"math.AP",
"math-ph",
"math.MP"
] |
Global Solution of a Functional Hamilton-Jacobi Equation associated with a Hard Sphere Gas
September 9, 2024
§ ABSTRACT
In recent years it has been shown for a hard sphere gas that, by retaining the correlation information, dynamical fluctuations and large deviations of the empirical measure around the Boltzmann equation can be proved, in addition to the classical kinetic limit result by Lanford. After taking the low-density limit, the correlation information can be encoded into a functional Hamilton-Jacobi equation. The results above are restricted to short times. This paper establishes a global-in-time construction of a solution of the Hamilton-Jacobi equation, by analyzing a system of coupled Boltzmann equations. The global solution converges to a non-trivial stationary solution of the Hamilton-Jacobi equation in the long-time limit under proper assumptions.
Acknowledgements
The author is very grateful to Laure Saint-Raymond and Thierry Bodineau for their many inspiring discussions on the topic of this paper, as well as their valuable suggestions on the overall understanding of its main results.
§ INTRODUCTION
In the seminal work of Lanford <cit.>, it is shown that the average dynamics of a hard sphere gas in the low-density limit is governed by the Boltzmann equation. The proof establishes the propagation of chaos for a hard sphere gas: dynamical correlations between different hard spheres are negligible in a certain sense.
Since the result above could be seen as a law of large numbers, one can also look at the corresponding central limit theorem and large deviation theory. In <cit.>, by retaining the correlation information between different particles, the dynamical fluctuations and large deviations of the empirical measure around the Boltzmann equation are derived.
In particular, the correlation information is encoded into the so-called cumulant generating functional ℐ_ε(t,g), and it is shown that after taking the low-density limit the limiting functional ℐ(t,g) satisfies a functional Hamilton-Jacobi equation. The functional Hamilton-Jacobi equation could provide a direct new proof for the convergence of the empirical measure towards the solution of the Boltzmann equation. It also plays an important role in establishing the dynamical fluctuations and large deviations of a hard sphere gas. All the results in <cit.> and <cit.> mentioned above are however restricted to short times. Here we mention the recent breakthrough <cit.> on extending Lanford's argument <cit.> into long times.
The current paper is devoted to the construction of global-in-time solution ℐ(t,g) of the limiting functional Hamilton-Jacobi equation. The construction is based on the study of the Euler-Lagrange system (coupled Boltzmann equations) associated with the Hamilton-Jacobi equation.
In subsection <ref> we recall the basic setting of the hard sphere gas, while introducing the formulation of the functional Hamilton-Jacobi equation. Then in subsection <ref>, we introduce the associated Euler-Lagrange system, i.e. the coupled Boltzmann equations. In subsection <ref> we claim the main results of our paper: the global well-posedness of the coupled Boltzmann equations, and further the existence of global-in-time bounded solutions of the functional Hamilton-Jacobi equation. In subsection <ref>, we discuss future directions based on the current results.
§.§ A Hamilton-Jacobi Equation for Hard Sphere Gas
One approach to describing a hard sphere gas at the microscopic level is to fix the total number N, as well as the diameter ε>0, of these identical hard spheres. The evolution of the positions (𝐱_1^ε,...,𝐱_N^ε)∈T^dN and velocities (𝐯_1^ε,...,𝐯_N^ε)∈R^dN of the N particles satisfies a system of ordinary differential equations (Newton's laws)
d𝐱_i^ε/dt=𝐯_i^ε, d𝐯_i^ε/dt=0 ,
with specular reflection at collisions if |𝐱_i^ε-𝐱_j^ε|=ε
(𝐯_i^ε)':=𝐯_i^ε-((𝐯_i^ε-𝐯_j^ε)·ω)ω, (𝐯_j^ε)':=𝐯_j^ε+((𝐯_i^ε-𝐯_j^ε)·ω)ω, ω:=(𝐱_i^ε-𝐱_j^ε)/ε
This means that after a collision with |𝐱_i^ε-𝐱_j^ε|=ε, the velocities (𝐯_i^ε,𝐯_j^ε) of the two particles will be changed into ((𝐯_i^ε)',(𝐯_j^ε)'). This will induce a well-defined trajectory for initial conditions of full Lebesgue measure in the canonical phase space 𝒟_N^ε
𝒟_N^ε:={(𝐱_1^ε,...,𝐱_N^ε,𝐯_1^ε,...,𝐯_N^ε)∈T^dN×R^dN: ∀ i≠j, |𝐱_i^ε-𝐱_j^ε|>ε},
excluding multiple collisions and accumulation of collision times.
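For concreteness, the short Python sketch below implements the specular collision rule written above and checks the conservation of momentum and kinetic energy that it encodes; the dimension and the sampled velocities are arbitrary choices made only for illustration.

```python
import numpy as np

def collide(v_i, v_j, omega):
    """Exchange the component of the relative velocity along the unit vector omega."""
    transfer = np.dot(v_i - v_j, omega) * omega
    return v_i - transfer, v_j + transfer

rng = np.random.default_rng(0)
d = 3
v_i, v_j = rng.normal(size=d), rng.normal(size=d)
omega = rng.normal(size=d)
omega /= np.linalg.norm(omega)                  # omega = (x_i - x_j)/eps at contact

v_ip, v_jp = collide(v_i, v_j, omega)

# Momentum and kinetic energy are conserved, as required for specular reflection.
print(np.allclose(v_i + v_j, v_ip + v_jp))
print(np.isclose(np.dot(v_i, v_i) + np.dot(v_j, v_j),
                 np.dot(v_ip, v_ip) + np.dot(v_jp, v_jp)))
```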
This microscopic dynamics induces a Liouville equation for the probability density W_N^ε of the N particles, where W_N^ε(t,X_N,V_N) refers to the probability density of finding N hard spheres with configuration (X_N,V_N) at time t. The Liouville equation for W_N^ε is
∂_t W_N^ε+V_N·∇_X_NW_N^ε=0,
with boundary condition corresponding to specular reflection.
One can further consider the grand canonical formulation of a hard sphere gas: instead of fixing the total number of particles, we assume the total number 𝒩 of particles to be random with a modified Poisson distribution law. For each diameter ε, we fix
a constant μ_ε as the parameter for the modified Poisson distribution of the total number of particles. We assume that at initial time t=0 the probability density of having N particles and configuration (X_N,V_N) is the following, with z_i:=(x_i,v_i)
1/𝒵^εμ_ε^N/N!∏_i=1^Nf^0(z_i)1_𝒟_N^ε.
Here 𝒵^ε is the normalizing factor defined as
𝒵^ε:=1+∑_N≥ 1μ_ε^N/N!∫_T^dN×R^dN∏_i=1^Nf^0(z_i)1_𝒟_N^εdV_NdX_N.
It is clear that we have two sources of randomness: the number 𝒩 of particles is random, while given the total number N, the configuration (X_N,V_N) is also random. For each sample (X_N,V_N), it will follow the evolution law given by equations (<ref>) and (<ref>). Thus at each time t≥ 0 we would have the distribution law for (X_N,V_N).
In the Boltzmann-Grad scaling, we impose μ_ε=ε^-(d-1) to ensure that the number of collisions per particle is of order 1 per unit time <cit.>. This scaling implies the asymptotics below for E_ε(𝒩), where E_ε is the expectation with respect to the probability measure given by (<ref>)
lim_ε→ 0E_ε(𝒩)ε^d-1=1.
A central object in the study of a hard sphere gas is the empirical measure π_t^ε, defined as
π_t^ε:=1/μ_ε∑_i=1^𝒩δ_𝐳_i^ε(t),
where δ_𝐳_i^ε(t) means the Dirac mass at 𝐳_i^ε(t)∈T^d×R^d. To encode the correlation information, the cumulant generating functional for a hard sphere gas with diameter ε is introduced
Λ^ε(t,h):=1/μ_εlogE_ε[exp(μ_επ_t^ε(h))]=1/μ_εlogE_ε[exp(∑_i=1^𝒩h(𝐳_i^ε(t)))],
where h is a test function with variables (x,v). By taking the Boltzmann-Grad limit ε→ 0 with μ_εε^d-1=1, the functional Λ^ε(t,h) should converge to a limiting cumulant generating functional Λ(t,h). This functional satisfies the following functional Hamilton-Jacobi equation, with the functional derivative δΛ(t,h)/δ h(t) taken as a measure in x and v for each t
∂_tΛ(t,h)=ℋ(δΛ(t,h)/δ h(t),h(t))+∫ v·∇_x h δΛ(t,h)/δ h(t)dvdx, Λ(0,g)=∫ (e^g(0)-1)f^0 dvdx.
In this equation, the Hamiltonian ℋ(φ,p) is defined as
ℋ(φ,p)=1/2∫φ(x,v)φ(x,v_*)(e^Δ p (x,v,v_*)-1)((v_*-v)·ω)_+dω dv_* dvdx,
with ω∈S^d-1 being the collision direction. The function Δ p is defined as
Δ p(x,v,v_*):=p(x,v')+p(x,v_*')-p(x,v)-p(x,v_*).
The variables (v',v_*') is the pre-collisional configuration, defined in a way similar to (<ref>)
v'=v-((v-v_*)·ω)ω, v_*'=v_*+((v-v_*)·ω)ω.
This Hamilton-Jacobi equation contains a collision term represented by ℋ, and a tranport term, which resembles the Boltzmann equation. In the Hamiltonian ℋ, the term e^Δ p-1 represents the effect of collision: the p(x,v')+p(x,v_*') in Δ p has a similar role as the gain term in Boltzmann equation, and the -p(x,v)-p(x,v_*) in Δ p has the same role as the loss term in Boltzmann equation. By taking the derivative Λ(t,h)/ h(t) at h=0, the Boltzmann equation is recovered formally in a weak sense.
In fact, the Hamilton-Jacobi equation encodes much more information about the hard sphere dynamics than the usual Boltzmann equation, in particular it encodes all the dynamical correlations. For a complete justification of the contents above, the readers may read <cit.>. In <cit.>, more formal discussion with physical motivation about the meaning of this Hamiltonian is given.
One can also use test functions h on the entire trajectory during the time interval [0,t], as h(z([0,t])), which is the case in <cit.>. Particularly in this paper, we may choose the test function of the form
h(z([0,t]))=g(t,z(t))-∫_0^tD_sg(s,z(s))ds,
where D_s refers to _s+v·∇_x, and g is a function depending on variables (t,x,v). This choice of test functions enables us to integrate the transport term in equation (<ref>). It then gives the Hamilton-Jacobi equation for ℐ(t,g):=Λ(t,h), where h is defined through g by (<ref>)
_tℐ(t,g)=ℋ(ℐ(t,g)/ g(t),g(t)), ℐ(0,g)=∫ (e^g(0)-1)f^0 dvdx.
This functional equation has been introduced in Theorem 7 of <cit.>.
§.§ Coupled Boltzmann Equations as an Euler-Lagrange System
To find the solution ℐ(t,g) of the Hamiltonian-Jacobi equation (<ref>), it is shown in the subsection 7.1.1. of <cit.> that we can look at the associated Hamiltonian system. There are two interesting equivalent formulations of the Hamiltonian system. The first formulation is for s∈[0,t]
D_sφ_t(s)=ℋ/ p(φ_t(s),p_t(s)), ,
D_s(p_t-g)(s)=-ℋ/φ(φ_t(s),p_t(s)), .
The subscript t means we are studying the coupled system in the time interval s∈[0,t], with terminal data given at time t. Given a mild solution (φ_t,p_t) of equation (<ref>) on [0,t] with initial data φ_t(0)=f^0e^p_t(0) and terminal data p_t(t)=g(t), we define the functional ℐ(t,g) as
ℐ(t,g):=∫ (e^p_t(0)-1)f^0 dvdx+∫_0^t∫φ_t(s) D_s(p_t(s)-g(s))dvdxds+∫_0^tℋ(φ_t(s),p_t(s))ds.
It will be proved in Theorem <ref> that an equivalent form of the functional ℐ(t,g) constructed in equation (<ref>) is a mild solution of the Hamilton-Jacobi equation (<ref>). The notion of mild solution will be specified in Section <ref>.
However in this paper we do not directly deal with the coupled system given above. In <cit.> (Section 7), by performing the change of variables
(ψ_t(s),η_t(s))=(φ_t(s)e^-p_t(s),e^p_t(s)),
an alternative equivalent formulation with better symmetry is introduced
D_sψ_t =-ψ_t D_sg+∫((v_*-v)·ω)_+η_t(v_*)[ψ_t(v')ψ_t(v_*')-ψ_t(v)ψ_t(v_*)]dω dv_*, ψ_t(0)=f^0,
D_s η_t =η_t D_sg-∫((v_*-v)·ω)_+ψ_t(v_*)[η_t(v')η_t(v_*')-η_t(v)η_t(v_*)]dω dv_*, η_t(t)=e^g(t).
In the paper, we generalize the change of variables (<ref>) into
(ψ_t(s),η_t(s))=(φ_t(s)e^-p_t(s)+α'|v|^2,e^p_t(s)-α'|v|^2),
for arbitrary α'∈R. The change of variables (<ref>) in <cit.> corresponds to the particular case α'=0. It will be proved in Lemma <ref> that after this generalized change of variables, the (ψ_t,η_t) satisfies the following coupled Boltzmann equations during the time interval [0,t]
D_sψ_t=-ψ_t D_sg+∫((v_*-v)·ω)_+η_t(v_*)[ψ_t(v')ψ_t(v_*')-ψ_t(v)ψ_t(v_*)]dω dv_*,
D_s η_t=η_t D_sg-∫((v_*-v)·ω)_+ψ_t(v_*)[η_t(v')η_t(v_*')-η_t(v)η_t(v_*)]dω dv_*,
ψ_t(0)=f^0e^α'|v|^2, η_t(t)=e^g(t)e^-α'|v|^2.
Under the generalized change of variables, the form of the coupled Boltzmann equation is the same as (<ref>), but with different ψ_t(0) and η_t(t). It will be explained in Section <ref> that a proper choice of α' enables us to solve the equation in a convenient functional setting.
We define the biased collision operator 𝒬_η(ψ_1,ψ_2) as follows
𝒬_η(ψ_1,ψ_2):=1/2∫((v_*-v)·ω)_+ η(v_*)[ψ_1(v')ψ_2(v_*')+ψ_2(v')ψ_1(v_*')
-ψ_1(v)ψ_2(v_*)-ψ_2(v)ψ_1(v_*)]dω dv_*.
The function ϕ is defined as follows, related to the spatial transport
ϕ(s,x,v):=D_sg(s,x,v)=(∂_s+v·∇_x)g(s,x,v)
Based on the definitions of the biased collision operator and the function ϕ, we rewrite equation (<ref>) in a more compact form
D_s ψ_t =-ψ_tϕ+𝒬_η_t(ψ_t,ψ_t), ψ_t(0)=f^0e^α'|v|^2,
D_s η_t =η_tϕ-𝒬_ψ_t(η_t,η_t), η_t(t)=e^g(t)e^-α' |v|^2.
Since equation (<ref>) is equivalent to (<ref>) through the change of variables (<ref>), we can as well construct the functional ℐ(t,g) given in (<ref>), by solving equation (<ref>).
We call ψ_t the 'forward component', due to its given initial data and the positive sign of its collision operator. The other component η_t is called the 'backward component', due to its given terminal data at time t and the negative sign of its collision operator. Each component provides a bias for the nonlinear collision of the other component, which is transparent in equation (<ref>). If we take g-α'|v|^2≡ 0, the coupled Boltzmann equations degenerate to the usual Boltzmann equation, with η_t≡ 1 and ϕ≡ 0.
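To make the structure of the biased collision operator concrete, the sketch below evaluates 𝒬_η(ψ,ψ) at a single velocity by Monte Carlo integration over (v_*,ω), using the pre-collisional velocities (v',v_*') defined earlier. The placeholder choices of ψ, η, the dimension, and the sample size are ours for illustration only and are not taken from the assumptions of the paper.

```python
import numpy as np
from math import gamma, pi

d = 3
a = np.array([1.5, 1.0, 0.5])                                  # anisotropy, so the bracket is nonzero
psi = lambda v: np.exp(-0.5 * np.sum(a * v**2, axis=-1))       # placeholder "forward" density
eta = lambda v: np.exp(-0.25 * np.sum(v**2, axis=-1))          # placeholder bias

def Q_biased(v, n=200_000, rng=np.random.default_rng(0)):
    v_star = rng.normal(size=(n, d))                           # proposal ~ N(0, I), density q
    q = (2.0 * pi)**(-d / 2.0) * np.exp(-0.5 * np.sum(v_star**2, axis=1))
    omega = rng.normal(size=(n, d))
    omega /= np.linalg.norm(omega, axis=1, keepdims=True)      # uniform on the sphere S^{d-1}
    area = 2.0 * pi**(d / 2.0) / gamma(d / 2.0)                # surface area |S^{d-1}|

    kernel = np.maximum(np.einsum('ij,ij->i', v_star - v, omega), 0.0)   # ((v_* - v) . omega)_+
    transfer = np.einsum('ij,ij->i', v - v_star, omega)[:, None] * omega
    v_prime, v_star_prime = v - transfer, v_star + transfer    # pre-collisional configuration

    integrand = kernel * eta(v_star) * (psi(v_prime) * psi(v_star_prime) - psi(v) * psi(v_star))
    return np.mean(area * integrand / q)                       # importance-sampled dω dv_* integral

print(f"Q_eta(psi, psi)(v=0) ~ {Q_biased(np.zeros(d)):.4f}")
```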
A pivotal tool we use in the present paper to solve equation (<ref>) is the theory of global-in-time solutions of the Boltzmann equation with given initial data. There have been many works dealing with global solutions of the Boltzmann equation, with different notions of solution. Early works include <cit.> for classical solutions, <cit.> for mild solutions, and <cit.> for renormalized solutions. Specifically, in the present paper we will adapt the perturbation regime for global mild solutions in a certain weighted L^∞ space <cit.>.
A subtle issue in the present paper is that we want to solve equation (<ref>) for those functions e^g(t) with quadratic exponential growth in the velocity variable, for example when g(t)=1/4|v|^2. If we take α'=0 in (<ref>), which corresponds to the original change of variables (<ref>) in <cit.>, it naturally requires the forward component ψ_t to have quadratic exponential decay in the velocity variable. The initial data f^0 could be assumed to have quadratic exponential decay, but it is hard to prove the propagation of this quadratic exponential decay. To overcome this difficulty, we will carry out a symmetrization procedure in Section <ref> by choosing a proper α'.
§ MAIN RESULTS
§.§ Global-in-time Solution
As explained in subsection <ref>, to solve the Hamilton-Jacobi equation (<ref>) we will look at the associated Euler-Lagrange system, which is the coupled Boltzmann equations (<ref>). The goal is to find a certain class of functions g such that the mild solution ℐ(t,g) can be constructed for arbitrary time t≥ 0.
We say a pair of functions (ψ_t,η_t) is a mild solution of the coupled Boltzmann equations (<ref>), if for arbitrary s∈[0,t]
ψ_t(s)=S_sf^0e^α'|v|^2-∫_0^s S_s-τψ_t(τ)ϕ(τ)dτ+∫_0^s S_s-τ𝒬_η_t(τ)(ψ_t(τ),ψ_t(τ))dτ,
η_t(s)=S_-(t-s)e^g(t)e^-α'|v|^2-∫_s^t S_-(τ-s)η_t(τ)ϕ(τ)dτ+∫_s^t S_-(τ-s)𝒬_ψ_t(τ)(η_t(τ),η_t(τ))dτ.
Here the operators {S_τ}_τ∈R are the transport semigroup defined as S_τf(x,v)=f(x-τ v,v). For the Hamilton-Jacobi equation (<ref>), we say a functional ℐ(t,g) is a mild solution of the equation if for arbitrary t≥ 0
ℐ(t,g)=ℐ(0,g)+∫_0^t ℋ(δℐ(s,g)/δ g(s),g(s))ds, ℐ(0,g)=∫ (e^g(0)-1)f^0dvdx.
The main result of the paper is to construct a mild solution of the Hamilton-Jacobi equation with f^0 close to the spatially homogeneous standard Maxwellian M
M(x,v)=(2π)^-d/2exp(-1/2|v|^2),
and the function e^g close to a certain reference function ℰ. In this paper, we consider those ℰ of the form
ℰ(x,v)=(2π)^-d/2exp(α|v|^2), α<1/2.
To simplify notations, we define the normalization function ℬ as e^-α'|v|^2. We want to choose a proper α' that symmetrizes the forward initial data and the backward terminal data in (<ref>), with f^0e^α'|v|^2 and e^g(t)e^-α'|v|^2 being close to the same coupled equilibrium 𝒢
𝒢:=M^1/2ℰ^1/2=(2π)^-d/2exp(-1/2(1/2-α)|v|^2).
This requires us to choose ℬ with Mℬ^-1=ℰℬ=𝒢, which yields
ℬ=M^1/2ℰ^-1/2=exp(-1/2(1/2+α)|v|^2),
and thus
α'=1/2(1/2+α).
Since we only consider those reference functions ℰ with α<1/2, the coupled equilibrium 𝒢 has quadratic exponential decay in v.
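As a quick numerical sanity check of the normalization above (with an arbitrary choice of d and α<1/2, used purely for illustration), the snippet below verifies that Mℬ^-1, ℰℬ, and M^1/2ℰ^1/2 all coincide with the coupled equilibrium 𝒢.

```python
import numpy as np

d, alpha = 3, 0.3
a_prime = 0.5 * (0.5 + alpha)                          # alpha' = (1/2)(1/2 + alpha)

v2 = np.linspace(0.0, 25.0, 200)                       # |v|^2 on a test grid
M = (2.0 * np.pi)**(-d / 2.0) * np.exp(-0.5 * v2)      # standard Maxwellian
E = (2.0 * np.pi)**(-d / 2.0) * np.exp(alpha * v2)     # reference function
B = np.exp(-a_prime * v2)                              # normalization function
G = (2.0 * np.pi)**(-d / 2.0) * np.exp(-0.5 * (0.5 - alpha) * v2)   # coupled equilibrium

print(np.allclose(M / B, G), np.allclose(E * B, G), np.allclose(np.sqrt(M * E), G))
```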
We will solve the coupled Boltzmann equations (<ref>) with ψ_t and η_t being perturbations around 𝒢. The perturbations should be in an L^∞ space with a polynomial weight on v, denoted by L_β^∞
‖ f‖_L_β^∞=sup_(x,v)∈T^d×R^d|f(x,v)(1+|v|)^β|.
For Theorem <ref>, we assume the forward initial data f^0ℬ^-1 is close to the coupled equilibrium 𝒢 with the perturbation L_x,v^2-orthogonal to a kernel 𝒦, representing the conserved quantities, to be defined in equation (<ref>)
‖ f^0ℬ^-1-𝒢‖_L_β+1^∞<c, f^0ℬ^-1-𝒢∈𝒦^⊥. H1
The kernel 𝒦 is defined as
𝒦:={𝒢,𝒢(v)v_1,...,𝒢(v)v_d,𝒢(v)|v|^2}.
Throughout the paper, the constant c>0 is always taken to be small enough and properly tuned according to the other parameters.
The terminal data e^g(t)ℬ is assumed to be close to the coupled equilibrium 𝒢 with the perturbation orthogonal to the kernel 𝒦 at any time t>0
‖ e^g(t)ℬ-𝒢‖_L_β+1^∞< c, e^g(t)ℬ-𝒢∈𝒦^⊥. H2
These orthogonality conditions are common in the literature for solutions of Boltzmann equations on the torus to have decay in time. For example, the reader may see Theorem 2.3.1 of <cit.>.
The function g and the forward initial data f^0ℬ^-1 are assumed to have certain regularity and continuity
ϕ(s,x,v) ≡ 0, S_τf^0ℬ^-1∈ C(R_τ;L_β^∞). H3
Throughout the paper, the dimension d will be taken as d≥ 3.
For arbitrary β>4 and σ>1, we can take constants c>0 and a_*>0 depending on β,σ such that for any f^0 and g satisfying the assumptions (<ref>)-(<ref>) and any time t>0, there exists a unique mild solution (ψ_t,η_t) of the coupled Boltzmann equations (<ref>) in the function class below
sup_0≤ s≤ t(1+s)^σ‖ψ_t(s)-𝒢‖_L_β^∞<a_*, sup_0≤ s≤ t(1+(t-s))^σ‖η_t(s)-𝒢‖_L_β^∞<a_*.
Furthermore, the functional ℐ(t,g) in (<ref>) is well-defined for any functions f^0 and g satisfying the assumptions (<ref>)-(<ref>), and is a global-in-time mild solution of the Hamilton-Jacobi equation (<ref>). The mild solution ℐ(t,g) is uniformly bounded for any time t≥ 0 and any functions f^0,g satisfying the assumptions (<ref>)-(<ref>). This solution ℐ(t,g) also converges to a non-trivial stationary solution as t→ +∞.
We now present a similar result (Theorem <ref>) for a forcing g satisfying a different set of assumptions. The assumptions are more general in one aspect, while being more restrictive in another.
For Theorem <ref>, the initial data f^0ℬ^-1 is only assumed to be close to the coupled equilibrium 𝒢, without the orthogonality condition
‖ f^0ℬ^-1-𝒢‖_L_β+1^∞<c.H4
The terminal data e^g(t)ℬ is assumed to be close to the coupled equilibrium 𝒢 without orthogonality condition, but its perturbation is assumed to decay exponentially in time with σ>0
‖ e^g(t)ℬ-𝒢‖_L_β+1^∞< e^-σ tc.H5
The function g and the forward initial data f^0ℬ^-1 are assumed to have certain regularity and continuity, where the function ϕ is defined in (<ref>) and _ϕ>0 is a small enough positive constant
‖ϕ‖_L_t^1(L_x,v^∞)+‖ϕ‖_C_t^0(L_x,v^∞)≤_ϕ, S_τf^0ℬ^-1∈ C^0(R_τ;L_β^∞). H6
For arbitrary β>4 and σ>1, we can take constants c>0 and a_*>0 depending on β,σ such that for any f^0 and g satisfying the assumptions (<ref>)-(<ref>) and any time t>0, there exists a unique mild solution (ψ_t,η_t) of the coupled Boltzmann equations (<ref>) in the function class below
sup_0≤ s≤ t‖ψ_t(s)-𝒢‖_L_β^∞<a_*, sup_0≤ s≤ te^σ s‖η_t(s)-𝒢‖_L_β^∞<a_*.
Furthermore, the functional ℐ(t,g) in (<ref>) is well-defined for any functions f^0 and g satisfying the assumptions (<ref>)-(<ref>), and is a global-in-time mild solution of the Hamilton-Jacobi equation (<ref>). The mild solution ℐ(t,g) is uniformly bounded for any time t≥ 0 and any functions f^0,g satisfying the assumptions (<ref>)-(<ref>).
Figure <ref> illustrates the shape of the perturbations ψ_t(s)-𝒢 and η_t(s)-𝒢 in different cases: Theorem <ref> or <ref>.
In Theorem <ref>, for each terminal time t, the forward component ψ_t(s) decays forwards to the coupled equilibrium 𝒢, and the backward component η_t decays backwards to 𝒢. The polynomial decay rate is σ>1, and the size ‖η_t(t)-𝒢‖_L_β^∞ of the perturbation at the terminal time t is uniform in t.
In Theorem <ref>, we do not have an estimate of the convergence towards equilibrium. This is because we assume no orthogonality condition for the initial data f^0 and the terminal data e^g(t), and because the function D_sg need not vanish identically. These may produce (hydrodynamic) modes of constant order that are preserved by the evolution. The price of this generality is that the size ‖η_t(t)-𝒢‖_L_β^∞ of the perturbation at the terminal time t must decay as t→+∞.
§.§ Future Directions
Global Solution of Forced Boltzmann Equation: Based on the results of the current paper, it would be interesting to look at the global-in-time solution φ of the forced Boltzmann equation with forcing p given by
D_sφ=∫(φ(v')φ(v_*')exp(-Δ p)-φ(v)φ(v_*)exp(Δ p))((v_*-v)·ω)_+dω dv_*.
This type of modified equation is crucial for the large deviation theory established in <cit.> for a hard sphere gas. It is shown there that, for an appropriate function p, one can study the asymptotic probability of the empirical measure π_t^ε converging to an atypical density φ(t)
π_t^ε→φ(t), ε→ 0,
when φ is a solution of (<ref>).
Previously, only local-in-time results about the forced Boltzmann equation were known. The relation between this future direction and the present results is that the forced equation (<ref>) is exactly the forward equation in the Euler-Lagrange system (<ref>). The difference is that in this paper we solve (<ref>) with a given g, whereas (<ref>) is to be solved with a given forcing p.
Relation with Schrödinger Problem: In the previous paragraph we mentioned that for φ being a solution of (<ref>), we can study the large deviation cost of it. The (φ,p) is related with (ψ,η) through the change of variables (<ref>), with φ=ψη.
Based on the solution (ψ_t,η_t) given in Theorem <ref>, as t→ +∞ the corresponding density profile φ_t converges to a 'Relaxation and Anti-Relaxation' dynamics: it first relaxes to an equilibrium, and then anti-relaxes to an atypical density profile. This behaviour is related to the Schrödinger problem, namely the computation of the optimal path, given a large deviation cost function, followed by particle system from a given density at time 0 to another density at time t. The mean-field version of this relation has been investigated in <cit.>. For a survey of the Schrödinger Problem and its connection with optimal transport, see <cit.>.
Uniform Control of the Limiting Cumulant Generating Functional: As we have explained in Subsection <ref>, the following functional is used to encode the correlation information of a hard sphere gas with diameter ε
Λ^ε(t,h):=1/μ_εlogE_ε[exp(μ_επ_t^ε(h))].
After taking the Boltzmann-Grad limit, this functional should converge to the functional Λ(t,h) encoding correlation of the limiting particle system.
In <cit.> it has been shown that the functional Λ(t,h) coincides with the solution ℐ(t,g) of the Hamilton-Jacobi equation (<ref>) on a finite time interval [0,T_c) with T_c<+∞, with h determined by g as in (<ref>). The coincidence between Λ(t,h) and ℐ(t,g) for all times t remains to be proved.
One of the main difficulties for hard sphere gas to have global-in-time results about kinetic limit, dynamical fluctuations, and large deviations is the divergence of the upper bound for cumulant generating functionals when the time t approaches T_c. By establishing global-in-time solution ℐ, with the coincidence between Λ(t,h) and ℐ(t,g) for t∈[0,T_c), we can provide a uniform upper bound for the limiting cumulant generating functional (Figure <ref>). If this coincidence between functionals can be extended to the whole time interval, then the solution ℐ will provide a global-in-time uniform control of the limiting cumulant generating functional.
However, the current result does not provide uniform control of the ε-cumulant generating functional; this is left to future work. A uniform control of the cumulant generating functionals is expected to be an important step in proving long-time results about the kinetic limit, dynamical fluctuations, and large deviations.
§ SYMMETRIZATION AND PERTURBATION REGIME FOR COUPLED BOLTZMANN EQUATIONS
In this section, we will perform the symmetrization procedure and define the perturbation regime, needed to solve the coupled Boltzmann equations (<ref>).
As we have discussed in the previous sections, the symmetrization procedure consists in the change of variables
(ψ_t(s),η_t(s))=(φ_t(s)e^-p_t(s)+α'|v|^2,e^p_t(s)-α'|v|^2), α'=1/2(1/2+α).
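As a quick consistency check, the product of the two new unknowns recovers the original density: ψ_tη_t=φ_te^-p_t+α'|v|^2e^p_t-α'|v|^2=φ_t, which is exactly the relation φ=ψη recalled in the discussion of the Schrödinger problem above.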
It will be proved in Lemma <ref> that the pair (ψ_t,η_t) satisfies the coupled Boltzmann equation (<ref>).
After the symmetrization, we will perform a perturbation decomposition to (ψ,η): we look at the evolution for the perturbation of (ψ,η) from the coupled equilibrium, and we denote the perturbation as (ψ_p,η_p). The evolution equation (<ref>) of the perturbation is given as (<ref>). Finding a mild solution of equation (<ref>) is equivalent to finding a mild solution of equation (<ref>), and we will use a fixed-point method to find the solution of the latter.
A pair of functions (φ_t,p_t) is a mild solution of the coupled Boltzmann equations (<ref>), if and only if the pair of functions (ψ_t,η_t) given by the change of variables (<ref>) is a mild solution of the coupled Boltzmann equations with ℬ defined in (<ref>)
D_s ψ(s) =-ψ(s)ϕ(s)+𝒬_η(ψ,ψ), ψ(0)=f^0ℬ^-1,
D_s η(s) =η(s)ϕ(s)-𝒬_ψ(η,η), η(t)=e^g(t)ℬ.
Expanding the Hamiltonian system (<ref>), we have the following equation with Δ p_t defined in (<ref>)
D_sφ_t=∫(φ_t(v')φ_t(v_*')exp(-Δ p_t)-φ_t(v)φ_t(v_*)exp(Δ p_t))((v_*-v)·ω)_+dω dv_*,
D_sp_t=ϕ(s)-∫φ_t(v_*)(exp(Δ p_t)-1)((v_*-v)·ω)_+dω dv_*,
φ_t(0)=f^0e^p_t(0), p_t(t)=g(t)
According to the definition of Δ p_t, a pair (φ_t,p_t) is a solution of the system above if and only if (φ_t,p_t-α'|v|^2) is a solution of the system below. Thus
D_sφ_t=∫(φ_t(v')φ_t(v_*')exp(-Δ p_t)-φ_t(v)φ_t(v_*)exp(Δ p_t))((v_*-v)·ω)_+dω dv_*,
D_sp_t=ϕ(s)-∫φ_t(v_*)(exp(Δ p_t)-1)((v_*-v)·ω)_+dω dv_*,
φ_t(0)=f^0e^α'|v|^2e^p_t(0), p_t(t)=g(t)-α'|v|^2
This concludes the proof of the lemma.
It is natural to look at the evolution of perturbations, with the hope that the smallness of the initial and terminal perturbations could imply the global well-posedness of the equation
ψ_p:=ψ-𝒢, η_p:=η-𝒢.
The evolution of the perturbations (ψ_p,η_p) should satisfy the following coupled Boltzmann equations
∂_sψ_p(s)
= -v·∇_x ψ_p(s)+2𝒬_𝒢(ψ_p,𝒢)+𝒬_η_p(ψ_p,ψ_p)+2𝒬_η_p(ψ_p,𝒢)+𝒬_𝒢(ψ_p,ψ_p)-ψ_pϕ(s)-𝒢ϕ(s),
∂_sη_p(s)
= -v·∇_x η_p(s)-2𝒬_𝒢(η_p,𝒢)-𝒬_ψ_p(η_p,η_p)-2𝒬_ψ_p(η_p,𝒢)-𝒬_𝒢(η_p,η_p)+η_pϕ(s)+𝒢ϕ(s).
The mild solution of (<ref>) is defined as the fixed point of the fixed-point map Γ=(Γ^+,Γ^-), which will be introduced in Definition <ref>. For simplicity, we identify the mild solutions of (<ref>) with the mild solutions of (<ref>). At a rigorous level, for the solution (ψ_p,η_p) of (<ref>) considered in this paper, by performing series expansion the corresponding (ψ,η) can be shown to be a solution of (<ref>).
For the formal proof, we only detail the computation for the evolution of the forward perturbation ψ_p; the proof for the backward perturbation η_p is almost the same.
The definition of perturbation (<ref>) implies
ψ=𝒢+ψ_p, η=𝒢+η_p.
Use the equation above to replace (ψ,η) with (ψ_p,η_p) in equation (<ref>). For the evolution of the forward component, we have
∂_s (𝒢+ψ_p)(s)=-v·∇_x(𝒢+ψ_p)(s)-(𝒢+ψ_p)(s)ϕ(s)+𝒬_𝒢+η_p(𝒢+ψ_p,𝒢+ψ_p).
Since the coupled equilibrium 𝒢 is independent of the time and space variables, we get
∂_s ψ_p(s)=-v·∇_xψ_p(s)-(𝒢+ψ_p)(s)ϕ(s)+𝒬_𝒢+η_p(𝒢+ψ_p,𝒢+ψ_p).
Now we only need the following equality to expand the third-order nonlinear collision term
𝒬_𝒢+η_p(𝒢+ψ_p,𝒢+ψ_p)=2𝒬_𝒢(ψ_p,𝒢)+2𝒬_η_p(ψ_p,𝒢)+𝒬_𝒢(ψ_p,ψ_p)+𝒬_η_p(ψ_p,ψ_p).
This equality can be checked directly, using the fact that 𝒢 is the exponential of a collision invariant
𝒬_𝒢(𝒢,𝒢)=0.
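Spelled out, since the biased collision operator 𝒬_η(ψ_1,ψ_2) is linear in the bias η and bilinear in (ψ_1,ψ_2), we have
𝒬_𝒢+η_p(𝒢+ψ_p,𝒢+ψ_p)=𝒬_𝒢(𝒢,𝒢)+𝒬_η_p(𝒢,𝒢)+2𝒬_𝒢(ψ_p,𝒢)+2𝒬_η_p(ψ_p,𝒢)+𝒬_𝒢(ψ_p,ψ_p)+𝒬_η_p(ψ_p,ψ_p),
and the first two terms vanish because 𝒢(v')𝒢(v_*')=𝒢(v)𝒢(v_*), so that 𝒬_η(𝒢,𝒢)=0 for any weight η.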
The coupled Boltzmann equations (<ref>) for the perturbation (ψ_p,η_p) will be one of our central objects in the rest of the paper. In the definition below, we define the relevant notation needed to study that equation.
We define the linear operators B^+ and B^- separately as the linearized Boltzmann operators for the forward perturbation ψ_p and the backward perturbation η_p
B^+ψ_p:=-v·∇_x ψ_p+2𝒬_𝒢(ψ_p,𝒢), B^-η_p:=v·∇_x η_p+2𝒬_𝒢(η_p,𝒢).
The nonlinearity in the evolution of perturbation is denoted as
𝒩[ψ_p,η_p]:=2𝒬_η_p(ψ_p,𝒢)+𝒬_𝒢(ψ_p,ψ_p)+𝒬_η_p(ψ_p,ψ_p).
We introduce the map Γ:(ψ_p,η_p)↦ (Γ^+[ψ_p,η_p],Γ^-[ψ_p,η_p]), whose fixed point is a mild solution of the coupled Boltzmann equations (<ref>)
{ Γ^+[ψ_p,η_p](s):=e^sB^+ψ_p(0)-∫_0^s e^(s-τ)B^+(𝒢+ψ_p)ϕ(τ)dτ+∫_0^se^(s-τ)B^+𝒩[ψ_p,η_p]dτ,
Γ^-[ψ_p,η_p](s):=e^(t-s)B^-η_p(t)-∫_s^t e^(τ-s)B^-(𝒢+η_p)ϕ(τ)dτ+∫_s^t e^(τ-s)B^-𝒩[η_p,ψ_p]dτ.
.
In these notations, the + sign of B^+ and Γ^+ means that the operators are associated with the forward perturbation ψ_p, while the - sign of B^- and Γ^- means that the operators are associated with the backward perturbation η_p. Specifically, since the terminal data of η_p is given at time t, we consider the evolution of η_p in reversed time. This is why the operator B^- differs in sign from the linearized operator appearing in the second line of equation (<ref>).
Sections <ref>, <ref>, and <ref> will be devoted to proving the existence and uniqueness of the fixed-point (ψ_p,η_p) of Γ,
Γ^+[ψ_p,η_p]=ψ_p, Γ^-[ψ_p,η_p]=η_p.
Specifically, we will prove that the map Γ is a contraction map in certain function spaces. In Section <ref> we prove decay estimates for the semigroups generated by B^+ and B^-; in Section <ref> we prove estimates of the nonlinear terms involved in the fixed-point problem; in Section <ref> we prove the contraction property of the map Γ.
§ ESTIMATE OF RELEVANT SEMIGROUPS IN L_Β^∞ NORM
The operator B^+ (resp. B^-) defined in Definition <ref> generates a strongly continuous semigroup e^sB^+ (resp. e^sB^-) on L^2(T_x^d×R_v^d). We can decompose the two semigroups as
e^sB^+=𝒟_1^+(s)+𝒟_2^+(s), e^sB^-=𝒟_1^-(s)+𝒟_2^-(s).
The components 𝒟_1^+(s) and 𝒟_1^-(s) have explicit expressions as
𝒟_1^+(s)f(x,v)=e^-ν(v)sf(x-sv,v), 𝒟_1^-(s)f(x,v)=e^-ν(v)sf(x+sv,v),
where ν(v) is the frequency multiplier defined in (<ref>). For hard spheres, there are positive constants c_1,c_2>0 such that c_1(1+|v|)≤ν(v)≤ c_2(1+|v|).
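As a sanity check on these explicit formulas, the function u(s,x,v):=e^-ν(v)sf(x-sv,v) satisfies
∂_s u=-v·∇_x u-ν(v)u, u(0,·,·)=f,
so 𝒟_1^+(s) is the semigroup of free transport damped at rate ν(v); the backward component 𝒟_1^-(s) solves the analogous equation with the opposite sign of the transport term.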
The components 𝒟_2^+(s) and 𝒟_2^-(s) are respectively defined in (<ref>) and (<ref>).
Recall the definition of 𝒢 as
𝒢:=M^1/2ℰ^1/2=(2π)^-d/2exp(-1/2(1/2-α)|v|^2).
We define 𝒫 as the orthogonal projection from L^2(T_x^d×R_v^d) onto the subspace 𝒦 defined in (<ref>). The orthonormal basis spanning 𝒦 is independent of the variable x and is denoted by {f_i}_0≤ i≤ d+1. Notice that {f_i}_0≤ i≤ d+1 is different from the basis in (<ref>), since that basis need not be orthogonal or normalized. For a function f∈ L^2(T_x^d×R_v^d), we say f∈𝒦^⊥ if 𝒫f=0.
Lemma <ref> is essentially a reorganization of the results in <cit.>. Its proof will be recalled in Appendix <ref>.
For β>4, there exist constants ν_*>0 and C>0 such that for any function f∈𝒦^⊥ and s≥ 0, we have
‖𝒟_2^+(s)f‖_L_β^∞≤ Ce^-ν_* s‖ (1+|v|)^-1f‖_L_β^∞, ‖𝒟_2^-(s)f‖_L_β^∞≤ Ce^-ν_* s‖ (1+|v|)^-1f‖_L_β^∞,
and for any function f∈ L_β^∞ and s≥ 0, we have
‖𝒟_2^+(s)f‖_L_β^∞≤ C‖ f‖_L_β^∞, ‖𝒟_2^-(s)f‖_L_β^∞≤ C‖ f‖_L_β^∞,
‖𝒟_1^+(s)f‖_L_β^∞≤ Ce^-ν_*s‖ f‖_L_β^∞, ‖𝒟_1^-(s)f‖_L_β^∞≤ Ce^-ν_*s‖ f‖_L_β^∞.
For general data f∈ L_β^∞, the estimates of 𝒟_2^+(s)f and 𝒟_2^-(s)f are improved by the next Proposition.
For β>4, there exists a constant C>0 such that for arbitrary function f∈ L_β^∞ and s≥ 0 we have
‖𝒟_2^+(s)f‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞, ‖𝒟_2^-(s)f‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞.
We detail the proof only for the forward component 𝒟_2^+ since the proof for the other component 𝒟_2^- is exactly the same.
First we consider the decomposition of f as f=𝒫f+(𝕀-𝒫)f
𝒟_2^+(s)f=𝒟_2^+(s)𝒫f+𝒟_2^+(s)(𝕀-𝒫)f.
Using Lemma <ref> and Lemma <ref> as well as the decomposition of f, there is
‖𝒟_2^+(s)f ‖_L_β^∞≤‖𝒟_2^+(s)𝒫f‖_L_β^∞+‖𝒟_2^+(s)(𝕀-𝒫)f ‖_L_β^∞≤ C‖𝒫f ‖_L_β^∞+C‖ (1+|v|)^-1(𝕀-𝒫)f ‖_L_β^∞
Thus to derive the desired estimate (<ref>), it is enough to prove
‖𝒫f ‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞, ‖ (1+|v|)^-1(𝕀-𝒫)f ‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞
For ‖𝒫f‖_L_β^∞, we use the fact that 𝒫f∈𝒦, and that 𝒦 is spanned by a family of orthogonal functions {f_j}_0≤ j≤ d+1 with exponential decay in v, therefore belonging to L_β^∞
‖𝒫f‖_L_β^∞ = ‖∑_j=0^d+1⟨ f,f_j⟩_L_x,v^2f_j‖_L_β^∞≤∑_j=0^d+1‖⟨ f,f_j⟩_L_x,v^2f_j‖_L_β^∞.
Using Cauchy-Schwartz inequality for the inner product ⟨ f,f_j⟩ as well as the fact ‖ f_j‖_L_β^∞ is bounded, we further have
‖⟨ f,f_j⟩_L_x,v^2f_j‖_L_β^∞≤ C‖‖ f‖_L_x,v^2‖ f_j‖_L_x,v^2 f_j‖_L_β^∞≤ C‖ f‖_L_x,v^2.
Since β>4, the norm L_β-1^∞ is stronger than the norm L_x,v^2. This implies
‖⟨ f,f_j⟩_L_x,v^2f_j‖_L_β^∞≤ C‖ f‖_L_x,v^2≤ C‖ f‖_L_β-1^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞.
Combining (<ref>) with (<ref>), we derive the first inequality in (<ref>)
‖𝒫f‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞.
For the second inequality in (<ref>), using the triangle inequality we have
‖ (1+|v|)^-1(𝕀-𝒫)f ‖_L_β^∞≤‖ (1+|v|)^-1f ‖_L_β^∞+‖ (1+|v|)^-1𝒫f ‖_L_β^∞.
By (<ref>) there is
‖ (1+|v|)^-1𝒫f ‖_L_β^∞≤‖𝒫f‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞
This implies the second inequality in (<ref>), and thus concludes the proof of the proposition.
§ CONTROL OF THE NONLINEARITY
This section is devoted to the control of the nonlinear terms in the fixed-point map Γ in definition <ref>. Subsection <ref> proves the control of the biased collision operator. Subsection <ref> further gives the estimate of nonlinear terms needed for Theorem <ref>, while Subsection <ref> is for Theorem <ref>. The framework of the estimates in this section originates from <cit.>, with some additional analysis to handle the coupled Boltzmann equations.
§.§ Estimates for the Biased Collision Operator
For any parameter β>4 and functions η,ψ_1,ψ_2∈ L_β^∞, the biased collision operator 𝒬_η(ψ_1,ψ_2) is bounded from above by
‖ (1+|v|)^-1𝒬_η(ψ_1,ψ_2) ‖_L_β^∞≤ C‖η‖_L_β^∞‖ψ_1‖_L_β^∞‖ψ_2 ‖_L_β^∞.
Recall the definition (<ref>) of the biased collision operator
𝒬_η(ψ_1,ψ_2):=1/2∫((v_*-v)·ω)_+ η(v_*)[ψ_1(v')ψ_2(v_*')+ψ_2(v')ψ_1(v_*')
-ψ_1(v)ψ_2(v_*)-ψ_2(v)ψ_1(v_*)]dω dv_*.
The operator is a summation of four terms. We will detail the proof of the following inequality
‖(1+|v|)^-1[∫((v_*-v)·ω)_+ η(v_*)ψ_1(v')ψ_2(v_*')dω dv_*]‖_L_β^∞≤ C‖η‖_L_β^∞‖ψ_1‖_L_β^∞‖ψ_2 ‖_L_β^∞.
The inequality above gives an upper bound for one of the four terms in the biased collision operator. The proof of upper bound for the other three terms is essentially the same. These together imply (<ref>).
By the definition (<ref>) of the L_β^∞ norm, we have for arbitrary function g∈ L_β^∞
|g(v)|≤‖ g‖_L_β^∞ (1+|v|^β)^-1.
Using the inequality above, we deduce
‖(1+|v|)^-1[∫((v_*-v)·ω)_+ η(v_*)ψ_1(v')ψ_2(v_*')dω dv_*]‖_L_β^∞
≤ ‖∫_R^d×S^2 (1+|v|)^-1((v_*-v)·ω)_+(1+|v_*|)^-β(1+|v'|)^-β(1+|v_*'|)^-β dω dv_*‖_L_β^∞
×‖η‖_L_β^∞‖ψ_1‖_L_β^∞‖ψ_2 ‖_L_β^∞.
According to the definition (<ref>) of the L_β^∞ norm, this term is less than
‖∫_R^d×S^d-1 (1+|v|)^-1((v_*-v)·ω)_+(1+|v|)^β(1+|v_*|)^-β(1+|v'|)^-β(1+|v_*'|)^-β dω dv_*‖_L^∞
×‖η‖_L_β^∞‖ψ_1‖_L_β^∞‖ψ_2 ‖_L_β^∞.
To control the L^∞-norm, we write
((v_*-v)·ω)_+≤ |v-v_*|≤ |v|+|v_*|.
With the inequality above, we are able to control the L^∞ norm in (<ref>)
‖∫_R^d×S^d-1 (1+|v|)^-1((v-v_*)·ω)_+(1+|v|)^β(1+|v_*|)^-β(1+|v'|)^-β(1+|v_*'|)^-β dω dv_*‖_L^∞
≤ ‖∫_R^d×S^d-1 (1+|v|)^-1(|v|+|v_*|)(1+|v|)^β(1+|v_*|)^-β(1+|v'|)^-β(1+|v_*'|)^-β dω dv_*‖_L^∞.
By the definition of the pre-collisional configuration (v',v_*') and the conservation of kinetic energy, which gives |v'|+|v_*'|≥(|v'|^2+|v_*'|^2)^1/2=(|v|^2+|v_*|^2)^1/2≥|v|, we have
(1+|v'|)^-β(1+|v_*'|)^-β≤ (1+|v'|+|v_*'|+|v'||v_*'|)^-β≤ C(1+|v|)^-β.
Using the fact that β>4, the inequality above further implies
‖∫_R^d×S^d-1 (1+|v|)^-1((v-v_*)·ω)_+(1+|v|)^β(1+|v_*|)^-β(1+|v'|)^-β(1+|v_*'|)^-β dω dv_*‖_L^∞
≤ C‖∫_R^d×S^d-1 (1+|v|)^-1(|v|+|v_*|)(1+|v_*|)^-β dω dv_*‖_L^∞≤ C.
This concludes the proof of inequality (<ref>), and thus concludes the proof of the lemma.
Based on Lemma <ref>, we can derive the corollary below, by replacing the η or ψ_2 in Lemma <ref> with the coupled equilibrium 𝒢.
For any parameter β>4 and functions η,ψ_1,ψ_2∈ L_β^∞, we have
‖ (1+|v|)^-1𝒬_𝒢(ψ_1,ψ_2) ‖_L_β^∞≤ C‖ψ_1‖_L_β^∞‖ψ_2 ‖_L_β^∞, ‖ (1+|v|)^-1𝒬_η(ψ_1,𝒢) ‖_L_β^∞≤ C‖η‖_L_β^∞‖ψ_1 ‖_L_β^∞.
From now on the terminal time t is fixed, but all the constants are independent of t.
For the forward component, we define the convolution of the semigroup e^sB^+ and the biased collision operator as
Ψ^+[η,ψ_1,ψ_2](s):=∫_0^se^(s-τ)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ.
Due to the decomposition (<ref>) of e^sB^+, we also define the decomposition components of Ψ^+
Ψ_1^+[η,ψ_1,ψ_2](s):=∫_0^s𝒟_1^+(s-τ)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ
Ψ_2^+[η,ψ_1,ψ_2](s):=∫_0^s𝒟_2^+(s-τ)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ
For the backward component, we define the convolution of the semigroup e^sB^- and the biased collision operator as
Ψ^-[ψ,η_1,η_2](s)=∫_s^te^(τ-s)𝒬_ψ(τ)(η_1(τ),η_2(τ))dτ.
Due to the decomposition (<ref>) of e^sB^-, we also define the decomposition components of Ψ^-
Ψ_1^-[η,ψ_1,ψ_2](s):=∫_s^t𝒟_1^-(τ-s)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ
Ψ_2^-[η,ψ_1,ψ_2](s):=∫_s^t𝒟_2^-(τ-s)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ
Based on the definitions of Ψ^+ and Ψ^-, we can rewrite the fixed-point map Γ in a more useful form
Γ^+[ψ_p,η_p]
= e^sB^+ψ_p(0)-∫_0^se^(s-τ)B^+(𝒢+ψ_p)ϕ dτ
+Ψ^+[η_p,ψ_p,ψ_p]+2Ψ^+[η_p,ψ_p,𝒢]+Ψ^+[𝒢,ψ_p,ψ_p],
Γ^-[ψ_p,η_p]
= e^(t-s)B^-η_p(t)-∫_s^te^(τ-s)B^-(𝒢+η_p)ϕ dτ
+Ψ^-[ψ_p,η_p,η_p]+2Ψ^-[ψ_p,η_p,𝒢]+Ψ^-[𝒢,η_p,η_p].
A proper norm must be chosen to prove that Γ is a contraction map. For this purpose we define the norm P_β^σ.
We define the norm ‖·‖_P_β^σ as
‖ψ_p‖_P_β^σ:=sup_0≤ s≤ t(1+s)^σ‖ψ_p(s)‖_L_β^∞.
We also define the norm ‖·‖_E_β^σ as
‖ψ_p‖_E_β^σ:=sup_0≤ s≤ te^σ s‖ψ_p(s)‖_L_β^∞.
Recalling that t is the terminal time, we introduce the time reversal operator as
η_p^(s)=η_p(t-s).
According to the definition of P_β^σ and , we have
‖ψ_p(s)‖_L_β^∞≤ (1+s)^-σ‖ψ_p‖_P_β^σ, ‖η_p(s)‖_L_β^∞≤(1+(t-s))^-σ‖η_p^‖_P_β^σ,
where the second inequality is due to
‖η_p^‖_P_β^σ:=sup_0≤ s≤ t(1+s)^σ‖η_p^(s)‖_L_β^∞= sup_0≤ s≤ t(1+s)^σ‖η_p(t-s)‖_L_β^∞
= sup_0≤ s≤ t(1+(t-s))^σ‖η_p(s)‖_L_β^∞.
Similar inequalities are also true for E_β^σ
‖ψ_p(s)‖_L_β^∞≤ e^-σ s‖ψ_p‖_E_β^σ, ‖η_p(s)‖_L_β^∞≤ e^-σ (t-s)‖η_p^‖_E_β^σ.
§.§ Control of Convolutional Nonlinearity for Theorem <ref>
In this subsection, we provide the control of P_β^σ norm for Ψ^+ and Ψ^-, which is useful for Theorem <ref>.
In subsections <ref> and <ref>, all the estimates of Ψ^+ (resp. Ψ^-) will be reduced to the estimates of Ψ_1^+ and Ψ_2^+ (resp. Ψ_1^- and Ψ_2^-).
For any parameters β>4, σ>1, and functions η^,ψ_1,ψ_2∈ P_β^σ, we have the upper bound for the P_β^σ norm of Ψ^+[η,ψ_1,ψ_2] as
‖Ψ^+[η,ψ_1,ψ_2]‖_P_β^σ≤ C‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ.
Estimate of Ψ_2^+: First we would like to prove the upper bound for the L_β^∞ norm of Ψ_2^+(s), with 0≤ s ≤ t. According to the definition of Ψ^+, there is
‖Ψ_2^+[η,ψ_1,ψ_2](s)‖_L_β^∞= ‖∫_0≤τ≤ s𝒟_2^+(s-τ)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))dτ‖_L_β^∞.
Using the boundedness of 𝒟_2^+(s) from L_β-1^∞ to L_β^∞ given by Proposition <ref>, we further have
‖Ψ_2^+[η,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ s‖ (1+|v|)^-1𝒬_η(τ)(ψ_1(τ),ψ_2(τ))‖_L_β^∞ dτ.
To conclude the estimate of the L_β^∞ norm, we use Lemma <ref> to control the L_β^∞ norm related to the biased collision operator
‖Ψ_2^+[η,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ s‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ) ‖_L_β^∞dτ.
Based on the estimate (<ref>) of the L_β^∞ norm, we want to further control the P_β^σ norm of Ψ_2^+. According to the definition (<ref>) of the P_β^σ norm
‖Ψ_2^+[η,ψ_1,ψ_2]‖_P_β^σ =sup_s∈[0,t](1+s)^σ‖Ψ_2^+[η,ψ_1,ψ_2](s)‖_L_β^∞
≤sup_s∈[0,t]C(1+s)^σ∫_0≤τ≤ s‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ) ‖_L_β^∞dτ
≤ C(1+t)^σ∫_0≤τ≤ t‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ) ‖_L_β^∞dτ.
To get the upper bound in (<ref>), we will use the upper bounds (<ref>) provided by the norm P_β^σ. The inequality (<ref>) implies
‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ) ‖_L_β^∞≤(1+(t-τ))^-σ(1+τ)^-2σ‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ.
Combining (<ref>) with (<ref>) leads to
‖Ψ_2^+[η,ψ_1,ψ_2]‖_P_β^σ
≤ C(1+t)^σ∫_0≤τ≤ t(1+(t-τ))^-σ(1+τ)^-2σdτ‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ
≤ C(1+t)^σ(1+t)^-min{σ,2σ}‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ,
where the second inequality is due to Lemma <ref>. Noticing the simple fact that min{σ,2σ}=σ concludes the estimate of Ψ_2^+.
Estimate of Ψ_1^+: Using the explicit expression (<ref>) of 𝒟_1^+, we get
|Ψ_1^+[η,ψ_1,ψ_2](s,x,v)|
≤ C∫_0^se^-ν(v)(s-τ)|sup_x∈T^d𝒬_η(τ)(ψ_1(τ),ψ_2(τ))(x,v)|dτ
≤ C∫_0^se^-ν(v)(s-τ)(1+τ)^-2σν(v)(1+|v|)^-β((1+τ)^2σ(1+|v|)^β1/ν(v)|sup_x∈T^d𝒬_η(τ)(ψ_1(τ),ψ_2(τ))(x,v)|)dτ_.
Taking the supremum over τ and v in term (I), along with the fact that ν(v) is equivalent to 1+|v| up to constants, we further have
|Ψ_1^+[η,ψ_1,ψ_2](s,x,v)|
≤ ∫_0^se^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ (1+|v|)^-β(sup_0≤τ≤ s(1+τ)^2σ‖1/ν(v)𝒬_η(τ)(ψ_1(τ),ψ_2(τ))‖_L_β^∞)
≤ C∫_0^se^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ (1+|v|)^-β(sup_0≤τ≤ s(1+τ)^2σ‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ)‖_L_β^∞)
≤ C∫_0^se^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ (1+|v|)^-β‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ,
where in the second inequality we have used Lemma <ref>. For the time integral in the inequalities above, we decompose the integral and get
∫_0^se^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ
= ∫_0^s/2e^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ+∫_s/2^se^-ν(v)(s-τ)(1+τ)^-2σν(v)dτ
≤ e^-ν_*s/4∫_0^s/2e^-ν(v)(s-τ)/2(1+τ)^-2σν(v)dτ+(1+s/2)^-2σ∫_s/2^se^-ν(v)(s-τ)ν(v)dτ≤ C(1+s)^-σ.
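Spelling out the last step: since (1+τ)^-2σ≤ 1 and the exponential factors integrate to at most a constant,
∫_0^s/2e^-ν(v)(s-τ)/2(1+τ)^-2σν(v)dτ≤ 2, ∫_s/2^se^-ν(v)(s-τ)ν(v)dτ≤ 1,
so the right-hand side is bounded by 2e^-ν_*s/4+(1+s/2)^-2σ≤ C(1+s)^-σ, with C depending only on σ and ν_*.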
The convolution inequality above implies
‖Ψ_1^+[η,ψ_1,ψ_2](s)‖_L_β^∞≤ (1+s)^-σ‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ.
Thus by the definition (<ref>) of the P_β^σ norm
‖Ψ_1^+[η,ψ_1,ψ_2]‖_P_β^σ≤ C‖η^‖_P_β^σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ.
This along with the estimate of Ψ_2^+ concludes the proof.
For any parameters β>4,σ>1, and functions η^,ψ_1∈ P_β^σ, we have the upper bound for the P_β^σ norm of Ψ^+[η,ψ_1,𝒢] as
‖Ψ^+[η,ψ_1,𝒢]‖_P_β^σ≤ C‖η^‖_P_β^σ‖ψ_1‖_P_β^σ.
Estimate of Ψ_2^+: Using exactly the same method of proving equation (<ref>) in Lemma <ref>, we have the upper bound for the L_β^∞ norm of Ψ^+[η,ψ_1,𝒢] as
‖Ψ_2^+[η,ψ_1,𝒢](s)‖_L_β^∞ ≤ C∫_0≤τ≤ s‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞dτ.
This further implies
‖Ψ_2^+[η,ψ_1,𝒢](s)‖_P_β^σ ≤ Csup_0≤ s≤ t(1+s)^σ∫_0≤τ≤ s‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞dτ
≤ C(1+t)^σ∫_0≤τ≤ t‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞dτ.
Again we use the upper bounds (<ref>) provided by the norm P_β^σ, just as in the proof of Lemma <ref>,
‖Ψ_2^+[η,ψ_1,𝒢]‖_P_β^σ≤ C(1+t)^σ∫_0≤τ≤ t(1+(t-τ))^-σ(1+τ)^-σ‖η^‖_P_β^σ‖ψ_1‖_P_β^σdτ
≤ C(1+t)^σ(1+t)^-min{σ,σ}‖η^‖_P_β^σ‖ψ_1‖_P_β^σ.
Noticing the simple fact that min{σ,σ}=σ concludes the estimate of Ψ_2^+.
Estimate of Ψ_1^+: The estimate of Ψ_1^+[η,ψ_1,𝒢] is similar to the estimate of Ψ_1^+[η,ψ_1,ψ_2]. Similar to (<ref>), we have
|Ψ_1^+[η,ψ_1,𝒢](s,x,v)|≤ C∫_0^se^-ν(v)(s-τ)(1+τ)^-σν(v)dτ (1+|v|)^-β‖η^‖_P_β^σ‖ψ_1‖_P_β^σ.
Then the time integral gives a factor (1+s)^-σ
‖Ψ_1^+[η,ψ_1,𝒢](s)‖_L_β^∞≤ (1+s)^-σ‖η^‖_P_β^σ‖ψ_1‖_P_β^σ.
This eventually gives
‖Ψ_1^+[η,ψ_1,𝒢]‖_P_β^σ≤ C‖η^‖_P_β^σ‖ψ_1‖_P_β^σ.
This along with the estimate of Ψ_2^+ concludes the proof.
For any parameters β>4,σ>1, and functions ψ_1,ψ_2∈ P_β^σ, we have the upper bound for the P_β^σ norm of Ψ^+[𝒢,ψ_1,ψ_2] as
‖Ψ^+[𝒢,ψ_1,ψ_2]‖_P_β^σ≤ C‖ψ_1‖_P_β^σ‖ψ_2‖_P_β^σ.
Estimate of Ψ_2^+: The proof of this lemma is slightly different from the proof of Lemma <ref> and Lemma <ref>. Again due to the definition of Ψ^+ there is
‖Ψ_2^+[𝒢,ψ_1,ψ_2](s)‖_L_β^∞= ‖∫_0≤τ≤ s𝒟_2^+(s-τ)𝒬_𝒢(ψ_1(τ),ψ_2(τ))dτ‖_L_β^∞.
The reason for the difference is that for arbitrary τ≥ 0 we have
𝒬_𝒢(ψ_1(τ),ψ_2(τ))∈𝒦^⊥.
This equality can be verified as the following. Suppose h is a collision invariant. For each x we perform the integration over v
∫𝒢h(v)𝒬_𝒢(ψ_1(τ),ψ_2(τ))(v)dvdx
= 1/2∫((v_*-v)·ω)_+ h(v)𝒢(v)𝒢(v_*)
×(ψ_1(v')ψ_2(v_*')+ψ_2(v')ψ_1(v_*')-ψ_1(v)ψ_2(v_*)-ψ_2(v)ψ_1(v_*))dω dv_*dvdx.
Using the symmetry of the collision measure
((v_*-v)·ω)_+dω dv_*dv,
we can perform the change of variables (v,v_*,ω)↦ (v_*,v,-ω) or (v,v_*,ω)↦ (v',v_*',-ω). Then we would have
∫𝒢h(v)𝒬_𝒢(ψ_1(τ),ψ_2(τ))(v)dvdx
= 1/8∫((v_*-v)·ω)_+ 𝒢(v)𝒢(v_*)(h(v)+h(v_*)-h(v')-h(v_*'))
×(ψ_1(v')ψ_2(v_*')+ψ_2(v')ψ_1(v_*')-ψ_1(v)ψ_2(v_*)-ψ_2(v)ψ_1(v_*))dω dv_*dvdx.
Since h is a collision invariant, we have h(v')+h(v_*')-h(v)-h(v_*)=0. This implies
⟨ f_i, 𝒬_𝒢(ψ_1(τ),ψ_2(τ))⟩_L_x,v^2=0
for arbitrary 0≤ i≤ d+1, where {f_i}_0≤ i≤ d+1 is the basis of the kernel 𝒦. Thus there is 𝒬_𝒢(ψ_1(τ),ψ_2(τ))∈𝒦^⊥.
Using the orthogonality (<ref>) and Lemma <ref>, we can transform (<ref>) into
‖Ψ_2^+[𝒢,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ se^-ν_*(s-τ)‖ (1+|v|)^-1𝒬_𝒢(ψ_1(τ),ψ_2(τ))‖_L_β^∞dτ.
We can further control the L_β^∞ norm of the biased collision term by Lemma <ref>
‖Ψ_2^+[𝒢,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ se^-ν_*(s-τ)‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ)‖_L_β^∞dτ.
According to the definition of the P_β^σ norm, we have the inequality
‖Ψ_2^+[𝒢,ψ_1,ψ_2]‖_P_β^σ≤ C(1+s)^σ∫_0≤τ≤ se^-ν_*(s-τ)(1+τ)^-2σdτ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ
≤ C(1+s)^σ(1+s)^-2σ‖ψ_1‖_P_β^σ‖ψ_2 ‖_P_β^σ.
This concludes the estimate of Ψ_2^+.
Estimate of Ψ_1^+: The estimate of Ψ_1^+[𝒢,ψ_1,ψ_2] is also similar to the estimate of Ψ_1^+[η,ψ_1,ψ_2]. As in (<ref>), we have
‖Ψ_1^+[𝒢,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0^se^-ν(v)(s-τ)(1+τ)^-σν(v)dτ (1+|v|)^-β‖ψ_1‖_P_β^σ‖ψ_2‖_P_β^σ.
Consequently we get
‖Ψ_1^+[𝒢,ψ_1,ψ_2](s)‖_P_β^σ≤ C‖ψ_1‖_P_β^σ‖ψ_2‖_P_β^σ
This along with the estimate of Ψ_2^+ concludes the proof.
Since the evolution of the perturbations (ψ_p,η_p) is symmetric, we straightforwardly have the lemma below.
For any parameters β>4, σ>1, and functions ψ,η_1^,η_2^∈ P_β^σ, we have the following upper bounds for the P_β^σ norm of various terms
{ ‖ (Ψ^-[ψ,η_1,η_2])^‖_P_β^σ≤ C‖ψ‖_P_β^σ‖η_1^‖_P_β^σ‖η_2^‖_P_β^σ,
‖ (Ψ^-[ψ,η_1,𝒢])^‖_P_β^σ≤ C‖ψ‖_P_β^σ‖η_1^‖_P_β^σ,
‖ (Ψ^-[𝒢,η_1,η_2])^‖_P_β^σ≤ C‖η_1^‖_P_β^σ‖η_2^‖_P_β^σ.
.
§.§ Control of Convolutional Nonlinearity for Theorem <ref>
In this section we provide the control of the E_β^0-norm (see Definition <ref>) of Ψ^+ and the E_β^-σ norm of Ψ^-, which is useful for Theorem <ref>.
For any parameters β>4,σ>0, functions ψ_1,ψ_2∈ E_β^0 and η^∈ E_β^-σ, we have the following upper bounds for the E_β^0 norm of various terms
{ ‖Ψ^+[η,ψ_1,ψ_2]‖_E_β^0≤ C e^σ t‖η^‖_E_β^-σ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0,
‖Ψ^+[η,ψ_1,𝒢]‖_E_β^0≤ C e^σ t‖η^‖_E_β^-σ‖ψ_1‖_E_β^0,
‖Ψ^+[𝒢,ψ_1,ψ_2]‖_E_β^0≤ C ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0.
.
The first inequality in (<ref>):
The Ψ_2^+ Term: We use the upper bound for the L_β^∞ norm of Ψ_2^+, which has been proved in (<ref>)
‖Ψ_2^+[η,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ s‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ) ‖_L_β^∞dτ.
Since the E_β^0 norm is defined as the supremum of L_β^∞ for different time 0≤ s≤ t, we have
‖Ψ_2^+[η,ψ_1,ψ_2]‖_E_β^0≤ C∫_0≤τ≤ te^σ(t-τ)‖η^‖_E_β^-σ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0dτ≤ C e^σ t‖η^‖_E_β^-σ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0.
The Ψ_1^+ term: For Ψ_1^+, we have
|Ψ_1^+[η,ψ_1,ψ_2](s,x,v)|
≤ C∫_0^se^-ν(v)(s-τ)e^σ (t-τ)ν(v)dτ (1+|v|)^-β(sup_0≤τ≤ se^-σ(t-τ)‖1/1+|v|𝒬_η(τ)(ψ_1(τ),ψ_2(τ))‖_L_β^∞)
≤ C ∫_0^se^-ν(v)(s-τ)e^σ(t-τ)ν(v)dτ (1+|v|)^-β(sup_0≤τ≤ se^-σ (t-τ)‖η(τ)‖_L_β^∞‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ)‖_L_β^∞)
≤ e^σ t∫_0^se^-ν(v)(s-τ)e^-στν(v)dτ (1+|v|)^-β‖η^‖_E_β^-σ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0,
The time integral is uniformly bounded for all s and v
∫_0^se^-ν(v)(s-τ)e^-στν(v)dτ≤ C.
Consequently
‖Ψ_1^+[η,ψ_1,ψ_2]‖_E_β^0≤ C e^σ t‖η^‖_E_β^-σ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0.
This concludes the proof of the first inequality.
The second inequality in (<ref>): It can be derived by replacing ψ_2 with 𝒢.
The third inequality in (<ref>):
The Ψ_2^+ term: We first use the following upper bound for the L_β^∞, which has been proved as (<ref>) in Lemma <ref>
‖Ψ_2^+[𝒢,ψ_1,ψ_2](s)‖_L_β^∞≤ C∫_0≤τ≤ se^-ν_*(s-τ)‖ψ_1(τ)‖_L_β^∞‖ψ_2(τ)‖_L_β^∞dτ.
Then due to the definition of the E_β^0 norm, we have
‖Ψ_2^+[𝒢,ψ_1,ψ_2]‖_E_β^0≤ C∫_0^te^-ν_*(s-τ)‖ψ_1‖_E_β^0‖ψ_2‖_E_β^0dτ≤ C‖ψ_1‖_E_β^0‖ψ_2‖_E_β^0.
This concludes the estimate of Ψ_2^+.
The Ψ_1^+ term: Similar to (<ref>), we obtain
|Ψ_1^+[𝒢,ψ_1,ψ_2](s,x,v)|
≤ C∫_0^se^-ν(v)(s-τ)ν(v)dτ (1+|v|)^-β(sup_0≤τ≤ s‖1/1+|v|𝒬_𝒢(ψ_1(τ),ψ_2(τ))‖_L_β^∞)
≤ C∫_0^se^-ν(v)(s-τ)ν(v)dτ (1+|v|)^-β‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0,
The time integral gives a uniformly bounded constant. As a consequence
‖Ψ_1^+[𝒢,ψ_1,ψ_2]‖_E_β^0≤ C ‖ψ_1‖_E_β^0‖ψ_2 ‖_E_β^0.
This, along with the estimate of Ψ_2^+, concludes the proof.
For any parameters β>4,σ>0, any functions ψ∈ E_β^0 and η_1^,η_2^∈ E_β^-σ, we have the following upper bounds for the norm of various terms
{ ‖ (Ψ^-[ψ,η_1,η_2])^‖_E_β^-σ≤ Ce^σ t‖ψ‖_E_β^0‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ,
‖ (Ψ^-[ψ,η_1,𝒢])^‖_E_β^-σ≤ C‖ψ‖_E_β^0‖η_1^‖_E_β^-σ,
‖ (Ψ^-[𝒢,η_1,η_2])^‖_E_β^-σ≤ Ce^σ t‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ.
.
The first inequality in (<ref>):
The Ψ_2^- term: Similar to (<ref>), we have
‖Ψ_2^-[ψ,η_1,η_2](s)‖_L_β^∞≤ C∫_s≤τ≤ t‖ψ(τ)‖_L_β^∞‖η_1(τ)‖_L_β^∞‖η_2(τ) ‖_L_β^∞dτ.
According to the definition (<ref>) of the relevant norms, the equation above implies
‖ (Ψ_2^-[ψ,η_1,η_2])^‖_E_β^-σ :=sup_0≤ s≤ te^-σ (t-s)‖Ψ_2^-[ψ,η_1,η_2](s)‖_L_β^∞
≤ Csup_0≤ s≤ te^-σ (t-s)∫_s≤τ≤ t‖ψ(τ)‖_L_β^∞‖η_1(τ)‖_L_β^∞‖η_2(τ) ‖_L_β^∞dτ
≤ Csup_0≤ s≤ te^-σ (t-s)∫_s≤τ≤ te^2σ(t-τ)‖ψ‖_E_β^0‖η_1‖_E_β^-σ‖η_2 ‖_E_β^-σdτ
After the integrating over time s≤τ≤ t, there is
‖ (Ψ_2^-[ψ,η_1,η_2])^‖_E_β^-σ ≤ Csup_0≤ s≤ te^σ (t-s)‖ψ‖_E_β^0‖η_1‖_E_β^-σ‖η_2 ‖_E_β^-σ
≤ Ce^σ t‖ψ‖_E_β^0‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ.
The Ψ_1^- term: For Ψ_1^-, we have
|Ψ_1^-[η,ψ_1,ψ_2](s,x,v)|
≤ C∫_s^te^-ν(v)(τ-s)e^2σ (t-τ)ν(v)dτ (1+|v|)^-β(sup_0≤τ≤ se^-2σ(t-τ)‖1/1+|v|𝒬_ψ(τ)(η_1(τ),η_2(τ))‖_L_β^∞)
≤ C ∫_s^te^-ν(v)(τ-s)e^2σ(t-τ)ν(v)dτ (1+|v|)^-β(sup_0≤τ≤ se^-2σ (t-τ)‖ψ(τ)‖_L_β^∞‖η_1(τ)‖_L_β^∞‖η_2(τ)‖_L_β^∞)
≤ C∫_s^te^-ν(v)(τ-s)e^2σ(t- τ)ν(v)dτ (1+|v|)^-β‖ψ‖_E_β^0‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ,
For the time integral, there is
∫_s^te^-ν(v)(τ-s)e^2σ(t-τ)ν(v)dτ=e^2σ(t-s)∫_s^te^-(ν(v)+2σ)(τ-s)ν(v)dτ≤ C e^2σ(t-s).
Consequently
‖Ψ_1^-[ψ,η_1,η_2](s)‖_L_β^∞≤ Ce^2σ(t-s)‖ψ‖_E_β^0‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ.
According to the definition (<ref>) of the E_β^-σ norm, we further have
‖Ψ_1^-[ψ,η_1,η_2]‖_E_β^-σ≤ Ce^σ t‖ψ‖_E_β^0‖η_1^‖_E_β^-σ‖η_2^‖_E_β^-σ.
This concludes the proof of the first inequality in (<ref>).
The second inequality in (<ref>):
The Ψ_2^- term: In a way similar to (<ref>), there is
‖Ψ_2^-[ψ,η_1,𝒢](s)‖_L_β^∞≤ C∫_s≤τ≤ t‖ψ(τ)‖_L_β^∞‖η_1(τ)‖_L_β^∞dτ,
which implies
‖ (Ψ_2^-[ψ,η_1,𝒢])^‖_E_β^-σ ≤ Csup_0≤ s≤ te^-σ (t-s)∫_s≤τ≤ te^σ(t-τ)‖ψ‖_E_β^0‖η_1^‖_E_β^-σdτ≤ C‖ψ‖_E_β^0‖η_1^‖_E_β^-σ.
The Ψ_1^- term: Similar to (<ref>), we have
|Ψ_1^-[η,ψ_1,𝒢](s,x,v)|≤ C∫_s^te^-ν(v)(τ-s)e^σ(t- τ)ν(v)dτ (1+|v|)^-β‖ψ‖_E_β^0‖η_1^‖_E_β^-σ.
Consequently
‖ (Ψ_1^-[ψ,η_1,𝒢])^‖_E_β^-σ≤ C‖ψ‖_E_β^0‖η_1^‖_E_β^-σ.
This concludes the proof of the second inequality.
The third inequality in (<ref>): It can be derived by replacing ψ with 𝒢.
§ SOLVING THE FIXED-POINT PROBLEM
In this section, we will prove that the fixed-point map (Γ^+,Γ^-) defined in (<ref>) is a contraction map. Thus it has a unique fixed point in a certain function class, and a fixed point of (Γ^+,Γ^-) is consequently a mild solution of the coupled Boltzmann equations (<ref>). Under the assumptions of Theorem <ref>, it is a contraction w.r.t. ψ_p∈ P_β^σ and η_p^∈ P_β^σ with β>4,σ>1; under the assumptions of Theorem <ref>, it is a contraction w.r.t. ψ_p∈ E_β^0 and η_p^∈ E_β^-σ with β>4,σ>0.
§.§ Fixed Point for Theorem <ref>
For any parameters β>4,σ>1 and assuming (<ref>), we have the following estimates of the fixed point map Γ: for the forward component there is
‖[ψ_p,η_p]‖_P_β^σ ≤ C(‖ψ_p(0)‖_L_β^∞+‖η_p^‖_P_β^σ‖ψ_p‖_P_β^σ^2+‖η_p^‖_P_β^σ‖ψ_p‖_P_β^σ+‖ψ_p‖_P_β^σ^2),
and for the backward component there is
‖([ψ_p,η_p])^‖_P_β^σ ≤ C(‖η_p(t)‖_L_β^∞+‖η_p^‖_P_β^σ^2‖ψ_p ‖_P_β^σ+‖η_p^‖_P_β^σ‖ψ_p‖_P_β^σ+‖η_p^‖_P_β^σ^2).
We will detail the proof for the forward component; the proof for the backward component is almost exactly the same.
According to the decomposition of [ψ_p,η_p] in (<ref>) and the assumption (<ref>) of ϕ≡ 0, we can use the triangle inequality for the P_β^σ norm
‖[ψ_p,η_p]‖_P_β^σ
≤ ‖ e^sψ_p(0)‖_P_β^σ+‖Ψ^+[η_p,ψ_p,ψ_p]‖_P_β^σ+‖Ψ^+[η_p,ψ_p,𝒢]‖_P_β^σ+‖Ψ^+[𝒢,ψ_p,ψ_p]‖_P_β^σ.
For the term associated with the initial perturbation ψ_p(0) in (<ref>), we have
‖ e^sψ_p(0)‖_P_β^σ=sup_0≤ s≤ t(1+s)^σ‖ e^sψ_p(0)‖_L_β^∞≤ Csup_0≤ s≤ t(1+s)^σe^-ν_*s‖ψ_p(0)‖_L_β^∞≤ C‖ψ_p(0)‖_L_β^∞,
where the first inequality is according to the assumption ψ_p(0)∈𝒦^⊥ and Lemma <ref>.
To control the other three terms involving the P_β^σ norm of Ψ^+ in (<ref>), we use Lemmas <ref>, <ref>, and <ref>
‖[ψ_p,η_p]‖_P_β^σ ≤ C‖ψ_p(0)‖_L_β^∞+C‖η_p^‖_P_β^σ‖ψ_p‖_P_β^σ^2+C‖η_p^‖_P_β^σ‖ψ_p‖_P_β^σ+C‖ψ_p‖_P_β^σ^2.
This concludes the estimate of [ψ_p,η_p]. The estimate of [ψ_p,η_p] is the same, which concludes the proof of the lemma.
To use the contraction principle to find the fixed-point, we work in the following function class Ω where a_* is a positive constant
Ω:={(ψ_p,η_p)|‖ψ_p‖_P_β^σ≤ a_*, ‖η_p^‖_P_β^σ≤ a_*, ψ_p(0)=(f^0-M)ℬ^-1, η_p(t)=(e^g(t)-ℰ)ℬ}.
It is equipped with the norm ‖ (ψ_p,η_p)‖_Ω:=‖ψ_p‖_P_β^σ+‖η_p^‖_P_β^σ. The goal is to prove Γ maps the region Ω into itself, and is a contraction map with respect to ‖·‖_Ω.
For the constant c in assumptions (<ref>) and (<ref>) small enough, there exists a constant a_*>0 such that the fixed-point map Γ=(Γ^+,Γ^-) maps the region Ω into itself.
Using Lemma <ref>, we have
‖[ψ_p,η_p]‖_P_β^σ≤ C(c+a_*^3+a_*^2+a_*^2) ‖([ψ_p,η_p])^‖_P_β^σ≤ C(c+a_*^3+a_*^2+a_*^2),
since (ψ_p,η_p)∈Ω. Thus to make sure Γ maps Ω into itself, we only need
C(c+a_*^3+a_*^2+a_*^2)≤ a_*.
It is convenient to assume C>1. If c is small enough, for example c<1/10C^-4, we can choose a_*=1/5C^-3. This concludes the proof.
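For the reader's convenience, the arithmetic behind this choice is as follows: with C>1, a_*=1/5C^-3 and c<1/10C^-4,
Cc<1/2a_*, Ca_*^3=1/125C^-8≤1/25a_*, 2Ca_*^2=2/25C^-5≤2/5a_*,
so that C(c+a_*^3+2a_*^2)<(1/2+1/25+2/5)a_*<a_*, which is the required inequality.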
For arbitrary β>4 and σ>1, suppose the functions f^0 and g satisfy the assumptions (<ref>)-(<ref>). Then for arbitrary terminal time 0<t<+∞, there exists a unique fixed point (ψ_p,η_p) of Γ, thus a mild solution of the coupled Boltzmann equations (<ref>), in the function class below with constant a_*>0 depending on c
‖ψ_p‖_P_β^σ<a_*, ‖η_p^‖_P_β^σ<a_*.
With Lemma <ref>, we only need to verify Γ is a contraction map on Ω.
We need to prove for arbitrary (ψ_p,η_p)∈Ω and (ψ_p^*,η_p^*)∈Ω, there is
‖[ψ_p,η_p]- [ψ_p^*,η_p^*]‖_P_β^σ≤1/4(‖ψ_p-ψ_p^*‖_P_β^σ+‖ (η_p-η_p^*)^‖_P_β^σ),
‖([ψ_p,η_p]- [ψ_p^*,η_p^*])^‖_P_β^σ≤1/4(‖ψ_p-ψ_p^*‖_P_β^σ+‖ (η_p-η_p^*)^‖_P_β^σ).
Since the two pairs of functions share the same initial and terminal data, the difference between [ψ_p,η_p] and [ψ_p^*,η_p^*] is
[ψ_p,η_p]- [ψ_p^*,η_p^*]= ∫_0^se^(s-τ)(𝒬_η_p(ψ_p,ψ_p)-𝒬_η_p^*(ψ_p^*,ψ_p^*))dτ
+ ∫_0^se^(s-τ)(𝒬_η_p(ψ_p,𝒢)-𝒬_η_p^*(ψ_p^*,𝒢))dτ
+ ∫_0^se^(s-τ)(𝒬_𝒢(ψ_p,ψ_p)-𝒬_𝒢(ψ_p^*,ψ_p^*))dτ.
Here we have ignored the variable τ for simplicity in notation. Now we consider the three terms in (<ref>) separately. For the first line in the RHS of (<ref>)
𝒬_η_p(ψ_p,ψ_p)-𝒬_η_p^*(ψ_p^*,ψ_p^*) =𝒬_η_p-η_p^*(ψ_p,ψ_p)+(𝒬_η_p^*(ψ_p,ψ_p)-𝒬_η_p^*(ψ_p^*,ψ_p^*))
=𝒬_η_p-η_p^*(ψ_p,ψ_p)+𝒬_η_p^*(ψ_p+ψ_p^*,ψ_p-ψ_p^*).
For the third line in the RHS of (<ref>)
𝒬_𝒢(ψ_p,ψ_p)-𝒬_𝒢(ψ_p^*,ψ_p^*)=𝒬_𝒢(ψ_p+ψ_p^*,ψ_p-ψ_p^*),
For the second line in the RHS of (<ref>)
𝒬_η_p(ψ_p,𝒢)-𝒬_η_p^*(ψ_p^*,𝒢)=𝒬_η_p-η_p^*(ψ_p,𝒢)+𝒬_η_p^*(ψ_p-ψ_p^*,𝒢).
According to the equations above as well as the definition <ref> of Ψ^+, we have
[ψ_p,η_p]-[ψ_p^*,η_p^*] =Ψ^+[η_p-η_p^*,ψ_p,ψ_p]+Ψ^+[η_p,ψ_p+ψ_p^*,ψ_p-ψ_p^*]
+Ψ^+[𝒢,ψ_p+ψ_p^*,ψ_p-ψ_p^*]
+Ψ^+[η_p-η_p^*,ψ_p,𝒢]+Ψ^+[η_p^*,ψ_p-ψ_p^*,𝒢].
To consider the P_β^σ norm of this difference, we first use the triangle inequality for the P_β^σ norm, and then use Lemmas <ref>, <ref>, and <ref>. These would imply
‖[ψ_p,η_p]-[ψ_p^*,η_p^*]‖_P_β^σ
≤ C(‖ (η_p-η_p^*)^‖_P_β^σ ‖ψ_p‖_P_β^σ^2+‖ (η_p^*)^‖_P_β^σ ‖ψ_p+ψ_p^*‖_P_β^σ ‖ψ_p-ψ_p^*‖_P_β^σ
+ ‖ψ_p+ψ_p^*‖_P_β^σ ‖ψ_p-ψ_p^*‖_P_β^σ
+ ‖ (η_p-η_p^*)^‖_P_β^σ ‖ψ_p‖_P_β^σ
+‖ (η_p^*)^‖_P_β^σ ‖ψ_p-ψ_p^*‖_P_β^σ).
Notice that each term in the RHS of (<ref>) consists of either ‖ψ_p-ψ_p^*‖_P_β^σ or ‖ (η_p-η_p^*)^‖_P_β^σ. Since the constant c in assumptions (<ref>)-(<ref>) is small enough, the constant a_* constructed in Lemma <ref> could also be small enough. This means the various terms ‖ψ_p‖_P_β^σ,‖η_p^‖_P_β^σ,‖ψ_p^*‖_P_β^σ,‖η_p^*,‖_P_β^σ are also small enough. Consequently
‖[ψ_p,η_p]-[ψ_p^*,η_p^*]‖_P_β^σ≤1/4(‖ψ_p-ψ_p^*‖_P_β^σ+‖ (η_p-η_p^*)^‖_P_β^σ).
Through exactly the same argument as above, we can show for c small enough and thus a_* small enough, there is
‖([ψ_p,η_p]-[ψ_p^*,η_p^*])^‖_P_β^σ≤1/4(‖ψ_p-ψ_p^*‖_P_β^σ+‖ (η_p-η_p^*)^‖_P_β^σ).
This shows Γ is a contraction map from Ω to Ω, which concludes the proof.
§.§ Fixed Point for Theorem <ref>
For any parameters β>4,σ>0 and assuming (<ref>), we have the following estimates of the fixed-point map Γ. If ψ_p∈ E_β^0 and η_p^∈ E_β^-σ, then for the forward component there is
‖[ψ_p,η_p]‖_E_β^0
≤ C(‖ψ_p(0)‖_L_β^∞+‖ψ_p‖_E_β^0^2+_ϕ‖ψ_p‖_E_β^0)+Ce^σ t(‖η_p^‖_E_β^-σ‖ψ_p‖_E_β^0^2+‖η_p^‖_E_β^-σ‖ψ_p‖_E_β^0),
and for the backward component there is
‖([ψ_p,η_p])^‖_E_β^-σ
≤ C(‖η_p(t)‖_L_β^∞+_ϕ‖η_p^‖_E_β^-σ+‖η_p^‖_E_β^-σ‖ψ_p‖_E_β^0)+Ce^σ t(‖η_p^‖_E_β^-σ^2‖ψ_p ‖_E_β^0+‖η_p^‖_E_β^-σ^2).
Here _ϕ is the small positive constant introduced in assumption (<ref>).
The proof of this lemma is very similar to the proof of Lemma <ref>.
The Forward Component : First according to the decomposition (<ref>) of [ψ_p,η_p] and the triangle inequality for E_β^0 norm, we get
‖[ψ_p,η_p]‖_E_β^0
≤ ‖ e^sψ_p(0)‖_E_β^0+‖Ψ^+[η_p,ψ_p,ψ_p]‖_E_β^0+‖Ψ^+[η_p,ψ_p,𝒢]‖_E_β^0+‖Ψ^+[𝒢,ψ_p,ψ_p]‖_E_β^0
+ ‖∫_0^se^(s-τ)B^+ψ_p(τ)ϕ(τ)dτ‖_E_β^0.
Here we have the extra term involving ϕ, since the assumption no longer requires the function ϕ to vanish identically.
To control the term involving ϕ, we write
‖∫_0^se^(s-τ)B^+ψ_p(τ)ϕ(τ)dτ‖_E_β^0≤sup_0≤ s≤ t(C∫_0^s‖ψ_p(τ)ϕ(τ)‖_L_β^∞dτ) ≤ C∫_0^t‖ψ_p(τ)ϕ(τ)‖_L_β^∞dτ
≤ C‖ψ_p‖_E_β^0∫_0^t‖ϕ(τ)‖_L^∞dτ
≤ C_ϕ‖ψ_p‖_E_β^0.
Then we use Lemma <ref> to control the E_β^0 norms of various Ψ^+ terms. This yields the desired estimate of .
The backward component: the proof is quite similar. First we control the term involving ϕ
‖(∫_s^te^(τ-s)B^+η_p(τ)ϕ(τ)dτ)^‖_E_β^-σ ≤sup_0≤ s≤ te^-(t-s)σC∫_s^t ‖η_p(τ)‖_L_β^∞‖ϕ‖_L^∞dτ
≤ e^-σ t_ϕsup_0≤ s≤ te^sσC∫_s^t e^(t-τ)σ‖η_p^‖_E_β^-σdτ
≤ Ce^-σ t_ϕsup_0≤ s≤ te^sσe^(t-s)σ‖η_p^‖_E_β^-σ≤ C_ϕ‖η_p^‖_E_β^-σ.
Next by Lemma <ref>, we can control the E_β^-σ norms of various Ψ^- terms in . This concludes the estimate of as well as the proof of the lemma.
To use the contraction principle to find the fixed-point, we will work in the function class Ω where a_* is a positive constant
Ω:={(ψ_p,η_p)|‖ψ_p‖_E_β^0≤ a_*, e^σ t‖η_p^‖_E_β^-σ≤ a_*, ψ_p(0)=(f^0-M)ℬ^-1, η_p(t)=(e^g(t)-ℰ)ℬ}.
It is equipped with the norm
‖ (ψ_p,η_p)‖_Ω:=‖ψ_p‖_E_β^0+e^σ t‖η_p^‖_E_β^-σ
In terms of the norm estimates, the two components ψ_p and η_p are no longer symmetric w.r.t. the time reversal. A factor e^σ t is needed in front of ‖η_p^‖_E_β^-σ in the definition of ‖·‖_Ω.
For the constants c and _ϕ in assumptions (<ref>)-(<ref>) small enough, there exists a constant a_*>0 such that the fixed-point map Γ=(Γ^+,Γ^-) maps the region Ω into itself.
According to Lemma <ref>, there is
‖[ψ_p,η_p]‖_E_β^0≤ C(c+a_*^2+_ϕa_*)+Ce^σ t(e^-σ ta_*^3+e^-σ ta_*^2)=C(c+a_*^2+a_*^3+(_ϕ+1)a_*),
which is the same case as in the proof of Lemma <ref> if _ϕ is small enough. Thus the forward component can be estimated in a similar way.
For the backward component , using Lemma <ref> we have
‖ ([ψ_p,η_p])^‖_E_β^-σ ≤ C(e^-σ tc+_ϕe^-σ ta_*+e^-σ ta_*^2)+Ce^σ t((e^-σ ta_*)^2a_*+(e^-σ ta_*)^2)
≤ Ce^-σ t(c+a_*^2+a_*^3+(_ϕ+1)a_*),
or equivalently
e^σ t‖ ([ψ_p,η_p])^‖_E_β^-σ≤ C(c+a_*^2+a_*^3+(_ϕ+1)a_*).
Consequently the estimate of e^σ t‖ ([ψ_p,η_p])^‖_E_β^-σ is the same as that of ‖[ψ_p,η_p]‖_E_β^0. This concludes the proof.
For any parameters β>4,σ>0, suppose the functions f^0 and g satisfy the assumptions (<ref>)-(<ref>). Then for arbitrary terminal time 0<t<+∞, there exists a unique fixed point (ψ_p,η_p) of Γ, thus a mild solution of the coupled Boltzmann equations (<ref>), in the function class below with constant a_*>0 depending on c
‖ψ_p‖_E_β^0<a_*, e^σ t‖η_p^‖_E_β^-σ<a_*.
With Lemma <ref>, we only need to verify Γ is a contraction map on Ω.
The method is to prove for arbitrary (ψ_p,η_p)∈Ω and (ψ_p^*,η_p^*)∈Ω that the following holds
‖[ψ_p,η_p]- [ψ_p^*,η_p^*]‖_E_β^0≤1/4(‖ψ_p-ψ_p^*‖_E_β^0+e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ),
e^σ t‖([ψ_p,η_p]- [ψ_p^*,η_p^*])^‖_E_β^-σ≤1/4(‖ψ_p-ψ_p^*‖_E_β^0+e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ).
The Forward Component: The difference between [ψ_p,η_p] and [ψ_p^*,η_p^*] is
[ψ_p,η_p]-[ψ_p^*,η_p^*] =Ψ^+[η_p-η_p^*,ψ_p,ψ_p]+Ψ^+[η_p,ψ_p+ψ_p^*,ψ_p-ψ_p^*]
+Ψ^+[𝒢,ψ_p+ψ_p^*,ψ_p-ψ_p^*]
+Ψ^+[η_p-η_p^*,ψ_p,𝒢]+Ψ^+[η_p^*,ψ_p-ψ_p^*,𝒢]
+∫_0^se^(s-τ)B^+(ψ_p-ψ_p^*)ϕ dτ.
Modifying equation (<ref>), we obtain
‖∫_0^s e^(s-τ)B^+(ψ_p-ψ_p^*)ϕ dτ‖_E_β^0≤ C_ϕ‖ψ_p-ψ_p^*‖_E_β^0.
For the first three lines on the RHS of (<ref>), their E_β^0 norms can be controlled using Lemma <ref>. This implies
‖[ψ_p,η_p] -[ψ_p^*,η_p^*]‖_E_β^0
≤ C(e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ ‖ψ_p‖_E_β^0^2+e^σ t‖η_p^*,‖_E_β^-σ ‖ψ_p+ψ_p^*‖_E_β^0 ‖ (ψ_p-ψ_p^*)^‖_E_β^0
+ ‖ψ_p+ψ_p^*‖_E_β^0 ‖ψ_p-ψ_p^*‖_E_β^0
+ e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ ‖ψ_p‖_E_β^0
+e^σ t‖η_p^*,‖_E_β^-σ ‖ψ_p-ψ_p^*‖_E_β^0+_ϕ‖ψ_p-ψ_p^*‖_E_β^0),
which verifies the first equation in (<ref>) if we take a_*>0 small enough, similar to what is done in the proof of Theorem <ref>.
The Backward Component: Similar to the estimate of the forward component, the difference between [ψ_p,η_p] and [ψ_p^*,η_p^*] is
[ψ_p,η_p]-[ψ_p^*,η_p^*] =Ψ^-[ψ_p-ψ_p^*,η_p,η_p]+Ψ^-[ψ_p,η_p+η_p^*,η_p-η_p^*]
+Ψ^-[𝒢,η_p+η_p^*,η_p-η_p^*]
+Ψ^-[ψ_p-ψ_p^*,η_p,𝒢]+Ψ^-[ψ_p^*,η_p-η_p^*,𝒢]
+∫_s^te^(τ-s)B^-(η_p-η_p^*)ϕ dτ.
For the E_β^-σ norm for the fourth line of (<ref>), modifying equation (<ref>) we obtain
‖∫_s^t e^(τ-s)B^-(η_p-η_p^*)ϕ dτ‖_E_β^-σ≤ C_ϕ‖η_p-η_p^*‖_E_β^-σ.
For the first three lines on the RHS of (<ref>), their E_β^-σ norms can be controlled using Lemma <ref>. This implies
‖([ψ_p,η_p]-[ψ_p^*,η_p^*])^‖_E_β^-σ
≤ C(e^σ t‖ψ_p-ψ_p^* ‖_E_β^0 ‖η_p^‖_E_β^-σ^2+e^σ t‖ψ_p^*‖_E_β^0 ‖ (η_p+η_p^*)^‖_E_β^-σ ‖ (η_p-η_p^*)^‖_E_β^-σ
+ e^σ t‖ (η_p+η_p^*)^‖_E_β^-σ ‖ (η_p-η_p^*)^‖_E_β^-σ
+ ‖ψ_p-ψ_p^*‖_E_β^0 ‖η_p^‖_E_β^-σ
+‖ψ_p^*‖_E_β^0 ‖ (η_p-η_p^*)^‖_E_β^-σ+_ϕ‖(η_p-η_p^*)^‖_E_β^-σ).
This inequality can be reorganized by multiplying both sides by e^σ t
e^σ t‖([ψ_p,η_p]-[ψ_p^*,η_p^*])^‖_E_β^-σ
≤ C(‖ψ_p-ψ_p^* ‖_E_β^0 (e^σ t‖η_p^‖_E_β^-σ)^2+‖ψ_p^*‖_E_β^0 e^σ t‖ (η_p+η_p^*)^‖_E_β^-σ e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ
+ e^σ t‖ (η_p+η_p^*)^‖_E_β^-σ e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ
+ ‖ψ_p-ψ_p^*‖_E_β^0 e^σ t‖η_p^‖_E_β^-σ
+‖ψ_p^*‖_E_β^0 e^σ t‖ (η_p-η_p^*)^‖_E_β^-σ+_ϕe^σ t‖(η_p-η_p^*)^‖_E_β^-σ),
which verifies the second inequality in (<ref>) if we take a_* small enough. As a consequence Γ is a contraction from Ω to Ω, having a unique fixed point. This concludes the proof.
§ JUSTIFICATION OF THE MILD FUNCTIONAL SOLUTION
In this section, we show in Theorem <ref> that we can construct mild solutions of the Hamilton-Jacobi equation, using the mild solution of the coupled Boltzmann equations. The notion of a mild solution for the Hamilton-Jacobi equation (<ref>) has also been defined in (<ref>). Theorem <ref> has been proved in <cit.> under some analyticity assumptions, in the framework of the Cauchy-Kovalevskaya Theorem. In this paper we do not have these analyticity conditions, and our proof is a modification of the proof in <cit.>, with some additional analysis.
To construct a mild solution of the functional Hamilton-Jacobi equation (<ref>), we will consider the Hamiltonian system characterized by the associated Euler-Lagrange equation. Specifically given a terminal time t> 0, we consider the following Hamiltonian system defined on [0,t]
D_sφ_t=δℋ/δ p(φ_t,p_t),
D_s(p_t-g)=-δℋ/δφ(φ_t,p_t), φ_t(0)=f^0e^p_t(0), p_t(t)=g(t).
Here the Hamiltonian ℋ has been defined in (<ref>) as
ℋ(φ,p)=1/2∫φ(x,v)φ(x,v_*)(e^Δ p (x,v,v_*)-1)((v_*-v)·ω)_+dω dv_* dvdx.
Given a mild solution of equation (<ref>) on [0,t], we define the functional ℐ(t,g) as
ℐ(t,g):=-1+⟨ f^0,e^p_t(0)⟩+ D_s(p_t(s)-g(s)),φ_t(s)+∫_0^tℋ(φ_t(s),p_t(s))ds.
Here the notation ⟨·,·⟩ refers to the inner product in L^2(T_x^d×R_v^d), while ·,· refers to the inner product in L^2([0,t]×T_x^d×R_v^d), with given terminal time t.
However as the readers have seen in this paper, we do not directly deal with the Euler-Lagrange system (<ref>). Instead, we have performed the change of variables in (<ref>) to make the system more symmetric
(ψ_t,η_t):=(φ_te^-p_t+α'|v|^2,e^p_t-α'|v|^2).
This change of variables provides a new Hamiltonian ℋ' from the original ℋ
ℋ'(ψ,η):=-1/4∫(ψ(v')ψ(v_*')-ψ(v)ψ(v_*))(η(v')η(v_*')-η(v)η(v_*))((v_*-v)·ω)_+dω dv_*dvdx.
The Hamiltonian ℋ'(ψ,η) can also be written equivalently as
ℋ'(ψ,η) =1/2∫ψ(v)ψ(v_*)(η(v')η(v_*')-η(v)η(v_*))((v_*-v)·ω)_+dω dv_*dvdx
=1/2∫(ψ(v')ψ(v_*')-ψ(v)ψ(v_*))η(v)η(v_*)((v_*-v)·ω)_+dω dv_*dvdx.
Replacing (φ_t,p_t) by (ψ_t,η_t) via the change of variables (<ref>), we have the evolution equation for (ψ_t,η_t) during the time interval [0,t]
D_s ψ_t =-ψ_tϕ+𝒬_η_t(ψ_t,ψ_t), ψ_t(0)=f^0ℬ^-1
D_s η_t =η_tϕ-𝒬_ψ_t(η_t,η_t), η_t(t)=e^g(t)ℬ.
This equation for (ψ_t,η_t) can also be written as
D_s ψ_t(s) =-ψ_t(s)ϕ(s)+δℋ'(ψ_t(s),η_t(s))/δη, ψ_t(0)=f^0ℬ^-1,
D_s η_t(s) =η_t(s)ϕ(s)-δℋ'(ψ_t(s),η_t(s))/δψ, η_t(t)=e^g(t)ℬ.
In this case, the functional ℐ(t,g) constructed in equation (<ref>) is equivalent to
ℐ(t,g):=-1+⟨ f^0ℬ^-1,η_t(0) ⟩+ D_sη_t(s),ψ_t(s)-ϕ(s),ψ_t(s)η_t(s)+∫_0^tℋ'(ψ_t(s),η_t(s))ds.
Theorem <ref> shows that if the (ψ_t,η_t) in the definition of ℐ(t,g) is the mild solution given in Theorem <ref> or <ref>, then the functional ℐ(t,g) is a mild solution of the functional Hamilton-Jacobi equation.
Some ingredients are needed to prove Theorem <ref>, for example some continuity estimates. In Lemma <ref>, we will prove that the solution (ψ_t(s),η_t(s)) of the coupled Boltzmann equations, is continuous in 0≤ s≤ t under the L_β^∞ norm. This enables us to give a precise definition of D_sψ_t(s) and D_sη_t(s). Using Lemma <ref>, we are going to define D_sψ_t(s) and D_sη_t(s) as the following limits in L_x,v^2
D_sψ_t(s)=lim_τ→ 0(ψ_t(s+τ)-ψ_t(s)/τ-S_τψ_t(s)-ψ_t(s)/τ),
D_sη_t(s)=lim_τ→ 0(η_t(s+τ)-η_t(s)/τ-S_τη_t(s)-η_t(s)/τ).
Using the mild formulation of (ψ_t(s),η_t(s)), each of these differences is equal to a time integral. For example, according to the mild formulation of the coupled Boltzmann equations,
ψ_t(s+τ)=S_τψ_t(s)-∫_s^s+τS_u-sψ_t(u)ϕ(u)du+∫_s^s+τS_u-sδℋ'(ψ_t(u),η_t(u))/δηdu.
This implies
lim_τ→ 0(ψ_t(s+τ)-ψ_t(s)/τ -S_τψ_t(s)-ψ_t(s)/τ)
= lim_τ→ 0(-τ^-1∫_s^s+τS_u-sψ_t(u)ϕ(u)du+τ^-1∫_s^s+τS_u-sδℋ'(ψ_t(u),η_t(u))/δηdu).
Thus by the continuity of (ψ_t(s),η_t(s)) under L_β^∞ proved in Lemma <ref>, the integral is continuous with respect to τ and the limits in (<ref>) exist
D_s ψ_t(s)=-ψ_t(s)ϕ(s)+δℋ'(ψ_t(s),η_t(s))/δη, D_s η_t(s)=η_t(s)ϕ(s)-δℋ'(ψ_t(s),η_t(s))/δψ.
Now the definition of ℐ(t,g) is rigorous.
If we take (ψ_t(s),η_t(s)) to be the mild solution of the coupled Boltzmann equations given in Theorem <ref> or <ref>, then the functional ℐ(t,g) defined in equation (<ref>) is a mild solution of the Hamilton-Jacobi equation (<ref>).
Take an arbitrary time t. We want to consider the difference ℐ(t+τ,g)-ℐ(t,g) with τ being a small positive number. We use δ_τ to refer to the variation with respect to the terminal time
δ_τψ_t(s)=ψ_t+τ(s)-ψ_t(s), δ_τη_t(s)=η_t+τ(s)-η_t(s).
Time differential of ℐ:
according to the definition (<ref>) of ℐ, there is
ℐ(t+τ,g)-ℐ(t,g)
= ⟨ f^0,δ_τη_t(0)⟩+ D_sη_t,δ_τψ_t+ D_sδ_τη_t,ψ_t+ D_sδ_τη_t,δ_τψ_t+∫_t^t+τ⟨ D_sη_t+τ(s),ψ_t+τ(s)⟩ ds
-ϕ,δ_τψ_tη_t-ϕ,ψ_tδ_τη_t-ϕ,δ_τψ_tδ_τη_t-∫_t^t+τ⟨ϕ(s),ψ_t+τ(s)η_t+τ(s)⟩ ds
+δ_τψ_t,δℋ'(ψ_t,η_t)/δψ+δ_τη_t,δℋ'(ψ_t,η_t)/δη+∫_t^t+τℋ'(ψ_t+τ(s),η_t+τ(s))ds
+[∫_0^tℋ'(ψ_t+τ(s),η_t+τ(s))ds-∫_0^tℋ'(ψ_t(s),η_t(s))ds-δ_τψ_t,δℋ'(ψ_t,η_t)/δψ-δ_τη_t,δℋ'(ψ_t,η_t)/δη]
In the equation above, the first line on the RHS is the variation of ⟨ D_sη_t,ψ_t ⟩ with respect to t, and the second line is the variation of ⟨ϕ,ψ_tη_t⟩ with respect to t. The third and the fourth lines are the variation of ∫_0^t ℋ'(ψ_t(s),η_t(s))ds.
It will be proved in Lemma <ref> that for arbitrary 0≤ s≤ t, the variation is uniformly of order τ
‖δ_τψ_t(s)‖_L_β^∞=O(τ), ‖δ_τη_t(s)‖_L_β^∞=O(τ),
since each component of (ψ_t(s),η_t(s)) is continuous in 0≤ s≤ t under the L_β^∞ norm. Combining (<ref>) with (<ref>), we can show that the higher-order remainders in (<ref>) are of order o(τ).
Performing integration by parts according to Lemma <ref>, we have
D_sδ_τη_t,ψ_t=-δ_τη_t,D_sψ_t+⟨δ_τη_t(t),ψ_t(t)⟩-⟨δ_τη_t(0),ψ_t(0)⟩.
Combining (<ref>) with (<ref>) and Lemma <ref>, the difference ℐ(t+τ,g)-ℐ(t,g) becomes
ℐ(t+τ,g)-ℐ(t,g)
= δ_τη_t,-D_s ψ_t(s)-ψ_t(s)ϕ(s)+δℋ'(ψ_t(s),η_t(s))/δη
+δ_τψ_t,D_s η_t(s)-η_t(s)ϕ(s)+δℋ'(ψ_t(s),η_t(s))/δψ+∫_t^t+τℋ'(ψ_t+τ(s),η_t+τ(s))ds+o(τ).
Using the fact that (ψ_t,η_t) is also the mild solution of (<ref>) and equation (<ref>), there is
ℐ(t+τ,g)-ℐ(t,g)= ∫_t^t+τℋ'(ψ_t+τ(s),η_t+τ(s))ds+o(τ).
Since each component of (ψ_t(s),η_t(s)) is continuous in 0≤ s≤ t under the L_β^∞ norm, the Hamiltonian ℋ'(ψ_t(s),η_t(s)) is also continuous in 0≤ s≤ t. Consequently the functional ℐ(t,g) is differentiable in time
∂_tℐ(t,g)=ℋ'(ψ_t(t),η_t(t))=ℋ(φ_t(t),p_t(t)).
Differential of ℐ with respect to g:
Now we want to fix t and differentiate ℐ(t,g) against g(t). The rigorous proof is essentially the same as the proof of the differentiability in t. It is because if we have a variation of g, the terminal data η_t(t)=e^g(t)ℬ will be changed accordingly. This is the same case as the change of terminal data when we are considering the differentiability in t. Here we give the formal proof for simplicity. Using δ to denote the variation, we differentiate ℐ(t,g) against g(t)
δℐ(t,g)= ⟨ f^0ℬ^-1,δη_t(0)⟩+ D_sδη_t,ψ_t+ D_sη_t,δψ_t-ϕ,δψ_tη_t-ϕ,ψ_tδη_t
+δψ_t,δℋ'(ψ_t,η_t)/δψ+δη_t,δℋ'(ψ_t,η_t)/δη.
Again we perform an integration by parts
D_sδη_t,ψ_t=-δη_t,D_sψ_t+⟨δη_t(t),ψ_t(t)⟩-⟨δη_t(0),ψ_t(0)⟩.
Using the same technique as in the study of ∂_tℐ, with (ψ_t,η_t) being the mild solution of (<ref>), we get
δℐ(t,g)=⟨δ e^g(t)ℬ,ψ_t(t)⟩-δϕ,ψ_tη_t =⟨δ g(t) ,η_t(t)ψ_t(t)⟩-δϕ,ψ_tη_t
=⟨δ g(t) ,φ_t(t)⟩-δϕ,φ_t(t).
This shows that δℐ(t,g)/δ g(t)=φ_t(t). Together with p_t(t)=g(t), (<ref>) is equivalent to
∂_tℐ(t,g)=ℋ(δℐ(t,g)/δ g(t),g(t)).
This concludes the proof.
Consider the mild solution (ψ_t(s),η_t(s)) derived in Theorem <ref> or Theorem <ref> of the coupled Boltzmann equations (<ref>). Each component is continuous w.r.t. s and t in the region 0≤ s≤ t under the L_β^∞ norm.
We only detail the proof for (ψ_t(s),η_t(s)) in Theorem <ref>. The proof for (ψ_t(s),η_t(s)) in Theorem <ref> is essentially the same. In the proof, we will use C(g) as a positive constant depending on g, C(f^0) as a positive constant depending on f^0, and C(f^0,g) as a positive constant dependent on f^0 and g.
Continuity in s: suppose we want to consider the difference between (ψ_t,p(s),η_t,p(s)) and (ψ_t,p(s+τ),η_t,p(s+τ)) for τ>0. We use the notation
(ψ_t,p^τ(s),η_t,p^τ(s)):=(ψ_t,p(s+τ),η_t,p(s+τ)), 0≤ s≤ t-τ.
The fixed-point (ψ_t,p,η_t,p) is derived as the limit of (ψ_t,p^(n),η_t,p^(n))_n≥ 0 under the iteration n→ +∞
ψ_t,p^(0):=e^s(f^0ℬ^-1-𝒢), η_t,p^(0):=e^(t-s)(e^g(t)ℬ-𝒢),
ψ_t,p^(n+1)=_t[ψ_t,p^(n),η_t,p^(n)], η_t,p^(n+1)=_t[ψ_t,p^(n),η_t,p^(n)], n≥ 0.
It also satisfies the following iteration relation
ψ_t,p^(n+1)(s)=e^sB^+(f^0ℬ^-1-𝒢)+∫_0^s e^(s-u)B^+𝒩[ψ_t,p^(n),η_t,p^(n)]du,
η_t,p^(n+1)(s)=e^(t-τ-s)B^-e^τ B^-(e^g(t)ℬ-𝒢)+∫_s^t-τ e^(u-s)B^-𝒩[η_t,p^(n),ψ_t,p^(n)]du+∫_t-τ^te^(u-s)B^-𝒩[η_t,p^(n),ψ_t,p^(n)]du_.
The fixed-point (ψ_t,p^τ,η_t,p^τ) can be derived as the limit of (ψ_t,p^τ,(n),η_t,p^τ,(n))_n≥ 0
(ψ_t,p^τ,(n)(s),η_t,p^τ,(n)(s)):=(ψ_t,p^(n)(s+τ),η_t,p^(n)(s+τ)), 0≤ s≤ t-τ.
It satisfies the iteration relation
ψ_t,p^τ,(n+1)(s)=e^sB^+e^τ B^+(f^0ℬ^-1-𝒢)+∫_0^s e^(s-u)B^+𝒩[ψ_t,p^τ,(n),η_t,p^τ,(n)]du+∫_0^τ e^(s-u)B^+𝒩[ψ_t,p^(n),η_t,p^(n)]du_,
η_t,p^τ,(n+1)(s)=e^(t-τ-s)B^-(e^g(t)ℬ-𝒢)+∫_s^t-τ e^(u-s)B^-𝒩[η_t-τ,p^*,(n),ψ_t-τ,p^*,(n)]du.
For now we use P_β^σ as the norm on the time interval [0,t-τ]. For the forward component, there is
‖ψ_t,p^τ,(n+1)-ψ_t,p^(n+1)‖_P_β^σ
≤ ‖∫_0^se^(s-u)𝒩[ψ_t,p^τ,(n),η_t,p^τ,(n)]du-∫_0^se^(s-u)𝒩[ψ_t,p^(n),η_t,p^(n)]du‖_P_β^σ+C(f^0)(1+t)^στ
≤ 1/4(‖ψ_t,p^(n)-ψ_t,p^τ,(n)‖_P_β^σ+‖(η_t,p^(n)-η_t,p^τ,(n))^‖_P_β^σ)+C(f^0)(1+t)^στ.
Here the term C(f^0)(1+t)^στ is due to assumption (<ref>) and Lemma <ref>
‖ e^sB^+e^τ B^+(f^0ℬ^-1-𝒢)-e^sB^+(f^0ℬ^-1-𝒢)‖_L_β^∞ ≤ C‖ e^τ B^+(f^0ℬ^-1-𝒢)-(f^0ℬ^-1-𝒢)‖_L_β^∞
≤ C(f^0)(1+t)^στ.
Similarly for the backward component, we get
‖(η_t,p^τ,(n+1)-η_t,p^(n+1))^‖_P_β^σ≤1/4(‖ψ_t,p^(n)-ψ_t,p^τ,(n)‖_P_β^σ+‖(η_t,p^(n)-η_t,p^τ,(n))^‖_P_β^σ)+C(g)(1+t)^στ.
Here the term C(g)(1+t)^στ is due to assumption (<ref>) and Lemma <ref>, in a way similar to (<ref>).
Equations (<ref>) and (<ref>) together imply the contraction relation with a C(f^0,g)(1+t)^στ error
‖ψ_t,p^(n+1)-ψ_t,p^τ,(n+1)‖_P_β^σ+‖(η_t,p^(n+1)-η_t,p^τ,(n+1))^‖_P_β^σ
≤ 1/2(‖ψ_t,p^(n)-ψ_t,p^τ,(n)‖_P_β^σ+‖(η_t,p^(n)-η_t,p^τ,(n))^‖_P_β^σ)+C(f^0,g)(1+t)^στ.
Taking n→ +∞, we have shown (ψ_t(s),η_t(s)) is right continuous in s
‖ψ_t,p-ψ_t,p^τ‖_P_β^σ+‖(η_t,p-η_t,p^τ)^‖_P_β^σ≤ C(f^0,g)(1+t)^στ→ 0, τ→ 0.
The same analysis also holds true if we take τ<0, which proves (ψ_t(s),η_t(s)) is left continuous in s. This concludes the proof of the continuity in s.
Continuity in t: to prove the lemma, we first notice
δ_τψ_t(s)=δ_τψ_t,p(s)=ψ_t+τ,p(s)-ψ_t,p(s), δ_τη_t(s)=δ_τη_t,p(s)=η_t+τ,p(s)-η_t,p(s).
Here the (ψ_t,p,η_t,p) is the fixed point of the map (<ref>) with terminal time t. Recalling Theorem <ref>, we assume ϕ≡ 0
{ _t[ψ_p,η_p](s):=e^s(f^0ℬ^-1-𝒢)+∫_0^se^(s-u)𝒩[ψ_p,η_p]du,
_t[ψ_p,η_p](s):=e^(t-s)(e^g(t)ℬ-𝒢)+∫_s^t e^(u-s)𝒩[η_p,ψ_p]du,
.
while (ψ_t+τ,p,η_t+τ,p) is the fixed point of the map (<ref>) with terminal time t+τ
{ _t+τ[ψ_p,η_p](s):=e^s(f^0ℬ^-1-𝒢)+∫_0^se^(s-u)𝒩[ψ_p,η_p]du,
_t+τ[ψ_p,η_p](s):=e^(t+τ-s)(e^g(t+τ)ℬ-𝒢)+∫_s^t+τ e^(u-s)𝒩[η_p,ψ_p]du.
.
The fixed-point (ψ_t,p,η_t,p) is derived as the limit of (ψ_t,p^(n),η_t,p^(n))_n≥ 0 under the iteration n→ +∞
ψ_t,p^(0):=e^s(f^0ℬ^-1-𝒢), η_t,p^(0):=e^(t-s)(e^g(t)ℬ-𝒢),
ψ_t,p^(n+1)=_t[ψ_t,p^(n),η_t,p^(n)], η_t,p^(n+1)=_t[ψ_t,p^(n),η_t,p^(n)], n≥ 0.
The fixed-point (ψ_t+τ,p,η_t+τ,p) is derived as the limit of (ψ_t+τ,p^(n),η_t+τ,p^(n)) under the iteration n→ +∞
ψ_t+τ,p^(0):=e^s(f^0ℬ^-1-𝒢), η_t+τ,p^(0):=e^(t+τ-s)(e^g(t+τ)ℬ-𝒢),
ψ_t+τ,p^(n+1)=_t+τ[ψ_t+τ,p^(n),η_t+τ,p^(n)], η_t+τ,p^(n+1)=_t+τ[ψ_t+τ,p^(n),η_t+τ,p^(n)], n≥ 0.
Regarding the involved functions and the norm P_β^σ as defined on the time interval [0,t], we want to prove
‖ψ_t,p^(n+1)-ψ_t+τ,p^(n+1)‖_P_β^σ+‖(η_t,p^(n+1)-η_t+τ,p^(n+1))^‖_P_β^σ≤ O(τ)+1/2(‖ψ_t,p^(n)-ψ_t+τ,p^(n)‖_P_β^σ+‖(η_t,p^(n)-η_t+τ,p^(n))^‖_P_β^σ).
Once (<ref>) is proved, we will have
‖ψ_t,p-ψ_t+τ,p‖_P_β^σ+‖(η_t,p-η_t+τ,p)^‖_P_β^σ≤ O(τ).
Now we prove (<ref>). For the forward component, according to (<ref>) there is
‖ψ_t+τ,p^(n+1)-ψ_t,p^(n+1)‖_P_β^σ =‖∫_0^se^(s-u)𝒩[ψ_t+τ,p^(n),η_t+τ,p^(n)]du-∫_0^se^(s-u)𝒩[ψ_t,p^(n),η_t,p^(n)]du‖_P_β^σ
≤1/2(‖ψ_t,p^(n)-ψ_t+τ,p^(n)‖_P_β^σ+‖(η_t,p^(n)-η_t+τ,p^(n))^‖_P_β^σ)
For the backward component, we have
η_t+τ,p^(n+1)(s)-η_t,p^(n+1)(s)= e^(t+τ-s)(e^g(t+τ)ℬ-𝒢)-e^(t-s)(e^g(t)ℬ-𝒢)
+∫_s^t+τe^(u-s)𝒩[η_t+τ,p^(n),ψ_t+τ,p^(n)]du-∫_s^te^(u-s)𝒩[η_t,p^(n),ψ_t,p^(n)]du.
To control the difference between these different terminal data, we decompose it as
‖ e^(t+τ-s)(e^g(t+τ)ℬ-𝒢)-e^(t-s)(e^g(t)ℬ-𝒢)‖_L_β^∞
≤ ‖(e^(t+τ-s)(e^g(t+τ)ℬ-𝒢)-e^(t+τ-s)(e^g(t)ℬ-𝒢))‖_L_β^∞
+‖(e^(t+τ-s)(e^g(t)ℬ-𝒢)-e^(t-s)(e^g(t)ℬ-𝒢))‖_L_β^∞
The term (I) is controlled using the boundedness of e^(t+τ-s)B^- in L_β^∞ (see Lemma <ref>) and assumption (<ref>)
≤ C‖(e^g(t+τ)ℬ-𝒢)-(e^g(t)ℬ-𝒢)‖_L_β^∞≤ C(g)τ.
According to Lemma <ref> and the boundedness of e^(t-s)B^- in L_β^∞, the term (II) is controlled as
=‖ e^(t-s)(e^τ B^-(e^g(t)ℬ-𝒢)-(e^g(t)ℬ-𝒢))‖_L_β^∞
≤ C‖ e^τ B^-(e^g(t)ℬ-𝒢)-(e^g(t)ℬ-𝒢)‖_L_β^∞≤ C(g)τ.
These imply
‖(η_t+τ,p^(n+1)-η_t,p^(n+1))^‖_P_β^σ ≤‖(_t[ψ_t+τ,p^(n),η_t+τ,p^(n)]-_t[ψ_t,p^(n),η_t,p^(n)])^‖_P_β^σ+C(g)(1+t)^στ,
≤ C(g)(1+t)^στ+1/2(‖ψ_t,p^(n)-ψ_t+τ,p^(n)‖_P_β^σ+‖(η_t,p^(n)-η_t+τ,p^(n))^‖_P_β^σ).
Now equation (<ref>) is proved, which concludes the proof of the continuity in t.
We have performed two integrations by parts in the formal proof, which are (<ref>) and (<ref>). Here we will justify the first one, while the second one can be justified in the same way.
Consider the solution (ψ_t,η_t) derived in Theorem <ref> or Theorem <ref> of the coupled Boltzmann equations (<ref>). For arbitrary t>0 and τ>0, we have
D_sδ_τη_t,ψ_t=-δ_τη_t,D_sψ_t+⟨δ_τη_t(t),ψ_t(t)⟩-⟨δ_τη_t(0),ψ_t(0)⟩.
Since the operator D_s is rigorously defined as the limit in (<ref>) whose convergence is uniform in s, we have
∫_0^t⟨ D_sδ_τη_t(s),ψ_t(s)⟩ ds
= lim_→ 0(∫_0^t ⟨δ_τη_t(s+)-δ_τη_t(s)/,ψ_t(s)⟩ ds-∫_0^t ⟨S_δ_τη_t(s)-δ_τη_t(s)/,ψ_t(s)⟩ ds).
For each >0, the first term in the RHS of (<ref>) equals to
∫_0^t ⟨δ_τη_t(s), ψ_t(s-)-ψ_t(s)/⟩ ds+∫_t^t+⟨δ_τη_t(s), ψ_t(s-)/⟩ ds-∫_0^⟨δ_τη_t(s), ψ_t(s-)/⟩ ds.
The second term in the RHS of (<ref>) equals to
∫_0^t ⟨S_δ_τη_t(s)-δ_τη_t(s)/,ψ_t(s)⟩ ds=∫_0^t ⟨δ_τη_t(s),S_-ψ_t(s)-ψ_t(s)/⟩ ds.
These equations together imply
∫_0^t⟨ D_sδ_τη_t(s),ψ_t(s)⟩ ds
= lim_→ 0[∫_0^t ⟨δ_τη_t(s), ψ_t(s-)-ψ_t(s)/-S_-ψ_t(s)-ψ_t(s)/⟩ ds
+∫_t^t+⟨δ_τη_t(s), ψ_t(s-)/⟩ ds-∫_0^⟨δ_τη_t(s), ψ_t(s-)/⟩ ds],
Each component of (ψ_t(s),η_t(s)) is continuous in 0≤ s≤ t under the L_β^∞ norm. As a consequence the limit exists and implies
D_sδ_τη_t(s),ψ_t(s) =-δ_τη_t(s),D_sψ_t(s) +⟨δ_τη_t(t),ψ_t(t) ⟩-⟨δ_τη_t(0),ψ_t(0) ⟩.
This concludes the proof of the lemma.
To conclude this section, we give the postponed proof of a technical lemma used in the proof of Theorem <ref>.
Consider the mild solution (ψ_t(s),η_t(s)) derived in Theorem <ref> or Theorem <ref> of the coupled Boltzmann equations (<ref>), the following estimate holds
⟨δ_τη_t(t),ψ_t(t)⟩+∫_t^t+τ⟨ D_sη_t+τ(s),ψ_t+τ(s)⟩ ds-∫_t^t+τ⟨ϕ(s),ψ_t+τ(s)η_t+τ(s)⟩ ds=o(τ)
By the equivalent expression (<ref>) of D_sη, we have
∫_t^t+τ⟨ D_sη_t+τ(s),ψ_t+τ(s)⟩ ds-∫_t^t+τ⟨ϕ(s),ψ_t+τ(s)η_t+τ(s)⟩ ds =-∫_t^t+τ⟨𝒬_ψ_t+τ(η_t+τ,η_t+τ),ψ_t+τ⟩ ds.
Using the continuity (Lemma <ref>) of (ψ_t(s),η_t(s)) under the L_β^∞ norm, we further have
lim_τ→ 0τ^-1(-∫_t^t+τ⟨𝒬_ψ_t+τ(η_t+τ,η_t+τ),ψ_t+τ⟩ ds)=-⟨𝒬_ψ_t(t)(η_t(t),η_t(t)),ψ_t(t) ⟩.
The variation δ_τη_t(t) is decomposed as
δ_τη_t(t) =(η_t+τ(t)-η_t+τ(t+τ))+(η_t+τ(t+τ)-η_t(t))
=(η_t+τ(t)-η_t+τ(t+τ))+(e^g(t+τ)ℬ-e^g(t)ℬ).
Using the mild formulation of the coupled Boltzmann equations, there is
η_t+τ(t)-η_t+τ(t+τ)= S_-τe^g(t+τ)ℬ-e^g(t+τ)ℬ-∫_t^t+τS_-(s-t)η_t+τϕ ds
+∫_t^t+τ S_-(s-t)𝒬_ψ_t+τ(η_t+τ,η_t+τ) ds.
Based on the continuity of (ψ_t(s),η_t(s)) under the L_β^∞ norm (Lemma <ref>), the continuity of ϕ under the L_x,v^∞ norm (assumption (<ref>) or (<ref>)), as well as the continuity of the transport semigroup in L_x,v^2, we obtain the following convergence in L_x,v^2
lim_τ→ 0η_t+τ(t)-η_t+τ(t+τ)/τ
=lim_τ→ 0(v·∇_xe^g(t+τ)ℬ-τ^-1∫_t^t+τη_t+τϕ ds+τ^-1∫_t^t+τ𝒬_ψ_t+τ(η_t+τ,η_t+τ) ds)
=v·∇_xe^g(t)ℬ-η_t(t)ϕ(t)+𝒬_ψ_t(t)(η_t(t),η_t(t)).
Combine the equation above with (<ref>), we have
lim_τ→ 0δ_τη_t(t)/τ=D_se^g(t)ℬ-D_sg(t)e^g(t)ℬ+𝒬_ψ_t(t)(η_t(t),η_t(t))=𝒬_ψ_t(t)(η_t(t),η_t(t)).
Equations (<ref>) and (<ref>) together conclude the proof of the lemma.
§ UNIFORM BOUNDEDNESS, LONG-TIME BEHAVIOUR, AND STATIONARY SOLUTIONS
In Theorem <ref> we have constructed a solution ℐ(t,g) of the Hamilton-Jacobi equation. In this section we will give a description of its various properties. Recall that ⟨·,·⟩ refers to the inner product in L^2(T_x^d×R_v^d), and ·,· refers to the inner product in L^2([0,t]×T_x^d×R_v^d), with given terminal time t.
According to Theorem <ref>, the functional ℐ(t,g) defined as
ℐ(t,g):=⟨ f^0,η_t(0)-1 ⟩+ D_sη_t(s),ψ_t(s)-η_t(s),ψ_t(s)ϕ(s)+∫_0^tℋ'(ψ_t(s),η_t(s))ds.
is a mild solution of the Hamilton-Jacobi equation. The Hamiltonian ℋ' is defined as
ℋ'(ψ,η):=1/2∫η(v)η(v_*)(ψ(v')ψ(v_*')-ψ(v)ψ(v_*))((v_*-v)·ω)_+dω dv_*dvdx,
Here (ψ_t(s),η_t(s)) is the mild solution of the coupled Boltzmann equations (<ref>) during the time interval [0,t].
§.§ Functional Solution for Theorem <ref>
Under the assumptions (<ref>)-(<ref>), the functional solution ℐ(t,g) constructed in Theorem <ref> of the Hamilton-Jacobi equation is uniformly bounded for arbitrary time t≥ 0 and function g satisfying the assumptions (<ref>)-(<ref>).
First we perform an integration by parts
D_sη_t(s),ψ_t(s) =η_t(s),-D_sψ_t(s)+⟨η_t(t),ψ_t(t) ⟩-⟨η_t(0),ψ_t(0)⟩
=η_t(s),-D_sψ_t(s)+⟨η_t(t),ψ_t(t) ⟩-⟨η_t(0),f^0⟩,
where in the second equality we have used the fact that ψ_t(0)=f^0. The integration by parts can be justified in the spirit of Lemma <ref>.
This integration by parts, along with the fact that ∫ f^0dxdv=1, implies
ℐ(t,g)=-1+⟨η_t(t),ψ_t(t) ⟩+η_t(s),-D_sψ_t(s) -η_t(s),ψ_t(s)ϕ_t(s) +∫_0^tℋ'(ψ_t(s),η_t(s))ds.
According to the definition of ℋ', we have
ℋ'(ψ_t(s),η_t(s))=1/2⟨η_t(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s))⟩,
thus
ℐ(t,g)
= -1+⟨η_t(t),ψ_t(t) ⟩+η_t(s),-D_sψ_t(s) -η_t(s),ψ_t(s)ϕ_t(s) +1/2η_t(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s))
= -1+⟨η_t(t),ψ_t(t) ⟩+η_t(s),-D_sψ_t(s)-ψ_t(s)ϕ(s)+𝒬_η_t(s)(ψ_t(s),ψ_t(s))
-1/2η_t(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s)).
Since (ψ_t(s),η_t(s)) is the mild solution of the coupled Boltzmann equations (<ref>), we have
ℐ(t,g)=-1+⟨η_t(t),ψ_t(t) ⟩_(I)-1/2η_t(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s))_(II).
Recall the perturbations ψ_t,p and η_t,p introduced in (<ref>). The estimate of term (I) is relatively immediate with L_β^∞ being a stronger topology than L^2 due to β>4
∫η_t(t)ψ_t(t)dvdx≤ C ‖η_t(t)‖_L_β^∞‖ψ_t(t)‖_L_β^∞ ≤ C (‖𝒢‖_L_β^∞+‖η_p,t(t)‖_L_β^∞)(‖𝒢‖_L_β^∞+‖ψ_p,t(t)‖_L_β^∞)
≤ C (‖𝒢‖_L_β^∞+a_*)(‖𝒢‖_L_β^∞+a_*).
To estimate the term (II), we first perform the perturbation decomposition for η_t
(II)=𝒢,𝒬_η_t(s)(ψ_t(s),ψ_t(s))_(II.1)+η_t,p(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s))_(II.2).
To estimate the (II.1) term, we first notice that Lemma <ref> implies 𝒬_η_t(s)(ψ_t(s),ψ_t(s))∈ L_β-1^∞. Consequently the integrand in (II.1) is absolutely integrable. The fact that (ψ_t,η_t) is the mild solution of the coupled Boltzmann equations (<ref>)
ψ_t(t)=S_tψ_t(0)-∫_0^tS_t-sψ_t(s)ϕ_t(s)ds+∫_0^tS_t-s𝒬_η_t(s)(ψ_t(s),ψ_t(s))ds.
Using the fact that 𝒢 is a function independent of time and space, we have
∫_0^t(∫𝒢𝒬_η_t(s)(ψ_t(s),ψ_t(s))dvdx)ds =∫_0^t(∫𝒢S_t-s𝒬_η_t(s)(ψ_t(s),ψ_t(s))dvdx)ds
=∫𝒢(∫_0^tS_t-s𝒬_η_t(s)(ψ_t(s),ψ_t(s))ds)dvdx.
Since by assumption (<ref>) ϕ≡ 0, the equation above eventually yields the (II.1) term is equal to
=∫𝒢(ψ_t(t)+∫_0^tS_t-sψ_t(s)ϕ_t(s)ds-S_tψ_t(0))dvdx=∫𝒢ψ_t(t)dvdx-∫𝒢ψ_t(0)dvdx.
For the (II.2) term, it has the upper bound as
= ∫_0^t[∫η_t,p(s)𝒬_η_t(s)(ψ_t(s),ψ_t(s))dvdx]ds
≤ C∫_0^t‖(1+|v|)η_p,t(s)‖_L_β-1‖(1+|v|)^-1𝒬_η_t(s)(ψ_t(s),ψ_t(s))‖_L_β^∞ds
≤ C∫_0^t ‖η_p,t(s)‖_L_β^∞‖η_t(s)‖_L_β^∞‖ψ_t(s)‖_L_β^∞‖ψ_t(s)‖_L_β^∞ds,
where in the last inequality we have used Lemma <ref>. This upper bound can be further written as
≤ C‖η_t,p^‖_P_β^σ∫_0^t (1+(t-s))^-σ‖η_t(s)‖_L_β^∞‖ψ_t(s)‖_L_β^∞‖ψ_t(s)‖_L_β^∞ds
≤ Ca_*(‖𝒢‖_L_β^∞+a_*)^3.
Now it has been proved that there is a uniform bound for all terms in the decomposition of ℐ. This yields the uniform boundedness of ℐ(t,g), and concludes the proof of the theorem.
With some additional effort, we can prove the proposition below for the mild solution ℐ(t,g) in Theorem <ref>.
Fix g as a function in C_x,v^0 which is time independent. For arbitrary function g satisfying the assumptions (<ref>)-(<ref>) and such that g(t)=g, there is
lim_t→ +∞ℐ(t,g)=ℐ_∞(g),
with
ℐ_∞(g):=-1+⟨ e^g,M⟩.
In this framework the function g is fixed at terminal time g(t)=g. According to assumption (<ref>), there is D_sg≡ 0. Consequently the function g is determined on the whole time interval [0,t] by g(t)=g.
According to equations (<ref>), (<ref>), and (<ref>) in the proof of Theorem <ref>, the functional ℐ(t,g) can be rewritten as
ℐ(t,g)=-1+⟨η_t(t),ψ_t(t) ⟩+1/2⟨𝒢, ψ_t(0)-ψ_t(t)⟩+1/2η_t,p(s),𝒬_η_t(s)(ψ_t(s),ψ_t(s))_.
We want to prove the (II.2) term converge to 0 as t→ +∞. This can be achieved by the decomposition of the collision operator
= 2η_t,p(s),𝒬_η_t(s)(ψ_t,p(s),𝒢)+η_t,p(s),𝒬_η_t(s)(ψ_t,p(s),ψ_t,p(s))
≤ 2∫_0^t ‖η_p,t(s)‖_L_β^∞‖η_t(s)‖_L_β^∞‖ψ_t,p(s)‖_L_β^∞‖ψ_t(s)‖_L_β^∞ds
+∫_0^t ‖η_p,t(s)‖_L_β^∞‖η_t(s)‖_L_β^∞‖ψ_t,p(s)‖_L_β^∞‖ψ_t,p(s)‖_L_β^∞ds
≤ C∫_0^t (1+s)^-σ(1+(t-s))^-σds≤ C(1+t)^-σ→ 0, t→ +∞,
where we have used Lemma <ref> and the fact according to Theorem <ref>
‖ψ_t,p‖_P_β^σ<a_*, ‖η_t,p^‖_P_β^σ<a_*.
In Theorem <ref>, the bound ‖ψ_t,p‖_P_β^σ<a_* is uniform for all t>0. According to the definition (<ref>) of P_β^∞, there is
(1+t)^σ‖ψ_t,p(t)‖_L_β^∞<a_*.
Consequently as t→ +∞, there is ψ_t(t)→𝒢 in the L_β^∞ norm. This implies
lim_t→ +∞⟨η_t(t),ψ_t(t)⟩=lim_t→ +∞⟨ e^g(t)ℬ,𝒢⟩=lim_t→ +∞⟨ e^g(t),M⟩=⟨ e^g,M⟩,
where we have used the fact that η_t(t)=e^g(t) and g(t)=g.
These imply
lim_t→ +∞ℐ(t,g)=-1+⟨ e^g,M⟩+1/2⟨𝒢,ℬ^-1f^0-𝒢⟩.
The orthogonality assumption (<ref>) means ⟨𝒢,ℬ^-1f^0-𝒢⟩=0, which concludes the proof of the lemma.
The relation between the result above and the Schrödinger problem has been discussed in Subsection <ref>.
In addition, we can further prove the long-time limit of ℐ(t,g) given in Proposition <ref>, is also a non-trivial stationary solution of the Hamilton-Jacobi equation.
The functional ℐ_∞(g) in (<ref>) is a stationary solution of the Hamilton-Jacobi equation.
It is sufficient to verify
ℋ(δℐ_∞(g)/δ g,g)=0.
By the definition (<ref>) of ℐ_∞(g), the functional derivative with respect to g is
δℐ_∞(g)/δ g=δ⟨ e^g,M⟩/δ g=e^gM.
Then according to the definition (<ref>) of ℋ, we have
ℋ(e^gM,g)
= 1/2∫ M(v)M(v_*)e^g(v)+g(v_*)(e^g(v')+g(v_*')-g(v)-g(v_*)-1)((v_*-v)·ω)_+dω dv_* dvdx
= 1/2∫ M(v)M(v_*)(e^g(v')+g(v_*')-e^g(v)+g(v_*))((v_*-v)·ω)_+dω dv_* dvdx=0,
where the last inequality is by the change of variables from (v,v_*,ω) to (v',v_*',-ω). This concludes the proof.
The two propositions above together states that, the solution ℐ(t,g) converges to a stationary solution ℐ_∞(g) as t→ +∞. The stationary solution ℐ_∞(g)=⟨ e^g,M ⟩-1 is the cumulant generating functional of a random gas with Poisson-distributed total number, and i.i.d. distribution of the variables (x,v).
§.§ Functional Solution for Theorem <ref>
Under the assumptions (<ref>)-(<ref>), the functional solution ℐ(t,g) constructed in Theorem <ref> of the Hamilton-Jacobi equation is uniformly bounded for arbitrary time t≥ 0 and function g satisfying Assumptions (<ref>)-(<ref>).
In the proof of Theorem <ref>, we have decomposed the functional ℐ into several terms: first it is decomposed as (<ref>), then the (II) term is decomposed into the summation of (II.1) term and (II.2) term in (<ref>). The same decomposition applies here. Except for the (II.2) term, the estimate of the others is exactly the same as in the proof of theorem <ref>. Thus we only detail the estimate of the (II.2) term here.
In equation (<ref>), it has been proved that
= ∫_0^t[∫η_t,p(τ)𝒬_η_t(τ)(ψ_t(τ),ψ_t(τ))dvdx]dτ
≤ C∫_0^t ‖η_t,p(τ)‖_L_β^∞‖η_t(τ)‖_L_β^∞‖ψ_t(τ)‖_L_β^∞‖ψ_t(τ)‖_L_β^∞dτ
According to the definition (<ref>) of the E_β^0 and the E_β^-σ norms, we get
≤ Ce^σ t‖η_t,p^‖_E_β^-σ‖η_t‖_E_β^0‖ψ_t‖_E_β^0‖ψ_t‖_E_β^0≤ Ce^σ ta_*e^-σ t(‖𝒢‖_L_β^∞+a_*)^3,
where we have used the condition ‖η_t,p^‖_E_β^-σ<a_*e^-σ t.
It shows the (II.2) term is also uniformly bounded. This concludes the proof of the uniform boundedness of ℐ(t,g).
§ DECOMPOSITION OF SEMIGROUP AND RELEVANT ESTIMATES
By Definition <ref>, the operator B^+ consists of a transport operator -v·∇_x, and a linearized collision operator 2𝒬_𝒢(·,𝒢). The operator 2𝒬_𝒢(·,𝒢) can be decomposed as the summation of a frequency multiplier -ν and a convolution operator K. This decomposition is initially due to <cit.>
2𝒬_𝒢(·,𝒢)=-ν+K,
with
ν f(v)=∫_R^d∫_S^d-1((v_*-v)·ω)_+f(v)𝒢^2(v_*)dω dv_*,
Kf(v)=∫_R^d∫_S^d-1((v_*-v)·ω)_+(f(v')𝒢(v_*')+f(v_*')𝒢(v')-f(v_*)𝒢(v))𝒢(v_*)dω dv_*.
The operator K can also be written using the related transition kernel, which is given explicitly on Page 19 of <cit.>. The following lemma about the convolution operator K is classical <cit.>.
The operator K is a self-adjoint compact operator on L^2. For any 2≤ p≤ r≤∞, it is also a bounded operator from L^p to L^r. If β≥ 0, then it is a bounded operator from L_β^∞ to L_β^∞.
For a detailed proof of the following lemma, the reader may see Section 2.2. of <cit.>.
The operator B^+ (resp. B^-) in Definition <ref> generates a strongly continuous semigroup e^sB^+ (resp. e^sB^-) on L^2(T_x^d×R_v^d). Both semigroups decay exponentially in the L_x,v^2-norm if the initial data is orthogonal to the kernel 𝒦: there exist constants ν_*>0 and C>0 such that if f∈𝒦^⊥, then
‖ e^sB^+f‖_L_x,v^2≤ Ce^-ν_* s‖ f‖_L_x,v^2, ‖ e^sB^-f‖_L_x,v^2≤ Ce^-ν_* s‖ f‖_L_x,v^2.
If the function f belongs to the kernel 𝒦, then we have e^sB^+f=e^sB^-f=f.
Since B^+ is a bounded perturbation of A^+:=-v·∇_x-ν, according to Corollary 1.7 in Page 119 of <cit.>, the semigroup e^sB^+ can be written as
e^sB^+=e^sA^++(e^sA^+K)∗ e^sB^+.
Here ∗ refers to the convolution over [0,s],
(e^sA^+K)∗ e^sB^+ :=∫_0^s e^(s-τ)A^+Ke^τ B^+dτ.
Iterate this and we will have
e^sB^+ =e^sA^++(e^sA^+K)∗ e^sA^++(e^sA^+K)∗(e^sA^+K)∗ e^sA^++...+(e^sA^+K)^∗ Ne^sB^+
=e^sA^++∑_j=1^N-1(e^sA^+K)^∗ j∗ e^sA^++(e^sA^+K)^∗ N∗ e^sB^+.
For the decomposition (<ref>) of e^sB^+=𝒟_1^+(s)+𝒟_2^+(s), we define 𝒟_1^+(s) and 𝒟_2^+(s) as
𝒟_1^+(s):=e^sA^+, D_2^+(s):=∑_j=1^N-1(e^sA^+K)^∗ j∗ e^sA^++(e^sA^+K)^∗ N∗ e^sB^+.
The operator 𝒟_1^+(s) has the explicit expression
𝒟_1^+(s)f(x,v)=e^-ν(v)sf(x-sv,v).
The operator 𝒟_2^+ has a decay estimate as a map from L_β-1^∞ to L_β^∞ (see Lemma <ref>) due to the smoothing effect (Lemma <ref>) of K.
The analysis above is also true for the backward component, where for the decomposition (<ref>) of e^sB^-=𝒟_1^-(s)+𝒟_2^-(s), we define 𝒟_1^-(s) and 𝒟_2^-(s) as
𝒟_1^-(s):=e^sA^-, 𝒟_2^-(s):=∑_j=1^N-1(e^sA^-K)^∗ j∗ e^sA^-+(e^sA^-K)^∗ N∗ e^sB^-.
The operator 𝒟_1^-(s) is explicitly written as
𝒟_1^-(s)f(x,v)=e^-ν(v)sf(x+sv,v).
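For the reader's convenience, the action of these first pieces can be made concrete: 𝒟_1^±(s) simply transports a function along the free characteristics and damps it by the factor e^-ν(v)s. The following is a purely illustrative numerical sketch (not part of the argument); it works in one space dimension, ignores the periodicity in x, and replaces the collision frequency ν by an arbitrary stand-in function with the only property that matters here, namely ν ≥ ν_0 > 0.

```python
import numpy as np

# Stand-in for the collision frequency nu(v); only nu(v) >= nu_0 > 0 is used.
def nu(v):
    return 1.0 + np.abs(v)

def D1_plus(s, f):
    """Return the function (x, v) -> e^{-nu(v) s} f(x - s v, v)."""
    return lambda x, v: np.exp(-nu(v) * s) * f(x - s * v, v)

def D1_minus(s, f):
    """Return the function (x, v) -> e^{-nu(v) s} f(x + s v, v)."""
    return lambda x, v: np.exp(-nu(v) * s) * f(x + s * v, v)

# Example: a Gaussian bump is shifted along the characteristic and damped.
f0 = lambda x, v: np.exp(-x**2) * np.exp(-v**2)
g = D1_plus(0.5, f0)
print(g(0.25, 0.5), np.exp(-nu(0.5) * 0.5) * f0(0.0, 0.5))  # identical values
```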
The decomposition has been well established in the literature <cit.>, whose modification gives the proof of Lemma <ref>.
We only give the proof for the forward component 𝒟_2^+(s). The proof for 𝒟_2^-(s) defined in (<ref>) is the same.
The case of f∈𝒦^⊥: Using Lemma <ref> with p=2 and r=∞, we can prove (e^sA^+K)∗ e^sB^+ is a bounded operator from L^2 to L_0^∞=L^∞, with Ce^-ν_*s as the upper bound for the operator norm
‖∫_0^s e^(s-τ)A^+Ke^τ B^+fdτ‖_L_0^∞≤ C∫_0^se^-ν_*(s-τ)e^-ν_*τ‖ f‖_L^2≤ Ce^-ν_*s‖ f‖_L^2.
Having an additional convolution with e^sA^+K, the operator (e^sA^+K)^∗ 2∗ e^sB^+ is a bounded operator from L^2 to L_1^∞. Iterating this bootstrap argument and choosing N=⌈β⌉, we have (e^sA^+K)^∗ N∗ e^sB^+ is a bounded operator from L^2 to L_β^∞ with β>4, also with Ce^-ν_*s as the upper bound for the operator norm. This implies
‖(e^sAK)^∗ N∗ e^sBf‖_L_β^∞≤ Ce^-ν_*s‖ f‖_L^2≤ Ce^-ν_*s‖ f‖_L_β-1^∞=Ce^-ν_*s‖ (1+|v|)^-1f‖_L_β^∞.
Using the explicit expression of e^sA^+ and the smoothing effect of K, we can prove for any j≥ 1 that (e^sA^+K)^∗ j∗ e^sA^+ has the decay estimate
‖(e^sA^+K)^∗ j∗ e^sA^+f‖_L_β^∞≤ Ce^-ν_* s‖ (1+|v|)^-1f‖_L_β^∞.
The case of f∈𝒦: Now the estimate (<ref>) is still true since it only depends on the explicit expression of e^sA^+ and the smoothing effect of K. For the other term in 𝒟_2^+, it becomes
‖(e^sAK)^∗ N∗ e^sBf‖_L_β^∞=‖(e^sAK)^∗ N∗f‖_L_β^∞≤ C‖ (1+|v|)^-1f‖_L_β^∞.
This concludes the proof of the lemma.
Next we prove the continuity of e^τ B^+ (resp. e^τ B^-) with respect to the forward initial perturbation f^0ℬ^-1-𝒢 (resp. the backward terminal perturbation e^g(t)ℬ-𝒢). This lemma is crucial to the proof of the continuity of (ψ_t(s),η_t(s)) (see Lemma <ref>).
Under assumptions (<ref>)-(<ref>) or (<ref>)-(<ref>), for any parameter β>4 we have
‖ e^τ B^+(f^0ℬ^-1-𝒢)-(f^0ℬ^-1-𝒢)‖_L_β^∞≤ C(f_0)τ,
‖ e^τ B^-(e^g(t)ℬ-𝒢)-(e^g(t)ℬ-𝒢)‖_L_β^∞≤ C(g)τ,
with C(f_0) being a constant dependent on f_0, and C(g) being a constant dependent on g.
We only detail the proof of the second equation in (<ref>), since the other one is the same. For simplicity in notation and also in accordance with the choice of terminal data, we write G(x,v):=(e^g(t)ℬ-𝒢)(x,v). Using the bootstrap argument (<ref>), we have
‖ e^τ B^-G-G‖_L_β^∞
≤ ‖ e^τ A^-G-G‖_L_β^∞+∫_0^τ‖ e^u A^-Ke^(τ-u)B^-G‖_L_β^∞du
= ‖ G(x+τ v,v)e^-ν(v)τ-G(x,v)‖_L_β^∞+ ∫_0^τ‖ e^u A^-Ke^(τ-u)B^-G‖_L_β^∞du
The first term in the third line of (<ref>) is controlled as
‖ G(x+τ v,v)e^-ν(v)τ-G(x,v)‖_L_β^∞
≤ ‖ G(x+τ v,v)(1-e^-ν(v)τ)‖_L_β^∞+‖ G(x+τ v,v)-G(x,v)‖_L_β^∞
≤ ‖ (1+|v|)G(x+τ v,v)1-e^-ν(v)τ/(1+|v|)‖_L_β^∞+‖ e^g(t,x+τ v,v)ℬ(v)-e^g(t,x,v)ℬ(v)‖_L_β^∞
≤ C‖ G‖_L_β+1^∞τ+C‖ G‖_L_β+1^∞‖ g‖_C_t,x^1τ,
where the term C‖ G‖_L_β+1^∞τ is due to
1-e^-ν(v)τ/(1+|v|)≤ Cτ ,
and the term C‖ G‖_L_β+1^∞‖ g‖_C_t,x^1τ is because
e^g(t,x+τ v,v)ℬ(v)-e^g(t,x,v)ℬ(v)
=∫_0^τ[(v·∇_xg)e^gℬ](t,x+uv,v)du.
The norm ‖ g‖_C_t,x^1 is finite due to assumption (<ref>) or (<ref>), where we have assumed g has uniformly bounded derivatives in t and x. The control of the second term in the third line of (<ref>) is straightforward, since all the involved operators are bounded operators from L_β^∞ to L_β^∞
∫_0^τ‖ e^u A^-Ke^(τ-u)B^-G(x,v)‖_L_β^∞du≤ C‖ G‖_L_β^∞τ
This concludes the proof of the lemma.
In the end of this appendix, we give the proof of Lemma <ref> for completeness. The proof is elementary.
Given σ_1>1 and σ_2>1, we have the following inequality for a convolution,
∫_0≤ s≤ t(1+(t-s))^-σ_2(1+s)^-σ_1ds≤ C (1+t)^-min{σ_1,σ_2}
We split the integral
∫_0≤ s≤ t(1+(t-s))^-σ_2(1+s)^-σ_1ds
= ∫_0≤ s≤t/2(1+(t-s))^-σ_2(1+s)^-σ_1ds+∫_t/2≤ s≤ t(1+(t-s))^-σ_2(1+s)^-σ_1ds
≤ (1+t/2)^-σ_2∫_0≤ s≤t/2(1+s)^-σ_1ds+(1+t/2)^-σ_1∫_t/2≤ s≤ t(1+(t-s))^-σ_2ds
This is further less than
... ≤ C(1+t/2)^-σ_2+C(1+t/2)^-σ_1
≤ C(1+t)^-min{σ_1,σ_2}
This concludes the proof.
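As a quick numerical sanity check of the bound (illustrative only, not a substitute for the proof above), one can approximate the convolution integral on a grid and compare it with (1+t)^-min{σ_1,σ_2}; the ratio remains bounded as t grows. The exponents 1.5 and 2.5 below are arbitrary sample values.

```python
import numpy as np

def conv(t, sigma1, sigma2, n=200_000):
    # midpoint-rule approximation of \int_0^t (1+(t-s))^{-sigma2} (1+s)^{-sigma1} ds
    s = (np.arange(n) + 0.5) * (t / n)
    return np.sum((1.0 + (t - s)) ** (-sigma2) * (1.0 + s) ** (-sigma1)) * (t / n)

sigma1, sigma2 = 1.5, 2.5
for t in [1.0, 10.0, 100.0, 1000.0]:
    ratio = conv(t, sigma1, sigma2) / (1.0 + t) ** (-min(sigma1, sigma2))
    print(t, ratio)  # the ratio stays bounded in t, consistent with the lemma
```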
|
http://arxiv.org/abs/2409.02277v1 | 20240903202227 | Attention-Based Reading, Highlighting, and Forecasting of the Limit Order Book | [
"Jiwon Jung",
"Kiseop Lee"
] | q-fin.CP | [
"q-fin.CP"
] |
|
http://arxiv.org/abs/2409.02102v2 | 20240903175709 | Dimensionality Reduction Techniques for Statistical Inference in Cosmology | [
"Minsu Park",
"Marco Gatti",
"Bhuvnesh Jain"
] | astro-ph.CO | [
"astro-ph.CO"
] | |
http://arxiv.org/abs/2409.02479v1 | 20240904070723 | An ergodic theorem for the maximum of branching Brownian motion with absorption | [
"Fan Yang"
] | math.PR | [
"math.PR",
"Primary: 60J80, Secondary: 60G70"
] |
An ergodic theorem for the maximum of branching Brownian motion with absorption
Fan Yang
School of Mathematical Sciences
Beijing Normal University
Beijing 100875
P. R. China
[email protected]
The research of this project is supported by the National Key R&D Program of China (No. 2020YFA0712900). The research of F. Yang is supported by China Postdoctoral Science Foundation (No. 2023TQ0033) and Postdoctoral Fellowship Program of CPSF (No. GZB20230068).
§ ABSTRACT
In this paper, we study branching Brownian motion with absorption, in which particles undergo Brownian motions and are killed upon hitting the absorption barrier. We prove that the empirical distribution function of the maximum of this process converges almost surely to a randomly shifted Gumbel distribution.
2020 Mathematics Subject Classification: Primary 60J80; Secondary 60G70.
§ INTRODUCTION
A classical branching Brownian motion (BBM) in can be constructed as follows. Initially there is a single particle at the origin of the real line and this particle moves as a 1-dimensional standard Brownian motion denoted by B = {B(t), t≥ 0 }.
After an independent exponential time with parameter 1, the initial particle dies and gives birth to L offspring, where L is a positive integer-valued random variable with distribution {p_k: k≥ 1 }. Here we assume that the expected number of offspring is 2 (i.e., ∑_k=1^∞ kp_k = 2) and the variance of the offspring distribution is finite (i.e., ∑_k=1^∞ k(k-1)p_k < ∞). Each offspring starts from its creation position and evolves independently, according to the same law as its parent. We denote the collection of particles alive at time t as N_t. For any u∈ N_t and s≤ t, let X_u(s) be the position at time s of particle u or its ancestor alive at that time.
McKean <cit.> established a connection between BBM and the Fisher-Kolmogorov-Petrovskii-Piskounov (F-KPP) equation
∂ u/∂ t = 1/2∂^2 u/∂ x^2 + ∑_k=1^∞ p_k u^k - u.
The F-KPP equation has received entensive attention from both analytic techniques (see, for instance, Kolmogorov et al. <cit.> and Fisher <cit.>) and probabilistic methods
(see, for example, McKean <cit.>, Bramson <cit.>, Harris <cit.> and Kyprianou <cit.>).
Let's recall some classical results on BBM and F-KPP equation. Define
𝐌_t := max{X_u(t): u∈ N_t }.
Bramson <cit.> established that
lim_t→∞ℙ(𝐌_t - m_t ≤ z)=lim_t→∞ u(t,m_t+z)=w(z), z∈,
where m_t:= √(2) t-3/(2√(2))log t and w solves the ordinary differential equation 1/2w”+√(2)w'+∑_k=1^∞ p_k w^k - w=0.
Such a solution w is known as the traveling wave solution.
Lalley and Sellke <cit.> provided the following representation of w for dyadic BBM
w(z):=𝔼[e^-C_* e^-√(2) zZ_∞],
where C_* is a positive constant and Z_∞ is the limit of the derivative martingale of BBM.
Specifically, define
Z_t = ∑_u∈ N_t (√(2)t - X_u(t)) e^√(2)(X_u(t)-√(2)t) ,
then Z_t serves as the derivative martingale of the BBM. We denote Z_∞ as the limit of Z_t ℙ_x-almost surely, as established by Lalley and Sellke <cit.> or Kyprianou <cit.>. In the same paper <cit.>, they conjectured that the empirical (time-averaged) distribution of maximal displacement converges almost surely, that is,
lim_T→∞1/T∫_0^T 1_{𝐌_s - m_s ≤ x}d s = exp{-C_* Z_∞ e^-√(2)x},
This conjecture was later confirmed by Arguin, Boiver and Kistler <cit.>.
In this paper, we consider similar problems for BBM with absorption, where a particle is killed when it hits the absorbing barrier. The process can be defined as follows. Initially there is a single particle at x>0 and this particle evolves as the classical BBM with branching rate 1. We also assume that the number of offspring L has distribution {p_k,k≥ 1} with 𝔼L = 2 and 𝔼L^2 < ∞. In addition, we add an absorbing barrier at the line {(y,t):y= ρ t } for some ρ∈ℝ, i.e. particles hitting the barrier are instantly killed without producing offspring (see Figure <ref>).
We use N_t to denote the set of the particles of the BBM with absorption that are still alive at time t. For any particle u∈N_t and any time s≤ t, we continue to use X_u(s) to represent the position of either particle u itself or its ancestor at time s.
The extinction time of the BBM with absorption is defined as
ζ:=inf{t>0: N_t = ∅}.
Additionally, we define 𝐌_t as the maximum position among all particles u∈N_t. The law of the BBM with absorption starting from a single particle at position x is denoted by ℙ_x, and the corresponding expectation is denoted by 𝔼_x.
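To make the model concrete, here is a minimal discrete-time Monte Carlo sketch of the process (purely illustrative and not used anywhere in the proofs). It assumes binary branching, a small time step, and approximates the rate-1 exponential branching clocks by Bernoulli events; the chosen values of x, ρ, T and the particle cap are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def bbm_absorbed_max(x=1.0, rho=0.5, T=5.0, dt=1e-3, max_particles=50_000):
    """Simulate BBM with absorption at the line y = rho*t, started from x > 0.

    Returns the maximum particle position at time T, or None if the process
    has already died out (extinction time zeta <= T). Binary branching at rate 1.
    """
    positions = np.array([x])
    t = 0.0
    while t < T:
        t += dt
        # Brownian increments for every particle currently alive
        positions = positions + np.sqrt(dt) * rng.standard_normal(positions.size)
        # kill every particle at or below the absorbing line y = rho * t
        positions = positions[positions > rho * t]
        if positions.size == 0:
            return None                      # extinction
        # each particle branches with probability ~ dt (rate-1 exponential clock)
        births = rng.random(positions.size) < dt
        positions = np.concatenate([positions, positions[births]])
        if positions.size > max_particles:   # crude cap to keep the sketch cheap
            positions = positions[:max_particles]
    return positions.max()

print(bbm_absorbed_max())
```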
The asymptotic behavior of BBM with absorption has been extensively studied in the literature.
Kesten <cit.> demonstrated that the process dies out almost surely when ρ≥√(2) while there is a positive probability of survival, that is, ℙ_x(ζ=∞)>0, when ρ < √(2). Therefore, ρ = √(2) is the critical drift separating the supercritical case ρ < √(2) and the subcritical ρ > √(2).
In the subcritical case, Harris and Harris <cit.> provided the large time asymptotic behavior for the survival probability.
In the critical case, Kesten <cit.> obtained upper and lower bounds on the survival probability, which were subsequently improved by Berestycki et al. <cit.>. Maillard and Schweinsberg <cit.> have further enhanced these results and investigated the behavior conditioned to survive. For the BBM with absorption in the near-critical case, Berestycki et al. <cit.> and Liu <cit.> are good references.
In the supercritical case, Harris et al. <cit.> studied properties of the right-most particle and the one-sided F-KPP traveling wave solution using probabilistic methods in the case of binary branching. Specifically, they proved that
lim_t→∞𝐌_t/t=√(2) on {ζ=∞}, ℙ_x-a.s.,
and g(x) := ℙ_x(ζ<∞) is the unique solution to the one-sided F-KPP traveling wave equation
{ 1/2g”-ρ g'+ g^2 -g = 0, x>0,
g(0+)=1, g(∞)=0.
.
Louidor and Saglietti <cit.> showed that the number of particles inside any fixed set, normalized by the mean population size, converges to an explicit limit almost surely.
In this paper, we focus on the supercritical case. In the remainder of this paper, we always assume ρ<√(2).
In <cit.>, we studied the maximal displacement and the extremal proccess of BBM with absorption. More precisely, we established the following result:
lim_t→∞ℙ_x(𝐌_t - m_t≤ z) = _x(e^-C_*Z_∞ e^-√(2)z),
where _∞ is defined as the limit of _t and
_t := ∑_u∈N_t (√(2)t-X_u(t)) e^√(2)(X_u(t)-√(2)t) .
It's important to note that {_t, t≥ 0, ℙ_x } is not a martingale. However, according to <cit.>, the limit _∞ := lim_t→∞_t exists ℙ_x-almost surely for any x>0 and ρ < √(2).
Similar to (<ref>), our paper focuses on the empirical distribution function of the maximum of branching Brownian motion with absorption. We prove that the limit of this empirical distribution converges almost surely to a Gumbel distribution with a random shift. Here's the statement of the main result.
For any x>0, ρ<√(2) and z∈ℝ, we have
lim_T↑∞1/T∫_0^T 1_{𝐌_t -m_t ≤ z}dt = exp{-C_* Z̃_∞ e^-√(2) z}, ℙ_x-a.s.,
where the positive constant C_* is given by (<ref>).
§ PROOF OF THEOREM <REF>
We can put a BBM and a BBM with absorption in the same probability space. More precisely, we can construct a BBM as described in Section <ref>. By considering only the particles that are never killed by the line {(y,t): y=ρ t }, we obtain a BBM with absorption. Therefore,
N_t = {u∈ N_t: ∀ s≤ t, X_u(s) > ρ s }.
Furthermore, we define
N_t^s := {u∈ N_t: ∃ v∈N_s, u>v },
where u>v indicates that u is a descendant of v (see Figure <ref>). Notice that the set N^s_t contains all the particles alive at time t that do not hit the line segment {(y,r):y =ρ r, 0≤ r≤ s }.
For convenience, define
M_t := max{X_u(t): u∈ N_t } - m_t M_t := max{X_u(t): u∈N_t } - m_t.
Then M_t = 𝐌_t - m_t and M_t = 𝐌_t - m_t. Similarly, define
M_t^s := max{X_u(t): u∈N^s_t} - m_t.
In the proof of Theorem <ref>, we need the following two lemmas, whose proofs are postponed to Sections <ref> and <ref>. First, as in <cit.>, consider a compact interval 𝒟 = [d,D] with -∞<d<D<∞ and a time R_T>0. However, in this paper, we require that there exists some l>0 such that R_T/T^l↑∞ and R_T/√(T)↓ 0 as T↑∞. We truncate the absorption barrier at time R_T and will show that the empirical distribution of M_t^R_T converges almost surely.
Let R_T/T^l ↑∞ as T↑∞ for some l>0 but with R_T = o(√(T)). Then for any x>0 and z∈ℝ,
lim_T↑∞1/T∫_0^T 1_{M_t^R_T≤ z}dt = exp{ -C_* Z̃_∞ e^-√(2)z}, ℙ_x-a.s.
Next, we consider the difference between M_t and M_t^R_T. For s<t, define N_t^[s,t] = N_t^s - N_t, which represents the set of particles at time t whose ancestors are not absorbed before time s but hit the absorption barrier between time s and t. Define
M_t^[s,t] := max{ X_u(t): u∈N_t^[s,t]} - m_t.
The following lemma provides the convergence of the empirical distribution of M_t^[s,t].
For ϵ>0 and R_T as in Lemma <ref>,
lim sup_T↑∞1/T∫_ϵ T^T 1_{M_t^[R_T,t] > z}dt = 0, ℙ_x-a.s.
Note that N_t ⊂N_t^R_T. Hence we have M_t ≤M^R_T_t and then 1_{M_t ≤ z }≥1_{M^R_T_t ≤ z }. By Lemma <ref>, it holds that
lim inf_T↑∞1/T∫_0^T 1_{M_t≤ z}dt ≥exp{ -C_* Z̃_∞ e^-√(2)z}, ℙ_x-a.s.
Since 1_{M_t≤ z} + 1_{M_t > z} = 1, we have
lim sup_T↑∞1/T∫_0^T 1_{M_t > z}dt ≤ 1-exp{ -C_* Z̃_∞ e^-√(2)z}, ℙ_x-a.s.
To prove Theorem <ref>, it suffices to show that
lim inf_T↑∞1/T∫_0^T 1_{M_t > z}dt ≥ 1-exp{ -C_* Z̃_∞ e^-√(2)z}, ℙ_x-a.s.
Since M_t^s = max{M_t, M_t^[s,t]}, we have
1_{M_t^s > z}≤1_{M_t > z} + 1_{M_t^[s,t] > z},
that is,
1_{M_t > z}≥1_{M_t^s > z} - 1_{M_t^[s,t] > z}.
Set s=R_T, we have
lim inf_T↑∞1/T∫_ϵ T^T 1_{M_t > z}dt ≥lim inf_T↑∞1/T∫_ϵ T^T 1_{M_t^R_T > z}dt - lim sup_T↑∞1/T∫_ϵ T^T 1_{M_t^[R_T,t] > z}dt.
For any ϵ>0, by (<ref>) and Lemma <ref>, we get that
lim inf_T↑∞1/T∫_0^T 1_{M_t > z}dt ≥lim inf_T↑∞1/T∫_ϵ T^T 1_{M_t > z}dt
≥lim inf_T↑∞1/T∫_ϵ T^T 1_{M_t^R_T > z}dt.
Therefore,
lim inf_T↑∞ 1/T∫_0^T 1_{M_t > z}dt ≥lim inf_T↑∞(1/T∫_0^T 1_{M_t^R_T > z}dt - 1/T∫_0^ϵ T1_{M_t^R_T > z}dt )
≥lim inf_T↑∞1/T∫_0^T 1_{M_t^R_T > z}dt - ϵ≥ 1 - exp{ -C_* Z̃_∞ e^-√(2)z} - ϵ,
where the last inequality follows from Lemma <ref>.
Let ϵ↓ 0, then the inequality (<ref>) holds. This completes the proof.
§ PROOF OF LEMMA <REF>
In this section, we prove Lemma <ref>, which states that for any x>0 and z∈ℝ,
lim_T↑∞1/T∫_0^T 1_{M_t^R_T≤ z}dt = exp{ -C_* Z̃_∞ e^-√(2)z}, ℙ_x-a.s.
Recall that 𝒟 = [d,D] is a compact interval. Similar to <cit.>, this lemma follows from the following two lemmas.
Let ϵ>0 and let R_T be as in Lemma <ref>. Then for any s∈ [ϵ,1],
lim_T↑∞ℙ_x[ M^R_T_T· s∈𝒟 | ℱ_R_T] = ∫_𝒟 d( exp{-C_*Z̃_∞ e^-√(2)z}), ℙ_x-a.s.
For ϵ>0 and R_T as in Lemma <ref>,
lim_T↑∞1/T∫_ϵ T^T ( 1_{M^R_T_t ∈𝒟} - ℙ_x [ M^R_T_t∈𝒟 | ℱ_R_T] ) dt = 0, ℙ_x-a.s.
First, we write
ℙ_x[ M^R_T_T· s∈ | _R_T] = ℙ_x[ M^R_T_T· s≤ D | _R_T] - ℙ_x[ M^R_T_T· s≤ d | _R_T].
We only need to show the almost surely convergence of the first term. Recall that the definitions (<ref>), (<ref>) and (<ref>), then
ℙ_x [ M^R_T_T· s≤ D | _R_T] = ∏_u∈N_R_Tℙ_x [ X_u(R_T) + M_T· s - R_T(u) + m_T· s - R_T≤ D + m_T· s | _R_T]
= ∏_u∈N_R_T (1 - ℙ_x [ M_T· s - R_T(u) > D - X_u(R_T) + √(2)R_T + o_T(1) | _R_T])
where given _R_T, {M_T· s - R_T(u), u∈N_R_T} are independent and have the same distribution as {M_t, ℙ_0}. Note that o_T(1)→ 0 as T→∞. After time R_T, there is no absorption barrier when M^R_T_T· s is considered. Therefore, we can use a similar argument as in <cit.>. By <cit.>,
lim_R_T→∞min_N_R_T (√(2)R_T - X_u(R_T)) = + ∞
Let f(D, R_T) := D - X_u(R_T) + √(2)R_T + o_T(1). By <cit.>, we have
ℙ_x [ M_T· s - R_T(u) > f(D, R_T) | _R_T]
= C_* (1+o_r(1)) (1+o_T(1))f(D, R_T) e^-√(2) f(D, R_T) ,
and by (<ref>), this probability tends to zero uniformly for u∈N_R_T as T→∞.
Hence,
ℙ_x [ M^R_T_T· s≤ D | _R_T]
= exp(∑_u∈N_R_Tlog(1 - ℙ_x [ M_T· s - R_T(u) > f(D, R_T) | _R_T]) )
= exp( - ∑_u∈N_R_T C_* (1+o_r(1)) (1+o_T(1))^2f(D, R_T) e^-√(2) f(D, R_T) )
By <cit.> and <cit.>, we know that
lim_T↑∞∑_u∈N_R_T D e^-√(2)(√(2)R_T-X_u(R_T))≤lim_T↑∞∑_u∈ N_R_T D e^-√(2)(√(2)R_T-X_u(R_T)) = 0
and
lim_T↑∞∑_u∈N_R_T (√(2)R_T-X_u(R_T)) e^-√(2)(√(2)R_T-X_u(R_T)) = Z̃_∞.
Therefore, we get
lim_T↑∞ℙ_x [ M^R_T_T· s≤ D | _R_T] = exp( - C_* (1+o_r(1)) lim_T↑∞∑_u∈N_R_T f(D, R_T) e^-√(2) f(D, R_T) )
= exp( - C_* (1+o_r(1)) e^-√(2)DZ̃_∞).
Letting r↑∞ yields that
lim_T↑∞ℙ_x [ M^R_T_T· s≤ D | ℱ_R_T] = exp( - C_* e^-√(2)DZ̃_∞).
This completes the proof of Lemma <ref>.
Since after time R_T the particles behave as a branching Brownian motion without absorption, most of the proof of <cit.> remains valid for Lemma <ref>. We now provide the details.
Fix x>0. For γ>0, 0≤ s ≤ t, define
F_γ,t(s) := x + s/tm_t - min{s^γ, (t-s)^γ}.
Choose 0<α<1/2<β<1. We say that a particle u∈ N_t is localized in the time t-tube during the interval (r,t-r) if and only if
F_β,t(s) ≤ X_u(s) ≤ F_α,t(s), ∀ s∈ (r,t-r).
Otherwise, we say that it is not localized. By <cit.>, for a given 𝒟=[d,D], there exist r_0, δ>0 depending on α, β and 𝒟 such that for r≥ r_0
sup_t≥ 3rℙ_x [ ∃ u∈ N_t: X_u(t) - m_t ∈𝒟, u is not localized during (r,t-r) ] ≤exp{-r^δ}.
Choose r_T = (20ln T)^1/δ. For any t∈ (R_T,T), define
M_t,loc^R_T := max{X_u(t): u∈N_t^R_T, u localized (r_T,t-r_T) } - m_t.
Then, we have
ℙ_x ( {M_t^R_T∈𝒟}∖{M_t,loc^R_T∈𝒟})
≤ℙ_x [ ∃ u∈N_t^R_T: X_u(t) - m_t ∈𝒟, u is not localized during (r_T,t-r_T) ]
≤ℙ_x [ ∃ u∈ N_t: X_u(t) - m_t ∈𝒟, u is not localized during (r_T,t-r_T) ].
Hence, (<ref>) implies that
ℙ_x ( {M_t^R_T∈𝒟}∖{M_t,loc^R_T∈𝒟}) ≤1/T^20.
Since the event {M_t^R_T≥ D }∩{M_t,loc^R_T∈𝒟} may be non-empty, the inequality ℙ_x (M_t^R_T∈𝒟) - ℙ_x (M_t,loc^R_T∈𝒟) ≥ 0
need not hold. The same issue exists in <cit.>. However, we have the following claim, obtained by a slight modification of their argument <cit.>.
Claim A: There exist r_0, δ>0 depending on α,β and D such that for r≥ r_0
sup_t≥ 3rℙ_x [ ∃ u∈ N_t: X_u(t) - m_t ≥ D, u is not localized during (r,t-r) ] ≤exp{-r^δ}.
We prove this claim in Appendix <ref>.
Therefore,
ℙ_x ( {M_t,loc^R_T∈𝒟}∖{M_t^R_T∈𝒟})
≤ℙ_x [ ∃ u∈ N_t: X_u(t) - m_t ≥ D, u is not localized during (r_T,t-r_T) ] ≤1/T^20.
Let
Rest_ϵ,(T) := 1/T∫_ϵ T^T ( 1_{M^R_T_t ∈} - ℙ_x [ M^R_T_t∈ | _R_T] ) dt,
and
Rest_ϵ,^loc(T) := 1/T∫_ϵ T^T ( 1_{M^R_T_t,loc∈} - ℙ_x [ M^R_T_t,loc∈ | _R_T] ) dt.
Notice that
_x | 1/T∫_ϵ T^T ( 1_{M^R_T_t ∈} - 1_{M^R_T_t,loc∈}) dt | ≤1/T∫_ϵ T^T _x | 1_{M^R_T_t ∈} - 1_{M^R_T_t,loc∈}| dt
≤1/T∫_ϵ T^T max{ℙ_x ( {M_t^R_T∈}∖{M_t,loc^R_T∈}),ℙ_x ( {M_t,loc^R_T∈}∖{M_t^R_T∈}) }dt
≤1/T^20.
Using a similar argument to the proof of <cit.>, it holds that
lim_T↑∞( Rest_ϵ,(T) - Rest_ϵ,^loc(T) ) = 0, ℙ_x
It is sufficient to prove that lim_T↑∞Rest_ϵ,^loc(T) = 0, ℙ_x
Define
X_t^(D) := 1_{M^R_T_t,loc≤ D } - ℙ_x [ M^R_T_t,loc≤ D | ℱ_R_T] and C_T(t,t') := 𝔼_x[X_t^(D)· X_t'^(D)].
By <cit.>, it suffices to verify the validity of <cit.> for this C_T(t,t'). Define
ĉ_T(I,J) := ℙ_x [ M^R_T_I,loc≤ D, M^R_T_J,loc≤ D | _R_T] - ℙ_x [ M^R_T_I,loc≤ D | _R_T] ℙ_x [ M^R_T_J,loc≤ D | _R_T].
Here we use I and J to denote the two times t,t'. Then C_T(I,J) := 𝔼_x ĉ_T(I,J).
Notice that R_T>r_T and ρ<√(2). A key observation is that for s>R_T and sufficiently large T
F_β,t(s) - ρ s = x + (√(2)-ρ)s - 3/2√(2)s/tlog t - min{s^β, (t-s)^β} > 0.
Hence, the absorption barrier does not affect the particle localized during (r_T,t-r_T) after time R_T. Therefore, conditioned on _R_T, the estimation of the joint distribution of M^R_T_I,loc and M^R_T_J,loc is the same as that in <cit.>.
Compared with M_loc(I) in <cit.>, our M^R_T_I,loc has an additional restriction before time R_T, namely, that the particles have not been absorbed. Therefore, conditioned on R_T, the sum of the terms is less than that in <cit.>. Furthermore, <cit.> is also valid for our ĉ_T(I,J) and <cit.> is also an upper bound for the ĉ_T(I,J). (Although <cit.> should be -a-a^2≤ln(1-a) ≤ -a (0≤ a≤ 1/2), <cit.> is still true because of the inequality -a≤-a+a^2/2.) The results of <cit.> yield Lemma <ref>.
Now we turn to the proof of Lemma <ref> and the argument is similar to that in <cit.>.
Consider a compact interval = [d,D] with -∞<d<D<∞. In order to prove (<ref>), it suffices to show that
lim_T↑∞1/T∫_0^T 1_{M_t^R_T∈𝒟}dt = ∫_𝒟 d( exp{-C_*Z̃_∞ e^-√(2)z}), ℙ_x-a.s.
Note that
1/T∫_0^T 1_{M_t^R_T∈𝒟}dt = 1/T∫_0^ϵ T1_{M_t^R_T∈𝒟}dt + 1/T∫_ϵ T^T 1_{M_t^R_T∈𝒟}dt
= 1/T∫_0^ϵ T1_{M_t^R_T∈𝒟}dt + 1/T∫_ϵ T^T ℙ_x [ M^R_T_t∈ | _R_T] dt
+ 1/T∫_ϵ T^T ( 1_{M^R_T_t ∈} - ℙ_x [ M^R_T_t∈ | _R_T] ) dt.
By Lemma <ref> and the dominated convergence theorem, it holds that
lim_ϵ↓ 0 lim_T↑∞1/T∫_ϵ T^T ℙ_x [ M^R_T_t∈𝒟 | ℱ_R_T] dt = lim_ϵ↓ 0lim_T↑∞∫_ϵ^1 ℙ_x [ M^R_T_T· s∈𝒟 | ℱ_R_T] ds
= lim_ϵ↓ 0∫_ϵ^1 lim_T↑∞ℙ_x [ M^R_T_T· s∈𝒟 | ℱ_R_T] ds = ∫_𝒟 d( exp{-C_*Z̃_∞ e^-√(2)z}), ℙ_x-a.s.
Combining this with Lemma <ref> and lim_ϵ↓ 0lim_T↑∞1/T∫_0^ϵ T1_{M_t^R_T∈𝒟}dt = 0, we get (<ref>). This completes the proof.
§ PROOF OF LEMMA <REF>
For any u∈ N_t, define
τ(u) := inf{s∈ [0,t]: X_u(s) ≤ρ s }.
where inf∅ is defined to be +∞. Thus τ(u) represents the first time at which the particle u or its ancestor hits the absorption barrier. Then
N_t^[s,t] = {u∈ N_t: τ(u) ∈ [s,t] }.
Similarly, for 0<s<s'<t, define
N_t^[s,s'] = {u∈ N_t: τ(u) ∈ [s,s'] }.
And let M_t^[s,s'] = max{X_u(t): u∈N_t^[s,s']} - m_t.
Let p∈ (0,1). Notice that N_t^[R_T,t] = N_t^[R_T,pt]∪N_t^[pt,t]. Therefore,
M_t^[R_T,t]≤max{M_t^[R_T,pt], M_t^[pt,t]},
and moreover
1_{M_t^[R_T,t] > z}≤1_{M_t^[R_T,pt] > z} + 1_{M_t^[pt,t] > z}.
The proof of <cit.> gave an upper bound for both ℙ_x(M_t^[pt,t] > z) and ℙ_x(M_t^[R_T,pt] > z). In <cit.>, let A=-z, then I = ℙ_x(M_t^[pt,t] > z) and II = ℙ_x(M_t^[R_T,pt] > z).
By the proof of <cit.>, we have
ℙ_x(M_t^[pt,t] > z) ≤C/p^3/2∫_pt^∞ e^-( ρ/√(2) - 1)^2 r d r
and
ℙ_x(M_t^[R_T,pt] > z) ≤ C Π_x[τ_0^√(2)-ρ1_{τ_0^√(2)-ρ≥ R_T }]
= C ∫_R_T^∞rx/√(2π r^3) e^-(x-(√(2)-ρ)r)^2/2rd r,
where the positive constant C changes line by line and depends only on x,ρ,z. Here, {B_t, Π_x } is a standard Brownian motion starting from x, and τ_0^√(2)-ρ := inf{s≥ 0: B_s ≤ (√(2)-ρ)s }.
For any δ>0, by Markov's inequality and Fubini's theorem,
ℙ_x( 1/T∫_ϵ T^T 1_{M_t^[pt,t] > z}dt ≥δ) ≤1/δ1/T∫_ϵ T^T ℙ( M_t^[pt,t] > z) dt
≤C/δ T p^3/2∫_ϵ T^T ∫_pt^∞ e^-( ρ/√(2) - 1)^2 r d r dt
≤C/δ T p^5/2 e^-( ρ/√(2) - 1)^2 pϵ T,
which is summable over T∈ℕ. Therefore, by Borel-Cantelli lemma,
ℙ_x[ {1/T∫_ϵ T^T 1_{M_t^[pt,t] > z}dt ≥δ} for infinitely many T∈ℕ] = 0.
Hence,
lim sup_T↑∞1/T∫_ϵ T^T 1_{M_t^[pt,t] > z}dt = 0, ℙ_x
Using the same argument as above,
ℙ_x( 1/T∫_ϵ T^T 1_{M_t^[R_T,pt] > z}dt ≥δ) ≤C/δ T∫_ϵ T^T ∫_R_T^∞rx/√(2π r^3) e^-(x-(√(2)-ρ)r)^2/2rd r dt
≤C/δ∫_R_T^∞x/√(2π r) e^-(√(2)-ρ)^2/2r + (√(2)-ρ)xd r,
which is summable in T∈ℕ when R_T/T^l→∞ as T→∞ for some l>0. Therefore,
lim sup_T↑∞1/T∫_ϵ T^T 1_{M_t^[R_T,pt] > z}dt = 0, ℙ_x
By (<ref>), (<ref>) and (<ref>), we get (<ref>). This completes the proof.
§ PROOF OF CLAIM A
We slightly modify the proof of <cit.>. By equations (5.5), (5.54), (5.62) and (5.63) in <cit.>, (<ref>) holds for any compact interval 𝒟. However, we need (<ref>) to hold when the compact interval is replaced by [D,∞); the proof of <cit.> remains valid after a slight modification.
For any x∈ℝ, t>r>0 and interval 𝒜, define
P(x,t,r,𝒜) = ℙ_x [ ∃ u∈ N_t: X_u(t) - m_t ∈𝒜, u is not localized during (r,t-r) ].
Let 𝒟_b = [b,b+1] and θ∈(0,α). We write ⌊ s ⌋ for the largest integer less than or equal to s. First, we will show that there exist r_1,δ_1>0 depending on α,β and D such that for r≥ r_1, b∈ [⌊ D ⌋,r^θ], the following holds:
sup_t≥ 3rP(x,t,r,𝒟_b)≤exp{-r^δ_1}.
To prove (<ref>), we repeat the proof of <cit.> on pages 1668-1674 with the compact set 𝒟 replaced by 𝒟_b = [b,b+1].
When 𝒟_b = [b,b+1], the supremum and infimum of the interval in <cit.> become b+1 and b, respectively. Notice that for b≥ D, <cit.> implies that
e^t Π[B_t∈ m_t+𝒟_b] ≤κ t e^-√(2)D.
The upper bound of this probability does not depend on b. In <cit.>, we need to choose r large enough so that F≤ 0 on [r,t-r] where F is given by <cit.>. For any δ,t>0, define f_t,δ(s) = min{s^δ,(t-s)^δ} for 0≤ s ≤ t. For b<r^θ<r^α and any s∈ [r,t-r],
F(s) = b(t-s)/t - f_t,α(s) ≤ b - r^α < 0,
which satisfies the condition. The remaining proof of <cit.> remains unchanged.
In the proof of <cit.>, for sufficiently large r, we have
diam(𝒟) = |D| + |D| ≤ 2b+1 ≤ 3r^θ.
Notice that 0<θ<α<1/2<β<1 and let 0<a<1 such that 2aβ-1>0. We can find r = r(α,β,θ,D,a) such that for r≥r the following holds:
3r^θ - f_t,α(s) ≤ 0 3r^θ - f_t,β(s) ≤ -f_t,aβ(s) r≤ s≤ t-r.
Therefore, the proof of <cit.> is also valid and (<ref>) holds for b∈ [D,r^θ].
By <cit.>, for sufficiently large r and t≥ 3r, we have
P(x,t,r,[r^θ,∞))
≤ℙ_x [ M_t ≥ m_t + r^θ]
≤ C r^θ e^-√(2)r^θ+1,
for some constant C>0. Therefore, combining (<ref>) with (<ref>), we get that
sup_t≥ 3rP(x,t,r,[D,∞))
≤ ∑_b = ⌊ D ⌋^⌊ r^θ⌋P(x,t,r,𝒟_b) + P(x,t,r,[r^θ,∞))
≤ (|D|+r^θ) e^-r^δ_1 + C r^θ e^-√(2)r^θ+1.
Choose δ < min{δ_1,θ}, there exists a sufficiently large r_0 such that (<ref>) holds for r≥ r_0. This completes the proof.
Acknowledgment:
We thank Professor Xinxin Chen for many constructive suggestions, and, in particular, for providing the proof strategy of Claim A.
99
ABK11 Arguin, L.-P., Bovier, A. and Kistler, N.: The genealogy of extremal particles of branching Brownian motion. Comm. Pure Appl. Math. 64, (2011), 1647–1676.
ABK12 Arguin, L.-P., Bovier, A. and Kistler, N.: Poissonian statistics in the extremal process of branching
Brownian motion. Ann. Appl. Probab. 22, (2012), 1693–1711.
ABK13b Arguin, L.-P., Bovier, A. and Kistler, N.: An ergodic theorem for the frontier of branching Brownian motion. Electron. J. Probab. 18, (2013), no. 53, 25 pp.
Berestycki11 Berestycki, J., Berestycki, N. and Schweinsberg, J.:
Survival of near-critical branching Brownian motion. J. Stat. Phys. 143, (2011), 833–854.
Berestycki14 Berestycki, J., Berestycki, N. and Schweinsberg, J.:
Critical branching Brownian motion with absorption: survival probability. Probab. Theory Related Fields 160, (2014), 489–520.
Bramson78 Bramson, M.: Maximal displacement of branching Brownian motion. Common. Pure Appl. Math. 31, (1978), 531–581.
Bramson83 Bramson, M.: Convergence of solutions to the Kolmogorov equation to travelling waves. Mem. Amer. Math. Soc. 44, (1983), iv+190 pp.
Fisher37 Fisher, R. A.: The wave of advance of advantageous genes. Ann. Eugenics 7, (1937),
355–369.
Harris99 Harris, S. C.: Travelling-waves for the FKPP equation via probabilistic arguments. Proc. Roy. Soc. Edinburgh Sect. A 129, (1999), 503–517.
Harris06 Harris, J. W., Harris, S. C. and Kyprianou, A. E.: Further probabilistic analysis of
the Fisher-Kolmogorov-Petrovskii-Piscounov equation:
one sided travelling-waves. Ann. Inst. Henri Poincaré Probab. Stat. 42, (2006), 125–145.
Harris07 Harris, J. W. and Harris, S. C.:
Survival probabilities for branching Brownian motion with absorption. Electron. Comm. Probab. 12, (2007), 81–92.
Kesten78 Kesten, H.: Branching Brownian motion with absorption. Stochastic Process. Appl. 7, (1978), 9–47.
Kolmogorov37 Kolmogorov, A., Petrovskii, I. and Piskounov, N.: Étude de I'équation de la diffusion avec croissance de la quantité de la matière at son application a un problèm biologique. Moscow Univ. Math. Bull. 1, (1937), 1–25.
Kyprianou04 Kyprianou, A. E.: Travelling wave solution to the K-P-P equation: Alternatives to Simon Harris' probabilistic analysis. Ann. Inst. Henri Poincaré Probab. Stat.
40, (2004), 53–72.
Lalley87 Lalley, S. and Sellke, T.: A conditional limit theorem for the frontier of a branching Brownian motion. Ann. Probab. 15, (1987), 1052–1061.
Liu21 Liu, J.:
A Yaglom type asymptotic result for subcritical branching Brownian motion with absorption. Stochastic Process. Appl. 141, (2021), 245–273.
Louidor20 Louidor, O. and Saglietti, S.: A strong law of large numbers for super-critical branching Brownian motion with absorption. J. Stat. Phys. 181, (2020), 1112–1137.
Maillard22 Maillard, P. and Schweinsberg, J.:
Yaglom-type limit theorems for branching Brownian motion with absorption. Ann. H. Lebesgue 5, (2022), 921–985.
McKean75 McKean, H. P.: Application of Brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov. Comm. Pure Appl. Math. 28, (1975), 323–331.
YZ23 Yang, F. and Zhu, Y.: The extremal process of branching Brownian motion with absorption. Preprint (2023), available at arXiv:2310.04976.
|
http://arxiv.org/abs/2409.02968v1 | 20240904064750 | A Comprehensive Survey of Blockchain Scalability: Shaping Inner-Chain and Inter-Chain Perspectives | [
"Baochao Chen",
"Liyuan Ma",
"Hao Xu",
"Juncheng Ma",
"Dengcheng Hu",
"Xiulong Liu",
"Jie Wu",
"Jianrong Wang",
"Keqiu Li"
] | cs.DB | [
"cs.DB",
"cs.CR"
] |
Baochao Chen^1,† ([email protected]), Liyuan Ma^1,† ([email protected]), Hao Xu^1,† ([email protected]), Juncheng Ma^1,† ([email protected]), Dengcheng Hu^1,† ([email protected]), Xiulong Liu^1,* ([email protected]), Jie Wu^2 ([email protected]), Jianrong Wang^1 ([email protected]), Keqiu Li^1 ([email protected])
^1 Tianjin University, Haihe Education Park, Jinnan District, Tianjin 300354, China
^2 Temple University, 1801 N. Broad Street, Philadelphia, PA 19122, USA
^* Corresponding author
† The first five authors contribute equally to this survey.
§ ABSTRACT
Blockchain is widely applied in logistics, finance, and agriculture. As single blockchain users grow, scalability becomes crucial. However, existing works lack a comprehensive summary of blockchain scalability. They focus on single chains or cross-chain technologies. This survey summarizes scalability across the physical and logical layers, as well as inner-chain, inter-chain, and technology dimensions. The physical layer covers data and protocols, while the logical layer represents blockchain architecture. Each component is analyzed from inner-chain and inter-chain perspectives, considering technological factors. The aim is to enhance researchers' understanding of blockchain's architecture, data, and protocols to advance scalability research.
Keywords: Blockchains; Scalability; Inner-chain; Inter-chain
§ INTRODUCTION
§.§ Background
Blockchain technology has gained significant attention in recent years as a promising solution to various application issues, such as security, trust, and transparency. As a distributed ledger technology, it enables peer-to-peer transactions without intermediaries. However, scalability remains a critical challenge for blockchain technology in many real-world applications.
The scalability of a blockchain refers to its ability to maintain a certain level of performance and security while handling a growing volume of transactions. It depends on its architecture, consensus mechanism, block size, and transaction throughput. Therefore, it is necessary to explore the concept of blockchain scalability, evaluate different scalability methods and technologies, and analyze the performance of existing blockchain solutions in terms of scalability.
The scalability of a blockchain remains a significant concern, as blockchain networks are intended to process a substantial volume of transactions and accommodate an expanding user population. Applications such as financial services, supply-chain management, and decentralized platforms place increasing emphasis on the need for high transaction throughput and low latency. Consequently, addressing scalability has become imperative. In domains such as finance, efficient real-time handling of transactions is crucial to meet the requirements of swift payment and settlement processes. Similarly, in supply-chain management, seamless processing of cross-organizational transactions is indispensable. Moreover, decentralized platforms, which are significant application areas of blockchain systems, impose even greater scalability demands to support extensive user interactions and facilitate smart contract execution.
Multiple studies have long focused on the scalability of blockchain, considering it as a key factor in improving performance. These studies primarily concentrate on the scalability within a single blockchain. But as the number of blockchains increases, cross-chain interoperability should also be considered as an aspect of scalability. Some research has summarized the technologies that enable scalability, including architecture, storage, and cross-chain capabilities. These studies largely emphasize technological advancements but do not deeply explore the importance of scalability at the system level. Therefore, a comprehensive approach is needed to address the scalability challenges faced by blockchain systems. Different from existing work, this survey summarizes the scalability of blockchain on two levels: the physical layer and the logical layer, and three dimensions: inner-chain, inter-chain, and technology. The physical layer consists of data and protocols, while the logical layer represents the architecture of the blockchain. Within each layer, we elaborate on each part from both inner-chain and inter-chain perspectives and incorporate the technology dimension into it. The detailed description is shown in Fig. <ref>.
§.§ Related Work
Blockchain system scalability has been a subject of scrutiny in the research community for a long time.
Numerous surveys <cit.> extensively analyzed blockchain systems, highlighting the importance of scalability and regarding it as an important research direction for enhancing the performance of blockchain systems.
<cit.> concentrates on scalability as its core theme, with an emphasis on innovative techniques and mechanisms aimed at improving the scalability of blockchain systems. Additionally, it evaluates numerous blockchain solutions in terms of scalability.
<cit.> similarly focuses on scalability and portrays scalability from three perspectives: throughput, storage, and networking.
These surveys primarily concentrate on describing scalability within a single blockchain system.
However, from our perspective, as the number of blockchain systems increases, facilitating interactions between different blockchains should also be considered as an aspect of scalability.
Achieving scalability within the blockchain requires the utilization of diverse technologies, and numerous studies have provided comprehensive summaries of these technologies.
Sharding and DAG have always been the core solutions for scalability from the architectural perspective.
Xi et al. <cit.> focused on sharding as their core concept, emphasizing the latest research developments in sharded blockchain systems, encompassing shard configuration and cross-shard transaction processing.
The study in <cit.>, focused on DAG as a fundamental technology, analyzing the trade-offs between distinct factors, discussing open challenges, and examining the potential of using DAG-based solutions to advance scalability, and suggesting promising future research directions.
As the amount of data stored in blockchain systems increases steadily, ensuring efficient storage and retrieval is a crucial component of blockchain scalability.
Therefore, this article focused on addressing the prevention of excessive space consumption while maintaining rapid data retrieval.
Some related studies have targeted these issues <cit.>, emphasizing ways to reduce storage costs and improve query efficiency in blockchain systems.
To clarify, prior studies on scalability had predominantly focused on single-chain scenarios.
Nevertheless, as blockchain technology continues to see constant advancements, instances of data silos between chains are becoming increasingly prevalent. To address this issue, cross-chain technology aims to eliminate data barriers so as to facilitate interoperability between diverse blockchain systems <cit.>. Therefore, cross-chain solutions may be viewed as a means to achieve inter-chain scalability.
These surveys offer a comprehensive account of specific technological advancements and serve as a valuable reference. However, their focus remains primarily on technical aspects, rather than delving into the significance of scalability at the system level.
In conclusion, we consider that the current related surveys offer only a limited overview of scalability, underscoring the necessity for a more comprehensive and inclusive approach toward addressing the scalability challenges faced by blockchain systems.
§.§ Overview of Paper Structure
This paper provides a unique perspective to analyze the scalability of the existing blockchain approaches.
We deconstruct the traditional six-layer blockchain architecture (data, network, consensus, incentive, contract, and application), from the perspective of scalability.
We propose that the three most important components that affect the scalability of a blockchain system are: the architecture at the logical layer and data and protocols at the physical layer.
Furthermore, we innovatively analyze the architecture, data, and protocols of blockchain systems from the inner-chain, inter-chain, and technology perspectives.
The contributions of this paper are mainly in the following three aspects:
* We innovatively analyze and sort out the existing state-of-the-art work from the three perspectives of architecture at the logical layer and data and protocols at the physical layer.
* We classify the existing work from the inner-chain, inter-chain, and technology perspectives to systematically analyze the approaches that improve blockchain scalability.
* Based on a comprehensive summary of the work on blockchain scalability, and through open discussions, this survey gives a landscape of the future development of blockchain scalability.
The organization of this survey is shown in Fig. <ref>. Section <ref> introduces the basic notations used in this paper. Section <ref> analyzes the technologies that improve scalability in terms of architecture-mainly, inner-chain technologies such as sharding and DAG, and inter-chain technologies such as blockchain of blockchains, sidechain and relays, and plasma. Section <ref> introduces technologies to improve data scalability, including inner- and inter-chain storage and querying. Section <ref> presents the technologies related to protocol scalability improvement, including inner-chain propagation protocols, transaction parallelism, inter-chain notaries, payment channels, and atomic swaps.
Section <ref> gives an open discussion of blockchain scalability and gives a landscape of potential blockchain scalability improvements.
Section <ref> concludes this survey.
§ BASIC NOTATIONS
In recent years, blockchains have been applied in many fields such as logistics, finance, and agriculture. With the increase in the number of users, the scalability of blockchain has become an inevitable issue and may become a bottleneck to further development. Therefore, in this survey, we focus on the existing works on improving the scalability of blockchains. Based on the summary and analysis of the literature, we innovatively divide the existing works based on three aspects from the inner-chain and inter-chain perspectives: architecture scalability, data scalability, and protocol scalability. Before delving into the details of each aspect, to help understand this survey better, we present some notations that we use in the following sections.
§.§ Inner-chain and Inter-chain
Scalability is a pivotal factor in shaping blockchain technology's success and widespread adoption. Innovatively, we analyze its scalability from both inner-chain and inter-chain perspectives, which encompass the majority of research efforts in existing blockchain systems. Inner-chain scalability primarily pertains to enhancing the performance and capacity of a single blockchain. This can be achieved through the adoption of architectures like sharding, and DAG, the design of efficient storage and query strategies, and the development of propagation protocols and transaction parallelism strategies. Inter-chain scalability, on the other hand, refers to the ability to coordinate and process transactions across different blockchains. Technologies such as sidechain/relay chain, off-chain access, and atomic swap facilitate interoperability among diverse blockchains, facilitating asset and data exchange and achieving broader scalability. By comprehensively considering these two aspects, the blockchain ecosystem can adapt to the growing demands, providing users with an efficient, secure, and scalable distributed ledger technology, thereby establishing a robust foundation for future decentralized applications.
§.§ Architecture Scalability
As the number of users and their transaction volumes increase, it is necessary to speed up the processing of transactions in blockchains and achieve an overall increase in the network throughput. Therefore, we define the system's ability to grow and adapt to the increasing transaction demand as the architecture scalability. There are two aspects to architecture scalability: inner-chain scalability and inter-chain scalability. Inner-chain scalability of architecture refers to the means of parallelly processing user transactions through a distributed network of nodes within the blockchain, and mainly includes sharding and DAG techniques. The fundamental concept behind sharding is to partition the network into multiple groups, referred to as committees, that work concurrently to handle transactions. DAG breaks the chain structure of the blockchain and constructs transaction blocks into a graph topological structure, allowing the verification of multiple transactions simultaneously. Compared to single-chain expansion, we define the inter-chain scalability of the architecture as a means of introducing a homomorphic/heterogeneous blockchain based on the existing blockchain structure. Its purpose is to accelerate the transaction processing and realize more extensive and complex application-scenario requirements. There are several methods for achieving inter-chain scalability, such as blockchain of blockchains, sidechain and relay, and plasma. Blockchain of blockchains is a cross-chain technology that can connect different blockchain networks to form a larger and stronger blockchain network. A sidechain is a blockchain parallel to the main chain, which can perform functions and applications different from that of the main chain. A relay is a mechanism that bridges different blockchains, enabling transaction interoperability between different blockchains. Plasma is an Ethereum on-chain scaling technology. The core idea is to perform complex calculations on a sidechain to improve the performance and throughput of the Ethereum main chain. This achieves higher processing capacity and lower costs.
§.§ Data Scalability
In a blockchain network, nodes must maintain a complete ledger of data, and an increase in the transaction volume will inevitably increase the cost of data storage for the nodes. Additionally, when users need to retrieve a certain part of the data, the cost of querying will also increase. Many studies have attempted to increase the throughput of the blockchain network. However, improving the throughput will increase the burden on the blockchain nodes, in terms of storing and querying massive amounts of data. This hinders some resource-limited devices (such as mobile phones and tablets) from joining the blockchain network. In extreme cases, the number of nodes that can bear the burden of storing and querying data in the network gradually decreases, causing the blockchain network to evolve from a distributed network to a centralized network. This clearly violates the decentralized nature of blockchains. Therefore, we define the concept of data scalability, which refers to the ability of the nodes in a blockchain network to reduce data storage and querying costs while satisfying the availability requirements of the blockchain. We analyze the data scalability from both inner-chain (on-chain) and inter-chain (off-chain and multi-chain) perspectives. Here, on-chain nodes optimize key information such as data storage, data indexing, and related hash values. This helps to reduce the node resource consumption and improve the storage and querying capabilities of the overall system. We refer to this process as inner-chain scalability. Inter-chain scalability involves storing detailed data and its data structure in off-chain nodes. Inner-chain indexing and hash values are then used to achieve fast location and verification of inter-chain data. This process improves the transaction execution speed and overall network throughput, addressing the scalability challenges of blockchain data storage and querying. In addition, we extend the inter-chain node to the concept of multi-chain nodes to more comprehensively cover data scalability. For a better description, the on-chain is collectively referred to as the inner-chain, and the off-chain and multi-chain interactions are called inter-chain.
§.§ Protocol Scalability
The operation of a blockchain system relies on multiple protocols, each of which significantly impacts the system's reliability and performance.
To maintain the efficiency and reliability of transaction processing in the blockchain system when expanding, it is necessary to extend the various protocols supporting its operation.
Thus, we define protocol scalability as the ability of a blockchain system to meet large-scale and high-concurrency requirements without compromising transaction execution speed and system reliability.
This scalability encompasses inner-chain protocol scalability and inter-chain protocol scalability.
Specifically, inner-chain protocol scalability pertains to optimizing protocols such as broadcasting and transaction parallelism in the blockchain system to enhance reliability and performance.
The efficiency of network data transmission is crucial for performance, with the broadcasting protocol facilitating information transmission between nodes.
An efficient broadcasting protocol can prevent malicious attacks, improve the reliability and performance of the blockchain system, and enhance system scalability.
Transaction execution involves each node adding new transaction data to its local ledger according to specific rules.
Multiple transactions are processed and verified simultaneously by nodes, known as transaction parallelism. By processing transactions in parallel, the speed of transaction confirmation can be accelerated, the processing time can be reduced, and the overall performance of the blockchain system can be enhanced.
Inter-chain protocol scalability refers to the interoperability and stability of interactions between blockchains, and includes techniques such as notary systems, payment channels, and atomic swaps.
Notary systems introduce multiple notaries to act as trusted intermediaries for cross-chain transactions, ensuring transaction reliability and tamper resistance.
Payment channels are peer-to-peer transaction models based on blockchain technology, which achieve fast and low-cost transaction processing by establishing direct connection channels between participants.
Atomic swaps are peer-to-peer cross-chain transaction methods that guarantee the security, reliability, and irreversibility of transactions between two chains.
§ ARCHITECTURE SCALABILITY
In the current business environment, blockchain systems struggle to be widely embraced as a feasible alternative to traditional databases, primarily because their transaction throughput falls short of essential transaction processing demands.
To tackle this limitation, various inner-chain and inter-chain architectural scaling techniques have gained prominence.
In this Section, we showcase the architecture scalability of blockchain from inner-chain and inter-chain perspectives. As shown in Table <ref>, inner-chain solutions include sharding and DAG; at this stage, these are the main technologies used to improve architecture scalability, and this survey studies them in detail.
We categorize inter-chain technologies into three types: sidechain & relay, plasma, and blockchain of blockchains (BoBs).
The following subsections will demonstrate them one by one.
§.§ Inner-chain Solutions
§.§.§ Sharding
The scalability limitations of blockchain technology hinder its application in various scenarios, such as e-commerce and freight transport. Low throughput is a pressing challenge that needs to be addressed. Sharding is one of the effective solutions for scalability issues in blockchain technology. Therefore, both academic and industrial communities have made significant efforts toward sharding scalability. As shown in Fig. <ref>, the central tenet of sharding entails the segmentation of the network into distinct committees that operate independently, enabling simultaneous processing of transactions. This mechanism enhances the system's overall throughput and efficiency.
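To make the committee-partitioning idea concrete, the following minimal Python sketch assigns accounts to shards by hashing their addresses, so that transactions touching the same account are always routed to the same committee; the function names and the fixed shard count are illustrative assumptions, not the design of any protocol cited below.

import hashlib

NUM_SHARDS = 4  # assumed committee count; real systems reconfigure this per epoch

def shard_of(address: str) -> int:
    """Map an account address to a shard by hashing it uniformly."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def route(tx: dict) -> set:
    """Return the set of shards that must process this transaction.
    A transaction whose sender and receiver live on different shards
    is a cross-shard transaction and needs extra coordination."""
    return {shard_of(tx["from"]), shard_of(tx["to"])}

tx = {"from": "alice", "to": "bob", "amount": 10}
print(route(tx))  # one shard -> intra-shard; two shards -> cross-shard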
Many novel sharding protocols have emerged in recent years.
The sharding work has some novel architecture designs, such as network sharding, multi-layer design, double-chain architecture, etc., providing a continuing source of innovation for sharding research. Luu et al. <cit.> studied a new distributed agreement protocol in permissionless blockchains. Their method scaled the transaction rates almost linearly with respect to the capability of miners by uniformly partitioning the mining network into shards. Omniledger <cit.> designed a scale-out distributed ledger that maintained long-term security within a permissionless operation. This method introduced an atomic commit protocol (Atomix) to commit transactions atomically across shards. RapidChain <cit.> presented a novel public blockchain protocol designed to address scalability and security constraints. This protocol incorporates an efficient consensus algorithm within each committee, ensuring optimal throughput by leveraging block pipelining techniques. Monoxide <cit.> provided a scalable blockchain system that allows for the independent representation and parallel execution of workloads in communication, computation, and storage. It implemented Chu-ko-nu mining as a countermeasure to maintain the security threshold even in situations where mining power is dispersed across multiple zones. Pyramid <cit.> is a novel layered sharding system that departs from the traditional approach of isolating shards completely. Instead, it allows shards to overlap, facilitating collaboration and coordination among them. To achieve this, a layered sharding consensus protocol is introduced, enabling the seamless commitment of cross-shard blocks within each shard. This protocol leverages collective efforts and cooperation across different shards, resulting in improved scalability and enhanced system performance.
RepChain <cit.>, is a reputation-based blockchain system that utilizes sharding to enhance security and speed. It introduces a unique double-chain architecture, combining CSBFT consensus for reputation tracking and Raft consensus for high throughput. This approach encourages node collaboration and improves scalability, making it a significant advancement in sharding-based systems.
Many studies have proposed various sharding strategies, including sharding reconfiguration, dynamic node joining and leaving, and selection of the best shard node. OptChain <cit.> optimizes transaction placement within shards to enhance the performance of existing sharding approaches. It is designed to be flexible and compatible with various sharding methods, allowing for efficient allocation of transactions and ultimately improving the overall system performance.
Huang et al. <cit.> studied the allocation of budget-limited network resources to shards in a practical Byzantine fault tolerance (PBFT) based permissioned blockchain.
Their work proposed a novel algorithm based on the drift-plus-penalty approach, aiming to achieve a resource-allocation solution that is close to optimal. Crain et al. <cit.> proposed the Red Belly Blockchain (RBBC), the first secure blockchain that could be scaled to hundreds of geo-distributed consensus nodes. This system offers a new balancing method that totally orders all transactions while assigning them to distinct roles. PolyShard <cit.>, is an innovative approach that addresses the challenges of achieving linear scalability in terms of throughput, storage efficiency, and security. Drawing inspiration from coded computing, specifically Lagrange Coded Computing, PolyShard introduces a novel concept. Rather than storing and processing a single uncoded shard, as traditionally done, each node in PolyShard operates on a coded shard of equivalent size, enabling enhanced storage and computational capabilities.
Metosis <cit.> addresses a new class of problems, namely, when and how nodes dynamically join and leave the system, to achieve optimal system performance. Metosis implements dynamic adjustment of chains by creating, adding, splitting, and merging them, and has been experimentally deployed in Fabric Chaincode. Gearbox <cit.> addresses the single-shard outage issue by dynamically adjusting the number of committee nodes. Specifically, the Gearbox consensus protocol runs a minimum number of nodes per shard, and the control chain periodically receives "heartbeat transactions" sent by each shard to monitor shard activity. Once the shard becomes inactive, Gearbox immediately increases the number of nodes in the shard until it is activated again. Zhang et al. proposed a deterministic and fast transaction allocation scheme TxAllo <cit.>. By transforming a transaction assignment problem into a community detection problem in graph-structured data, TxAllo dynamically deduces the association between an account assignment and its transactions. The TxAllo protocol consists of global TxAllo and dynamic TxAllo.
Cross-shard transactions are a natural consequence of sharding systems, and the uneven distribution of cross-shard transactions can undermine the performance of the sharding system. Brokerchain <cit.> is a cross-shard protocol for account/balance-based state sharding. It aims to generate fewer cross-shard transactions and ensure workload balance for all shards. Sharper <cit.> facilitates concurrent transaction processing through node clustering and the sharding of both data and the ledger, enabling parallelized operations. LBChain <cit.> proposes a new method to alleviate the problem of load imbalance by periodically migrating active accounts from heavily loaded shards to lightly loaded ones, achieving a dynamic balance of transactions between shards. LBChain uses LSTM for transaction prediction and account allocation. Based on the prediction results, the node network migrates the accounts along with their transactions, as a whole, to the new shard.
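To illustrate why cross-shard transactions are costly, the sketch below mimics a client-driven lock/commit exchange in the spirit of protocols such as Atomix; the Shard class and its methods are illustrative assumptions rather than the interfaces of any system cited above.

class Shard:
    """Toy shard holding account balances; in a real system each shard is a BFT committee."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.locked = {}  # account -> locked amount

    def prepare_debit(self, account, amount):
        """Phase 1: lock funds and report acceptance (a real shard returns a signed proof)."""
        if self.balances.get(account, 0) >= amount:
            self.balances[account] -= amount
            self.locked[account] = amount
            return True
        return False

    def commit_credit(self, account, amount):
        """Phase 2 on the destination shard: apply the credit."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def abort_debit(self, account):
        """Phase 2 on the source shard if any shard rejected: unlock the funds."""
        self.balances[account] = self.balances.get(account, 0) + self.locked.pop(account, 0)

def cross_shard_transfer(src, dst, sender, receiver, amount):
    if src.prepare_debit(sender, amount):    # every input shard must accept first
        dst.commit_credit(receiver, amount)  # only then does the transfer commit
        src.locked.pop(sender, None)
        return "committed"
    src.abort_debit(sender)
    return "aborted"

s1, s2 = Shard({"alice": 50}), Shard({"bob": 5})
print(cross_shard_transfer(s1, s2, "alice", "bob", 20), s1.balances, s2.balances)

The extra lock/commit round trip is exactly the overhead that workload-balancing schemes such as BrokerChain and LBChain try to minimize by reducing the number of transactions that span shards in the first place.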
A trusted execution environment (TEE) can provide trusted security guarantees for a system by securely isolating sensitive computations and performing secure authentication of the execution environment. TEE has been adopted by some sharding systems to provide security services. The proposed scheme <cit.> focuses on enhancing BFT consensus protocols by utilizing Trusted Execution Environments (TEE) to effectively eliminate equivocation in scenarios involving Byzantine failures. Benzene <cit.> effectively addresses the risk of a single-shard outage because of decentralized computing and reduces the node storage and validation overhead by using a collaborative consensus protocol in conjunction with a secure TEE. The overall architecture adopts a double-chain structure. The proposal chain is responsible for recording transactions, and the voting chain is responsible for the collaborative consensus of fragments.
In addition to classic transfer transactions, smart contract transactions, with their complex code logic, are capable of handling more complex scenarios, such as smart metering, complex voting, and privacy-protected banking transactions. A large number of systems can now support smart contract transactions. Chainspace <cit.> introduces a scalable system that exhibits unlimited scalability as the number of nodes grows. George et al. <cit.> introduced Cosplit, an advanced static analysis tool designed to accurately extract ownership and commutativity information from smart contract source code. This valuable information is then utilized to generate sharding signatures for enhanced performance and efficiency. Jenga <cit.> proposed a system that orchestrates the state storage, logic storage, and execution of smart contracts, instead of treating the contract as an indivisible entity. Meepo <cit.> introduces a novel methodology that encompasses a partial cross-call merging strategy, enabling smart contracts to facilitate flexible and concurrent invocations across multiple shards. This approach effectively addresses the requirements posed by intricate business models within a consortium-based blockchain system.
The industry has also seen the emergence of high-performance sharding systems, which are primarily designed around security and atomicity considerations. Harmony <cit.> is a next-generation sharding-based blockchain, which is fully scalable, provably secure, and energy efficient. Elrond <cit.> presents a novel architecture that introduces a genuine state-sharding scheme for practical scalability, eliminating energy and computational waste while ensuring distributed fairness through an SPoS consensus. With scalability as the main goal, Zilliqa <cit.> proposes a new smart contract language, Scilla, which scales much better for a multitude of applications that range from automated auctions and shared economy to financial modeling. NEAR <cit.> is a decentralized application platform aiming at creating future open networks and empowering their economies. It employs the core foundational technology of Bitcoin and combines it with cutting-edge advancements in community consensus, database sharding, and availability. The main feature of Eth2.0 <cit.> is the transition from PoW to PoS, which improves upon PoW by being much more scalable and accessible. Sharding will help in scaling up Eth2.0 throughput exponentially, by breaking down data verification into smaller shards and enabling parallel processing.
§.§.§ DAG
The high latency and low scalability of traditional blockchain systems limit their wide application in a variety of scenarios. DAG is an effective technique that can overcome this limitation. The core principle of DAG is to construct transactions in the form of graph topologies, replacing the traditional linear block structure, which allows multiple transactions to be validated simultaneously, thus improving the transaction processing speed and overall network throughput. In recent years, a variety of DAG blockchains have emerged; however, there is still a lack of a systematic summary of the works on DAG implementation scalability.
Based on the DAG formed by the graph topology, it can be summarized into three types: divergence, parallel, and convergence.
To elaborate, divergence refers to a network expanding in an
unpredictable direction, where the order cannot be predetermined.
In a parallel topology, multiple chains are maintained concurrently.
Convergence implies that the blockchain network tends to be organized in a
sequential order.
We first explain divergence.
Divergence DAGs provide higher parallelism as blocks can be added in any order.
This makes divergent DAGs well-suited for high-throughput applications such as large-scale transaction processing or distributed storage systems.
IOTA <cit.> is a permissionless network where each node can freely participate or leave.
The main innovation is a distributed ledger structure based on DAG, called Tangle, which is a blockchain with no blocks or chains.
Graphchain <cit.> is formed by executing each transaction to confirm its ancestry.
In contrast, Graphchain differs from IOTA in its incentive mechanism.
IOTA operates under tip-selection rules and does not rely on incentives.
Graphchain, instead, introduces an incentive mechanism for maintaining the graph.
Meshcash <cit.> is a hierarchical DAG system where an honest node generates new blocks via PoW, referencing all the end blocks in its view. Each block contains a level number.
Meshcash couples a simple but limited-security block protocol with a more complex, attack-resistant asynchronous Byzantine protocol.
Spectre <cit.> uses a DAG structure for faster block generation and larger block capacity.
The performance improvement mainly comes from two aspects: first, the system structures the blocks to form a topological network.
Transactions can be added to the network simultaneously, which makes the system scalable. Second, increasing the block generation rate helps improve the performance because Spectre requires only paired sorting between two blocks, avoiding the obstacle of many conflicting states between blocks.
Avalanche <cit.> is a public chain system with a new consensus.
Unlike the BFT class and Nakamoto mechanism, Avalanche uses Slush, a CFT fault-tolerant mechanism, as its underlying protocol.
Finally, an enhancement algorithm is applied to the whole topological DAG network.
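To make the graph-growth pattern of these divergence DAGs concrete, the minimal Python sketch below lets every new transaction approve a few existing tips, so the ledger widens in unpredictable directions instead of extending a single chain; the class and method names are our own illustrative assumptions, not the IOTA or Avalanche implementations, and real systems use weighted rather than uniform tip selection.

import hashlib, random

class DagLedger:
    def __init__(self):
        self.parents = {"genesis": []}   # tx_id -> approved parent ids
        self.children = {"genesis": []}  # tx_id -> ids that approve it

    def tips(self):
        """Tips are transactions not yet approved by anyone."""
        return [tx for tx, kids in self.children.items() if not kids]

    def add_transaction(self, payload: str, k: int = 2):
        """Attach a new transaction that approves up to k randomly selected tips."""
        chosen = random.sample(self.tips(), k=min(k, len(self.tips())))
        tx_id = hashlib.sha256((payload + "".join(chosen)).encode()).hexdigest()[:8]
        self.parents[tx_id] = chosen
        self.children[tx_id] = []
        for p in chosen:
            self.children[p].append(tx_id)
        return tx_id

ledger = DagLedger()
for i in range(5):
    ledger.add_transaction(f"tx-{i}")
print(ledger.tips())  # several unconfirmed tips can coexist and be extended in parallel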
The second class covers parallel DAGs.
Parallel DAGs provide high security as multiple chains remain parallel.
This means that even if one chain is attacked, other chains can maintain integrity.
Additionally, parallel DAGs are suitable for processing transactions or data with the same priority.
Nano <cit.> innovatively adopts a one-account, one-chain design: each account chain records only its own transactions and can be modified only by its owner. Because no data is shared with other accounts, all transactions can be executed in parallel, providing transaction speeds in the range of seconds and, in principle, unlimited scalability.
Hashgraph <cit.> is a permissioned network.
Hashgraph pioneered asynchronous BFT consensus in the public chain environment. A major problem of traditional BFT is its high message complexity, which results in high network bandwidth consumption and an inability to cope well with dynamic networks.
DLattice <cit.> uses the so-called DPOS-BA-DAG protocol to reach consensus. DPOS provides a way for committees to form, and BA shows how consensus can be achieved in DAGs.
Jointgraph <cit.> simplifies the voting process to one round by introducing supervisory nodes. These nodes replace the misbehaving node with an honest node, monitor the nodes, and periodically take snapshots of the system status to release memory. Specifically, each transaction in Jointgraph is broadcast to its peers through the gossip protocol.
Chainweb <cit.> is a permissionless system that attempts to scale the Nakamoto consensus by maintaining multiple parallel chains.
Aleph <cit.> enables each node to publish messages equally and concurrently, and is represented as a cell designed to transmit asynchronously and efficiently across the network.
Each cell is independent and free to create, propagate, and vote. The core is to establish an overall ranking among these cells.
Vite <cit.> follows the basic structure of Nano, but introduces a global snapshot chain, a consistent storage structure, to achieve a total sequential sequence. Each account in Vite creates separate parallel transactions.
The purpose of the snapshot block is to store the state of the Vite ledger.
Dexon <cit.> has several parallel blockchains, each of which agrees independently.
The consensus part is mainly divided into two modules: one is the single-chain consensus protocol, and the other orders blocks across the parallel chains.
The last class involves convergence DAGs.
Convergence DAGs provide ordered transaction processing as blocks are added in a certain order.
This makes convergent DAGs well-suited for applications that require temporal ordering, such as log recording and timestamping.
Furthermore, owing to the ordering of blocks, convergent DAGs can also provide better data compression and storage efficiency.
Byteball <cit.> is a permissionless network. It innovatively introduces the concepts of a main chain and witnesses, encouraging each unit to verify multiple parent transaction units; as transactions accumulate, they form a mutually verifying hash-and-signature graph with enhanced security.
Conflux <cit.> inherits the Ghost design to achieve high performance without compromising on security. The main contribution is to decouple block confirmation from transactions.
TIPS <cit.> proposes a transaction inclusion detection protocol with a tagging signal. By generating transaction and block association information through a Bloom filter located in the block header, the block header is broadcast with a tagging signal priority, and other miners adjust their transaction packaging strategy accordingly upon receiving the signal. Through this signaling mechanism, the number of transaction conflicts in a block can be reduced. Furthermore, this approach can resist denial-of-service attacks and delay attacks.
NEZHA <cit.> proposes an efficient concurrency control scheme for blockchains based on DAG, which resolves the problems arising from conflicting concurrent read and write operations to the same address during the parallel processing of transactions. Specifically, first, a conflict graph is built based on the transaction addresses, to generate the overall order of transactions. Then, a hierarchical sorting algorithm is designed, which deduces the sorting level of each address from the conflict graph and sorts the transactions at each address.
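The conflict-graph idea can be illustrated with the simplified sketch below: transactions that touch a common address are linked by a dependency edge, and an execution order is then derived by topological sorting, so that unrelated transactions remain free to run in parallel. The read/write fields and the id-based priority rule are assumptions made for the example and do not reproduce NEZHA's actual data structures or hierarchical sorting algorithm.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each transaction lists the addresses it reads and writes.
txs = {
    "t1": {"read": {"A"}, "write": {"B"}},
    "t2": {"read": {"B"}, "write": {"C"}},
    "t3": {"read": {"D"}, "write": {"D"}},
}

# Build a conflict graph: if two transactions touch a common address and at
# least one of them writes it, the transaction with the later id is ordered last.
deps = {name: set() for name in txs}
names = sorted(txs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        conflict = txs[a]["write"] & (txs[b]["read"] | txs[b]["write"])
        conflict |= txs[b]["write"] & (txs[a]["read"] | txs[a]["write"])
        if conflict:
            deps[b].add(a)

order = list(TopologicalSorter(deps).static_order())
print(order)  # t3 conflicts with nothing, so it can run in parallel with t1 -> t2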
§.§ Inter-chain Solutions
§.§.§ Sidechain and Relay
Sidechains are autonomous blockchains, yet they do not operate as standalone platforms; instead, they are linked to the main chain in a specific manner.
Interoperability is a key feature of sidechains, allowing assets to move freely between the main chain and the sidechain.
Various methods can be employed to ensure seamless fund transfers.
For instance, it is possible to deposit funds into a designated address and subsequently shift assets from the main chain to the side chain. During this process, the funds remain locked at the address, while the side chain reflects the corresponding amount.
Alternatively, a more direct approach involves sending funds to a custodian who then carries out the exchange, converting the assets into the corresponding margin on the sidechain.
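A stripped-down sketch of the lock-and-mint variant of this peg is given below; the class, method, and event names are illustrative assumptions, and real deployments additionally require SPV proofs or validator signatures before anything is minted or unlocked.

class MainChainBridge:
    """Escrow on the main chain: deposits are locked, withdrawals are unlocked."""
    def __init__(self):
        self.locked = {}  # user -> amount held in escrow

    def lock(self, user, amount):
        self.locked[user] = self.locked.get(user, 0) + amount
        return {"event": "Locked", "user": user, "amount": amount}

    def unlock(self, user, amount):
        assert self.locked.get(user, 0) >= amount, "cannot release more than was locked"
        self.locked[user] -= amount

class SideChainToken:
    """Wrapped representation on the sidechain, minted one-to-one against locked funds."""
    def __init__(self):
        self.balances = {}

    def mint(self, deposit_event):
        u, amt = deposit_event["user"], deposit_event["amount"]
        self.balances[u] = self.balances.get(u, 0) + amt

    def burn(self, user, amount):
        assert self.balances.get(user, 0) >= amount
        self.balances[user] -= amount
        return {"event": "Burned", "user": user, "amount": amount}

main, side = MainChainBridge(), SideChainToken()
side.mint(main.lock("alice", 30))                        # main chain -> sidechain
main.unlock("alice", side.burn("alice", 10)["amount"])   # sidechain -> main chain
print(main.locked, side.balances)                        # {'alice': 20} {'alice': 20}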
The work by Gaži P et al. <cit.> marked a significant milestone by introducing the first formal definition of a sidechain system.
Their research elucidated the safe transfer of assets between sidechains and introduced a novel security definition.
This security definition extended the conventional transaction ledger properties of liveness and safety to encompass multiple ledgers.
Importantly, it introduced a "firewall" security property, fortifying each blockchain against potential sidechain failures.
Singh A et al. <cit.> made notable contributions by conducting an exhaustive analysis of sidechains and platforms.
Their comprehensive review encompassed recent developments in the field and offered a multi-faceted perspective on their impact.
Additionally, they critically highlighted the limitations of existing solutions and proposed innovative approaches to enhance the overall blockchain system.
Kiayias A et al. <cit.> advanced the field by creating the first sidechain architecture that enables direct communication between Proof of Work (PoW) blockchains.
They introduced the concept of a "two-way peg" to facilitate the transfer of assets between different chains.
Their work emphasized the prerequisites for inter-chain communication, which include a PoW blockchain as the source and a smart contract-capable blockchain as the destination.
Moreover, they provided detailed insights into the required smart contracts for the implementation of these interlinked sidechains.
The relay serves as an intermediary or bridge between different blockchains or blockchain layers within a multi-chain ecosystem.
BTC-Relay <cit.> stands as a groundbreaking achievement, serving as the inaugural bridge between the Bitcoin blockchain and Ethereum smart contracts.
Frauenthaler P et al. <cit.> have significantly reduced the operational costs associated with Ethereum-based blockchain relays, with potential cost reductions of up to 92%.
Their pioneering approach combines a validation-on-demand pattern with an incentive structure, making decentralized interoperability between blockchains, such as Ethereum and Ethereum Classic, a practical reality.
In order to create and communicate with various types of sidechains without being aware of their fundamental structure, Garoffolo A et al. <cit.> provided a construction technique for blockchain systems, similar to that of Bitcoin.
They have implemented a universally verifiable transfer mechanism for sidechains, leveraging zk-SNARKs and sidechain nodes.
Importantly, this mechanism allows sidechain nodes to directly witness the mainchain, while mainchain nodes only need to verify cryptographically validated certificates provided by sidechain maintainers.
This innovative approach enhances security and trust in the cross-chain ecosystem.
§.§.§ Plasma
Plasma <cit.> establishes a network of plasma subchains linked to the root chain, with the Merkle root of the transactions in each subchain block published to the root chain so that the sidechain data can be verified later.
This minimizes trust while allowing verifiable proofs of fraud and an enforceable state.
In this way, a significant portion of the root chain's transaction load is transferred to the sidechains for processing, and the root chain only needs to verify fraud proofs for sidechain transactions.
This greatly improves the performance of the blockchain when processing transactions.
M. H. Ziegler et al. <cit.> proposed a brand-new system architecture that uses the plasma framework to combine fog computing and blockchain technology, and assessed the effectiveness of its prototype.
§.§.§ Blockchain of Blockchains
Different from the design of sidechains and subchains, BoBs attempt to restructure the existing blockchain inter-chain architecture to create a cross-chain Internet, as shown in Fig. <ref>.
Based on the types of blockchains within the ecosystem, BoBs can be categorized as public BoBs and consortium BoBs.
As a forerunner of public BoBs, Polkadot <cit.> standardizes the method for passing messages between parallel chains of homogenous relay networks.
It acknowledges the interoperability of parallel chains, which may engage in cross-chain interactions to reinforce the performance of one another.
A standard created by Polkadot 2.0 called XCM <cit.> enables protocol designers to specify the types of data and sources from which their chains can send and receive data.
It comes with a virtual machine that enables flexible execution of cross-chain messages.
Another pioneer in the BoBs ecosystem is Cosmos.
According to the IBC <cit.>, Cosmos is an end-to-end, connection-oriented, stateful protocol used to provide authenticated, reliable communication across diverse blockchains arranged in a dynamic topology.
These advancements in the Polkadot and Cosmos ecosystems enhance the versatility and efficiency of cross-chain communication and execution, ultimately contributing to the broader development of blockchain technology.
Chainlink created CCIP <cit.>, focusing on end-to-end security, futuristic interoperability, and an easy development process.
The CCIP infrastructure consists of three layers: a message layer (programmable pass bridge), a transport layer (CCIP core), and a decentralized prophet network (DON) based on the OCR2.0 protocol.
External developers only need to develop sender and receiver contracts, all other components are included in the CCIP service, and the developers can easily interact across chains through a unified interface.
LayerZero <cit.> is a trustless interoperability protocol, which provides a powerful, low-level communication primitive upon which a diverse set of cross-chain applications can be built.
Aion <cit.> enables the decentralized internet and supports a public cross-chain with the Aion Virtual Machine.
Komodo <cit.> designs a three-in-one product that combines a wallet, cross-chain bridge, and decentralized exchange designed to be accessed through any Internet browser and connects to more than 60 blockchains, including Ethereum, Polygon, Avalanche, BNB Chain, and Cosmos.
As for consortium blockchains, ChainMaker <cit.> uses several components to complete a cross-chain operation, including a cross-chain agent, SPV, and transaction contract.
The business contract needs to provide interfaces for forward and reverse operations.
The forward interface is the contract entry point invoked when the business operation completes; the reverse interface is invoked to roll back the business when the forward operation fails.
XuperCross <cit.> solves the interoperability problem between heterogeneous blockchain systems (including public, private, and consortium chains) through the XIP protocol, which describes the cross-chain problem in an abstract manner and designs a standard solution.
XIP contains a series of sub-protocols, including Naming Protocol, Cross Chain Transaction Consistency Protocol, and Data Authentication and Communication Protocol.
BitXHub <cit.> is the first cross-chain platform to support the W3C standard DID protocol, which is composed of three parts: relay chain, cross-chain gateway, and application chain.
A common inter-chain transfer protocol, InterBlockchain Transfer Protocol, has been designed to allow heterogeneous assets, data, and services to be called across chains.
The BSN Interchain Communications Hub <cit.> adopts a double-layer structure, utilizing relay chains as cross-chain coordinators, multiple heterogeneous chains as cross-chain transaction executors, and a relayer as a carrier of cross-chain data.
Each application chain can verify the legitimacy of the cross-chain transactions on its own, thus ensuring the security of cross-chain interactions.
The source-oriented interoperability protocol known as Luyu <cit.> is a collection of adaptable, dependable, and unified interoperability protocols that enable the simple access and dependable operation of various reliable sources.
Developers only need protocol-oriented programming to realize secure interaction with different trustworthy sources.
Wecross <cit.> proposed four core technologies: UBI universal block link port, HIP heterogeneous chain interconnection protocol, TTM trusted transaction mechanism, and MIG multi-lateral cross-domain governance, which realizes efficient availability, security, and trustworthiness, and convenient governance of cross-chain interactions.
§ DATA SCALABILITY
In this Section, we showcase the scalability of blockchain data from inner-chain and inter-chain perspectives. Within these two aspects, the existing work is categorized into data storage and data query. As shown in Fig. <ref>, inner-chain nodes primarily store partial data, data indexes, and hash values to reduce their resource consumption. Inter-chain nodes are responsible for storing detailed data and their data structure for transaction execution and speedy location. The following subsections will demonstrate them one by one.
§.§ Inner-chain Solutions
§.§.§ Storage
Typically, blockchain inner-chain nodes store the entire ledger, which limits data scalability. To this end, we survey existing works that reduce the amount of data stored by inner-chain nodes. Existing solutions are based on the following aspects: light nodes, pruning, sharding, data encoding, and optimization.
Nakamoto <cit.> divided nodes into light nodes and full nodes. The full node saves the data of the entire blockchain network, while the light node saves only the block header data of the longest PoW chain. When the light node needs to query the detailed transaction, it must request the data from the full nodes and compare it with the block header data saved by itself to verify whether the data sent by the full node is correct.
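The comparison step a light node performs can be sketched as a Merkle-proof check: the full node supplies the sibling hashes along the path from a transaction to the root stored in the block header, and the light node recomputes the root. The hash construction below (single SHA-256 with last-node duplication) is an illustrative assumption and differs in detail from Bitcoin's actual encoding.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over transaction bytes, duplicating the last node on odd levels."""
    level = [h(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes a full node hands to a light client."""
    level, proof = [h(tx) for tx in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(tx, proof, root):
    """The light client recomputes the path using only the block-header root."""
    node = h(tx)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root = merkle_root(txs)
print(verify(b"tx-c", merkle_proof(txs, 2), root))  # True: transaction is in the block
print(verify(b"tx-x", merkle_proof(txs, 2), root))  # False: forged data is rejected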
Through pruning operations, the amount of data stored by a node can be reduced. Dai et al. <cit.> proposed a puzzle-like data reduction method called Jidar. This method allows each node in the blockchain to store only the transactions it is interested in and the relevant Merkle branches. The complete block data can be assembled from these data fragments like a puzzle. Nodes can securely maintain and verify all their relevant data locally without trust assumptions. Experimental results show that Jidar can reduce the storage cost of a node to about 1.03% of that of Bitcoin.
As a typical storage sharding solution, CUB <cit.> defines the concept of the Consensus Unit (CU), which allows multiple nodes in the blockchain network to form a single unit. Nodes within the unit cooperate with each other to jointly store at least one complete blockchain ledger. To determine the optimal allocation strategy for existing blocks, this paper models the allocation problem of blocks to nodes as an NP-hard Block Allocation Optimization problem and proposes three effective heuristic algorithms to solve the static allocation problem. At the same time, the corresponding strategies are formulated to meet the needs of dynamic scenarios such as new block generation, node joining, or leaving the CU. Li et al. <cit.> introduced a cluster-based multi-node collaborative storage strategy called ICIStrategy, wherein the participants in the blockchain network are divided into multiple clusters, with each cluster of nodes jointly maintaining a complete ledger. This strategy reduces the amount of data each participant needs to store, thus alleviating the storage pressure. Nodes within each cluster collaboratively store and verify blocks to reduce the storage pressure and communication overheads. Simulation results show that the ICIStrategy requires only 25% of the storage space used by Rapidchain, effectively solving the storage limitation problem and improving blockchain performance. Inspired by CUB, Yin et al. <cit.> proposed the block storage framework, EBSF, and mathematically modeled the allocation of block data based on the characteristics of nodes (such as storage capacity, cost, and ability to respond to queries), with each block assigned to at least one node. The goal was to minimize the total cost of storing the entire blockchain ledger while meeting the query ability threshold of each block. As the optimization problem is NP-hard, the authors proposed three heuristic algorithms to solve it. Additionally, the paper extended the three methods to dynamic scenarios such as node joining and leaving, new block allocation, and old block pruning.
As an effective solution, Qi et al. <cit.> proposed a new storage engine, BFT-store, based on data encoding. It divides a block into n-2f sub-blocks and encodes them into n-coding blocks using erasure coding. Each node stores one of the coding blocks to reduce the data storage on a single node. BFT-store overturns the traditional full replication strategy. It reduces the storage consumption of each block from O(n) to O(1). To ensure system scalability, an efficient four-phase re-encoding protocol is designed, and a multi-replica scheme is adopted to improve the read performance. BFT-store is implemented on the open-source licensed blockchain Tendermint and through experiments, its scalability, availability, and efficiency are demonstrated. Another work <cit.> showcases BFT-Store which combines erasure coding with BFT consensus protocols and breaks the full replication strategy. The demonstration shows (i) how BFT-Store partitions data across all nodes and (ii) how BFT-Store recovers coding blocks in a Byzantine scenario.
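The storage saving that coding brings can be illustrated with a deliberately simplified single-parity scheme: n data chunks plus one XOR parity chunk allow any single missing chunk to be rebuilt, so each node holds only one chunk instead of the full block. This plain-XOR sketch is far weaker than the erasure codes BFT-Store actually uses (it tolerates only one loss), and the function names are our own.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, n: int):
    """Split a block into n equal chunks and append one XOR parity chunk."""
    size = -(-len(block) // n)  # ceiling division
    chunks = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover(chunks, missing_index):
    """Rebuild a single lost chunk from the surviving ones."""
    survivors = [c for i, c in enumerate(chunks) if i != missing_index]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    return rebuilt

block = b"some serialized block data spread across storage nodes"
stored = encode(block, n=4)                   # 4 data chunks + 1 parity, one per node
assert recover(stored, 2) == stored[2]        # a failed node's chunk is rebuilt
print("recovered chunk:", recover(stored, 2))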
For optimization, Ruan et al. <cit.> proposed a traceability system called LineageChain, which can effectively capture the fine-grained sources of the blockchain. It securely stores the source and provides a simple access interface to smart contracts. In addition, LineageChain provides a new skip-list index to support efficient source query processing. The experimental results show that this traceability system has benefits for new blockchain applications, efficient queries, and smaller storage costs. CUB <cit.> and EBSF <cit.> combine sharding and optimizing technologies to distribute data to appropriate nodes for storage, thereby reducing the burden on nodes.
§.§.§ Query
To improve the functioning and efficiency of the blockchain query service, some of the initial works mainly synchronized the blockchain data and organized them in an easy-to-search form through a plug-in query layer or database, such as EtherQL <cit.>, BigchainDB <cit.>, FlureeDB <cit.>, etc.
These solutions usually rely on a trusted third party for the correctness and integrity of query results.
In response to the problem that the service provider may return incorrect or incomplete query results, light node users need additional verification mechanisms to authenticate the results.
This paper presents a comprehensive analysis of the various blockchain-based verifiable query schemes and provides a comparative study, which is presented in Table <ref>.
Moreover, the verifiable query research is mainly divided into the following categories: enrich query methods, improve query efficiency, reduce query overhead, and improve query decentralization.
For enriching query methods on the blockchain,
vChain <cit.> and Gem^2-Tree <cit.> have opened up the research on verifiable queries of blockchains.
vChain proposes a verifiable data structure based on accumulators, and designs two index structures from intra-block and inter-block, thereby ensuring the soundness and completeness of the boolean range query results.
This approach can ensure that light node users can still safely use blockchain data without saving the complete blockchain data.
This further reduces the requirements of the blockchain system on user resources and further increases the scalability of the blockchain system.
Gem^2-Tree implements a verifiable data structure based on smart contracts to provide verifiable range queries, thus avoiding modification of the underlying data structure.
Pei et al. <cit.> proposed a verifiable semantic query solution for hybrid blockchain systems.
They proposed the Merkle Semantic Trie, which provides schemes such as keyword query, range query, fuzzy query, etc., and has good compatibility.
In order to query historical transactions, Ruan et al. proposed LineageChain <cit.>.
LineageChain provides a simple interface for smart contracts to query the historical data of account-based blockchain systems.
Xu et al. <cit.> proposed a new verifiable kNN and range query scheme based on a hybrid blockchain system for Spatial-Temporal-Keywords (STK) transactions.
Among them, MRK-Tree (replacing the Merkle Hash Tree of the traditional blockchain system) organizes STK transactions and supports verifiable kNN and range queries.
Zhang et al. <cit.> proposed efficient new certifiable data structures, including the Suppressed Merkle inverted index and Chameleon inverted index, to realize verifiable keyword queries in hybrid blockchain systems.
SEBDB <cit.> designed a new type of blockchain database, which brings the advantages of traditional databases to the blockchain and improves the scalability of the blockchain.
SEBDB introduces relational semantics to blockchain transactions and supports SQL-like language for general data operations.
The multi-layer structure helps SEBDB efficiently organize blockchain data, and SEBDB supports verifiable rich queries as an extension of the blockchain's verifiable query capabilities.
LedgerDB <cit.> proposed a centralized ledger database that provides high auditability, low storage overhead, and high throughput, motivated by the limited auditability of current blockchains and by real-world needs.
LedgerDB introduced the TSA two-way anchor protocol to resist the malicious behaviors of users or SPs.
Furthermore, LedgerDB provides verifiable data deletion operations to clean up obsolete or hidden data, thereby improving storage scalability.
On this basis, Yang et al. <cit.> set the three-element verification factor of what-when-who, so that the data ledger was classified in a standardized manner (Dasein-complete) for practical use in the real world.
To improve the query efficiency, LineageChain <cit.> converted the Merkle Tree into Merkle DAG and introduced the Deterministic Append-only Skip List (DASL) based on the skip table to improve query efficiency.
In order to accelerate the query efficiency, Xu et al. <cit.> designed an efficient block pruning (EBP) algorithm for multi-block querying and the Authenticated kNN/Range Query (AK/RQ) algorithm for single-block querying.
Aiming at the problem of low efficiency of large-scale blockchain data querying, Xu et al. <cit.> proposed a scheme for estimating the number of blockchain transactions.
The program could intelligently adjust the query efficiency and precision according to the user settings.
Xu et al. proposed the MCT data structure to store bitstrings for efficient cardinality estimation.
Furthermore, they designed a DOSE algorithm to terminate the estimation protocol while guaranteeing the estimation accuracy, dynamically.
BF-DOSE further improves the efficiency of the DOSE algorithm by pruning non-target blocks.
Linoy et al. <cit.> proposed a new verifiable data structure, Authenticated Multi-Version Skip List (AMVSL), to support a range query of blockchain historical data.
AMVSL supports efficient data maintenance and is identified according to the version of data.
At the same time, AMVSL can also query across multiple versions.
Based on AMVSL, Linoy et al. implemented three types of range queries, which were used for single data version range querying, multiple data versions range querying, and multiple data versions all-key querying.
To reduce the query overhead, Gem^2-Tree <cit.> proposed new data structures Merkle B-Tree (MBT) and Suppressed MBT (SMBT) to reduce the consumption of queries and data updating.
On the basis of MBT and SMBT, Gem^2-Tree further proposed a two-layer index structure and optimized the cost of maintaining data according to the data distribution.
Although the smart contract-based approach is more flexible, the storage and computing overhead of the contract must be addressed.
Pei et al. <cit.> offloaded the inner-chain overhead to the inter-chain.
The MST they proposed can be used to organize inner-chain transactions efficiently, and store information such as retrievable semantics and locations on the chain in the form of transactions.
The inter-chain part stores the original data and maintains a mapping relationship with the inner-chain data.
LVQ <cit.> proposed a verifiable query scheme based on the Bloom Filter (BF), which provides an efficient storage scheme for light nodes.
LVQ improves the Bitcoin system by adding the hash of BF to the block header, which certifies the correctness of BF.
LVQ introduces a novel data structure Bloom filter integrated Merkle Tree (BMT) for merging BF to further reduce the communication overhead of light nodes.
At the same time, LVQ's Stored Merkle Tree (SMT) data structure makes up for the false positive problem of BF and can reduce communication overhead by proving the absence of data.
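A Bloom filter of the kind LVQ relies on can be sketched in a few lines: membership tests never miss a stored element, but they may return false positives, which is precisely the gap the SMT structure is introduced to close. The filter size and hash construction below are arbitrary choices for illustration, not LVQ's parameters.

import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item: bytes):
        # Derive k bit positions by hashing the item with k different prefixes.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for txid in [b"tx-01", b"tx-02", b"tx-03"]:
    bf.add(txid)
print(bf.might_contain(b"tx-02"))  # True: stored elements are always found
print(bf.might_contain(b"tx-99"))  # usually False, but false positives are possible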
VQL adds an intermediate layer (which can be provided by cloud service providers) between the blockchain system and the application layer to provide trustworthy query services.
VQL also synchronizes data from underlying databases and reorganizes the data in cloud servers.
In order to ensure the verifiability of the query service, VQL uses encrypted fingerprints to verify the data in the middle layer.
The encrypted fingerprint is embedded in the blockchain system to ensure that it cannot be tampered with.
The Suppressed Merkle inverted index in <cit.> is beneficial for light nodes with logarithmic maintenance costs.
The Chameleon inverted index further reduces the maintenance costs.
When considering the improvement of query decentralization, FalconDB <cit.> is a blockchain-based database designed to balance the performance of shared databases, in terms of security, efficiency, and compatibility.
FalconDB tolerates (N-1)/3 malicious participants while ensuring that clients can check historical data and recover from malicious tampering.
FalconDB builds a verifiable data structure to support light nodes in verifying the results returned by the full nodes.
The corresponding incentive model of FalconDB promotes the positive behavior of each node in the system.
Li et al. <cit.> proposed a TEE-based decentralized search scheme, DeSearch, that simultaneously ensures query verifiability and privacy protection.
DeSearch's witness mechanism ensures the correctness and integrity of query results and reduces the computational overhead of query and proof by reusing historical queries.
DeSearch has designed a public information service that helps executors share data and agree on a data snapshot.
At the same time, DeSearch can also protect query privacy, which includes the protection of the query methods and returned data volume.
§.§ Inter-chain Solutions
§.§.§ Storage
Another way to improve the scalability of blockchain data is through inter-chain storage. The core idea is to store the data on third-party servers, and only store the data summary on the chain. The current research mainly falls into two categories. The first designs verification strategies based on blockchain and implements inter-chain processing and inner-chain verification of data. The second integrates the blockchain into the database, to achieve access control and data management of a hybrid distributed database. The main relevant work is presented in Table <ref>.
For the first type, Xu et al. <cit.> proposed a stateless blockchain system, SlimChain, which can scale transactions through inter-chain storage and parallel processing. The main idea is to use inter-chain storage nodes to store ledger states and simulate smart contract execution. It maintains short-term commitments to ledger states only inner-chain, and includes inter-chain smart contract execution, inner-chain transaction verification, and state commitments. SlimChain optimizes network transmission and further improves the system scalability through sharding. Zhang et al. <cit.> proposed a hybrid storage model in which only a small amount of metadata is stored on the chain; the original data is outsourced to inter-chain storage service providers. A new ADS scheme for certified keyword search in hybrid storage blockchains was also proposed in the paper. This scheme maintains only a part of the ADS structure and can perform secure updating using logarithmic-sized cryptographic proofs. Experimental results show that this hybrid storage model using the new ADS scheme effectively reduces the average maintenance cost on the chain without sacrificing too much query performance. Poon et al. proposed a decentralized system called the Lightning Network <cit.>. Its core idea is to send transactions, which should originally have been settled on the chain through a micro-payment channel network (payment or transaction channel) off the chain, with the value transferred outside the blockchain. The channel only needs to communicate with the Bitcoin network during "creation" and "closure", and maintains peer-to-peer communication at other times. The transaction content need not be put on the chain. When a dispute arises between the two parties in the transaction, it is arbitrated on the chain. The fairness and security of inner-chain arbitration ensure that malicious users of inter-chain transactions will not act maliciously. This way, the network can be scaled. Kalodner et al. <cit.> designed a cryptocurrency system called Arbitrum that supports smart contracts. In Arbitrum, users can code and implement smart contracts, and run them as virtual machines (VMs). Arbitrum uses incentivization mechanisms that allow users to reach a consensus on what the VMs will do inter-chain. In other words, VMs can create and complete execution without leaking the VM execution process. Therefore, Arbitrum validators need to track only the hash value of the VM state, and not the entire state. If a party acts maliciously, the validators will identify and punish the dishonest party through a challenge-based protocol. Moving the verification of VM behavior inter-chain significantly improves the scalability and privacy.
Regarding the second type, Ge et al. <cit.> conducted research and qualitatively compared five existing hybrid blockchain database systems. The first system is Veritas based on Apache Kafka, which targets CFT application scenarios; the second one is Veritas based on Tendermint, which targets BFT application scenarios; the third one is BlockchainDB, which uses Ethereum as the underlying blockchain system; and the final two systems are the default version of BigchainDB which uses Tendermint and its optimized version. The default version has a blockchain pipeline function, while the optimized version adds parallel transaction verification based on the pipeline function. Experimental analyses show that Veritas (Kafka) performs better than the other systems and CFT applied to distributed databases performs better than BFT applied to blockchain-specific scenarios. El-Hindi et al. <cit.> proposed a shared database on the blockchain called BlockchainDB, which uses the blockchain as the storage layer and introduces a database layer on top of it. BlockchainDB extends the blockchain through classical data management techniques and standardized query interfaces, to promote the adoption of the blockchain in data-sharing use cases. Experimental results show that BlockchainDB can provide a throughput that is two orders of magnitude higher than that of the native blockchain, improving the performance and scalability of the blockchain. HDFS is vulnerable to attacks from malicious users and participating nodes, and cannot provide a trusted lineage mechanism. As a remedy, Konsta et al. <cit.> proposed Clouseau, a system that integrates HDFS with the Ethereum blockchain. The blockchain provides verifiable integrity on top of HDFS and acts as a security coordinator, supplementing the existing HDFS Namenode. In addition, to ensure performance, Clouseau maintains minimal information on the chain, so that the system will not incur significant overhead on the critical paths of read/write operations. During the system demonstration, attendees can interact with Clouseau, disrupt data, and witness how Clouseau detects malicious behavior. Grabis et al. <cit.> proposed an efficient method for distributed data storage and data sharing. The main idea was to use blockchain to control access to personal data and use a knowledge base to improve retrieval efficiency. The conceptual model and data management process were elaborated, and a prototype was developed. This paper compared this prototype with inner-chain storage techniques, and experimental results showed that this approach consumed less storage space and allowed faster data retrieval. Peng et al. <cit.> proposed FalconDB, which enables efficient and secure collaboration in all aspects of the database in the case of limited hardware resources. FalconDB uses a database server with a validation interface accessible to the client, and stores digests on the blockchain for query/update authentication. Using blockchain as a consensus platform and distributed ledger, FalconDB can work in situations where there is mutual distrust. At the same time, FalconDB incurs minimal storage costs for each client and provides any available, real-time, and concurrent access to the database. Therefore, FalconDB ensures that individual users can participate in collaborations with high efficiency, low storage costs, and blockchain-level security guarantees.
§.§.§ Query
In order to further improve the scalability of the blockchain query, some works have started exploring multi-chain queries.
Han et al. <cit.> proposed the Vassago system to achieve fast traceability of cross-chain transactions.
The basic idea of Vassago is to save the dependencies of cross-chain transactions and then perform further queries based on the dependencies.
Vassago consists of a two-tier architecture, including the dependency blockchain and transaction blockchain, which store dependency and transaction information, respectively.
The transaction dependencies ensure the verifiability of the query results and provide the possibility of executing query tasks in parallel.
Qanaat <cit.> is a multi-enterprise-oriented blockchain system that ensures private and secure sharing of multi-enterprise business data.
Qanaat designs layered data models and stores data collections separately.
Data collections are shared only when necessary (enterprise collaboration), which is also the smallest subset of data.
To ensure data consistency, including local and global data consistency, Qanaat builds a DAG-based data structure for each enterprise.
Such a scheme efficiently organizes internal and cross-enterprise data and improves the scalability of the blockchain on the basis of protecting data confidentiality.
§ PROTOCOL SCALABILITY
Blockchains involve multiple protocols during operation.
In a blockchain system, as shown in Fig. <ref>, the broadcasting protocol is responsible for the transmission of information between nodes.
The scalability of the broadcasting protocol has a significant effect on the reliability and performance of the blockchain system.
If the broadcasting protocol fails to meet the demands of high concurrency and large-scale transactions, it can affect the efficiency and reliability of transactions.
Transaction execution refers to the process where each node updates new transaction data in its local ledger.
The speed and efficiency of transaction execution are important factors in the scalability of the blockchain system, especially under high load and frequency.
The scalability of inter-chain protocols actually refers to the interoperability of cross-chain protocols.
The scalability of the protocol, built upon data scalability, further provides performance, liquidity, and security support for architecture scaling.
In the end, they collectively ensure the scalability of the entire blockchain system ecosystem.
In the process of interoperation between blockchains, the notary mechanism ensures the reliability and tamper-proofing of cross-chain transactions by introducing multiple notaries.
The notary mechanism can enhance the stability of cross-chain interaction and provide reliable support for cross-chain systems.
The payment channel technology facilitates high-frequency and small-value transactions by building point-to-point channels between blockchains.
The payment channels can reduce the confirmation time and transaction cost of cross-chain transactions and significantly improve the transaction speed and scalability.
Atomic swaps can ensure the safety of exchanges between two chains and effectively reduce the transaction cost in the intermediate process.
Similar to payment channel technology, atomic swaps can reduce the confirmation time and transaction cost of cross-chain transactions while enhancing the scalability of cross-chain interaction.
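Most atomic swaps are built on hashed time-lock contracts (HTLCs). The minimal Python sketch below shows the claim/refund logic and how revealing the hash preimage on one chain lets the counterparty claim on the other; the field names and the wall-clock modeling of expiry are simplifying assumptions rather than any particular chain's contract interface.

import hashlib, time

class HTLC:
    """Funds claimable by the receiver with the hash preimage, or refundable after a deadline."""
    def __init__(self, sender, receiver, amount, hashlock, timeout_s):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hashlock = hashlock
        self.deadline = time.time() + timeout_s
        self.settled = False

    def claim(self, preimage: bytes):
        assert not self.settled and time.time() < self.deadline
        assert hashlib.sha256(preimage).hexdigest() == self.hashlock, "wrong secret"
        self.settled = True
        return f"{self.amount} paid to {self.receiver}"

    def refund(self):
        assert not self.settled and time.time() >= self.deadline
        self.settled = True
        return f"{self.amount} returned to {self.sender}"

secret = b"alice-secret"
lock = hashlib.sha256(secret).hexdigest()

# Alice locks coins for Bob on chain A; Bob locks coins for Alice on chain B,
# using the same hashlock but a shorter timeout so Alice must reveal first.
htlc_a = HTLC("alice", "bob", 5, lock, timeout_s=7200)
htlc_b = HTLC("bob", "alice", 100, lock, timeout_s=3600)

print(htlc_b.claim(secret))  # Alice claims on chain B, revealing the secret
print(htlc_a.claim(secret))  # Bob reuses the revealed secret to claim on chain A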
§.§ Inner-chain Solutions
§.§.§ Propagation Protocols
Using more efficient broadcasting protocols can prevent malicious attacks, and improve the system's trustworthiness and scalability <cit.>. Therefore, optimizing broadcasting protocols is an important direction to enhance the scalability of blockchain systems. As a typical distributed network, blockchain architecture creates a significant amount of communication between its nodes. Communication mainly occurs through two methods: network messaging, which enables the nodes in the network to achieve consensus, and block delivery, which is critical for competitive blockchain platforms like Bitcoin that rely on faster block delivery to gain a competitive advantage. In blockchain systems, the efficiency of network message transmission is a key determinant of performance. Researchers have proposed many methods to speed up the spread of blockchain networks, as shown in Table <ref>. These methods, including block compression, optimizing broadcasting protocols, and using intermediaries and reputation values, aim to reduce the time taken for messages to be confirmed and the network bandwidth they use. This can make the blockchain network more effective and efficient.
Hu et al. <cit.> introduced a novel peer-to-peer block transmission mechanism for inter-node communication in the blockchain network. This approach leveraged Dino blocks that, when received by a network node, allow the node to recover the original block by utilizing transactions from the node's local transaction pool, thereby reducing the block's network transmission capacity requirements and propagation time. Additionally, Dino communicates block-building rules, instead of compressed block data, thus offering greater scalability to handle blocks containing a large number of transactions.
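The bandwidth saving behind such schemes can be sketched as follows: the sender transmits only short transaction identifiers, and the receiver reconstructs the full block from its local transaction pool, fetching only the transactions it is missing. This is a generic compact-relay sketch with invented message fields, not Dino's actual block-building rules.

import hashlib

def short_id(tx: bytes) -> str:
    return hashlib.sha256(tx).hexdigest()[:12]

def make_compact_block(txs):
    """Sender side: ship short ids instead of full transactions."""
    return {"short_ids": [short_id(tx) for tx in txs]}

def reconstruct(compact, mempool, fetch_missing):
    """Receiver side: resolve ids against the local pool, requesting only the gaps."""
    by_id = {short_id(tx): tx for tx in mempool}
    return [by_id.get(sid) or fetch_missing(sid) for sid in compact["short_ids"]]

all_txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
receiver_mempool = [b"tx-a", b"tx-b", b"tx-d"]  # tx-c has not propagated yet

requested = []
def fetch_missing(sid):
    requested.append(sid)  # one extra round trip per missing transaction
    return next(tx for tx in all_txs if short_id(tx) == sid)

block = reconstruct(make_compact_block(all_txs), receiver_mempool, fetch_missing)
print(block == all_txs, "extra fetches:", len(requested))  # True extra fetches: 1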
In competitive blockchain systems, such as Bitcoin, efficient block propagation is critical to performance. The ability to propagate blocks quickly determines the effectiveness of block production in such systems. Bi et al. <cit.> suggested a methodology for selecting the nearest neighbor nodes in a blockchain network by estimating the transmission delay between nodes. By selecting broadcasting nodes based on message transmission distance, messages can be transmitted throughout the network efficiently and in a timely manner.
Zhang et al. <cit.> proposed a new approach to boost the throughput of the Bitcoin system without altering the core consensus protocol components. The proposed technique involves replacing the store-and-forward relaying system with a more efficient cut-through strategy and improving the block propagation efficiency during transmission by utilizing erasure code techniques. The most significant advantage of this protocol is its ability to enhance the performance of Bitcoin without making any alterations to the data structure or cryptographic functional components of the system. Consequently, the proposed protocol can be easily integrated into the existing Bitcoin blockchain.
In an attempt to enhance the scalability of blockchain networks, Chen et al. <cit.> proposed the GVScheme, which introduces a guarantor function to ensure that blocks are propagated in a reliable manner. When a node receives a block from a guarantor, the order of block confirmation and propagation is determined by the trust value of the guarantor. Thus, this technique minimizes both block validation and propagation delays in the network. As noted in <cit.>, the proposed strategy requires minimal modifications to the existing protocol and can be seamlessly integrated into the current blockchain networks.
In distributed systems, communication bandwidth and propagation latency are usually the two main physical network properties that constrain blockchain protocols. Bagaria et al. <cit.> introduced Prism, a new PoW-based blockchain protocol that optimizes system performance and maximizes the utilization of the physical bandwidth by means of a structured DAG model. The DAG model decouples the consensus process into blocks with different roles and arranges them accordingly.
According to Wang et al. <cit.>, enhancing broadcasting performance can significantly improve the performance of blockchain systems from the perspective of blockchain broadcasting protocols. Unfortunately, the current broadcast protocols in blockchain, such as Gossip and distributed hash table, fail to meet the requirements of low redundancy and low propagation delay. As a result, they proposed a new broadcasting mechanism, named Swift, which optimizes the P2P topology building and broadcasting algorithm in structured networks through unsupervised learning and greedy algorithms. Swift efficiently minimizes the propagation latency of blockchain P2P networks while reducing the waste of redundant bandwidth.
Ayinala et al. <cit.> proposed the PiChu method for improving the scalability of blockchain networks by accelerating block propagation through pipeline technique and the design of verification blocks. The acceleration of block propagation decreases the probability of mining intervals and forks, leading to an increase in throughput. The PiChu architecture can be applied immediately to the existing blockchain networks.
Zhao et al. <cit.> introduced a transaction selection, sorting, and synchronization algorithm that accelerates consensus among nodes. However, transactions that rely on the coinbase address, cannot be pre-executed or pre-verified because the coinbase address of the next block miner is unpredictable. The authors proposed an algorithm to handle unresolvable transactions to attain a consistent and high TPS scheme. This scheme adopted a transmission process similar to that of PiChu, wherein most transactions are not required to be verified and transmitted during block propagation, removing the dependence of propagation time on the number of transactions in a block, and fully enabling the system to be TPS scalable.
Zhang et al. <cit.> introduced the concept of reputation and proposed a unique relaying protocol called RepuLay to accelerate the transmission of network transactions. The reputation system was intended to aid nodes in identifying unreliable and inactive neighbors. Each node maintains a local list of all its neighbors' reputations and processes transactions using a probabilistic technique that relies on a reputation mechanism. More precisely, a relay node examines a transaction with a defined probability, upon receiving it. Subsequently, the relay node transmits both legal and unvalidated transactions to multiple neighbors, with each neighbor having a chance of being selected as a recipient.
§.§.§ Transaction Parallelism
Transaction execution is of vital importance in the operation of blockchain systems. Slow transaction speeds may severely limit the system's scalability. To address this issue and improve transaction execution speed and system scalability, researchers have proposed a combination of parallel and concurrent techniques. Furthermore, to mitigate the issue of high concurrent transaction conflict rates in smart contract scenarios, relevant research has incorporated concurrency control and graph analyses to avoid transaction conflicts while enhancing parallel execution efficiency.
There are mainly two cases of transaction execution model, order-execute model, and execute-order model. Various solutions have been proposed in related studies to address the problem of transaction parallelism, as shown in Table <ref>.
Daniël et al. <cit.> contended that apart from consensus, transaction execution is the second-most critical module that influences blockchain performance and security. The authors collected historical data from seven blockchain systems, including Ethereum, Bitcoin, and Zilliqa, and analyzed them using two metrics (single-transaction conflict rate per block and group conflict rate per block). They found that UTXO-based blockchains have more concurrency than account-based ones. Analytical models were proposed for single transaction concurrent execution and group concurrent execution to estimate the transaction execution speed for a given level of concurrency. The models were validated on the seven blockchain systems.
Asynchronous and Concurrent Execution of Complex Smart Contracts (ACE) was developed by Wüst et al. <cit.> with the goal of enabling complex smart contract execution on permissionless blockchains through an improved concurrency control mechanism and flexible trust model. ACE employs an inter-chain execution method whereby the contract creator specifies a group of service providers to independently execute the contract code, separate from the consensus layer. ACE distinguishes itself from prior solutions as it enables secure smart contract initiation of contract execution across different service providers whilst allowing secure concurrency control. ACE is the first of its kind to exhibit the capability of supporting the inter-chain execution of interactive smart contracts with flexible trust assumptions.
Dickerson et al. <cit.> introduced a new technique by which miners and verifiers could run smart contracts that did not conflict, in parallel. This method used a deterministic fork-join <cit.> program that stores a serializable concurrent scheduling sequence. The verifiers use the scheduling sequence to execute and verify the contracts.
Garamvölgyi et al. <cit.> conducted a thorough analysis of the historical transaction execution in Ethereum and found that smart contracts often face obstacles in achieving concurrent execution. To overcome these obstacles, they proposed a conflict resolution technique that involves using partition counters and swappable instructions. This approach can enhance the execution speed of contract transactions. Furthermore, they introduce a new scheduling scheme, OCC-DA, which is an optimistic concurrency control scheduler with deterministic aborts, designed to enable the use of OCC scheduling in permissionless blockchains.
Amiri et al. <cit.> contended that most existing blockchains are inadequate in addressing the potential issues of distributed system applications and have serious architectural limitations. To address these concerns, they proposed the OXII paradigm, which is an approach that allows concurrency control of smart contract transactions by constructing a dependency graph to identify transaction conflicts and determine the order of execution. This strategy allows permissioned blockchains to support concurrent transaction execution. They also proposed the ParBlockchain prototype under the OXII paradigm, which was experimentally verified to be suitable for smart contract transaction scenarios with varying levels of competition.
SlimChain <cit.> utilizes the concept of inter-chain parallel execution and inner-chain state confirmation. This approach moves transactions inter-chain to be executed in parallel, while also ensuring secure inter-chain execution through the use of TEE. To address the challenge of arbitrary commit orders, SlimChain applies OCC along with Serializable Snapshot Isolation (SSI), employing the heuristic approach presented in <cit.> to achieve efficient concurrency control.
RainBlock <cit.> improves the performance of public blockchains without changing the original PoW consensus logic by eliminating the Input/Output (I/O) bottleneck in transaction processing, which enables miners to process more transactions simultaneously. The main contributions of RainBlock are twofold: 1) the proposal of the RainBlock architecture to eliminate I/O from the critical path of transaction processing, and 2) the design of a distributed multi-version DSM-Tree-based data structure that efficiently stores the system state.
Chen et al. <cit.> identified two challenges associated with transaction parallelism in the existing blockchain systems: 1) differences in the order of concurrent execution across different nodes, and 2) the inability of the state tree to support efficient concurrent updates. To address these challenges, they propose the Parallel Execution Engine Protocol (PEEP), which utilizes a deterministic concurrency mechanism for parallel execution with a predefined serial execution order for fetches. PEEP provides parallel update operations for the state tree and can guarantee compatibility with various Merkle tree-based state trees.
Jin et al. <cit.> introduced a novel two-stage concurrency control protocol that optimized the two-stage-style concurrent execution process of smart contracts. To execute transactions, the system generated a transaction-dependent graph with high parallelism for the verifier and designed a graph partitioning algorithm to split the graph into several subgraphs. This maintains parallelism and substantially reduces communication costs. Additionally, a deterministic replay protocol was proposed to facilitate faster concurrent scheduling. Integration of the proposed two-stage protocol with the Practical Byzantine Fault Tolerance (PBFT) was suggested to further improve optimization.
Xiao et al. <cit.> aimed to enhance the system throughput and decrease processing latency by investigating address dependencies among transactions. To achieve this goal, they proposed an efficient DAG-based blockchain concurrency control scheme, named NEZHA. NEZHA intelligently builds an address-based conflict graph (ACG), using address dependencies as edges, to capture all conflicting transactions. The authors apply a hierarchical ranking algorithm to generate a total order among transactions by ranking the transactions on each address based on the ACG and derived ranking hierarchy.
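To make the idea of address-level conflict analysis concrete, the sketch below builds a conflict graph from declared read/write sets and derives a layered, parallel-friendly execution order. It is a simplified illustration in Python; the transaction format and function names are our own assumptions rather than the interface of NEZHA or any other surveyed system.

from collections import defaultdict

# Hypothetical transactions: each declares the addresses it reads and writes.
txs = {
    "t1": {"reads": {"A"}, "writes": {"B"}},
    "t2": {"reads": {"B"}, "writes": {"C"}},
    "t3": {"reads": {"D"}, "writes": {"E"}},  # touches no shared address
}

def build_conflict_graph(txs):
    """Add a directed edge u -> v whenever v reads or writes an address that u writes."""
    edges = defaultdict(set)
    ids = list(txs)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            shared = txs[u]["writes"] & (txs[v]["reads"] | txs[v]["writes"])
            if shared:
                edges[u].add(v)  # keep submission order for conflicting pairs
    return edges

def schedule(txs, edges):
    """Kahn-style topological sort: transactions in the same layer can run in parallel."""
    indeg = {t: 0 for t in txs}
    for u in edges:
        for v in edges[u]:
            indeg[v] += 1
    layer = [t for t in txs if indeg[t] == 0]
    layers = []
    while layer:
        layers.append(layer)
        nxt = []
        for u in layer:
            for v in edges[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        layer = nxt
    return layers

print(schedule(txs, build_conflict_graph(txs)))  # [['t1', 't3'], ['t2']]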
§.§ Inter-chain Solutions
§.§.§ Notary
As an inter-chain scaling protocol, the notary mechanism can increase the cross-chain contact stability and offer trustworthy support for the cross-chain system.
The notary mechanism is classified into two approaches: centralized notary and decentralized notary.
Designed by Ripple <cit.>, the Interledger Protocol enables two distinct ledger systems to seamlessly exchange currencies with one another via an intermediary known as a "connector," which, in practice, operates as a centralized notary.
This protocol eliminates the need for trust between the parties involved in the transaction, and importantly, ensures that the connector neither loses nor misappropriates funds.
As for centralized notary, PalletOne <cit.> supports multiple chains in smart contracts, through jury consensus and adapters to operate on different blockchains, eliminating the need for parallel chains.
Users can utilize PalletOne passes as transaction fees to incentivize the jury in driving the PalletOne technology.
UniswapV3 <cit.> is an unregulated, automated market-making protocol built on the Ethereum blockchain.
It overcomes the inherently low capital efficiency of constant-function market makers, improves the accuracy and convenience of price oracles, and provides a more flexible fee structure.
On the contrary, Corda <cit.> designed a highly available notary cluster that could include multiple worker nodes distributed to multiple data centers, with a database cluster on the back end to hold transaction information.
This notary cluster as a whole provides services to the public.
As a decentralized notary, it is composed of multiple working nodes that form a distributed architecture, which elects master nodes to provide services and ensure data consistency through a distributed consensus mechanism.
Tokrex <cit.> brings a completely decentralized approach to the interoperability of blockchain systems.
It is a meta-system that enables the exchange of assets between different blockchains (cross-chain) as well as within a blockchain (intra-chain) in a real-time setting.
0x <cit.> designed a protocol that facilitated low-friction peer-to-peer exchange of ERC20 tokens on Ethereum.
It was intended to serve as an open standard and common building block, driving interoperability among decentralized applications that incorporate exchange functionality.
An intermediary role in the 0x protocol, called a relayer, helps broadcast orders and can choose to charge a fee per facilitated transaction.
§.§.§ Payment Channel
Payment channel technology, also known as micropayment channels, reduces transaction fees and improves system throughput by establishing inter-chain peer-to-peer channels to aggregate small transactions with high frequency.
The concept of payment channels was introduced by LN <cit.> as a decentralized system where transactions are sent through a network of micropayment channels.
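The mechanism can be illustrated with a deliberately simplified Python sketch of a two-party channel: funds are locked when the channel opens, arbitrarily many balance updates happen off-chain, and only the final state is settled on-chain. The class and method names are illustrative assumptions; real channels additionally require signed state updates, routing, and dispute resolution.

class PaymentChannel:
    """Toy two-party channel: open with deposits, update balances off-chain, settle once.
    (Illustrative only: real channels also need co-signed states and dispute handling.)"""

    def __init__(self, deposit_a, deposit_b):
        self.balances = {"A": deposit_a, "B": deposit_b}   # funds locked on-chain at opening
        self.version = 0                                   # number of off-chain updates so far
        self.closed = False

    def pay(self, sender, receiver, amount):
        # An off-chain update: both parties agree on the new state, nothing is broadcast.
        assert not self.closed and self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.version += 1

    def settle(self):
        # Only the final state is committed on-chain, so many payments cost one transaction.
        self.closed = True
        return self.version, dict(self.balances)

channel = PaymentChannel(deposit_a=1000, deposit_b=1000)
for _ in range(1000):
    channel.pay("A", "B", 1)        # 1000 micro-payments, zero on-chain transactions
print(channel.settle())             # (1000, {'A': 0, 'B': 2000})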
Raiden Network <cit.> is an implementation of payment channel technology specifically designed for Ethereum.
The Raiden Network preserves the security guarantees of the blockchain system through peer-to-peer payments and margin deposits on the Ethereum network.
Raiden nodes interact with Ethereum nodes to facilitate transfers and communicate with other Raiden nodes, as well as with the Ethereum blockchain for managing margin deposits.
From the perspective of performance and resource overhead, Guo et al. <cit.> conducted a comprehensive evaluation of newly proposed protocols aimed at enhancing the performance of Lightning Networks (LNs) based on data collected over a 15-month period.
The study analyzed the success rate of payment routing and the level of decentralization, providing a comprehensive understanding of the network's mechanisms.
To mitigate the increased consumption of inner-chain resources caused by the gradual exhaustion of a substantial portion of payment channels within payment channel networks, Xu et al. <cit.> introduced OPRE, an optimal inter-chain recovery protocol for payment channels.
The designed protocol includes privacy-preserving features to address user privacy concerns, ensuring that the user's balance information remains undisclosed.
The protocol achieved optimal restoration of payment channels while ensuring robust privacy guarantees utilizing cryptography.
Seo et al. <cit.> proposed a two-layer structured aggregated payment request scheme to extend the bandwidth in response to the limited scalability provided by the LN, i.e., constrained by channel mobility and payment request bandwidth (currently 483 in each direction).
Wu et al. <cit.> introduced the notion of supernodes and the supernodes-based pooling to enhance the scalability of micropayments within a large Lightning Network (LN).
The supernodes, along with a subset of their neighboring non-super nodes, pool together to facilitate network partitioning within the LN.
To enhance the scalability of micropayments, the set of involved nodes is reduced, with only supernodes taking part in the search for and payment to other supernodes.
From a security perspective, Malavolta et al. <cit.> introduced a new attack on the existing payment channel network called a wormhole attack.
They also proposed a new encryption structure called the anonymous multi-hop lock (AMHL).
It started from the security analysis of the existing payment channel network and reported a new attack applicable to all major payment channel networks, which allows attackers to steal fees from honest middlemen along the way.
Additionally, the Lightning Network (LN) developers have implemented the authors' ECDSA-based AMHL in their payment channel network, thereby exemplifying the practicality, security, and privacy of this method in contemporary cryptocurrencies.
Furthermore, the team conducted a performance evaluation using a commercial machine, wherein experiments demonstrated the strong practicability of AMHL, with all operations completed in under 100 ms and introducing a communication overhead of less than 500 bytes.
In a separate study, Kappos et al. <cit.> provided a thorough analysis of the privacy of the LN and analyzed several attacks that exposed privacy, including the number of tokens owned by nodes and the recipients and payers in the state channel.
Biryukov et al. <cit.> developed a precise probing model that accounts for parallel channels, enabling comprehensive balance information extraction in multi-channel hops.
The model also quantifies the information gained by attackers and proposes an optimized algorithm for selecting probe amounts in multi-channel hops.
This paper showcases the efficiency of their approach using real-world data obtained from their own LN simulator focused on probing.
§.§.§ Atomic Swap
The atomic swap protocol was originally conceived to facilitate asset exchanges between distinct blockchain networks.
Its significance has grown substantially in the realm of protocol design for inter-chain scaling, owing to its inherent attributes.
Indeed, cross-chain atomic swaps are the result of a fusion of cryptographic technology, smart contracts, and specialized role design.
These swaps encompass several pivotal elements, encompassing incentive mechanisms, security considerations, and formalization aspects.
Herlihy et al. <cit.> pioneered the introduction of the atomic cross-chain asset swap protocol.
It constructs an interactive directed graph with designated leading nodes and employs hash time lock contracts.
A hash lock can only be unlocked if not timed out, with the provided secret "s," and sequential signatures from all nodes along the path to the leader.
Asset transfer is deemed complete when all hash locks unlock;
otherwise, assets are returned. However, this protocol assumes fixed, known elements, such as the exchange graph, leader node, and hash lock details, limiting flexibility.
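The hash-lock/time-lock logic at the core of such swaps can be sketched as follows. This is an illustrative Python model rather than an on-chain contract; the class, parameters, and the example flow are our own simplifications.

import hashlib, time

class HTLC:
    """Toy hash time-locked contract: claim with the preimage before the deadline,
    otherwise the locked asset is refunded to the sender."""

    def __init__(self, sender, receiver, amount, hashlock, timeout_s):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hashlock = hashlock                  # H(s), published when the swap is set up
        self.deadline = time.time() + timeout_s
        self.state = "locked"

    def claim(self, preimage):
        if self.state == "locked" and time.time() < self.deadline \
                and hashlib.sha256(preimage).hexdigest() == self.hashlock:
            self.state = "claimed"
            return self.receiver, self.amount     # receiver obtains the asset
        return None

    def refund(self):
        if self.state == "locked" and time.time() >= self.deadline:
            self.state = "refunded"
            return self.sender, self.amount       # sender recovers the asset
        return None

secret = b"s"                                     # chosen by the swap initiator
lock = hashlib.sha256(secret).hexdigest()
# One HTLC per chain with the same hash lock; revealing the secret to claim on one
# chain exposes it, allowing the counterparty to claim on the other chain in time.
htlc_chain_a = HTLC("Alice", "Bob", 5, lock, timeout_s=3600)
print(htlc_chain_a.claim(secret))                 # ('Bob', 5)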
Subsequently, the atomic swap protocol underwent continuous optimization in the context of serving inter-chain scalability.
From a design and performance optimization perspective,
Zamyatin et al. <cit.> conceived a trustless and efficient cross-chain atomic transaction framework, XCLAIM, along with its formal definition.
This framework aims to tackle existing issues associated with slow, inefficient, and costly cross-chain atomic transactions.
It facilitates cost-effective token exchange between Bitcoin and Ether, leveraging self-designed cryptocurrency-backed assets.
Furthermore, it offers flexibility for migration to other established systems and their cross-chain applications.
Zakhary et al. <cit.> introduced AC3WN, the first decentralized all-or-nothing atomic cross-chain commitment protocol.
This protocol achieves atomicity and commitment in AC2T by employing cryptographic commitment schemes based on hash locks for smart contract exchange and refund.
It ensures that all smart contracts either execute entirely or result in full refunds.
Notably, this is accomplished through the utilization of a decentralized witness network to coordinate AC2T, thus addressing the vulnerabilities of centralized solutions.
Thyagarajan et al. <cit.> devised a universal cross-chain atomic exchange protocol that enables the secure exchange of tokens between any target and source chains by relying only on transaction signature verification without resorting to any scripting language.
The protocol also supports secure exchange of tokens between multiple parties.
Tao et al. <cit.> presented a new mechanism called Unity, which ensures the atomicity and confidentiality of cross-chain transactions in the event of read or write failures by utilizing permission-controlled blockchains.
Specifically, when data is not the latest version, the 4PC protocol is employed to guarantee the confirmation or abort of cross-chain transactions.
When data is the latest version, enforcement of transactions is achieved using SSC-based smart contracts.
Glabbeek et al. <cit.> introduced a cross-chain payment protocol with guaranteed success by employing the formal specification of Asynchronous Timed Automata Networks (ANTA).
This approach is highly motivated as ensuring the success of payments is crucial for the reliability of cross-chain transactions.
Xue et al. <cit.> combined two alternative protocols aimed at creating more expressive and fault-tolerant cross-chain exchanges.
These protocols enable participants to propose multiple swaps simultaneously and complete a portion of them based on their individual requirements.
Participants express their needs using predicates, with each predicate capturing acceptable payout conditions for each participant.
The authors constructed redundant payment paths in a multi-path routing scheme, allowing for tolerance of deviations and failures among participants while ensuring the reliability and security of transactions.
Imoto et al. <cit.> designed and implemented a new cross-chain atomic transaction protocol with the help of signature information and improved the space complexity and the local time complexity.
Lys et al. <cit.> proposed a new protocol, R-SWAP, formalized for relays and adapters. The correctness of R-SWAP was demonstrated, and its performance in terms of cost and latency was analytically evaluated. The atomic exchange between Ethereum and Bitcoin and between Ethereum and Tendermint was implemented.
From the standpoint of protocol and security analysis,
Herlihy et al. <cit.> introduced the concept and practical implementation of cross-chain transactions as a solution to managing assets in complex distributed computing environments, particularly in adversarial business scenarios.
Additionally, the paper proposes a proof mechanism utilizing BFT consensus and Proof of Work consensus.
However, it's worth noting that the paper does not provide explicit experimental results or metrics but rather offers an overview of the methodology and principles.
Pillai et al. <cit.> presented the Burn-to-Claim protocol, which leverages a three-phase Proof-of-Burn protocol for asset transfer and interoperability.
It achieves asset transfer by generating transfer proofs on the source network and verifying proofs on the target network.
It has been empirically demonstrated that this method incurs lower computational costs and is integrated into the core blockchain protocol.
Xu et al. <cit.> proposed a game-theoretic model to study the strategic behavior of agents implementing cross-chain atomic trading based on HTLCs by representing the success rate of the transaction as a function of variables such as exchange rates, token prices, and their volatility.
It is found that collateralized deposits and agents dynamically adjusting exchange rates can improve transaction success rates.
Manevich et al. <cit.> introduced the "MPC in the head" technique into the cross-chain atomic exchange protocol, implementing a new cross-chain atomic exchange protocol that operated without the concept of global time and could be terminated by both parties at any time, further testing the practical performance of this zero-knowledge proof protocol.
Tsabary et al. <cit.> proposed MAD-HTLC and used a game-theoretic analysis to demonstrate its security, analyzing its overhead by instantiating it on the running Bitcoin and Ethereum blockchains.
Furthermore, the study explores the potential for miners to serve as the primary enforcers by modifying the standard Bitcoin client.
Li et al. <cit.> designed ZeroCross, a privacy-preserving cross-chain solution based on sidechains, designed to address issues such as the need for multiple payers to make simultaneous payments or fixed transaction amounts.
Leveraging sidechain mechanisms and state-of-the-art zero-knowledge proof protocols, this paper ensures the correctness of exchanges and protects transaction privacy.
It also designs key exchange and verification mechanisms to achieve fairness and confidentiality.
§ DISCUSSION
§.§ Architecture Scalability
Inner-chain.
Expanding the architecture of blockchain is a crucial research direction
in the blockchain domain.
On one hand, sharding involves cross-shard interaction and communication mechanisms, where researchers can explore efficient and secure ways for different shards to interact. This encompasses research areas such as executing smart contracts across shards and transferring data across shards to ensure the overall consistency and integrity of the blockchain. On the other hand, it is worth considering the integration of sharding technology and DAG structures, leveraging the advantages of both to construct a more efficient and secure blockchain architecture. This integration may bring about entirely new possibilities for future blockchain systems.
Inter-chain.
As one of the most prominent blockchain technologies currently, BoBs achieve the scaling of blockchain architecture through inter-chain interoperability.
Firstly, the scalability of BoBs is severely constrained by security considerations.
For instance, while the IBC of Cosmos is designed to be more flexible than the XCMP of Polkadot, with each zone capable of independent validation, it also entails higher security risks than Polkadot.
Secondly, there is a notable absence of systematic research regarding the number of honest nodes and validators in BoBs.
This significantly hinders BoBs' scaling into the field of inter-chain information transmission.
Furthermore, differences in smart contract languages and execution environments in inter-chain scenarios make it challenging to achieve smart contract state migration.
This lack of generality affects the applicability of BoBs.
§.§ Data Scalability
Inner-chain.
The capacity of blockchain network nodes is limited. Methods such as block pruning and cooperative storage can indeed alleviate the storage burden on nodes, but this can lead to an increase in query cost. Therefore, in future research on inner-chain data, a trade-off between storage cost and query cost needs to be considered. By designing more efficient storage strategies and query structures, the goal is to achieve storage effectiveness within the tolerance of query cost. Additionally, considering that the participants in a blockchain network are people, there are bound to be social attributes and relationships among users. Transferring knowledge from the field of social networks to the blockchain may provide clever solutions to some challenging problems.
Inter-chain.
Accessing data inter-chain can indeed greatly reduce the storage cost of blockchain nodes, but it increases the risk of data security. For inner-chain nodes, it is impossible to ensure the security and integrity of the data they have not stored, especially when the data is stored inter-chain. Therefore, in the research on inter-chain data, it is necessary to pay more attention to the security and integrity of the data. This requires the introduction of cryptographic knowledge to guarantee that the data stored inter-chain cannot be tampered with. When nodes retrieve data, proof of data integrity should be provided to ensure that the retrieved content is complete. Additionally, the introduction of privacy computing can ensure the privacy and security of users, making the data storage and retrieval process of blockchain more secure.
§.§ Protocol Scalability
Inner-chain.
Enhancing the performance and scalability of blockchain systems relies significantly on the optimized parallel execution of transactions and propagation protocols.
Future research could focus on the following areas.
In terms of parallel execution, increasing the degree of parallelism in parallel execution adds complexity to maintaining database consistency, making the resolution of read/write conflicts, and ensuring consistency for concurrent transactions of utmost importance.
Additionally, parallel execution of transactions can introduce uncertainty and challenges in ensuring the consistency and correctness of transaction results.
Thus, it is imperative to research effective methods for accurately and sequentially processing concurrently executed transactions.
For propagation protocols, the current solution still faces challenges such as excessive bandwidth usage and high latency.
Future research efforts should focus on optimizing the broadcasting protocol in terms of transmission methods, content, and other aspects.
These optimizations can help reduce the network transmission burden and enhance the speed and reliability of transaction broadcasting.
Inter-chain.
Although atomic swap protocols have emerged as a pivotal technology for expanding inter-chain functionality due to their trustlessness and practicality, they continue to confront a series of challenges.
From the perspective of capital flow, extant atomic swap protocols exhibit inherent vulnerabilities stemming from the design of escrow contracts.
This vulnerability leads to a pronounced risk of fund immobilization and transaction unfairness.
A promising research avenue involves the exploration of non-interactive cryptographic techniques to ensure the high liquidity of capital and low latency in transactions.
From the standpoint of malicious behavior tolerance, the current research has less engagement.
Achieving an elevated transaction success rate while preserving absolute atomicity represents a vital topic.
As for scalability, the current atomic swap protocols predominantly center on pairwise exchanges between two parties, significantly limiting the inherent scalability of these protocols.
Devising equitable and efficient multi-party atomic swap protocols becomes an intriguing challenge.
The combined potential of distributed signatures and multi-party secret sharing offers a compelling avenue for resolution.
§ CONCLUSION
This survey provides a novel summary of the existing works on blockchain scalability from the architecture, data, and protocol perspectives. To analyze the techniques for improving blockchain scalability more clearly, we classified the existing works innovatively into inner-chain and inter-chain categories within each section. Finally, we summarized the existing efforts in evaluating scalability to validate the effectiveness of the scalability improvements. We hope that this survey can help readers gain a comprehensive understanding of blockchain scalability, encourage further exploration of strategies to enhance blockchain scalability, and contribute to the development of blockchain technology.
|
http://arxiv.org/abs/2409.02640v1 | 20240904121522 | Linear Convergence in Hilbert's Projective Metric for Computing Augustin Information and a Rényi Information Measure | [
"Chung-En Tsai",
"Guan-Ren Wang",
"Hao-Chung Cheng",
"Yen-Huan Li"
] | math.OC | [
"math.OC",
"cs.IT",
"math.IT"
] |
§ ABSTRACT
Consider the problems of computing the Augustin information and a Rényi information measure of statistical independence, previously explored by Lapidoth and Pfister (IEEE Information Theory Workshop, 2018) and Tomamichel and Hayashi (IEEE Trans. Inf. Theory, 64(2):1064–-1082, 2018).
Both quantities are defined as solutions to optimization problems and lack closed-form expressions.
This paper analyzes two iterative algorithms: Augustin’s fixed-point iteration for computing the Augustin information, and the algorithm by Kamatsuka et al. (arXiv:2404.10950) for the Rényi information measure.
Previously, it was only known that these algorithms converge asymptotically.
We establish the linear convergence of Augustin’s algorithm for the Augustin information of order α∈ (1/2, 1) ∪ (1, 3/2) and Kamatsuka et al.’s algorithm for the Rényi information measure of order α∈ [1/2, 1) ∪ (1, ∞), using Hilbert’s projective metric.
^∗Both authors contribute equally to this work.
§ INTRODUCTION
Denote by Δ ( [d] ) the set of probability distributions over the finite set [d] := { 1, …, d }.
For any α∈[open right]01∪[open]1∞, the order-α Augustin information is defined by the following optimization problem <cit.>:
min_x∈Δ([d]) f_Aug(x),
f_Aug(x) := 𝔼_p∼ P[ D_α( p ∥ x ) ],
where P is a given probability distribution over Δ([d]), and
D_α(p ∥ q) := 1/(α - 1)·log∑_s∈ S p(s)^α q(s)^1-α,
∀ p,q∈Δ(S)
is the order-α Rényi divergence.
The Augustin information characterizes, e.g., the cutoff rate, the strong converse exponent, and the error exponent in the channel coding problem <cit.>.
When α = 0, the optimization problem (<ref>) specializes to the definition of the log-optimal portfolio <cit.>, and is equivalent to the definition of the maximum-likelihood estimate in Poisson inverse problems
<cit.>.
The optimization problem (<ref>) does not admit a closed-form expression.
While the optimization problem is convex, the objective function violates the standard smoothness assumption in the optimization literature.
Therefore, even the convergence guarantees of projected gradient descent, arguably the simplest convex optimization algorithm, do not directly apply <cit.>.
<cit.> proposed the
following
fixed-point iteration to solve the optimization problem (<ref>):
x_t+1 = Z_t^-1· x_t ⊙ (-∇ f_Aug(x_t)),
∀ t ∈ℕ ,
where Z_t is the normalizing constant,
ensuring that x_t + 1 remains a probability distribution,
and ⊙ denotes the entry-wise product.
The algorithm was later rediscovered by <cit.>.
When α = 0, this fixed-point iteration coincides with Cover's method for computing the log-optimal portfolio <cit.>, and is equivalent to the expectation maximization algorithm for solving Poisson inverse problems <cit.>.
<cit.> proposed an alternating minimization method whose iteration consists of two steps.
Combining the two steps yields Augustin's fixed-point iteration.
Recently, <cit.> proposed an algorithm similar to Augustin's fixed-point iteration to compute a Rényi information measure
of statistical independence, which was explored
by <cit.> and <cit.>.
For
any
α∈[0,1)∪(1,∞), this order-α Rényi information measure is defined
by the following optimization problem:
min_x∈Δ([m])min_y∈Δ([n]) f_Ren(x,y),
f_Ren(x,y) := D_α( p ∥ x ⊗ y ),
where p is a given probability distribution over [m] × [n] and ⊗
denotes the tensor product.
The Rényi information measure emerges in the error exponent of a hypothesis testing problem, where we test against the independence of two random variables given
independent and identically distributed (i.i.d.)
samples from their joint distribution <cit.>.
Kamatsuka et al.'s algorithm to compute the Rényi information measure iterates as:
x_t+1 = Z_1, t^-1· x_t ⊙ ( -∇_x f_Ren(x_t, y_t))^1/α,
y_t+1 = Z_2, t^-1· y_t ⊙ ( -∇_y f_Ren(x_t+1, y_t))^1/α,
where Z_1, t and Z_2, t are normalizing constants,
ensuring that x_t+1 and y_t+1 remain probability distributions.
The notation v^r denotes the entry-wise power for any vector v and number r.
This iterative algorithm is reminiscent of Augustin’s fixed-point iteration but differs in the powers applied to the gradients.
The convergence behaviors of Augustin's fixed point iteration and Kamatsuka et al.'s algorithm remain largely unclear.
For Augustin's fixed-point iteration, <cit.> and <cit.> have shown that it asymptotically converges for α∈(0,1);
<cit.> and <cit.> have proved a convergence rate of O ( 1 / t ) for the case where α approaches zero.
For Kamatsuka et al.'s algorithm, <cit.> have shown that it asymptotically converges for α∈[1/2,1)∪(1,∞).
We aim to carry out non-asymptotic analyses for the two algorithms.
One common approach to analyzing an iterative method is to show that it is
contractive
under a suitable metric.
Since the two algorithms (<ref>) and (<ref>) map positive vectors to positive vectors, we view them as positive dynamical systems and consider the so-called Hilbert's projective metric <cit.>.
In this work, we prove that with respect to Hilbert's projective metric, Augustin's fixed-point iteration is contractive for α∈(1/2,1) ∪ (1,3/2), and
and Kamatsuka et al.'s algorithm is also contractive for α∈(1/2,1) ∪ (1,∞).
Based on these contractivity results, we establish the following non-asymptotic convergence guarantees for the two algorithms.
* For computing the Augustin information of order α∈(1/2,1)∪(1,3/2), Augustin's fixed-point iteration converges at a rate of O((2|1-α|)^t) with respect to Hilbert's projective metric.
This improves on the previous asymptotic convergence guarantee <cit.> when α∈(1/2,1) and extends the range of convergence to include α∈(1,3/2).
* For computing the Rényi information measure of order α∈(1/2,1)∪(1,∞), the iterative algorithm of Kamatsuka et al. converges at a rate of O(|1-1/α|^2t) with respect to Hilbert's projective metric.
When α=1/2, this method also converges linearly if p has full support.
This improves on the previous asymptotic convergence guarantee <cit.>.
Notations
We write ℝ_+ and ℝ_++ for the sets of non-negative and strictly positive numbers, respectively.
For any positive integer n, we write [n] for the set { 1, …, n }.
Let v ∈ℝ^d and A, B ∈ℝ^m × n.
We write v ( i ) for the i-th entry of the vector v, and A ( i, j ) the (i, j)-th entry of the matrix A.
We write A ⊙ B for the entry-wise product between A and B.
We write A^r for the matrix ( A(i, j)^r )_1 ≤ i ≤ m, 1 ≤ j ≤ n.
For a set S⊆ℝ^d, we denote by ri(S) its relative interior.
We will adopt the convention that 0^0=0, 0/0=1, ∞·∞=∞, a·∞=∞ for any a>0, and log∞=∞.
We call Δ([d]) the probability simplex and view elements in Δ([d]) as d-dimensional vectors.
§ RELATED WORK
We have discussed Augustin's fixed-point iteration and Kamatsuka et al.'s algorithm in Section <ref>.
This section reviews other optimization algorithms for computing the Augustin information and the Rényi information measure.
For computing the Augustin information of order α, entropic mirror descent with Armijo line search <cit.> and with the Polyak step size <cit.>, as well as a variant of Augustin's fixed-point iteration explored by <cit.>, all achieve asymptotic convergence for all α∈[0,1)∪(1,∞).
Riemannian gradient descent with the Poincaré metric <cit.> converges at a rate of O ( 1 / t ) for all α∈[0,1)∪(1,∞).
An alternating minimization method due to <cit.>[<cit.> only claimed an asymptotic convergence guarantee in their paper.
We find that their Lemma 2 indeed implies a convergence rate of O(1/t).] also achieves a convergence rate of O ( 1 / t ), but for a narrower range of α∈(1,∞).
None of the existing works have yet established a linear convergence rate.
For computing the Rényi information measure of order α, entropic mirror descent with Armijo line search <cit.> and with the Polyak step size <cit.> both asymptotically converge for α∈[1/2,1)∪(1,∞).
However, when α∈(0,1/2), the optimization problem (<ref>) becomes non-convex <cit.>, and currently, there are no known algorithms that provably solve this problem.
Similarly to the computation of the Augustin information, none of the existing works have established a linear convergence rate.
§ PRELIMINARIES
Our analyses are based on properties of Hilbert's projective metric and Birkhoff's contraction theorem, which we introduce in this section.
Let K be a closed cone in a finite-dimensional real vector space, such as the positive orthant and the set of Hermitian positive semidefinite matrices.
For any x,y∈ K, we write x≤ y if y-x∈ K.
For any x,y∈ K ∖{0}, define
M(x/y) := inf{β≥ 0 | x ≤β y } > 0 .
If the set is empty, then M ( x / y ) := ∞.
Hilbert's projective metric is defined as
( x, y ) := log ( M(x/y)M(y/x) ) ∈ [0, ∞] , ∀ x, y ∈ K ∖{ 0 } .
In addition, (0,0) is defined to be 0.
The following lemma shows that is indeed a metric on the set of rays.
The following properties hold.
* For any x,y∈ K and any α,β>0, we have ( α x, β y ) = ( x, y ).
* We have (x,y)=0 if and only if x=ry for some r>0.
In the rest of the paper, we will only consider the cone K=_+^d.
Consider Hilbert's projective metric on the cone K=_+^d.
* For any x,y∈_+^d∖{0}, we have
M(x/y) = max_i∈[d] x(i) / y(i) , (x,y) = logmax_i,j∈[d] x(i)y(j) / y(i)x(j) .
*
(Δ([d]), ) is a metric space <cit.>.
Given the second item above, we will measure the errors of both Augustin's fixed-point iteration and Kamatsuka et al.'s algorithm in terms of Hilbert's projective metric between their iterates and the minimizer.
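For the cone K=ℝ_+^d, the quantities above can be evaluated directly. The following minimal NumPy sketch (our own helper functions, assuming entry-wise strictly positive inputs so that the metric is finite) computes M(x/y) and the projective metric, and illustrates its scale invariance.

import numpy as np

def M(x, y):
    """M(x/y) = max_i x(i)/y(i) for entry-wise strictly positive x, y."""
    return np.max(x / y)

def hilbert_metric(x, y):
    """Hilbert's projective metric on the positive orthant:
    d(x, y) = log( M(x/y) * M(y/x) ); it is invariant under positive rescaling."""
    return np.log(M(x, y) * M(y, x))

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.4, 0.5])
print(hilbert_metric(x, y))
print(hilbert_metric(2 * x, 5 * y))   # same value: scale invariance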
The following lemma lists several properties of Hilbert's projective metric, which are direct consequences of Corollary 2.1.4 and Corollary 2.1.5 of <cit.>.
The following properties hold.
* ( x^ r, y^ r ) ≤|r|·( x, y ) for any x,y∈ℝ_+^d and any r∈ℝ∖{0}.
* ( v⊙ x, v⊙ y ) ≤( x, y ) for any x,y,v∈ℝ_+^d.
* ( A x, A y ) ≤( x, y ) for any x,y∈ℝ_+^d and A∈ℝ_+^d'× d.
When the matrix in Lemma <ref> (iii) is
entry-wise strictly positive, <cit.> showed that linear transformation defined by it
is
a contraction.
Let A∈ℝ_++^m× n.
It holds that
(Ax, Ay) ≤λ(A) ·(x, y), ∀ x,y∈ℝ_+^n,
where
λ(A) := tanh(δ(A)/4) < 1,
and
δ(A) := logmax_(i,j),(i',j')∈[m]×[n] A(i,j)A(i',j') / A(i',j) A(i,j') ≥ 0.
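For a concrete entry-wise positive matrix, δ(A) and λ(A) can be computed by direct enumeration. The brute-force sketch below (our own code, quartic in the matrix dimensions and intended only for small examples) follows the two displayed formulas.

import numpy as np

def birkhoff_coefficient(A):
    """Return tanh(delta(A)/4), where delta(A) is the log of the largest cross-ratio
    A[i,j]*A[i',j'] / (A[i',j]*A[i,j']) over all index pairs."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    delta = 0.0
    for i in range(m):
        for ip in range(m):
            for j in range(n):
                for jp in range(n):
                    ratio = A[i, j] * A[ip, jp] / (A[ip, j] * A[i, jp])
                    delta = max(delta, np.log(ratio))
    return np.tanh(delta / 4.0)

A = np.array([[1.0, 2.0], [3.0, 1.0]])
print(birkhoff_coefficient(A))   # strictly less than 1 for entry-wise positive A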
§ LINEAR RATE OF AUGUSTIN'S FIXED-POINT ITERATION
In this section, we show that Augustin's fixed-point iteration converges linearly with respect to Hilbert's projective metric for computing the Augustin information of order α∈(1/2,1)∪(1,3/2).
§.§ Augustin's Fixed-Point Iteration
Define the following operators:
T_α(x) := 𝔼_p∼ P[ T_α,p(x) ],
T_α,p(x) := p^α⊙ x^1-α/⟨ p^α , x^1-α⟩, ∀ p∈Δ([d]).
Augustin's fixed-point iteration (<ref>) can be equivalent written as follows:
* Initialize x_1∈Δ([d]).
* For all t∈ℕ, compute x_t+1=T_α(x_t).
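A minimal NumPy sketch of this iteration is given below. For illustration, P is taken to be the uniform distribution over a few randomly drawn distributions; the dimensions, the order α, and the number of iterations are arbitrary choices rather than anything prescribed by the algorithm.

import numpy as np

rng = np.random.default_rng(0)
d, n_dists, alpha = 4, 6, 0.8                    # arbitrary illustrative choices
P = rng.dirichlet(np.ones(d), size=n_dists)      # support of P: each row is a p in the simplex

def T_alpha(x):
    """T_alpha(x) = E_{p ~ P}[ p^alpha ⊙ x^(1-alpha) / <p^alpha, x^(1-alpha)> ]."""
    num = P**alpha * x**(1.0 - alpha)            # rows: p^alpha ⊙ x^(1-alpha)
    per_p = num / num.sum(axis=1, keepdims=True) # rows: T_{alpha,p}(x)
    return per_p.mean(axis=0)                    # uniform weights stand in for the expectation

x = np.full(d, 1.0 / d)                          # x_1: an interior initialization
for _ in range(50):
    x = T_alpha(x)                               # x_{t+1} = T_alpha(x_t)
print(x, x.sum())                                # approximately the minimizer x*, sums to 1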
For α∈(0, 1)∪(1,∞), the optimization problem (<ref>) has a unique minimizer x^⋆, which satisfies
the fixed-point equation
x^⋆=T_α(x^⋆).
§.§ Linear Rate Guarantee
The main result of this section is the following theorem, which bounds the Lipschitz constant of the mapping T_α with respect to Hilbert's projective metric.
Its proof is
postponed to
the next subsection.
For α∈[0,1)∪(1,∞), we have
(T_α(x), T_α(y))≤γ·(x, y), ∀ x,y∈Δ([d]),
where γ:=2|1-α|.
Linear convergence of Augustin's fixed-point iteration for α∈(1/2,1)∪(1,3/2) immediately follows.
For any α∈(1/2, 1)∪(1,3/2), let x^⋆ be the minimizer of the optimization problem (<ref>) and {x_t} be the iterates of Augustin's fixed-point iteration (<ref>).
We have
(x_t+1, x^⋆) ≤γ^t ·(x_1, x^⋆),
for all t∈ℕ, where γ<1 is defined in Theorem <ref>.
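Indeed, combining Theorem <ref> with the fixed-point equation x^⋆=T_α(x^⋆) gives, for every t∈ℕ,
(x_t+1, x^⋆) = (T_α(x_t), T_α(x^⋆)) ≤γ·(x_t, x^⋆),
and iterating this bound t times yields the stated rate.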
Corollary <ref> is meaningful only when (x_1, x^⋆)<∞.
Lemma 13 of <cit.> shows that if 𝔼_p∼ P[p] lies in the relative interior of Δ([d]), then so does x^⋆.
In this case, it suffices to choose x_1 in the relative interior of Δ([d]) to ensure
that
(x_1, x^⋆)<∞.
§.§ Proof of Theorem <ref>
The proof primarily consists of two steps, which are reflected by the following two lemmas.
First, we show that the operators T_α, p are Lipschitz with respect to Hilbert's projective metric and bound the Lipschitz constant.
Then, given that T_α ( · ) = 𝔼_p [ T_α, p ( · ) ], we prove a general lemma that bounds Hilbert's projective metric between two random probability vectors in terms of Hilbert's projective metric between their realizations, which is of independent interest.
The proofs of both lemmas are deferred to Appendix <ref>.
For any α∈[0,1)∪(1,∞) and p∈Δ([d]),
(T_α,p(x), T_α,p(y))
≤|1-α|·( x, y ),
∀ x,y∈Δ([d]).
Let X, Y: Ω→Δ([d]) be two random probability vectors,
where Ω denotes the sample space.
We have
( 𝔼[ X ], 𝔼[ Y ] )
≤ 2 sup_ω∈Ω( X(ω), Y(ω) ).
Theorem <ref> follows immediately:
By Lemma <ref>, we write
( T_α(x), T_α(y) )
≤ 2sup_p∈Δ([d])( T_α,p(x), T_α,p(y) ).
Then, by Lemma <ref>, we obtain
( T_α(x), T_α(y) )
≤ 2sup_p∈Δ([d])|1-α|·( x, y )
= 2|1-α|·( x, y ).
This completes the proof.
§ LINEAR RATE OF KAMATSUKA ET AL.'S ALGORITHM
In this section, we show that Kamatsuka et al.'s algorithm converges linearly with respect to Hilbert’s projective metric for computing the Rényi information measure of order α∈[1/2,1)∪(1,∞).
For convenience, we will view any p ∈Δ ( [m] × [n] ) as a matrix in ℝ_+^m × n whose entries sum to 1.
We will denote Hilbert's projective metric on both ℝ_++^m and ℝ_++^n by .
The associated cone should be clear from the context.
§.§ Kamatsuka et al.'s Algorithm
Define the following two operators:
U_α(y) := ( p^α y^1-α)^1/α/( p^α y^1-α)^1/α_1 ,
V_α(x) := ( (p^α)^⊤ x^1-α)^1/α/( (p^α)^⊤ x^1-α)^1/α_1 .
Kamatsuka et al.'s algorithm (<ref>) can be equivalently written as follows:
* Initialize x_1∈Δ([m]), and compute y_1 = V_α( x_1 ).
* For all t∈ℕ, compute x_t+1 = U_α( y_t ) and y_t+1 = V_α( x_t+1 ).
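In code, the two maps and the alternating updates can be sketched as follows. This is our own NumPy illustration with an arbitrary, strictly positive joint distribution p; the sizes, the order α, and the iteration count are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
m, n, alpha = 3, 4, 2.0                        # illustrative sizes and order
p = rng.random((m, n)) + 0.1                   # strictly positive joint distribution
p /= p.sum()

def U_alpha(y):
    """U_alpha(y) = (p^alpha y^(1-alpha))^(1/alpha), normalized to the simplex."""
    v = (p**alpha @ y**(1.0 - alpha))**(1.0 / alpha)
    return v / v.sum()

def V_alpha(x):
    """V_alpha(x) = ((p^alpha)^T x^(1-alpha))^(1/alpha), normalized to the simplex."""
    v = ((p**alpha).T @ x**(1.0 - alpha))**(1.0 / alpha)
    return v / v.sum()

x = np.full(m, 1.0 / m)                        # x_1
y = V_alpha(x)                                 # y_1 = V_alpha(x_1)
for _ in range(100):
    x = U_alpha(y)                             # x_{t+1} = U_alpha(y_t)
    y = V_alpha(x)                             # y_{t+1} = V_alpha(x_{t+1})
print(x, y)                                    # approximate minimizer (x*, y*)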
This algorithm is inspired by the following lemma <cit.>.
For α∈[1/2,1)∪(1,∞), every minimizer (x^⋆,y^⋆) of the optimization problem (<ref>) satisfies x^⋆ = U_α(y^⋆) and y^⋆ = V_α(x^⋆).
§.§ Linear Rate Guarantee
The following theorem presents a key observation, showing that the operators U_α and V_α have a Lipschitz constant of |1-α^-1| with respect to Hilbert's projective metric.
Its proof is postponed to the next subsection.
For α∈[1/2,1)∪(1,∞), we have
( V_α(x), V_α(x') ) ≤γ'·( x, x' ),
∀ x,x'∈Δ([m]),
( U_α(y), U_α(y') ) ≤γ'·( y, y' ),
∀ y,y'∈Δ([n]),
where
γ' := |1 - ( 1 / α )|.
Moreover, if p∈ℝ_++^m× n, then the Lipschitz constant γ' can be improved to
γ” := |1 - 1/α|·λ( p^α ) < γ',
where λ(·) is defined in Theorem <ref>.
Theorem <ref> implies the following corollary, showing that the iterative algorithm converges linearly.
The proof of the corollary is deferred to Appendix <ref>.
Let (x^⋆, y^⋆) be a minimizer of the optimization problem (<ref>) and { (x_t, y_t) } be the iterates of the iterative algorithm (<ref>).
* If α∈(1/2,1)∪(1,∞), then
( x_t+1, x^⋆ )
≤ (γ')^2t·( x_1, x^⋆ ),
( y_t+1, y^⋆ )
≤ (γ')^2t+1·( x_1, x^⋆ ),
for all t∈, where γ'<1 is defined in Theorem <ref>.
* If α∈[1/2,1)∪(1,∞) and p∈_++^m× n, then
( x_t+1, x^⋆ )
≤ (γ”)^2t·( x_1, x^⋆ ),
( y_t+1, y^⋆ )
≤ (γ”)^2t+1·( x_1, x^⋆ ),
for all t∈, where γ”<1 is defined in Theorem <ref>.
Corollary <ref> is meaningful only when (x_1, x^⋆) < ∞.
For α∈[1/2,1)∪(1,∞), Lemma <ref> in Appendix <ref> ensures that x^⋆ lies in the relative interior of Δ([m]) whenever p∈ℝ_++^m× n.
In this case, it suffices to choose x_1 in the relative interior of Δ([m]) to ensure (x_1, x^⋆) <∞.
§.§ Proof of Theorem <ref>
By Lemma <ref> and Lemma <ref> (i), we have
( V_α(x), V_α(x') )
= ( ( (p^α)^⊤ x^1-α )^1/α,
( (p^α)^⊤ (x')^1-α )^1/α)
≤α^-1( (p^α)^⊤ x^1-α,
(p^α)^⊤ (x')^1-α).
By Lemma <ref> (iii) and (i), we have
( V_α(x), V_α(x') ) ≤|1-α^-1|·(x, x').
This proves the first inequality in (<ref>).
The second inequality follows from a similar argument.
Assume p∈ℝ_++^m× n.
We can apply Birkhoff's contraction theorem (Theorem <ref>) instead of Lemma <ref> (iii) to obtain
( V_α(x), V_α(x') )
≤|1-α^-1|·λ( (p^α)^⊤)·(x, x').
The theorem follows by noticing that λ( (p^α)^⊤) = λ(p^α) < 1.
§ DISCUSSIONS
We have proved that Augustin's fixed-point iteration converges at a linear rate for computing the Augustin information of order α∈(1/2,1)∪(1,3/2), and that Kamatsuka et al.'s algorithm converges at a linear rate for computing the Rényi information measure of order α∈[1/2,1)∪(1,∞).
In contrast, existing results are asymptotic and apply to a narrower range of α.
Our proofs are simple, demonstrating the effectiveness of selecting an appropriate mathematical structure.
Preliminary numerical experiments indicate that Augustin's fixed-point iteration may converge linearly for α∈(0,1)∪(1,2).
This observed range is broader than that we have established.
It is natural to explore extending the range of α that admits linear convergence.
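One way to run such an experiment is sketched below: the empirical contraction factor is estimated from ratios of consecutive errors measured in Hilbert's projective metric, with a long run of the same iteration standing in for x^⋆. All parameter choices here are arbitrary and only meant to illustrate the procedure.

import numpy as np

rng = np.random.default_rng(2)
d, alpha = 5, 0.7                               # try different orders alpha here
P = rng.dirichlet(np.ones(d), size=8)

def T(x):                                       # Augustin's fixed-point map
    num = P**alpha * x**(1.0 - alpha)
    return (num / num.sum(axis=1, keepdims=True)).mean(axis=0)

def dH(x, y):                                   # Hilbert's projective metric
    return np.log(np.max(x / y) * np.max(y / x))

x_star = np.full(d, 1.0 / d)
for _ in range(3000):                           # long run as a stand-in for x*
    x_star = T(x_star)

x, errs = np.full(d, 1.0 / d), []
for _ in range(10):
    x = T(x)
    errs.append(dH(x, x_star))
print(np.array(errs[1:]) / np.array(errs[:-1])) # roughly constant ratios => linear rate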
We thank Marco Tomamichel and Rubboli Roberto for discussions.
C.-E. Tsai, G.-R. Wang, and Y.-H. Li are supported by the Young Scholar Fellowship (Einstein Program) of the National Science and Technology Council of Taiwan under grant number NSTC 112-2636-E-002-003, by the 2030 Cross-Generation Young Scholars Program (Excellent Young Scholars) of the National Science and Technology Council of Taiwan under grant number NSTC 112-2628-E-002-019-MY3, by the research project “Pioneering Research in Forefront Quantum Computing, Learning and Engineering” of National Taiwan University under grant numbers NTU-CC-112L893406 and NTU-CC-113L891606, and by the Academic Research-Career Development Project (Laurel Research Project) of National Taiwan University under grant numbers NTU-CDP-112L7786 and NTU-CDP-113L7763.
H.-C. Cheng is supported by the Young Scholar Fellowship (Einstein Program) of the National Science and Technology Council, Taiwan (R.O.C.) under Grants No. NSTC 112-2636-E-002-009, No. NSTC 113-2119-M-007-006, No. NSTC 113-2119-M-001-006, No. NSTC 113-2124-M-002-003, and No. NSTC 113-2628-E-002-029 by the Yushan Young Scholar Program of the Ministry of Education, Taiwan (R.O.C.) under Grants No. NTU-112V1904-4 and by the research project “Pioneering Research in Forefront Quantum Computing, Learning and Engineering” of National Taiwan University under Grant No. NTU-CC-112L893405 and NTU-CC-113L891605. H.-C. Cheng acknowledges the support from the “Center for Advanced Computing and Imaging in Biomedicine (NTU-113L900702)” through The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.
§ OMITTED PROOFS
§.§ Proof of Lemma <ref>
By Lemma <ref>, we have
(T_α,p(x), T_α,p(y))
= ( p^α⊙ x^1-α , p^α⊙ y^1-α ).
Applying Lemma <ref> (i) and (ii) gives
(T_α,p(x), T_α,p(y))
≤( x^1-α , y^1-α )
≤|1-α|·( x, y ).
This proves the lemma.
§.§ Proof of Lemma <ref>
We will use the following lemma, whose proof is postponed to the next subsection.
Let X, Y: Ω→Δ([d]) be two random probability vectors.
We have
M(𝔼[X] / 𝔼[Y]) ≤sup_ω∈Ω M(X(ω) / Y(ω) ).
By Lemma <ref>, we have
M( 𝔼[X] / 𝔼[Y] )
≤sup_ω∈Ω M( X(ω) / Y(ω) ),
M( 𝔼[Y] / 𝔼[X] )
≤sup_ω∈Ω M( Y(ω) / X(ω) ).
Then,
( 𝔼[X] , 𝔼[Y] )
≤sup_ω∈Ωlog M( X(ω) / Y(ω) )
+ sup_ω∈Ωlog M( Y(ω) / X(ω) ).
Since X(ω), Y(ω) ∈Δ([d]), we have
M( X(ω) / Y(ω) ) ≥ 1 and
M( Y(ω) / X(ω) ) ≥ 1.
This implies that
M( X(ω) / Y(ω) )
≤ M( X(ω) / Y(ω) )· M( Y(ω) / X(ω) ) ,
M( Y(ω) / X(ω) )
≤ M( X(ω) / Y(ω) )· M( Y(ω) / X(ω) ),
and hence
( 𝔼[X] , 𝔼[Y] )
≤sup_ω∈Ω( X(ω), Y(ω) ) + sup_ω∈Ω( X(ω), Y(ω) )
= 2sup_ω∈Ω( X(ω), Y(ω) ),
which completes the proof.
§.§ Proof of Lemma <ref>
Let M:=sup_ω∈Ω M(X(ω) / Y(ω) ).
We have
M·𝔼[Y]= 𝔼[M Y] ≥𝔼[M(X/Y) Y] ≥𝔼[X].
The lemma follows from the definition of M(𝔼[X]/𝔼[Y]).
§.§ Proof of Corollary <ref>
For both (i) and (ii), by Lemma <ref> and Theorem <ref>, we have
(x_t+1, x^⋆)
= (U_α(y_t), U_α(y^⋆))
≤γ̃·(y_t, y^⋆),
and
(y_t+1, y^⋆)
= (V_α(x_t+1), V_α(x^⋆))
≤γ̃·(x_t+1, x^⋆),
where γ̃=γ' for (i) and γ̃=γ” for (ii).
The corollary follows by applying the above two inequalities alternatively.
§.§ Lemma <ref>
We prove the following lemma.
For α∈[0,1)∪(1,∞), let (x^⋆, y^⋆) be a minimizer of the optimization problem (<ref>) and assume p∈ℝ_++^m× n.
Then, x^⋆ and y^⋆ lie in the relative interiors of Δ([m]) and Δ([n]), respectively.
By Lemma <ref>, we have
x^⋆ = U_α(y^⋆)
= ( p^α (y^⋆)^1-α)^1/α/( p^α (y^⋆)^1-α)^1/α_1 .
Since p∈ℝ_++^m× n and y^⋆∈Δ([n]), the vector p^α (y^⋆)^1-α is entry-wise strictly positive.
This implies that x^⋆ is entry-wise strictly positive, and hence lies in the relative interior of Δ([m]).
To show the claim for y^⋆, consider the equation y^⋆=V_α(x^⋆) and apply the same argument.
This completes the proof.
|
http://arxiv.org/abs/2409.03275v1 | 20240905063644 | Inverse Design of Winding Tuple for Non-Hermitian Topological Edge Modes | [
"Zihe Yang",
"Kunling Zhou",
"Bowen Zeng",
"Yong Hu"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
These authors contribute equally to this work.
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, P. R. China
These authors contribute equally to this work.
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, P. R. China
[][email protected]
Hunan Provincial Key Laboratory of Flexible Electronic Materials Genome Engineering,
School of Physics and Electronic Sciences, Changsha University of Science and Technology, Changsha 410114, P. R. China
[][email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, P. R. China
§ ABSTRACT
The interplay between topological localization and non-Hermiticity localization in non-Hermitian crystal systems results in a diversity of shapes of topological edge modes (EMs), offering opportunities to manipulate these modes for potential topological applications. The conventional strategy for characterizing the domain of EMs is to calculate the topological invariants, which, however, does not provide the wavefunction forms of the EMs. This leads to the bulk-boundary correspondence typically being verified only through numerical methods. In this work, by recognizing EMs as specific solutions of the eigenequation, we derive their wavefunctions in an extended non-Hermitian Su-Schrieffer-Heeger model. We then inversely construct a winding tuple { w_ GBZ,w_ BZ} that characterizes the existence of EMs and their spatial distribution. Moreover, we define a new spectral winding number equivalent to w_ BZ, which is determined by the product of energies of different bands. The inverse design of topological invariants allows us to categorize the localized nature of EMs even in systems lacking sublattice symmetry, which can facilitate the manipulation and utilization of EMs in the development of novel quantum materials and devices.
Inverse Design of Winding Tuple for Non-Hermitian Topological Edge Modes
Yong Hu
September 9, 2024
========================================================================
Introduction—The topological edge modes (EMs) localized at the edge correspond to nontrivial bulk topological invariants performed on the Brillouin zone (BZ), a phenomenon known as the bulk-boundary correspondence (BBC), which is a central concept in the field of the celebrated topological band theory <cit.>. In non-Hermitian open systems, where non-Hermiticity arises from interactions with the environment, the conventional BBC may break down due to the systems' sensitivity to the boundary <cit.>. This sensitivity leads to the accumulation of macroscopic bulk states at the boundary under open boundary condition (OBC), called the non-Hermitian skin effect (NHSE) <cit.>, and equivalently reshapes the range of allowed wavevectors, the collection of which is referred to as the generalized Brillouin zone (GBZ) <cit.>. Note that the NHSE respects spectral topology, rather than band topology, which refers to the winding of the spectrum
under periodic boundary condition (PBC) with respect to the OBC spectrum defined on the GBZ <cit.>. Recently, Yao et al. <cit.> demonstrated that the bulk topological invariants performed on the GBZ can precisely feature the existence of EMs in the non-Hermitian systems, marking a reestablishment of BBC.
In Hermitian topological systems, such as the one-dimensional Su-Schrieffer-Heeger (SSH) model with intercell coupling larger than intracell coupling <cit.>, pairs of EMs are separately distributed at the two ends of the system <cit.>. In non-Hermitian systems, the interplay between topological localization and non-Hermiticity localization gives rise to diverse patterns of EMs <cit.>. For example, the balance between these two localization mechanisms allows the delocalization of topological EMs, a phenomenon observed in experiments <cit.>. This offers potential pathways for manipulating the EMs and designing an “extended state in a localized continuum” <cit.>. In contrast, in the NHSE-dominant region, the two EMs are localized together at the same end <cit.>. Several topological invariants, such as the winding number based on the evolution of pseudomagnetic fields or of the spectrum, have been proposed for building the corresponding BBC <cit.>, since the domain of EMs and these invariants appear to be consistent. However, this approach does not provide the wavefunction forms of EMs.
Additionally, why such a BBC works well, and how these topological invariants are related to one another, remain not entirely clear.
In this work, by considering EMs as special solutions of the eigenequation, distinct from the bulk-state solutions, we derive the analytical solutions for the EMs in a generalized non-Hermitian SSH model. According to the existence condition and behavior of these analytical solutions, we inversely construct a topological winding tuple { w_ GBZ,w_ BZ} for non-Hermitian topological EMs. A nontrivial w_ GBZ defined on the GBZ characterizes the presence of EMs, and w_ BZ=0,±1 defined on the BZ corresponds to the two EMs being separately distributed at the two ends, or both localized at the left (right) end, respectively. This winding tuple is exactly a combination of previously defined topological invariants from the literature and is applicable to systems without sublattice symmetry (SLS). The spectral topological invariant derived from w_ BZ takes the form of the product of the energies of different bands, revealing the connection between energy subbands and potentially advancing the spectral winding number of multi-band systems. Our results provide a comprehensive understanding of topological EMs and the associated BBC in non-Hermitian systems.
Model.—We consider a two-band non-Hermitian SSH model, as illustrated in Fig. <ref>(a), with a Hamiltonian defined in momentum space
H(β)=[ i γ+i λ(1/ β -β) t_1 e^-iθ+t_2 /β; t_1 e^iθ+t_2 β -i γ-i λ( 1/β -β) ].
Here H(β) represents the Bloch Hamiltonian for β∈BZ or the non-Bloch Hamiltonian for β∈GBZ. Here, ± i γ represent the on-site gain and loss at the A/B sites, which are the only non-Hermitian terms in this model; t_1 (t_2) is the intracell (intercell) coupling, and λ is the coupling between neighboring A-to-A or B-to-B sites. Without loss of generality, we only consider {t_1, t_2, λ}≥ 0. Additional phases ±θ and ±π/2 are introduced in t_1 and λ, respectively, which leads to magnetic fluxes Φ_±=π/2±θ in the model's triangular loops, as shown in Fig. <ref>(a). Such an SSH model enables gain (loss)-controlled and flux-controlled NHSE <cit.>, as shown in Fig. <ref>(b), where the beige (blue) region represents the pile-up of bulk states at the left (right) boundary (see Fig. <ref>(c-d)). Such skewness corresponds to the positive (negative) winding of the PBC spectrum with respect to the OBC spectrum <cit.>, respectively, as shown in the insets of Fig. <ref>(c-d).
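As a quick numerical illustration of the model (not part of the original work; the parameter values below are assumptions chosen only for demonstration), the following Python sketch constructs H(β) of Eq. (<ref>) and sweeps β = e^{ik} over the BZ to obtain the PBC spectrum; evaluating the same function on a sampled GBZ contour would give the non-Bloch spectrum.

```python
import numpy as np

def H(beta, t1, t2, lam, gamma, theta):
    """Bloch (|beta|=1) or non-Bloch (beta on the GBZ) Hamiltonian of the extended SSH model."""
    d = 1j * gamma + 1j * lam * (1.0 / beta - beta)      # on-site gain/loss plus A-A/B-B hopping
    return np.array([[d, t1 * np.exp(-1j * theta) + t2 / beta],
                     [t1 * np.exp(1j * theta) + t2 * beta, -d]])

# illustrative (assumed) parameters
t1, t2, lam, gamma, theta = 1.0, 1.0, 0.4, 0.3, 0.3
ks = np.linspace(-np.pi, np.pi, 401)
pbc_spectrum = np.array([np.linalg.eigvals(H(np.exp(1j * k), t1, t2, lam, gamma, theta))
                         for k in ks])                   # shape (401, 2), complex energies
```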
This model can support paired EMs, as shown in Fig. <ref>(c-d), with their energies symmetrically distributed with respect to the real axis. The EMs can either be localized together with the skin modes, as in Fig. <ref>(c), or one of them can be isolated at the left end, as depicted in Fig. <ref>(d). The positive spectral winding number for the EMs and their localization in the same direction in Fig. <ref>(c) appear consistent with the conventional spectral BBC <cit.>. However, for the separately distributed EMs in Fig. <ref>(d), the winding number is trivial.
It should be noted that establishing the BBC for the EMs in our model may be challenging. One reason is that Eq. (<ref>) does not respect SLS; another is that the diagonal elements of Eq. (<ref>) depend on the wavevector. Thus, conventional approaches, such as defining a topological invariant using the “Q matrix” or calculating the winding number of the non-diagonal elements of the Hamiltonian, may not be applicable <cit.>. Below, we attempt to establish the BBC for these EMs in non-Hermitian systems.
Analytic solutions for EMs—We begin by analytically calculating the wavefunctions of the EMs, with details available in the Supplementary Materials. By substituting the non-Bloch wavefunction into Eq. (<ref>), the eigenequation yields four roots for each energy E, denoted as {β_1,β_2,β_3,β_4} and sorted by their moduli. The wavefunction can then be constructed as
[ ψ_A(n); ψ_B(n) ] = ∑_i=1^4 [ ϕ_Ai; ϕ_Bi ] β_i^n,
where ϕ_Ai =α(β_i) ϕ_Bi with α(β_i)=( t_1 e^-iθ + t_2/β_i)/( E - q(β_i)) and q(β_i)=iγ + i λ(1/β_i -β_i).
For the boundary conditions of an N-site lattice, the coefficient matrix satisfies Mϕ=0, where ϕ=[ϕ_B1,ϕ_B2,ϕ_B3,ϕ_B4]^𝖳 and
M=[ α(β_1) α(β_2) α(β_3) α(β_4); 1 1 1 1; α(β_1) β_1^N+1 α(β_2) β_2^N+1 α(β_3) β_3^N+1 α(β_4) β_4^N+1; β_1^N+1 β_2^N+1 β_3^N+1 β_4^N+1 ].
The solutions exist when det(M)=0. For bulk states, this condition reduces to the requirement that the roots of the characteristic function fulfill the GBZ condition |β_2|=|β_3| in the thermodynamic limit <cit.>. The existence of EMs implies that special solutions can be found which satisfy det(M)=0 but with |β_2| < |β_3|. By dividing the third and fourth rows of M by β_3^N+1 and taking the thermodynamic limit N→∞, M becomes a block upper triangular matrix
M=[ M_I M_III; 0 M_IV ],
where each element represents a 2 × 2 submatrix. Now, det(M)=det(M_I)×det(M_IV). The existence of special solutions requires either det(M_I) or det(M_IV) to be zero. Let C_e1 = α(β_1) = α(β_2) to ensure det(M_I)=0, where the subscript e1 denotes one EM if special solutions exist. Then we have
f_e1(β,E_e1) = - iλ C_e1β + (t_2 + iλ C_e1)/β
+ (t_1 e^- iθ + iγ C_e1 - C_e1 E_e1)=0.
Here, E_e1 is the energy of this EM. According to Eq. (<ref>), -E_e1 should be the energy of the other EM, f_e2(β,E_e2)=0 with E_e2=-E_e1 and β_3,4 being associated with E_e2. The same results would be obtained if we instead first let det(M_IV)=0. By simultaneously solving the eigenequation of Eq. (<ref>) and Eq. (<ref>), the characteristic equation can be derived as
Γ_e(β) = (-t_2C_e^2-2 iλ C_e)β+(2 iλ C_e+t_2)/β
+(-t_1 e^iθ C_e^2+t_1e^-iθ+2 iγ C_e)=0,
with index e={e1;e2}. Here C_e^2 - i t_2 /λ C_e + 1=0 (details available in Supplementary Materials). With four roots solved, we further obtain the energies of EMs
E_e=±( γ t_2/λ+2it_1sinθ)/( C_e1-C_e2).
and the wavefunction for two EMs on each site
ψ_e1(n) = [ C_e1β_1^n - C_e1β_2^n; β_1^n - β_2^n ]
ψ_e2(n) = [ C_e2β_3^n-(N+1) - C_e2β_4^n-(N+1); β_3^n-(N+1) - β_4^n-(N+1) ].
It can be observed that the ψ_e1(n) and ψ_e2(n) are predominantly governed by β_2 and β_3, respectively.
For arbitrary given parameters, the EMs exist if the two roots associated with a specified E_e are both larger (smaller) in modulus than the other two roots (see also Fig. S1 in the Supplementary Materials). This allows us to map the system's phase diagram for EMs, as shown in Fig. <ref>(a), where the white (colored) region represents the absence (presence) of EMs. From the wavefunction Eq. (<ref>), it can be predicted that the two EMs are localized at the left end when |β_2|<|β_3|<1, at the right end when 1<|β_2|<|β_3|, and at both ends when |β_2|<1<|β_3|, which corresponds to the beige, blue and green regions in Fig. <ref>(a), respectively. These predictions are further verified and exemplified by the numerical results in Fig. <ref>(b-d). A delocalized EM emerges when |β_2|=1 or |β_3|=1, corresponding to a critical case between different colored regions, such as point b in Fig. <ref>(a). An extended EM over the bulk is shown in Fig. <ref>(b), with the associated energy located on the PBC spectrum. Thus, from the root distribution alone, without the knowledge of any topological invariants, we derive the conditions for the existence of EMs and their spatial distribution.
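The root-counting criterion above is easy to automate. The sketch below (an illustration, not the authors' code) solves C_e^2 - i(t_2/λ)C_e + 1 = 0 for C_{e1;e2}, obtains the two roots of each Γ_e(β)=0 from the quadratic obtained by multiplying Γ_e(β) by β, and then reads off the existence and localization of the EMs from the sorted moduli.

```python
import numpy as np

def edge_mode_analysis(t1, t2, lam, gamma, theta):
    Ce = np.roots([1.0, -1j * t2 / lam, 1.0])             # C_{e1}, C_{e2}
    roots, labels = [], []
    for n, c_e in enumerate(Ce):
        a = -t2 * c_e**2 - 2j * lam * c_e                 # coefficient of beta in Gamma_e
        b = 2j * lam * c_e + t2                           # coefficient of 1/beta
        c = -t1 * np.exp(1j * theta) * c_e**2 + t1 * np.exp(-1j * theta) + 2j * gamma * c_e
        for r in np.roots([a, c, b]):                     # beta*Gamma_e(beta) = a*beta^2 + c*beta + b
            roots.append(r)
            labels.append(n)
    order = np.argsort(np.abs(roots))
    roots, labels = np.array(roots)[order], np.array(labels)[order]
    if labels[0] != labels[1]:                            # the two smallest roots must share one E_e
        return "no edge modes", roots
    b2, b3 = abs(roots[1]), abs(roots[2])
    where = "left end" if b3 < 1 else ("right end" if b2 > 1 else "both ends")
    return f"edge modes localized at the {where}", roots
```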
Inverse Design of Topological Winding Tuple—
To establish the BBC for topological EMs, we inversely design the corresponding topological invariants from the root distribution as follows. Note that the existence of EMs requires that the two largest (smallest) of the four roots be associated with E_e2 (E_e1). According to the GBZ theory, for an energy in our model that does not belong to the OBC bulk spectrum, two roots must lie within the GBZ, while the other two roots must lie outside the GBZ <cit.>. In other words, the GBZ bisects the four roots. Considering that Γ_e(β) has a first-order pole at β=0, we can construct the winding number of Γ_e1;e2(β) (only the solutions for EMs satisfy Γ_e1;e2(β)=0) along the GBZ
w_e1;e2 = ∮_ GBZ1/2π idlnΓ_e1;e2(β).
For the existence of EMs, w_e1=1 and w_e2=-1. A trivial w_e1=0 implies that β_1 and one of β_3 and β_4 are associated with E_e1. In this case, the corresponding w_e2=0, indicates the absence of EMs. Both nontrivial w_e1;e2 feature the existence of EMs, as does their combination in the form of
w_ GBZ = w_e1-w_e2/2.
Given the dominance of |β_2| and |β_3| in Eq. (<ref>), we naturally compare their magnitudes with 1 to determine the domain of EMs. Note that the BZ on the complex plane is a unit circle with a modulus of 1, which inspires us to define a winding number along the BZ
w_e1;e2' = ∮_ BZ1/2π idlnΓ_e1;e2(β).
Here, w_e1'=1(0) implies that there are two (one) roots for E_e1 inside the BZ and corresponds to this EM being localized at the left (right) end. For the other EM, a similar distribution is characterized by w_e2'=0(-1). The distribution of the two EMs can also be characterized jointly by
w_ BZ = w_e1' + w_e2',
where w_ BZ∈{-1, 0, 1}. When w_ BZ = 1(-1), three (one) of the four β_i of the EMs are within the BZ, implying |β_2;3| < 1 (|β_2;3| > 1), and the EMs are localized at the left (right) end. When w_ BZ = 0, |β_2| < 1 < |β_3|, and the two EMs are localized at opposite ends, a distribution analogous to the Hermitian scenario.
Thus, the topological EMs in non-Hermitian systems correspond to a winding tuple
W = { w_ GBZ,w_ BZ}.
The first element defined on the GBZ determines the existence of EMs, while the second element defined on the BZ determines the localized nature of the EMs.
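For concreteness, the winding numbers in Eqs. (<ref>) and (<ref>) can be evaluated numerically by accumulating the phase of Γ_e(β) along a discretized contour. The sketch below is illustrative; it uses the unit circle for w_e', and the same routine applies to a sampled GBZ contour for w_e. The coefficients follow Eq. (<ref>).

```python
import numpy as np

def winding(fun, contour):
    """Phase winding of fun(beta) along a closed, ordered contour of complex points."""
    vals = np.array([fun(b) for b in contour])
    dphi = np.angle(np.roll(vals, -1) / vals)            # phase increments, wrapped to (-pi, pi]
    return int(np.round(dphi.sum() / (2 * np.pi)))

def Gamma(c_e, t1, t2, lam, gamma, theta):
    a = -t2 * c_e**2 - 2j * lam * c_e
    b = 2j * lam * c_e + t2
    c = -t1 * np.exp(1j * theta) * c_e**2 + t1 * np.exp(-1j * theta) + 2j * gamma * c_e
    return lambda beta: a * beta + b / beta + c

bz = np.exp(1j * np.linspace(0, 2 * np.pi, 2000, endpoint=False))
# w_BZ = winding(Gamma(C_e1, ...), bz) + winding(Gamma(C_e2, ...), bz)
```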
Such a winding tuple is consistent with the phase diagram in Fig. <ref>(a), as illustrated in Fig. <ref>. Fig. <ref>(a-c) show the four roots of the EMs with respect to the GBZ and BZ for three representative cases b, c, d in Fig. <ref>(a), corresponding to w_ GBZ=1, W={1,1}, and W={1,-1}, respectively. For case b in Fig. <ref>(a), w_ BZ is not well defined, as the evolution of Γ_e2(β) crosses the zero point, as shown in Fig. <ref>(d). The phase transition in Fig. <ref>(a) obtained by adjusting t_1e^iθ at fixed t_2 is captured by the variation of the winding tuple, as shown in Fig. <ref>(e).
Cross-validation for topology—In this section, we further address the relationship between the winding tuple Eq. (<ref>) and previously defined topological invariants <cit.>. In systems with SLS, the wavefunction of a single EM is typically located on a single sublattice site A or B. However, the wavefunction in Eq. (<ref>) is distributed on both sites, which inspires us to perform a similarity transformation H^'(β)=U^-1H(β)U with transformation matrix U=1/μ[ d_- d_+; 1 1 ]. Here, μ=√(d_- - d_+) is a normalization factor and d_- (d_+) is the ratio of amplitudes between the two sites. The new Hamiltonian reads
H^'(β) = 1/μ^2[ m_0 + m(β) R_+(β); R_-(β) -m_0 -m(β) ].
with the diagonal element including constant mass term
m_0=iγ(d_++d_-)-d_+ d_- t_1 e^iθ+t_1 e^-iθ;
and β-dependent term
m(β) = iλ(d_+ + d_-)(1/β-β) -t_2 β + d_+d_-t_2 /β;
and the non-diagonal term
R_±(β)= ±[ (-t_2 d_±^2-2 iλ d_±) β + (2 iλ d_±+t_2)/β+(-t_1 e^iθd_±^2 +2iγ d_±+t_1 e^-iθ) ].
To compare with the well-studied systems that have SLS, the parameters should be chosen such that the on-site mass term is independent of β (m(β)=0), which requires d_+ + d_- = it_2/λ and d_+ d_- = 1. These conditions align with the solutions for C_e, i.e., {d_+,d_-}={C_e2,C_e1}, and thereby R_+(β) =Γ_e2(β), R_-(β) = -Γ_e1(β), and m_0^2/μ^4 = E_e^2. Since a constant on-site mass term only alters the energy of the EMs but not their wavefunctions <cit.>, the previously defined winding numbers on the GBZ <cit.> and on the BZ <cit.> reduce to Eq. (<ref>) and Eq. (<ref>), respectively. These results also cross-validate our proposed analytic solutions for the EMs.
Considering that the similarity transformation does not alter the eigenvalues,
E(β)^2 =E_e^2 -Γ_e1(β)Γ_e2(β)/μ^4,
we can define a new topological invariant by spectrum equivalent to w_ BZ
w_ BZ(E) =1/2π i∮_ BZdln[∏_i=1;2 E_i(β)-∏_i=1;2 E_ei],
where E_i(β) is the energy of each band. Different from the conventional spectral winding number <cit.>, which refers to the winding of the PBC spectrum with respect to a reference energy, w_ BZ(E) takes the form of the product of the energies of different bands.
As shown in Fig. <ref>, w_ BZ(E) agrees well with the previous calculations.
The two left-localized EMs in Fig. <ref>(c) and the separately distributed EMs in Fig. <ref>(d) are captured by w_ BZ(E)=1 and w_ BZ(E)=0, respectively. A critical case is shown in Fig. <ref>(c), where the product of the two EM energies falls on ∏ E_i(β), corresponding to the delocalized state in Fig. <ref>(b).
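The spectral winding number w_ BZ(E) of Eq. (<ref>) admits an equally simple numerical evaluation: for the two-band model, ∏_i E_i(β) equals det H(β), so one only needs to track the phase of ∏_i E_i(β) - ∏_i E_ei around the BZ. The sketch below is illustrative and assumes the EM energies E_e1, E_e2 are already known, e.g., from Eq. (<ref>).

```python
import numpy as np

def spectral_winding(t1, t2, lam, gamma, theta, E_e1, E_e2, n=4000):
    def prod_bands(beta):                                 # prod_i E_i(beta) = det H(beta)
        d = 1j * gamma + 1j * lam * (1.0 / beta - beta)
        off = (t1 * np.exp(-1j * theta) + t2 / beta) * (t1 * np.exp(1j * theta) + t2 * beta)
        return -d * d - off                               # det of [[d, o1], [o2, -d]]
    bz = np.exp(1j * np.linspace(0, 2 * np.pi, n, endpoint=False))
    vals = np.array([prod_bands(b) - E_e1 * E_e2 for b in bz])
    dphi = np.angle(np.roll(vals, -1) / vals)
    return int(np.round(dphi.sum() / (2 * np.pi)))
```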
Conclusions—In this study, we derive the analytic solutions for EMs and inversely construct the topological winding tuple W = { w_ GBZ,w_ BZ}, corresponding to the existence and localized nature of EMs, respectively. We also define a novel spectral winding number based on the product of the energy of different bands, offering a new perspective for unveiling the extraordinary EMs in non-Hermitian systems. Our study not only advances the theoretical understanding of non-Hermitian topological EMs but also suggests potential avenues for manipulating and utilizing EMs for future experimental explorations aimed at harnessing the practical applications of topological phenomena.
We thank Xiaoshang Jin for helpful discussions. This work is supported by the
Natural Science Foundation of Hunan Province (2024JJ6011) and Innovation Program for Quantum
Science and Technology (Grant No. 2021ZD0302300).
|
http://arxiv.org/abs/2409.02728v1 | 20240904140156 | Task-Oriented Communication for Graph Data: A Graph Information Bottleneck Approach | [
"Shujing Li",
"Yanhu Wang",
"Shuaishuai Guo",
"Chenyuan Feng"
] | cs.LG | [
"cs.LG",
"cs.SI",
"eess.SP"
] |
Task-Oriented Communication for Graph Data: A Graph Information Bottleneck Approach
Shujing Li, Yanhu Wang, Shuaishuai Guo, Senior Member, IEEE, and Chenyuan Feng, Member, IEEE
Shujing Li, Yanhu Wang and Shuaishuai Guo are with School of Control Science and Engineering, Shandong University, Jinan 250061, China and also with Shandong Key Laboratory of Wireless Communication Technologies, Shandong University, Jinan, Shandong, P. R. China (e-mail: [email protected], [email protected], [email protected]).
Chenyuan Feng is with Department of Communication Systems, EURECOM, Sophia Antipolis 06410, France (e-mail: [email protected]).
Received 2024; accepted 2024
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Graph data, essential in fields like knowledge representation and social networks, often involves large networks with many nodes and edges. Transmitting these graphs can be highly inefficient due to their size and redundancy for specific tasks. This paper introduces a method to extract a smaller, task-focused subgraph that maintains key information while reducing communication overhead. Our approach utilizes graph neural networks (GNNs) and the graph information bottleneck (GIB) principle to create a compact, informative, and robust graph representation suitable for transmission. The challenge lies in the irregular structure of graph data, making GIB optimization complex. We address this by deriving a tractable variational upper bound for the objective function. Additionally, we propose the VQ-GIB mechanism, integrating vector quantization (VQ) to convert subgraph representations into a discrete codebook sequence, compatible with existing digital communication systems. Our experiments show that this GIB-based method significantly lowers communication costs while preserving essential task-related information. The approach demonstrates robust performance across various communication channels, suitable for both continuous and discrete systems.
Task-oriented communication, graph neural network, graph information bottleneck, vector quantization.
§ INTRODUCTION
In the swiftly changing realm of communication systems, marked by the emergence of 5G and its advancements, an increasing demand arises for efficient and intelligent communication frameworks capable of adjusting to intricate and ever-shifting environments. Conventional communication systems concentrate on enhancing metrics such as throughput and latency, frequently neglecting the distinct needs of task-oriented communications. In task-oriented scenarios, the objective extends beyond the mere effective transmission of raw data; it aims to optimize the performance of specific tasks like classification, prediction, or decision-making. It is worth noting that, in addition to Euclidean data such as images and text, vast amounts of non-Euclidean data are generated in social networks, biological networks, transportation networks, recommendation systems, etc. <cit.>. This graph-structured data has no natural ordering or coordinate reference points and is difficult to represent in a grid-like matrix or tensor. A graph is usually composed of nodes and edges, which contain rich relational information. We can regard graph data as a web of relationships, wherein the nodes are the subjects and the edges represent the relationships between the nodes. This feature makes graphs an effective tool for modeling complex relationships in various non-Euclidean data scenarios <cit.>.
Graph data is widely applied to knowledge representation, recommendation systems, and user behavior analysis <cit.>. However, the inherent complexity and size of graph data can pose challenges for transmission and storage. A complete graph, which contains all possible relationships between the nodes, is often not suitable for direct transmission due to the substantial bandwidth and storage resources required. Moreover, a complete graph tends to contain redundant information, leading to unnecessary resource consumption. In many practical applications, only specific task-related information within the graph needs to be transmitted to accomplish particular objectives. Additionally, graph data might contain sensitive information or personal privacy that should be safeguarded during transmission. Transmitting the entire graph increases the risk of information leakage. Therefore, an intelligent and concise communication system is needed to transmit graph data effectively. Within this context, the Graph Information Bottleneck (GIB) stands out as an innovative strategy to tackle these challenges. It draws upon the principles of information theory and graph neural networks to craft solutions. Central to this approach is the compression of transmitted information, ensuring that it retains only the most pertinent features relevant to the task. This targeted compression facilitates a more efficient and impactful mode of communication.
§.§ Related Works
Task-oriented communication is a promising solution for graph data transmission. Different from traditional communication, it focuses on the accurate transmission of task-relevant information, rather than the bit-level precise transmission <cit.>. Its characteristics align well with our demands for graph data transmission. It is difficult to characterize or extract task-related information with mathematical models, and the existing task-oriented communication systems mainly rely on deep learning technology <cit.>. That is, the neural networks (NNs)-based encoder and decoder are constructed and trained, so that the task-oriented communication system obtains the ability to extract task-relevant information <cit.>.
Precisely, Farsad et al. proposed a neural network architecture for text transmission task, that combines joint source-channel coding (JSCC) with recurrent neural network (RNN) based encoder, binarization layers, channel layers, and RNN-based decoder <cit.>. Xie et al. introduced a system for textual semantic restoration tasks based on Transformer model <cit.>. Their approach focuses on maximizing system capacity while minimizing semantic errors by restoring the meaning of sentences. Guo et al. proposed to utilize pre-trained language models to quantify the semantic importance of text and allocate unequal power based on semantic importance <cit.>. Some JSCC methods map image pixel values directly to input representations to achieve high-quality image reconstruction tasks <cit.>. Kang et al. proposed an image transmission method specifically designed for scene classification tasks <cit.>. Shao et al. developed a task-oriented communication scheme for edge inference using the information bottleneck (IB) <cit.>. Their scheme aims to improve the performance of image classification task at the edge server by efficiently transmitting relevant information. These NNs-based task-oriented communication systems have achieved outstanding outcomes, demonstrating the capacity to effectively execute specific tasks.
§.§ Motivation & Contributions
However, these works cannot be directly used for graph data transmission.
Because they are based on CNNs or fully connected networks and deal with regular data such as text (represented as sequences of characters) and images (represented as continuous two-dimensional or three-dimensional pixel sets).
Graph data exhibits a more intricate structure, which is composed of nodes and edges with attribute information.
When transmitting graph data, it is important to consider the efficiency of transmitting large-scale graph data while accurately preserving the correlation between nodes and edges <cit.>.
Therefore, it is necessary to design a new task-oriented communication system for graph data transmission.
Using information bottleneck theory to develop task-oriented communication systems is a good option <cit.>.
Because IB theory aims to find an optimal intermediate representation that can retain important information in the input data to ensure the accuracy of the output prediction while eliminating redundant information <cit.>.
However, it should be noted that the direct utilization of IB-based frameworks in graph data processing is not feasible. This is because the IB framework assumes that the data follows an independent identically distributed (IID) pattern.
Whereas in graph data, the presence of edges and their attributes results in the data points being dependent on one another, which makes the graph data deviate from the IID assumption <cit.>.
Recently, an information-theoretical design principle for graph data, named GIB, has been developed, which seeks the right balance between data compression and information preservation for graph data<cit.>. Specifically, employing GNNs[GNN is a deep learning model specifically designed to work with graph data. In contrast to traditional deep learning models primarily handling vectorized data, GNNs excel at capturing intricate relationships within graphs, involving nodes and edges<cit.>. The fundamental principle of GNNs is to gradually aggregate local neighborhood information by iteratively updating the representation of nodes<cit.>. Typical GNN models include Graph Convolutional Networks (GCNs), Graph Isomorphism Network (GIN), Graph Attention Networks (GATs), etc.] as the foundational framework for graph data processing, GIB works well. Through the process of learning and parameter adjustments, GIB ensures the preservation of only task-relevant information while simultaneously compressing extraneous data.
A key challenge in studying the graph information bottleneck is the non-IID character of graph data, i.e., how to handle the interdependencies between nodes. Addressing this challenge within a task-oriented transmission framework, and further optimizing for it, is one of the primary technical contributions of our research.
In addition, the actual widespread use of digital communication systems makes the realization of compatibility between task-oriented communication systems and digital communication systems a necessity.
Thus, there is a requirement to identify a suitable digitization mechanism for communication systems geared towards graph data, ensuring both effective data compression and robustness. Vector quantization (VQ) emerges as a powerful technique that addresses these needs by mapping high-dimensional data into a finite set of lower-dimensional codewords. This process not only facilitates significant data compression but also enhances the system's robustness against noise and transmission errors. Furthermore, by integrating with deep learning, VQ is highly adaptive and can be dynamically adjusted to different data distributions and channel conditions.
Motivated by these issues, we design GIB-enabled task-oriented communication systems for graph data in this work. Our main contributions are summarized as follows:
* We introduce GIB into a task-oriented communication system for graph data. We build a Markov chain model for information transmission and formulate an optimization problem to maximize the mutual information between the task target and received codewords while minimizing the mutual information between the received codewords and the raw graph data. This approach balances the preservation of critical information in the extracted features while eliminating redundant information, thereby enhancing task success rates and reducing communication overhead.
* To address the difficulty of handling mutual information terms in GIB due to high-dimensional integration, we use the Mutual Information Neural Estimator (MINE) to directly estimate the mutual information between the original graph and the subgraphs. This approach overcomes the challenge of obtaining the prior distribution of the subgraphs in applying variational approximation methods.
* Recognizing the importance of topological information in graph data, particularly in revealing community structures, we introduce a connectivity loss term into the objective function. This term leverages topological information during feature extraction, reduces fluctuations in ambiguous node assignments, and contributes to a more stable training process.
* We map the resulting subgraph representation onto a jointly trained codebook to generate a discrete index sequence for transmission. This mapping ensures compatibility with existing digital communication systems. Experimental results show that the system is able to achieve higher compression rates while achieving task success rates comparable to traditional digital communication methods.
§.§ Organization
The subsequent sections of this paper are organized as follows.
In Section 2, we outline the system model, describe the structure and design objectives of the task-oriented communication system for graph data. Section 3 introduces the details of GIB-enabled task-oriented communication systems for graph data, including the handling of GIB objectives and the system training strategy. Section 4 presents our proposed approach for digitizing the task-oriented communication system. In Section 5, we evaluate the performance and effectiveness of the proposed method through experiments. In Section 6, potential applications of the proposed method in practical scenarios are discussed. Finally, we make a brief summary of this paper in Section 7.
§.§ Notations
In this paper, a graph with m nodes is defined as g or G=( V,E,A,𝒳), where V={V_i|i=1,2,...,m } is the set of nodes with cardinality m, E={( V_i,V_j)|i<j, V_i and V_j are connected } is the edge set, A∈{ 0,1 }^m× m is the adjacency matrix, and 𝒳∈R^m× d is the feature matrix corresponding to V with feature dimension d. The pair ( G,Y ) stands for the graph data and its target variable.
The entropy of Y is defined as H( Y ). The mutual information between X and Y is represented as I( X,Y ).
§ SYSTEM MODEL AND PROBLEM DESCRIPTION
§.§ System Model
In this paper, we consider a task-oriented communication system for graph data as shown in Fig. <ref>. The system mainly includes a transmitter and a receiver, where the transmitter consists of a feature extractor implemented by a GNN and a joint source-channel (JSC) encoder. The receiver is a neural network used for task inference. Given an input graph g, they cooperate to perform tasks to make the inference output ŷ consistent with the true target variable y.
The random variables (G, Y) together with the encoded codeword X and the channel-corrupted codeword X̂ form the following probabilistic model:
( Y )G → X →X̂→Ŷ,
which satisfy p( ŷ|g )=p_θ( ŷ|x̂)p_channel( x̂|x )p_ϕ( x|g ).
Herein, the lowercase letters g, x, x̂, y, and ŷ denote realizations of the variables represented by the corresponding uppercase letters.
p_ϕ( x|g ) represents the transmitter neural network parameterized by adjustable parameters ϕ.
For a given input graph g, the feature encoder identifies the representation x related to the task. The task-relevant features are encoded by the JSC encoder and then transmitted to the receiver through the channel.
To accommodate both continuous and discrete channels within this framework, we introduce an indicator variable ζ that distinguishes between the channel types: ζ=0 for a continuous channel and ζ=1 for a discrete channel. The conditional probability distribution p_channel( x̂|x ) is thus defined as:
p_channel(x̂|x; ζ) =
1/√(2π N_0)exp(-(x̂-x)^2/(2N_0)), if ζ = 0,
P_X̂|X(x̂|x), if ζ = 1,
where we assume that the continuous channel is the additive white Gaussian noise (AWGN) channel and the discrete channel is the Symmetric Discrete Channel (SDC).
AWGN is implemented using an untrained neural network layer, and the transfer function is expressed as:
x̂=x+ϵ,
where the Gaussian noise ϵ∼𝒩( 0,N_0/2).
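As an illustration of how this untrained channel layer can be realized in practice (a minimal sketch, not the paper's exact implementation; it assumes the codeword is power-normalized to unit average power so that the noise standard deviation follows directly from the SNR in dB):

```python
import torch

class AWGNChannel(torch.nn.Module):
    """Untrained AWGN layer: x_hat = x + eps, eps ~ N(0, sigma^2)."""
    def __init__(self, snr_db: float):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x / (x.pow(2).mean().sqrt() + 1e-8)     # unit average power normalization
        sigma = 10.0 ** (-self.snr_db / 20.0)       # noise std for unit signal power
        return x + sigma * torch.randn_like(x)
```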
For SDC, P_X̂|X(x̂|x) signifies the transition probability matrix, encapsulating the likelihood of transitioning from input to output symbols.
The SDC model assumes that both channel inputs and outputs utilize the same symbol set, with each symbol corresponding to an identical output probability distribution. This probabilistic behavior is succinctly captured by a transition matrix:
𝒫=[ ε (1-ε)/(r-1) ⋯ (1-ε)/(r-1); (1-ε)/(r-1) ε ⋯ (1-ε)/(r-1); ⋯ ⋯ ⋯ ⋯; (1-ε)/(r-1) (1-ε)/(r-1) ⋯ ε ]_r× r,
where ε represents the probability of correct symbol transmission and
r denotes the cardinality of the channel symbol set. Potential transmission errors could lead to the receiver retrieving incorrect vectors, impacting the inference task.
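A minimal sketch of the SDC follows (illustrative; the transition matrix never needs to be materialized, since each transmitted index is kept with probability ε and otherwise replaced uniformly by one of the other r-1 symbols):

```python
import torch

def symmetric_discrete_channel(indices: torch.Tensor, r: int, eps: float) -> torch.Tensor:
    """indices: integer tensor of transmitted codebook indices in [0, r)."""
    keep = torch.rand(indices.shape, device=indices.device) < eps
    shift = torch.randint(1, r, indices.shape, device=indices.device)  # uniform over the other r-1 symbols
    return torch.where(keep, indices, (indices + shift) % r)
```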
The conditional distribution p_θ( ŷ|x̂) stands for the task inference network parameterized by adjustable parameters θ at the receiver.
It infers the class label ŷ from the channel-corrupted codeword x̂.
§.§ Problem Description
The effectiveness of task execution is intricately linked to the dimension of the feature vector produced by the transmitter. For instance, in the context of graph classification tasks, a higher dimension of the feature vector could lead to an elevated classification accuracy. However, this advantage comes at the cost of increased communication overhead. Thus, the challenge is to identify a concise and informative representation that aligns with the optimal subgraph.
To formalize this trade-off, we formulate the following objective function based on the GIB principle:
ℒ_GIB=-I( Y,X̂)+β I( G,X̂),
in which I(Y,X̂) represents the mutual information capturing the relevance of task-specific information in the received codeword, I(G,X̂) represents the preserved information in X̂ given G, and β acts as a trade-off factor governing the relationship between the two.
Developing task-oriented communication based on GIB is a promising approach to solving the graph transmission challenges.
This approach offers an information metric for the graph data and can effectively capture node and edge information. However, there are a few noteworthy problems:
* Problem 1: How to deal with mutual information containing graph data to get a tractable objective function?
The expansion of the first term in (<ref>) yields:
I( Y,X̂)= ∫ p(y, x̂) log[ p(y, x̂)/( p(y) p(x̂) ) ] dy dx̂
= \underbrace{-∫ p(y,x̂)log p(y) dy dx̂}_{H( Y )= constant} + ∫ p(y,x̂)log p(y|x̂) dy dx̂,
where the first term of the second equality is the entropy of Y. For a given graph and a given task, the entropy of Y is fixed, so this term can be regarded as a constant and ignored in the subsequent optimization. Therefore, up to this constant, (<ref>) reduces to:
I( Y,X̂) = ∫p(y,x̂)log p(y|x̂)dydx̂.
The joint distribution p( g,y ) for graph data and target labels is known. p_ϕ( x̂|g ) is determined by the transmitter network p_ϕ( x|g ) and the channel model p_channel( x̂|x;ζ). For the second integral term, the posterior follows from the Markov chain:
p(y |x̂) =∫p(g,y) p_ϕ(x̂| g)/p(x̂) d g,
Next, we address the second mutual information term of (<ref>) and expand it:
I( G,X̂)=∫p(x̂| g)p(g)logp_ϕ(x̂| g)/p(x̂)dgdx̂,
where p(x̂) is also an intractable high-dimensional integral:
p(x̂) =∫ p(g) p_ϕ(x̂| g) d g.
It is customary in IB to substitute this marginal distribution with a tractable prior distribution<cit.>. In GIB, finding a suitable variational prior is difficult. This is because of GIB's interpretation of p(x̂): it represents the distribution of irregular subgraph structures, not just a latent graph data representation. Moreover, due to the non-IID nature of graph data, finding a simple function as a prior distribution is not feasible.
Therefore, a new method is needed to estimate that mutual information.
* Problem 2: How to utilize topological information in graph data for communication tasks?
By designing a feature extractor based on GIB theory, it becomes possible to selectively extract task relevant information, reducing redundancy and improving efficiency. Yet, when GNNs extract features from graph data, they often concentrate too much on nodes and neglect the graph's structural features.
However, for many tasks, the crucial information is deeply rooted in the graph's topology. Therefore, in the process of feature extraction, it is necessary to consider the topological structure of the graph.
To do this, we need to impose constraints on the feature extractor to improve the role of topological information in feature extraction.
* Problem 3: How to be compatible with digital communication systems?
While the proposed task-oriented communication system effectively addresses the challenge of graph data transmission, its use of continuous signals poses compatibility issues with existing digital communication systems. Given the well-established infrastructure of digital communication systems, it is impractical to abandon it completely. Therefore, it is necessary to develop a task-oriented communication system compatible with the digital communication system to transmit graph data.
§ GIB-ENABLED TASK-ORIENTED COMMUNICATION
In this section, we develop a task-oriented communication system based on GIB theory. Specifically, we combine variational approximation and MINE techniques to improve the GIB formulation <cit.>.
This modification enables a more streamlined optimization process employing neural networks, which solves Problem 1. In addition, we describe the feature extraction and node assignment process in detail, including the network configuration and the pertinent revision made to the objective function, which solves Problem 2.
§.§ Graph Information Bottleneck Reformulation
To solve Problem 1,
we introduce the variational distribution q_θ( y|x̂) as a substitute for the true posterior distribution p( y|x̂). θ represents the parameters of the inference neural network at the receiver computing the inference output ŷ. Consequently, (<ref>) is transformed into:
I( Y,X̂) = ∫p( y,x̂)logq_θ( y|x̂)dydx̂
=E_p( y,x̂)[ logq_θ( y|x̂) ].
Subsequently, Monte Carlo sampling is used to derive the empirical distribution of the joint distribution as an approximation. N samples are taken for the given data, that is:
p( y,x̂)≈1/N∑_i=1^Nδ_y( y_i)δ_x̂( x̂_i).
Here, δ( ·) denotes the Dirac function utilized for sampling the training data. y_i and x̂_i represent the label and representation received by the receiver for the i-th training data, respectively. This approximation leads to a tractable optimization objective from (<ref>):
ℒ_inf( q_θ( y|x̂) ) = -1/N∑_i=1^Nlogq_θ( y_i|x̂_i).
Maximizing I( Y,X̂) is thus consistent with minimizing the objective function ℒ_inf in (<ref>).
This function is defined as the loss between the inference result Y and the ground truth for the received x̂. A smaller loss signifies superior performance of the inference neural network.
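In practice, when q_θ( y|x̂) is a softmax classifier, the empirical loss in (<ref>) reduces to the familiar cross-entropy over a mini-batch, for example (a sketch, not the authors' exact implementation):

```python
import torch.nn.functional as F

def inference_loss(logits, y):
    """-1/N sum_i log q_theta(y_i | x_hat_i) for a softmax decoder with raw outputs `logits`."""
    return F.cross_entropy(logits, y)
```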
Regarding the second mutual information term I( G,X̂) of (<ref>), which indicates the transmission informativeness, we employ MINE to approximate it directly, without the need to estimate p(x̂). We express this term in the form of the Kullback-Leibler (KL) divergence:
I( G,X̂)=D_KL[ P( G,X̂)||P( G )⊗ P( X̂) ].
For ease of analysis and mathematical treatment, we adopt the Donsker-Varadhan representation of KL-divergence, which expresses the mutual information term as the difference between the expected value and the logarithmic expected value:
I( G,X̂) = sup_T:G×X̂→ℝ { E_P( G,X̂)[ T ]-logE_P( G )P( X̂)[ e^T] }
= ℒ_MI( T ),
in which T=f_κ( g,x̂) encompasses all functions that render two expectations finite. The MINE can be used to estimate the mutual information between regular input data and its vector representation. Due to the irregularity of graph data, GNN is employed to extract the vector representation before feeding it into the multi-layer perceptron (MLP) for processing alongside the representation X̂. f_κ( ·) serves as the statistics network, denoting the neural network that executes the process of obtaining the corresponding real numbers from G and X̂. The optimization of this estimator involves adjusting the MINE parameter κ such that the value on the right-hand side of (<ref>) closely approximates I( G,X̂). This optimization is formalized as:
max_κ ℒ_MI( κ ,X̂)=1/K∑_i=1^K f_κ( g_i,x̂_i)-log1/K∑_i=1,j≠ i^K e^f_κ( g_i,x̂_j).
After training with K sets of training data, the training process generates a set of suboptimal MINE parameters denoted as κ^* (see Fig. <ref>).
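A sketch of how the MINE term can be estimated in practice is given below. It is illustrative only and rests on several assumptions: the raw graph is summarized by a graph-level embedding h_g produced by a separate GNN readout, the received representation x̂ is a dense vector per graph, f_κ is a small MLP, and shuffling x̂ within the batch approximates sampling from the product of marginals.

```python
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):                       # f_kappa(g, x_hat)
    def __init__(self, g_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(g_dim + x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, h_g, x_hat):
        return self.net(torch.cat([h_g, x_hat], dim=-1)).squeeze(-1)

def mine_estimate(f_kappa, h_g, x_hat):
    """Donsker-Varadhan lower bound; maximized over kappa and used as L_MI in the GIB objective."""
    joint = f_kappa(h_g, x_hat).mean()                              # E_{P(G, X_hat)}[T]
    x_shuffled = x_hat[torch.randperm(x_hat.size(0))]               # breaks the pairing -> product of marginals
    marginal = torch.logsumexp(f_kappa(h_g, x_shuffled), dim=0) - torch.log(
        torch.tensor(float(x_hat.size(0))))
    return joint - marginal
```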
Drawing from (<ref>) and (<ref>), we derive a reformulation of the overall loss function:
ℒ_GIB ( ϕ ,θ ,κ^*)= E_p( g,y ){E_p_ϕ( x̂|g )[ -logq_θ( y|x̂) ]
+β[ E_p_ϕ( x̂|g )f_κ^*( g,x̂)-logE_p( x̂)e^f_κ^*( g,x̂)] } .
By combining the sampling method, equations (<ref>) and (<ref>), we arrive at a tractable optimization problem for the entire system:
min_ϕ ,θ ℒ_GIB( ϕ ,θ ,κ^*)=ℒ_inf( θ,ϕ)+βℒ_MI( κ^*,X̂).
§.§ Model Training Strategy
To solve Problem 2, we specially design the feature extraction module. Node assignment is the main step in the graph feature extraction process. First, we introduce assignment probabilities as a continuous relaxation in node assignment, to address the problem that the discrete nature of the graph makes it difficult to optimize (<ref>) using gradient-based methods.
Next, we introduce connectivity loss to improve the impact of topological information on feature extraction, which can also make the node assignment process more stable.
The detailed design process of this module will be discussed below.
We devise a node assignment mechanism to determine the inclusion of each node in the graph within the subgraph. This mechanism involves a GNN extracting node features X from the original graph G. These features are then input into an MLP to obtain preliminary node assignments. The Softmax function is subsequently employed to convert these assignments into a probabilistic form. We represent the selected subgraph as G_sub. Specifically, for a given graph G, nodes either belong to G_sub or to its complement. The node assignment mechanism yields a matrix S:
S= Softmax( MLP_σ_2( X ) ) with X=GNN_σ_1( A,X ).
The matrix S has dimensions m× 2, where m is the number of nodes in the input graph, and each row is a two-dimensional vector indicating the assignment probability of the corresponding node. Specifically, the i-th row of S,
[ p( V_i∈G_sub|V_i), p( V_i∉G_sub|V_i) ],
represents the probabilities of the i-th node belonging to G_sub or not.
We define ϕ_1={σ_1,σ_2} as the parameter of the node assignment mechanism, which is part of the parameter of the transmitter. Once adequately trained, the values in each row of S should converge to 0 or 1, achieving robust node assignment. We take the first row of S^TX to obtain the feature vectors of the n nodes belonging to the subgraph.
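The following sketch illustrates one possible realization of this node assignment mechanism (illustrative only; it assumes PyTorch Geometric-style inputs, and the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class NodeAssignment(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 2))

    def forward(self, x, edge_index):
        h = self.conv2(torch.relu(self.conv1(x, edge_index)), edge_index)  # node features X
        S = torch.softmax(self.mlp(h), dim=-1)                             # m x 2 assignment matrix
        sub_features = S[:, 0:1] * h                                       # nodes weighted by p(V_i in G_sub)
        return S, sub_features
```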
In conjunction with the JSC encoder parameter ϕ_2, we obtain the transmitter parameter ϕ ={ϕ_1,ϕ_2}. For simplicity, we collectively refer to the node assignment mechanism and the JSC encoder as the feature extractor and encoder, denoted as g_ϕ( ·). Consequently, the objective function is explicitly expressed as:
min_ϕ ,θ ℒ_GIB( ϕ ,θ ,κ^*)=ℒ_inf( q_θ( g_ϕ( G ) ) )+βℒ_MI( κ^*,X̂).
We introduce a continuous relaxation with probabilistic assignment of nodes, alleviating the problems arising from the discreteness of the graph. Nevertheless, inadequate initialization may lead to a poorly trained node allocation mechanism and failure to achieve the desired outcomes. In other words, if the probabilities of a node belonging to G_sub or not are too close, nodes may not be appropriately assigned. On one hand, over-assigning nodes to G_sub will yield a subgraph that includes an excessive amount of redundant information. On the other hand, assigning too few nodes to G_sub will result in an inadequate amount of task-related information in the subgraph, rendering it incapable of successfully executing the task.
To address the above issues, we assume that the model has an inductive bias that helps the model to focus more on the connectivity relationships between nodes. The model is thus better able to capture and utilize the topological information of the graph. We incorporate the connectivity loss proposed in <cit.> to introduce this inductive bias:
ℒ_con= ‖ Norm( S^TAS )-I_2‖_F,
where Norm( ·) denotes row normalization, I_2 is the 2× 2 identity matrix , and ·_F is the Frobenius norm. Elements a_11 and a_12 in the first row of S^TAS are defined as follows:
a_11=∑_i,jA_ijp( V_i∈G_sub|V_i)p( V_j∈G_sub|V_j),
and
a_12=∑_i,jA_ij p( V_i∈G_sub|V_i) p( V_j∉G_sub|V_j).
Intuitively, if a node belongs to G_sub, its neighboring nodes are highly probable to belong to G_sub as well. Conversely, if a node does not belong to G_sub, its adjacent nodes likely do not belong to G_sub either. Consistently with this, ensuring adequate nodes are assigned to G_sub while reducing redundancy is achieved through a_11/(a_11+a_12)→ 1. This occurs simultaneously with reducing redundancy in G_sub through a_12/(a_11+a_12)→ 0. Analogously, this holds for the complement of G_sub and the elements of the second row of S^TAS.
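Once S and a dense adjacency matrix A are available, the connectivity loss amounts to a few lines (a minimal sketch):

```python
import torch

def connectivity_loss(S: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """L_con = || Norm(S^T A S) - I_2 ||_F with row normalization Norm(.)."""
    sas = S.t() @ A @ S                                   # 2 x 2 matrix containing a_11, a_12, ...
    sas = sas / (sas.sum(dim=1, keepdim=True) + 1e-8)     # row-normalize
    return torch.norm(sas - torch.eye(2, device=S.device), p="fro")
```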
To summarize, we introduce the continuous relaxation with probabilistic node assignment to enhance the optimization process. To address potential challenges stemming from inadequate initialization, such as ambiguous node assignment and unstable training process, we incorporate the inductive bias with the connectivity loss ℒ_con. The overall loss function is refined as:
min_ϕ ,θ ℒ_GIB( ϕ ,θ ,κ^*)=ℒ_inf( q_θ( g_ϕ( G ) ) )+βℒ_MI( κ^*,X̂)+αℒ_con( g_ϕ( G ) ).
The training procedures for the GIB-enabled task-oriented communication system are illustrated in Algorithm <ref>.
§.§ Computational Complexity Analysis
The computational complexity inherent in the proposed methodology primarily emanates from its reliance on GNNs. In our analysis, the GCN is chosen as the representative GNN to elucidate the computational complexity of the system. Characteristically, GCN employs a one-hop receptive field to assimilate local features, subsequently enlarging this receptive field through the stratification of layers. Denoting the number of GCN layers by L, the computational complexity attributed to GNNs can be articulated as 𝒪(|E|∑_i=1^LD_in^(i)D_out^(i)), where |E| signifies the quantity of edges in the graph, and D_in^(i), D_out^(i) represent the input and output dimensions of each respective layer. In parallel, the complexity associated with the MLP layer and the Softmax function within the node assignment mechanism is quantified as 𝒪(mD) and 𝒪(m) respectively, with m indicating the total number of graph nodes and D symbolizing the output dimension of the GNN. Consequently, the cumulative computational complexity of the proposed framework can be comprehensively expressed as 𝒪(|E|∑_i=1^LD_in^(i)D_out^(i)+mD).
§ DIGITIZATION OF GIB-ENABLED TASK-ORIENTED COMMUNICATIONS
This section extends the analog task-oriented communication system, introduced in Section 3, to suit digital communication environments. As outlined in Problem 3, practical applications necessitate compatibility with digital systems. We address this by integrating vector quantization for digital transmission adaptation.
§.§ Digital Transmission
A pivotal strategy for transitioning to a digital framework involves discrete codebook mapping. This technique maps the continuous outputs of the neural network encoder into discrete codewords, as discussed in related works <cit.>. The mapping process employs a predefined discrete codebook alongside nearest neighbor rules <cit.>. Let x_1,x_2,… ,x_n∈ℝ^d be the continuous outputs from the encoder described in Section III. The codebook, denoted as ℰ∈ℝ^K× d consists of K codewords e_1,e_2,… ,e_K∈ℝ^d, where K represents the codebook size. The discrete mapping function is formulated as:
x_i_q=e_k, where k=arg min_j‖ x_i-e_j‖_2.
Employing discrete codebook mapping, the probabilistic model of the system is transformed into a new Markov chain, represented as:
( Y )G → X → Z →Ẑ→X̂→Ŷ,
which adheres to the following relationship:
p( ŷ|g )=p_ϕ( x|g )p_Q( z|x )p_SDC( ẑ|z )p_DQ( x̂|ẑ)p_inf( ŷ|x̂).
Here, the transition from
X to Z entails discretizing feature representations using the codebook.
Z represents the index sequence derived via the nearest neighbor rule, substituting X as the input to the channel.
For the discrete signal Z, we employ the discrete channel defined in (<ref>) for its transmission, that is:
p_SDC( ẑ|z )=p_channel(ẑ|z; ζ=1)
=P_Ẑ|Z(ẑ|z),
where P_Ẑ|Z(ẑ|z) is the transition probability matrix of SDC described in Section II when ζ=1.
In digital transmission, as indicated in (<ref>), each vector x_i from the encoder output is aligned with its nearest embedding vector e_k from the shared codebook. The transmitter's role is simplified to sending the index k to the receiver. The assignment of indices for transmission is defined as:
p(Z_i=k|x_i)={ 1, for k=arg min_j‖ x_i-e_j‖_2; 0, otherwise. }
This results in a continuous vector being translated into a discrete one-hot codeword. However, channel imperfections can lead to erroneous index detection (Ẑ). We utilize the SDC to model this transmission of discrete indices.
§.§ Design of Discrete Codebook
Each codeword e_i in the vector quantizer's codebook defines a Voronoi region 𝒱_i:
𝒱_i={ x∈ℝ^d: ‖ x-e_i‖≤‖ x-e_j‖, for all j≠ i }.
These regions collectively span the entire vector space ℝ^d
to which the encoder's output belongs. The quantization process entails finding the codeword closest to a given vector, which subsequently determines the Voronoi region and index for the vector. Balancing the size of the codebook is crucial as increasing it reduces quantization distortion but also elevates computational complexity.
We opted for a moderate-sized, suboptimally designed codebook to achieve acceptable quantization results. To enhance the codebook's performance, we introduce a learnable aspect, allowing codewords to adapt during training. Additionally, the encoder's output range is constrained to prevent extreme variations.
The non-differentiable nature of the quantization operation poses a challenge for encoder training. To address this, a straight-through estimator is employed, allowing gradients from the decoder input to flow back to the encoder output. Furthermore, a loss term (ℒ_vq) is added to reduce the distance between the encoder output and the corresponding codeword:
ℒ_vq= ‖ sg[ x ]-e ‖_2,
with sg[ ·] representing a stop gradient function.
A commitment loss (ℒ_CM) is also introduced to ensure the encoder's output does not deviate excessively from the codewords:
ℒ_CM= ‖ x-sg[ e ] ‖_2.
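The quantization step, the straight-through estimator, and the two auxiliary losses can be combined into a single module, for example (a sketch with an assumed codebook size; when the EMA update described later is used, the codebook is kept as a buffer rather than a learnable parameter and ℒ_vq can be dropped):

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, K=256, d=32):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(K, d))    # E in R^{K x d}

    def forward(self, x):                                  # x: (n, d) continuous encoder outputs
        idx = torch.cdist(x, self.codebook).argmin(dim=1)  # nearest-neighbour indices to transmit
        e = self.codebook[idx]
        x_q = x + (e - x).detach()                         # straight-through estimator
        l_vq = torch.norm(x.detach() - e, dim=1).mean()    # pulls codewords toward encoder outputs
        l_cm = torch.norm(x - e.detach(), dim=1).mean()    # commitment loss acting on the encoder
        return x_q, idx, l_vq, l_cm
```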
The final objective function (
ℒ_VQ-GIB) encompasses these elements alongside the mutual information and connectivity losses, guiding the entire system:
ℒ_VQ-GIB =ℒ_inf( q_θ( g_ϕ( G ) ) )+βℒ_MI( κ^*,X̂)
+αℒ_con( g_ϕ( G ) )+ℒ_vq+λℒ_CM
subject to the optimized mutual information parameters κ^*:
κ^*=arg max_κ ℒ_MI( κ ,X̂).
Note that, owing to the stop-gradient operator, the term ℒ_vq is effective only for codebook learning and does not train the encoder, whereas the commitment loss ℒ_CM only affects the encoder, pulling its output closer to the selected codeword without influencing codebook learning. Since the volume of the encoder's output space is dimensionless, the commitment loss also keeps the encoder output from growing without bound when the codewords are trained at a different rate than the encoder parameters, which would otherwise make satisfactory quantization mappings difficult to achieve.
For codebook updating, we adopt an exponential moving average (EMA) approach akin to K-means clustering. This method assigns variable weights to cluster centers over training batches, ensuring that the codebook dynamically adapts to the encoder's output.
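For completeness, a sketch of such an EMA update is shown below, following the common VQ-VAE-style recipe; the decay factor and the Laplace-smoothing constant are assumed hyperparameters rather than values taken from the paper.

```python
import torch

@torch.no_grad()
def ema_codebook_update(codebook, ema_count, ema_sum, x, idx, decay=0.99, eps=1e-5):
    """codebook: (K, d) buffer; ema_count: (K,); ema_sum: (K, d); x: (n, d) encoder outputs; idx: (n,)."""
    K = codebook.size(0)
    onehot = torch.zeros(x.size(0), K, device=x.device)
    onehot.scatter_(1, idx.unsqueeze(1), 1.0)
    ema_count.mul_(decay).add_(onehot.sum(dim=0), alpha=1 - decay)  # running assignment counts
    ema_sum.mul_(decay).add_(onehot.t() @ x, alpha=1 - decay)       # running sums of assigned vectors
    n = ema_count.sum()
    stable = (ema_count + eps) / (n + K * eps) * n                  # Laplace smoothing of counts
    codebook.copy_(ema_sum / stable.unsqueeze(1))                   # move codewords to cluster means
```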
§ EXPERIMENTS AND DISCUSSIONS
In this section, we evaluate the performance of the proposed GIB-based task-oriented communication system on graph classification tasks, and investigate the adaptability and effectiveness of continuous and discrete communication systems respectively.
Additionally, ablation studies are also conducted to illustrate the contributions of the MI loss ℒ_MI in GIB and the connectivity loss ℒ_con presented in Section 3, as well as the impact of the trade-off factor β on the system performance.
§.§ Experimental Setup
§.§.§ Datasets
For our graph classification experiments, we carefully select two datasets: COLLAB and PROTEINS.
* COLLAB: This scientific collaboration dataset represents a researcher's ego network, where nodes correspond to researchers and edges indicate collaboration between them. Each researcher's ego network is labeled based on the field to which the researcher belongs, resulting in three possible labels. COLLAB consists of 5,000 graphs, with an average of 74 nodes and 2,457 edges per graph.
* PROTEINS: This dataset comprises 1,113 proteins classified as enzymes or non-enzymes. Nodes in the graph represent amino acids, and edges exist between nodes if the distance between corresponding amino acids is less than 6 angstroms. On average, each graph in the PROTEINS dataset has 39 nodes and 73 edges.
§.§.§ Settings and Baselines
To assess the performance of the proposed method in a graph classification task, we integrate GIB into two distinct backbones: Graph Convolutional Network (GCN) <cit.> and Graph Isomorphism Network (GIN) <cit.>.
We compare the proposed method with other graph-level representation learning methods: InfoGraph based on mean aggregation <cit.> and ASAP (Adaptive Structure Aware Pooling) based on pooling aggregation <cit.> in terms of graph classification accuracy, respectively.
* InfoGraph: InfoGraph is an unsupervised graph-level representation learning method that maximizes the mutual information between the representations of entire graphs and the representations of substructures at different granularity (e.g., nodes, edges, triangles) to make the graph representations adequately capture the features of substructures.
* ASAP: ASAP utilizes self-attention network along with GNNs to capture the importance of each node in a given graph. It also learns a sparse soft cluster assignment for nodes at each layer to pool the subgraphs to form the pooled graph.
For a fair comparison, all methods use the same number of GNN layers in the backbone. Models are trained using Stochastic Gradient Descent (SGD) with the Adam optimizer. We employ 10-fold cross-validation to report classification accuracy in experiments to validate the performance of the models.
§.§.§ Neural Network Architecture
While we do not enforce complete consistency in the network architectures of different approaches, we ensure that the GNN backbone and the number of layers are consistent across methods. The neural network architectures for the proposed method are shown in Tables <ref> and <ref>, which are outlined below. In the tables, num is the number of nodes, batch is the batch size during training, class_n is the number of graph classes, and dim represents the hidden dimension. The size in Table <ref> refers to the size of the codebook, which was set to 256 in the experiment.
* Table <ref> shows the network architecture of the proposed method where GCN is used as the backbone. We use two GCN layers to extract the node features of the graph and then perform node assignment. The output of the Node Assignment layer is multiplied with the full node features extracted by the GCN to obtain the node features of the selected subgraph. This output undergoes power normalization before passing through the AWGN channel.
* Table <ref> illustrates the network architecture of the proposed digital communication system. In discrete communication systems, subgraph representations require quantization before entering the channel. During the training process, the codebook utilized for vector quantization undergoes automatic updates until it reaches a stabilized state. The index sequence obtained from quantization passes through a symmetric discrete channel with a certain error transfer probability, and the decoder performs dequantization based on the codebook shared with the encoder.
We note that the hidden layer dimensions of the proposed method and baseline methods are set to the same. To comply with the original implementation idea of the baseline methods, the output dimension of their encoder network is twice as large as that of the proposed method, consuming more communication resources. In addition, since the vector quantization part is not involved in baselines, the same quantization method as the proposed method is adopted for comparison algorithms.
§.§ Performance of GIB-enabled Task-oriented Communications Without Vector Quantization
In the experiments, we evaluated the robustness of the proposed method amidst varying channel conditions. Prior to transmission, we implemented signal power normalization. During the training phase, a consistent SNR of 5 dB was maintained, whereas for testing, the SNR was methodically varied within a range extending from -15 dB to 25 dB. Two distinct experimental sets were executed, each with a batch size of 128. The first set operated with a hidden dimension of 16, while in the second, this parameter was augmented to 32.
Fig. <ref> graphically illustrates the inference performance of the methods under evaluation across different channel quality scenarios. A discernible trend was noted, wherein the inference accuracy of all three methods demonstrated progressive enhancement in conjunction with rising SNR levels during testing, culminating in a plateau. Specifically, Fig. <ref> (a) delineates the variation of classification accuracy relative to SNR for the three methods on the PROTEINS dataset. Our proposed GIB-based method, alongside the ASAP method, showcased robust classification performance, with the former exhibiting particularly notable efficacy. The InfoGraph method achieved commendable classification accuracy, especially under near-ideal channel conditions and with a higher hidden dimension. Fig. <ref> (b) presents the experimental outcomes on the COLLAB dataset, a social network dataset characterized by complex topology and diverse node interrelations. The presence of densely connected communities within such networks significantly influences the graph's structural information, which in turn impacts the classification task. Both baseline methods exhibited suboptimal performance on this dataset, attributable to their inadequate consideration of graph topology. As explicated in Section III, the connectivity loss
ℒ_con
steers the model towards heightened consideration of the graph's structural characteristics, enhancing the performance of our approach.
Fig. <ref> compares the graph classification performance between our method, which employs the GIN as the backbone, and the alternative method utilizing the GCN. These experiments were conducted on both the PROTEINS and COLLAB datasets, with hidden dimensions set to 16 and 32, respectively. Notably, in the realm of graph classification, the GIN-based method surpassed its GCN-based counterpart. The GCN, a prevalent choice for graph data processing, updates node representations through uniform aggregation of neighboring nodes. In contrast, GIN adopts a distinct methodology, where each node initially aggregates its feature with those of its neighbors via weighted summation. This non-commutative aggregation process, which is indifferent to the order of nodes, imparts to the GIN model an invariance to graph isomorphism, thus enabling it to effectively capture both the structure and global information inherent in the graph.
Given the specific focus on graph classification in this study, the adoption of GIN as the backbone is recommended. However, for a broader spectrum of tasks, the selection of GNNs with varying intrinsic characteristics as backbones can significantly bolster task-specific performance.
§.§ Performance of GIB-enabled Task-Oriented Communication Systems Utilizing Vector Quantization
In this part, the performance of the proposed discrete communication system, which incorporates digital codebooks, is rigorously evaluated. Our experiments are structured to assess system robustness under varying probabilities of correct transmission. Here, the probability of correct transmission, denoted as
ε, signifies the likelihood that the index received by the receiver accurately corresponds with that transmitted by the sender. Consequently, higher values of
ε are indicative of superior channel quality. During the model training phase,
ε is fixed at 0.94, whereas for testing, an array of
ε values, specifically [0.90, 0.92, 0.94, 0.96, 0.98], are examined.
Fig. <ref> graphically represents the classification performance of all three evaluated methods as a function of
ε, with the hidden dimensions set at 16 and 32 respectively. The results delineated in these figures unequivocally demonstrate that our method significantly outperforms the baseline methods under various channel quality scenarios. The integration of InfoGraph with VQ is observed to be substantially ineffective for graph classification tasks within digital communication systems. The ASAP method exhibits moderate effectiveness on the PROTEINS dataset; however, its efficacy is markedly diminished when applied to the more complex COLLAB dataset. As illustrated in Figure <ref> (b), its classification accuracy is confined to approximately 0.6 or lower, indicating that minor transmission errors exert negligible impact on the outcomes. The curve illustrating the fluctuation of classification accuracy in relation to
ε is characterized by irregular and unpredictable oscillations. In stark contrast, our proposed method demonstrates robust performance across both datasets, with classification accuracy exhibiting a modest but consistent upward trend in line with improvements in channel quality. This outcome aligns well with our initial hypotheses and theoretical underpinnings.
To further evaluate the performance of our proposed task-oriented digital communication system based on the GIB and VQ, we compared it against a traditional digital communication method. Additionally, to validate the effectiveness of the VQ mechanism in task-oriented graph data transmission scenarios, we conducted an experiment combining GIB with 8-bit scalar quantization followed by Quadrature Phase Shift Keying (QPSK) modulation.
Given the non-uniformity in the dimensionality of the signals produced by these methods, we apply the same per-symbol error rate to all methods, so that the assessment reflects the methods' ability to handle equivalent levels of channel-induced errors rather than their inherent dimensions. Models were trained with a fixed error rate of 0.01 and tested across a range of error rates from 0.006 to 0.014. These experiments were carried out on the PROTEINS and COLLAB datasets.
The experimental results are shown in Fig. <ref>.
On PROTEINS, all methods exhibited comparable outcomes, since the PROTEINS dataset is relatively simple and does not require an elaborate symbolic representation. On COLLAB, however, the GIB-based methods perform significantly better than the traditional digital communication method, which reflects the effectiveness of GIB for extracting task-related information from complex graph data. Between the two GIB-based methods, vector quantization significantly outperforms 8-bit scalar quantization, highlighting its advantages in handling datasets with higher dimensionality and complexity.
The superior performance of the proposed task-oriented digital communication system based on the GIB and VQ on the COLLAB dataset highlights its potential advantages over traditional communication systems and traditional digital schemes, particularly in scenarios where data is characterized by high dimensionality and intricate patterns.
§.§ Ablation Study
To empirically ascertain the contribution of the individual components in our proposed methodology, we executed a series of ablation studies. Specifically, we extracted ℒ_MI and ℒ_con from the loss function, thereby deriving two distinct variants of our method:
GIB lacking ℒ_MI
and GIB lacking ℒ_con.
§.§.§ Effectiveness of ℒ_MI
Observations from Tables <ref> and <ref> indicate that, across both datasets, the system based on GIB and its variant without the ℒ_MI component consistently exhibit superior performance at a hidden dimension of 32 compared to a hidden dimension of 16.
This is because an increase in the hidden layer dimension increases the capacity of the model, which means that the network can capture more details in the data. Especially for graph data with complex structures, higher model capacity can help the model learn richer representations and thus improve performance.
It is worth noting that the size of the hidden dimension has a more significant effect on the performance of the variant without ℒ_MI than on GIB.
Without ℒ_MI, the model focuses on maximizing the mutual information between the input graph data and the extracted representations while neglecting compression. Consequently, the extracted representations contain much redundant information that crowds out useful information, and only a larger hidden dimension, i.e., a larger model capacity, allows the variant to capture the richer information needed for the task. In other words, because GIB explicitly accounts for information compression, it can achieve good performance even with a relatively small model capacity.
§.§.§ Effectiveness of ℒ_con
Furthermore, the absence of the connectivity loss constraint,
ℒ_con
, not only detrimentally impacts system performance on certain datasets but also instigates volatility in node assignments, subsequently leading to inconsistent graph classification results.
To illustrate the significance of connectivity loss and its impact on system reliability, we adopt the arithmetic mean of outcomes derived from a 10-fold cross-validation process as the benchmark for task execution accuracy. Complementarily, the standard deviation computed from these same 10 iterations serves as the metric to gauge the consistency and robustness of our system's performance under different communication conditions.
The analysis shows that, under various channel conditions, the standard deviation of GIB with ℒ_con is smaller than that of the variant without ℒ_con. This result suggests that the connectivity loss we incorporate plays a critical role in improving the robustness and stability of the system performance.
§.§.§ Influence of Variations in β on System Performance
The hyperparameter β, associated with the mutual information term, plays a pivotal role in modulating the extent of mutual information between the graph features and the subgraph features, thereby reflecting the relative importance of the graph's global structure versus its local features. To explore this interplay, we conducted experiments assessing the impact of varying β values on classification accuracy, with the findings presented in Table <ref>. The experimental data reveal that the system attains optimal performance at a moderate MI factor level, in contrast to either extremely high or low levels.
When β is set to a low value, the model predominantly accentuates local features, while marginalizing the global structural information of the graph. This can lead to the model's failure in capturing critical global relationships within the graph, culminating in a diminished classification accuracy. Inversely, a high β value results in the model prioritizing the global graph structure at the expense of local features. Such an overemphasis on the macro structure of the graph renders the classifier less receptive to nuanced local features, thereby constraining the model's task execution capabilities. By selecting a moderate value for β, the model achieves an equilibrium between harnessing the global structure and incorporating local features. This equilibrium enables the model to effectively utilize the intrinsic global structure of the graph while concurrently considering the node-specific local features. The resultant synergistic integration of global and local features thus significantly enhances the model's performance in graph classification tasks.
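For reference, in the graph information bottleneck literature this trade-off is usually expressed through an objective of the form
max_{G_sub} I(Y; G_sub) − β I(G; G_sub),
where the first term rewards subgraph representations that are predictive of the task label Y and the second term, weighted by β, limits how much information about the full input graph G is retained. The exact loss optimised in this work additionally involves ℒ_con and the variational estimators of Section III, so the expression above should be read only as a schematic summary of the role of β.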
§ DISCUSSION
Task-oriented graph data transmission is a promising research avenue, especially relevant in smart city scenarios such as traffic management and environmental monitoring, where distributed edge computing plays a crucial role. In these contexts, numerous sensors and devices across the city communicate with central servers for data processing. This communication technique significantly reduces bandwidth usage while maintaining accurate inference capabilities, making it highly suitable for real-time applications in urban settings. To effectively translate the theoretical foundations of the GIB approach into tangible applications, it is essential to demonstrate its practical utility in addressing real-world, task-oriented communication challenges. Here are some heuristics worth exploring.
§.§ Heuristics for Inference Goal Formulation
Real-world scenarios encompass a broad spectrum of inference objectives, contingent upon the specific application context. For example, in the realm of autonomous vehicles, the focus might be on detecting and responding to traffic signals and obstacles. In contrast, industrial IoT may concentrate on predicting equipment failures. The GIB framework is designed to fine-tune the communication process, thereby amplifying the precision and efficiency tailored to these particular tasks. The GIB model can be crafted to cater to either uni-task or multi-tasks. While task-specific models are optimized for a unique objective, task-agnostic models can be expanded to address a variety of tasks through strategic architectural enhancements and transfer learning techniques. This approach necessitates training the model across a diverse array of tasks and datasets, fostering its ability to generalize across varied communication scenarios.
§.§ Heuristics for GIB Model Training
The GIB model should be trained on data that mirrors the complexities of its target real-world task. This process entails amassing comprehensive datasets that encompass both raw input data, such as sensor readings or visual imagery, and the relevant task-specific labels, such as classifications of objects or indicators of faults. It is imperative that these data are meticulously annotated and preprocessed to construct a graph that encapsulates the interconnections and interdependencies among various data elements. The efficacy of GIB framework should be substantiated through rigorous experimentation with datasets that pertain to different task-oriented communication applications. For instance, industrial datasets could be utilized to train the model in predictive maintenance tasks, employing sensor data to anticipate equipment failures. Additionally, deploying the GIB framework on medical imaging datasets could enhance communication efficiency in tasks such as diagnosing diseases and monitoring patient conditions.
§.§ Heuristics for Training Dataset Construction
To construct a training dataset for tasks without a standard benchmark, the following steps can be used. First, clearly specify the task, such as node classification or link prediction. Second, collect the raw data needed for the graph-based task, typically in the form of nodes, edges, and node features. Third, preprocess the data for graph-based learning by normalizing node features, removing self-loops, and converting the graph to an appropriate format (e.g., adjacency matrix or edge list). Fourth, construct the graph representations that serve as inputs to the GNN, typically adjacency matrices and node feature matrices. Fifth, generate pairs of input graphs and target outputs specific to the task: for node classification, the input is a subgraph centered around each node and the output is the class label of the central node; for link prediction, the input is a subgraph around a candidate link and the target is a binary label indicating whether the link exists. Sixth, apply GIB to identify and retain the most informative parts of the graph while discarding unnecessary information, by encoding the graph data into a compressed representation and decoding it to approximate the original or task-specific output. Finally, divide the dataset into training, validation, and test splits to ensure the model generalizes to unseen data. Additionally, feature engineering, such as aggregating neighborhood information and generating higher-order features, can enrich node features, and data augmentation can improve the model's robustness in data-hungry cases.
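As an illustration of the pair-construction step (the field names and the hop count are hypothetical; networkx is used only for convenience), a node-classification dataset could be assembled as follows:

# Illustrative sketch of the input/target pair construction described above for a node
# classification task: k-hop subgraphs around each node paired with that node's label.
import networkx as nx

def build_pairs(G, labels, num_hops=2):
    # G: networkx graph; labels: dict mapping each node to its class label
    pairs = []
    for v in G.nodes():
        nodes = nx.single_source_shortest_path_length(G, v, cutoff=num_hops).keys()
        subgraph = G.subgraph(nodes).copy()
        pairs.append((subgraph, labels[v]))   # (input subgraph, target label)
    return pairs
    # for link prediction the pair would instead be (subgraph around a candidate edge,
    # binary label indicating whether the edge exists)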
§ CONCLUSION
In this study, we explored GIB-based task-oriented communication for graph data transmission, focusing on optimizing mutual information between received codewords and the task goal while reducing the mutual information with the original graph representation. Utilizing variational approximation, Monte Carlo sampling, and MINE, we devised a workable objective function, addressing the challenge of computing mutual information for irregular graph data. We also incorporated a connectivity loss term to account for community structure in graph data, which improved subgraph selection and stabilized training, as confirmed by our experiments. Furthermore, we adapted our method for digital transmission using vector quantization. Our tests across various SDC channel qualities consistently showed strong performance, underscoring the method's adaptability and efficacy. Future work will explore more practical application areas and include hands-on experiments with real-world data.
|
http://arxiv.org/abs/2409.03705v1 | 20240905170844 | The loop equations for noncommutative geometries on quivers | [
"Carlos Perez-Sanchez"
] | math-ph | [
"math-ph",
"hep-th",
"math.MP",
"math.OA",
"math.PR",
"58B34 (Primary) 16G20, 81T75, 81T13, 05E10, 47-XX (Secondary)"
] |
§ ABSTRACT
We define a path integral over Dirac operators
that averages over noncommutative geometries on a fixed graph,
as the title reveals, partially using quiver representations.
We prove algebraic relations that are satisfied by the
expectation value of the respective observables, computed in terms of
integrals over unitary groups, with weights defined by the spectral action. These equations generalise the
Makeenko-Migdal equations—the constraints of lattice gauge theory—from lattices to arbitrary graphs.
As a perspective, these constraints are combined with
positivity conditions (on a matrix parametrised by
compositions of Wilson loops). A simple example
of this combination known as `bootstrap' is fully worked out.
The loop equations for noncommutative geometries on quivers
Carlos Perez-Sanchez
5th September 2024
======================================================================
§ INTRODUCTION
Before discussing our problem in its due context,
we describe it aridly
and postpone its motivation for Section <ref>. For integers n, N > 1,
fix a polynomial
S∈ℂ_⟨ 2 n⟩=ℂ⟨ u_1, u_1^*, u_2, u_2^*,…, u_n,u_n^*⟩
in noncommutative u-variables satisfying u_ju_j^*=1=u_j^* u_j for j=1,…, n.
Consider a family of integrals of the type
I_β= ∫_U(N)^nβ(U_1,U^*_1,…, U_n,U_n^*)
e^N S(U_1,U^*_1,…, U_n,U_n^*) dU_1 dU_2 ⋯ dU_n, β∈ℂ_⟨ 2 n ⟩,
with each factor dU_i being the Haar measure on U(N).
Assuming that S is real-valued over the whole integration domain,
we derive the loop equations, that is to say,
algebraic relations among the integrals {I_β}_β∈ℐ parametrised by a certain family ℐ⊂ℂ_⟨ 2n ⟩.
This type of integrals has been considered by physicists in the context of
lattice gauge field theory. In mathematics, integrals over the unitary group
are relevant in the context of
Weingarten-calculus <cit.>, developed mainly by
Collins and collaborators (e.g. <cit.>).
§.§ Motivation: Random matrix theory and noncommutative geometry
Our interest in integrals of the type (<ref>) emerges from
Connes' noncommutative geometrical <cit.> approach to fundamental interactions, in which geometric notions are mainly governed by a
self-adjoint operator D named after Dirac. In this setting,
the physical action
S(D) is claimed to depend only
on (the spectrum of) D and is known as spectral action <cit.>.
Our problem is to evaluate the moments
that the spectral action yields via
𝔼 [ h(D) ] = 1/𝒵∫_Dirac h(D) e^-S(D) dD, 𝔼 [1 ] =1, h(D)∈,
for an ensemble of Dirac operators D (the normalisation condition defines 𝒵). Of course, this requires to have defined
the measure D over such ensemble of
Dirac operators D.
(The original problem,
as it appears in <cit.>,
considers also fermions, as it has been recently
addressed in <cit.>, but which we do not include yet.)
Part of the relatively vivid interest in
the problem (<ref>) during the last decade is
due to the reformulation <cit.>
of fuzzy spaces[We do not
aim at a comprehensive review here, for fuzzy spaces see e.g. <cit.>
and the works of Rieffel
<cit.> (and references therein)
that address, from diverse mathematical angles, the rigorous convergence of matrix algebras
to the sphere. We are also not reviewing all the quantisation approaches either;
for a Batalin-Vilkovisky approach: cf
<cit.>
for Tate-Koszul resolutions applied to a model of 2× 2-matrices
and
<cit.> for the
homological-perturbative approach to
Dirac-operator valued integrals.
]
as finite-dimensional spectral triples. This led to the application of
random matrix theory tools
<cit.> that followed to the first
numerical results <cit.>.
All these works deal with multimatrix interactions that include
a product of traces (as opposed to
the ordinary interactions that are single a traces of a noncommutative polynomial).
Independently,
<cit.>
Taylor-expands the spectral action as a matrix model of the form
V(M) = ∑_l=1^∞∑_i_1,i_2,…,i_l F_i_1,i_2,…, i_l M_i_1,i_2
M_i_2,i_3⋯ M_i_l,i_1. This expansion was shown in <cit.> to be
convergent
and, taking some elements of <cit.> and with own techniques, to possess a neat reorganization in terms of universal Chern-Simons forms
and Yang-Mills forms integrated against (B,b)-cocycles that do depend on the geometry. The resulting model
V(M) above goes beyond the solved <cit.> generalisations
of the Kontsevich matrix model <cit.> (in which only the unitary invariance
of the quadratic term is broken) known as Grosse-Wulkenhaar model <cit.>.
These two independent approaches portend a symbiosis between
random matrix theory and noncommutative geometry.
Both the multiple trace interactions and the unitary broken interactions
could motivate (if they have not yet) new developments in random matrix theory.
And vice versa, the
path-integral quantisation (<ref>) of noncommutative
geometries seems
hopeless without the intervention of random matrix theory[The only
alternative known to the author is
the use of Choi-Effros operators systems <cit.> (cf. also <cit.>) that emerge
when one assumes (or rather, when one accepts) that only
a finite part of the Dirac spectrum is measurable. The price to pay is nonassociativity.
].
§.§ Ensembles of unitary matrices in noncommutative geometry
The interaction between these two disciplines has taken place in hermitian grounds.
In this letter,
integrals over Dirac operators boil down to
ensembles of unitary matrices (they are also unitary-invariant,
like ordinary hermitian matrix ensembles, but
unitary ensembles integrate over unitary random matrices). These
can be considered as an approach to average over
`noncommutative geometries
on a graph'.
When the graph is provided with
additional structure,
it might be grasped as a discretisation of space.
For instance, edges would carry a representation, and vertices
equivariant maps, in a
spin network approach; here, we refrain from including information that
associated to gravitational degrees of freedom and
address exclusively the problem of gauge interactions.
The background geometry is therefore fixed
and the finiteness of the unitary groups appearing is not
a shortage of the theory; as a caveat,
they are not to be interpreted as a truncation of infinite-dimensional
symmetries (but to be compared with the unitary structure group of
Yang-Mills, for example).Representation theory does still play a role, but rather
in the context of quiver representations in a certain category
that emerges from noncommutativity geometry as
exposed in <cit.> after the fundamental ideas of <cit.>.
We can now restate the aim of this letter as follows:
Define a partition function
for noncommutative geometries on a graph—that is, define a measure
over all Dirac operators—and prove the algebraic relations that the respective
observables shall satisfy. Such quantities
have the form I_β as in eq. (<ref>) and are called Wilson loops (although
not each I_β is a Wilson loop).
(Proper definitions follow in the main text.)
Such relations generalise the Makeenko-Migdal equations, the loop equations in
lattice gauge theory. After
introducing the setting in
Section <ref>, we prove the main result
in Section <ref> and conclude with
a fully worked-out application
that mixes the loop equations with positivity conditions
of a certain matrix (`bootstrap') in Section <ref>.
§ QUIVER REPRESENTATIONS AND NONCOMMUTATIVE GEOMETRY
We call quiver Q a directed multigraph.
Since Q is directed, there are maps s,t: Q_1 ⇉ Q_0
(from the edge-set Q_1 to the vertex-set Q_0) determining the vertex s(e) at which
an edge e begins, and the one t(e) where it ends.
Multiple edges e,e' ∈ Q_1 and self-loops o_v∈ Q_1 at a
certain vertex v∈ Q_0 are allowed, namely { s(e) , t(e) } = {s(e'),t(e')} as sets,
and s(o)=t(o) =v, respectively.
One interprets a quiver Q as a category
whose objects are Q_0. The morphisms _Q(v, w)
are the paths from v to w, namely edge-sequences
γ=(e_1 e_2⋯ e_n) with e_1,…, e_n∈ Q_1 and s(γ)=s(e_1)=v,
and t(γ)=t(e_n)=w as well as t(e_j)=s(e_j+1) for j=1,…, n-1.
We shall write γ:v → w if v=s(γ) and w=t(γ)
and call ℓ(γ)=n the length of γ.
The path γ with reversed order is denoted by γ̅=(e_n e_n-1⋯ e_2e_1)
(not to be confused with the inverse morphism of γ).
Obviously, unless otherwise stated, paths are directed, but it will prove useful to
consider also paths in Γ Q,
the underlying graph of the quiver (Q with
forgotten orientations). If s(γ)=t(γ)
we call γ a loop. The space of loops[We comment for sake of completeness,
that the space of endomorphisms Ω_v(Q)=_Q(v,v) has as identity the constant zero-length path, which does play a role in the theory
of path algebras while constructing an equivalence between the category of representation
and modules of the path algebra <cit.>, but here we do not need this explicitly.] at v, is denoted here Ω_v(Q),
that is Ω_v(Q)=_Q(v,v),
and Ω Q will denote the space ∪_v ∈Q_0Ω_v(Q)
of all loops.
A quiver exists essentially to be represented (otherwise one would say multidigraph)
in a category 𝒞.
A 𝒞-representation of Q is by definition a functor from Q
to 𝒞.
§.§ The spectral triple associated to a quiver representation
We restrict the discussion to finite dimensions
and introduce the setting of <cit.>.
By definition, an object in the category of prespectral triples is a pair (A,H) of a *-algebra A
faithfully *-represented, λ: A ↷ H, in an inner product -vector space H (*-represented means here,
that λ(a^*) is the adjoint operator of λ(a) for all a∈ A).
A morphisms in _ ( A_s,H_s ; A_t,H_t )
is a couple (ϕ,U) of an involutive algebra map ϕ: A_s → A_t as well as
a unitary map U: H_s → H_t. As part of the definition,
a morphism should in addition satisfy U λ_s (a)
U^* = λ_t [ ϕ (a) ] for all a∈ A.
In other words, a -representation of Q
associates with each vertex v of Q a prespectral triple (A_v,H_v) ∈ and
with any path γ: v→ w a morphism (ϕ_γ,γ ) :
(A_v,H_v) → (A_w,H_w)
in such a way that if γ = (e_1 e_2 ⋯ e_n), then
γ= U_e_n⋯ U_e_1 and ϕ_γ= ϕ_e_n∘ϕ_e_n-1∘⋯∘ϕ_e_1,
where ϕ_e_j : A_s(e_j)→ A_t(e_j) and U_e_j: H_s(e_j)→ H_t(e_j)
form a -morphism.
We refer to γ as the holonomy of γ.
If two vertices are connected by a path γ, notice that
γ is a unitarity and H_s(γ) = H_t(γ).
If Q is connected there might be no (directed) path between two given vertices v and w;
it is however easy, if necessary by inverting some subpaths of a path γ̃ in Γ Q
that connects v with w, to establish the constancy of the map Q_0∋ v↦ dim H_v = N;
we call such constant N = dim R, the dimension of the representation R, somewhat abusively. A spectral triple (A,H,D)
is a prespectral triple (A,H) together with a self-adjoint element D ∈(H), referred to as Dirac operator.
(This terminology comes from the hard fact that, if certain operators are added
to the [in that case, infinite-dimensional] spectral triple and if, together with D,
such operators, satisfy a meticulous list of axioms, then D is the spin geometry Dirac operator <cit.>; see also
<cit.> for an introduction geared to physicists).
As a side note, it is possible to compute the space of all
-representations of Q. It was proven in <cit.>
that such space—which in fact forms the category of representations—can be described in
terms of products of unitary groups subordinated to
combinatorial devices called Bratteli networks. This will not be recalled here,
since we will focus only on the spectral action.
Nevertheless, it is important to observe that, in stark contrast with ordinary 𝖵𝖾𝖼𝗍_-quiver representations, providing labels to the vertices is not enough to determine a -quiver representation.
The lifts of whole paths should exist, and this needs the compatibility of the maps ϕ_v
at all vertices v, which in turn is what the so-called
Bratteli networks guarantee (concretely
*-algebra maps for M_m() → M_n() for m>n do not exist, and
if a representation yields A_s(e) = m and A_t(e) = n for some edge e,
a lift fails, cf. <cit.>).
Despite this, we denote representations of quivers as
R={ (A_v,H_v), (ϕ_e,U_e) } _v∈ Q_0,e∈ Q_1
instead of R={ (A_v,H_v), (ϕ_γ,γ) } _v∈ Q_0,γ∈Ω Q,
under the tacit assumption that lifts of whole paths exist.
We associate now a spectral triple to a given -representation
R={ (A_v,H_v), (ϕ_e,U_e) } _v∈ Q_0,e∈ Q_1 of a
connected quiver Q.
We define the Dirac operator associated to R as the matrix
D_Q(R) ∈ M_# Q_0 (ℂ) ⊗ M_N(ℂ)
with matrix entries [D_Q(R)]_v,w∈ M_N(ℂ) in the second factor given by
[D_Q(R)]_v,w = (
∑_e ∈ s^-1 (v) ∩ t^-1 (w)
U_e) +(
∑_e ∈ t^-1 (v) ∩ s^-1 (w)
U_e^* ) (v,w∈ Q_0).
By construction, this operator is self-adjoint, and crucially for our purposes,
the next collection
(A_Q(R), H_Q(R), D_Q(R) ) =
( ⊕ _ v∈ Q_0 A_v, ⊕ _ v∈ Q_0 H_v, D_Q(R) )
is a spectral triple.
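A purely illustrative numerical sketch of this block construction may help fix ideas (the quiver, the unitaries, and the dimension N below are placeholders rather than the data of an actual representation):

# Sketch of the block matrix D_Q(R): vertices label N x N blocks, and each directed
# edge e: v -> w contributes U_e to the (v, w) block and U_e^* to the (w, v) block,
# so the resulting matrix is self-adjoint by construction.
import numpy as np

def dirac_operator(num_vertices, edges, unitaries, N):
    # edges: list of (source, target) pairs; unitaries: one N x N unitary per edge
    D = np.zeros((num_vertices * N, num_vertices * N), dtype=complex)
    for (v, w), U in zip(edges, unitaries):
        D[v*N:(v+1)*N, w*N:(w+1)*N] += U
        D[w*N:(w+1)*N, v*N:(v+1)*N] += U.conj().T
    return D

edges = [(0, 1), (1, 2), (2, 0)]                        # a directed triangle, say
unitaries = [np.eye(2, dtype=complex) for _ in edges]   # placeholder unitaries, N = 2
D = dirac_operator(3, edges, unitaries, 2)
assert np.allclose(D, D.conj().T)                       # self-adjointness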
§.§ The spectral action and the partition function
Given a polynomial f(x) = f_0 + f_1 x^1 +f_2 x^2 +…+ f_d x^d in real variables f_0,f_1,…, f_d ∈ℝ,
and a quiver representation, the spectral action on a quiver reads
S( D) = _H f(D), where we abbreviate D=D_Q(R).
It is possible to compute the spectral action as a loop expansion in terms of generalised
plaquettes γ as follows
_H f(D)= ∑_k=1^d f_k ∑_v ∈ Q_0 γ∈Ω_v(Q)
ℓ(γ) =k γ,
where in the rhs is the trace of M_N() with 1= R=N.
The proof of eq. (<ref>) is given in <cit.>, but the reader will recognise this
formula as a noncommutative
generalisation of the following well-known fact in graph theory:
if C_G denotes the adjacency matrix of a graph G,
the number of length-n paths in G between two of its vertices, i and j, is the entry
[C_G^n] _i,j of the matrix (C_G)^n.
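A quick numerical illustration of this fact, for the directed 3-cycle (purely for orientation, not part of the argument):

# For the directed triangle graph, [C^3]_{i,i} = 1 because the unique length-3 walk
# from a vertex back to itself is the cycle; off-diagonal entries of C^3 vanish.
import numpy as np

C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])             # adjacency matrix of a directed 3-cycle
print(np.linalg.matrix_power(C, 3))   # identity matrix: one closed walk of length 3 per vertex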
Now we turn to the space of -representations of Q.
Let
A_v=⊕_j=1^l_vn_v, j
denote the algebra associated by R to the vertex v (so l_v is the number
of simple subalgebras of A_v).
Let r_v,j be the multiplicity of the action of the factor
n_v,j⊂ A_v on the Hilbert space H_v,
that is H_v= ⊕_j=1^l_v^ r_v,j⊗^n_v,j
where n_v,j only acts non-trivially on ^n_v,j via the fundamental
representation.
Once labels to the vertices are consistently[`Consistently'
is quantitatively described in terms of certain transition matrices in <cit.>, and qualitatively means that the labels
are such that involutive maps A_s(e)→ A_t(e) always exist for each e∈ Q_1, and
several dimension-matching checks. The reader will note that we do not
include the minimal amount of information in each group at the edges.
The origin of the projective groups PU(n) is that
U(n) acts via the adjoint action.
] assigned,
the possible labels of an edge e are parametrised by
∏_j=1,…, l_t(e)(n_t(e),j)
<cit.>,
and at each vertex it holds ∑_j r_v,j× n_v,j = N,
since each vector space H_v is isomorphic to ^N.
This in turn allows us to embed
each (n_t(e),j) as a subgroup of the unitarities
(N) of H_t(e) and motivates the following measure.
Given an N-dimensional representation R={ (A_v,H_v), (ϕ_e,U_e) } _v∈ Q_0,e∈ Q_1
of a connected quiver Q, we define the Dirac operator measure on Q
as
dD := ∏_(v,w) ∈ Q_0× Q_0 d[D_Q(R)]_v,w, where d[ D_Q(R)]_v,w := ∏ _ e ∈ s^-1 (v) ∩ t^-1 (w) ∏_j=1^l_t(e) du_e,j,
with du_e,j the Haar measure on ( n_t(e),j ),
where u_e,j sits in the matrix U_e associated to e by R in the respective block-diagonal entry in
U_e = (
1_r_t(e),1 ⊗ u_e,1
, 1_r_t(e),2⊗ u_e,2 ,⋯,
1_r_ t(e) ,l_t(e)⊗ u_e, l_t(e) ).
Therefore D
is the product Haar measure on
∏_e ∈ Q_1 ∏_j=1^l_t(e) ( n_t(e),j)
seen as subgroup of (N)^# Q_1.
The partition function of a given -representation R of a quiver Q
reads
𝒵_Q,R = ∫ e^- S( D ) dD,
N = dim R and D=D_Q(R).
For any β∈Ω Q,
a Wilson loop[We refer both to β and to 𝔼 [ β]
ambiguously as Wilson loops.] is by definition
𝔼 [ ( β) ]:=
1/𝒵_Q,R∫ (β ) e^- N S(D) dD.
For fixed N, the partition function
𝒵 _Q= ∑_R -rep of Q^ R=N𝒵_Q,R
seems to be a more interesting
quantity, or even more so the sum over
a class of quivers Q encoding different
background geometries, 𝒵=∑_Q ∑_R -rep of Q^ R=N𝒵_Q,R. For the moment we content ourselves with the partition function (<ref>)
for a fixed representation R and a fixed quiver Q.
§ THE MAKEENKO-MIGDAL LOOP EQUATIONS FOR THE SPECTRAL ACTION
§.§ Notation
We now derive the constraints on the set of Wilson loops.
We root an edge ∈ Q_1 with s() ≠ t(), and consider its associated
unitarity U_ as given by a fixed -representation R={ (A_v,H_v) ,(ϕ_e, U_e)}_v,e
of Q. Now consider the infinitesimal variation of the spectral action
by the change of variable exclusively
for the unitarity U_
at the edge as follows. Let
U_↦ U_'= ^ Y U_ , Y ∈𝔰𝔲(N) , =√(-1),
where Y is given in terms of arbitrary matrices y_k∈𝔰𝔲(n_t(e),k) for k=1,2,…,
l_t()=: L by
^ Y := [ 1_r_t(e),1 ⊗exp( y_1 ),
1_r_t(e),2 ⊗exp( y_2 ), …,
1_r_t(e), L ⊗exp ( y_ L ) ] .
One should keep in mind that this implies also the substitution
U_ ^*
↦ (U_')^*
= U_^* ^- Y, as it follows from the change (<ref>).
This rule defines a new representation R' differing from R
only by the value of the unitarities at the edge , that is
R'= {(A_v,H_v) ,(ϕ_e, exp( δ_e,e_0 Y ) U_e}_v ∈ Q_0,e ∈ Q_1,
where δ_e,e' is the indicator function on
the edge-set.
Assume that along a given path γ
the combinations e e̅ and e̅ e are absent
for each edge e ∈γ. We call this type of paths reduced (Fig. <ref>)
and it is trivial to see that reduction of a path (i.e. removing those pairs) yields a new one
with unaltered holonomy.
Consider then a reduced loop γ that appears in the spectral action and
contains the rooted edge .
This assumption allows (w.l.o.g. due to cyclic reordering) the decomposition
γ = ^ϵ_1α_1 ^ϵ_2α_2 ⋯^ϵ_mα_m
(cf. Fig. <ref>) where each
of ϵ_1,ϵ_2,…,ϵ_n ∈{+1,-1} is a sign.
This convention means that
^ϵ= is the edge
backwards if ϵ=-1, while of course ^ϵ is itself if ϵ=1.
(The condition that γ starts with implies ϵ_1=1 above, but leaving
this implicit is convenient.)
By asking that each subpath α_1,…, α_m ⊂γ does not
contain neither nor , one
uniquely determines them.
For another loop β, which also starts with ,
under the same assumption that and do not appear consecutively
in β in any order,
a similar decomposition holds
β = ^σ_1μ_1 ^σ_2μ_2 ⋯^σ_pμ_p
in terms of signs σ_j ∈{-1,+1}
and paths μ_j not containing neither the rooted edge
nor . The only difference in notation —which we will keep throughout—
is that γ will refer to generalised plaquettes (i.e. contribution to the spectral action)
while β will be the path of a Wilson loop.
Take again the polynomial f(x)=f_0+ f_1 x + f_2 x^2 + … + f_d x^d,
and rephrase the spectral action of eq. (<ref>) as
S(D_Q)= f(D_Q) = γ∈Ω Q
γ reduced g_γγ.
Now g_γ is a function of f_ℓ(γ) but possibly also
of f_ℓ(γ) + 2 , f_ℓ(γ) + 4 ,…, whenever these last
coefficients are non-zero. The reason for the contribution
of the higher coefficients is the possible appearance of a contiguous pair of edges
e, e̅, for which the respective unitarities will satisfy U_e U^*_e=1 =U_e^* U_e.
These cancellations are not detected
by the holonomy, which is the criterion used in (<ref>) to collect all terms
(instead of using, as in eq. (<ref>), the f_0,…, f_d coefficients and
performing directly the sum over paths).
For instance, if γ is the path in
Fig. <ref> (b),
then g_γ depends on f_ℓ(γ) and f_ℓ(γ)+8, since
Fig. <ref> (a) contributes the same to the spectral action.
The function g_γ=g_γ(f_0,f_1,…, f_d)
is of course quiver-dependent.
§.§ Main statement
The Makeenko-Migdal or loop equations
we are about to generalise appeared first in lattice quantum chromodynamics
<cit.>. They have been a fundamental ingredient in
the construction of
Yang-Mills theory in two dimensions <cit.> in rigorous probabilistic terms.
Let R be a representation of a connected quiver Q and let N= R.
Root an edge of Q and abbreviate by
U=U_ the unitarity that R determines for . Then
for any reduced loop β
the following relation among Wilson loops holds :
𝔼[
j =1
σ_j =+1 ^p 1/N ( U^σ_1μ_1 ⋯ U^σ_j-1μ_j-1 ) 1/N (U^σ_j μ_j⋯ U^σ_pμ_p )
-
j =1
σ_j =-1 ^p 1/N ( μ_1 U^σ_2μ_2⋯ U^σ_j-1μ_j-1 ) 1/N (μ_j U^σ_j+1⋯ U^σ_pμ_p )
]
= γ∈ S(D)
γ reduced g_γ𝔼[
j =1
ϵ_j =+1 ^m
1/N (
β· U^ϵ_jα_j ⋯α_m U^ϵ_1α_1 ⋯ U^ϵ_j-1α_j-1 )
-
j =1
ϵ_j =-1 ^m 1/N ( β·α_j U^ϵ_j+1⋯α_m U^ϵ_1α_1 ⋯α_j-1 U^ϵ_j ) ] .
Notice that the second line takes the expectation value of
1N ( U^σ_1μ_1 ⋯ U^σ_j-1μ_j-1 U^σ_j ) ×1/N (μ_j ⋯ U^σ_pμ_p ),
but σ_j being -1 allows for a cancellation, hence the
apparent lack of harmony between the two first lines of the lhs.
We also stress that the first term in the lhs, which corresponds to j=1=σ_1,
yields the input Wilson loop β in the first trace and a constant path in the second;
the latter yields a factor of N, which is cancelled by its prefactor.
The loop or Dyson-Schwinger or Makeenko-Migdal equations follow from the identity
∫∑_a,b=1^N(∂_Y)_a,b{ (_R' )β_b,a×^- N S ( D' ) } D =0 D'=D_Q(R')
(The entries of the matrix derivative are (∂_Y)_a,b = ∂/∂ Y_b,a.) This identity
follows from the invariance of the Haar measure at the rooted edge
under the transformation
(<ref>), implying
D'= D. Below, we show that this implies
𝔼[ ( 1/N⊗1/N) (∂_Y β ) ]
=𝔼[ 1/N (∂_Y S β ) ],
and compute each quantity inside the trace(s). On the lhs, the matrix derivative is then the Rota-Stein-Turnbull noncommutative derivation
∂_Y Y^k+1 =∑_l=0^k Y^l ⊗ Y^k-l (Y^*=Y ∈ N, k=_≥ 0)
while on the rhs, the matrix yields Voiculescu's cyclic derivation, since the quantity
it derives, S, contains a trace. Recall that is multiplicative,
one has
γ = U ^ϵ_1α_1
U^ϵ_2α_2 U^ϵ_3⋯α_m-1
U ^ϵ_mα_m.
With respect to the transformed representation R'
we can compute the holonomy _R'δ of any path δ. This depends on Y and but we use a prime
in favor of a light notation and write only ' δ.
Since none of the subpaths α_j contains
the transformed edges and , one has
' α_j = α_j, so
' (γ)
=U ^'ϵ_1α_1
U^'ϵ_2α_2 U^'ϵ_3⋯
U ^'ϵ_mα_m
where U^'ϵ is ^ Y U if ϵ=1
and U^*^- Y if ϵ=-1.
Therefore the variation of the loop γ writes
[∂_Y ' γ ] |_Y=0 =
j =1
ϵ_j = +1 ^m
U α_j U^ϵ_j+1α_j+1⋯
U^ϵ_nα_n
U^ϵ_1α_1 ⋯
U^ϵ_j-1α_j-1
-
j =1
ϵ_j = -1 ^m
α_j U^ϵ_j+1α_j+1⋯
U^ϵ_nα_n
U^ϵ_1α_1 ⋯
U^ϵ_j-1α_j-1 U^* .
The cyclic wandering of any fix holonomy, say α_1, in the rhs of
the main result is due to Voiculescu's cyclic derivation. We now compute the variation of the Wilson line β, whose
holonomy writes for the representation R' as
' β =∏_j=1^p (U')^σ_j' μ_j
=
∏_j=1^p (U')^σ_jμ_j.
To take the variation observe that 'β is not inside a trace.
For any matrices A,B∈ N, and a,b,c,d=1,…, N,
(∂_Y)_b,a
[ A exp( Y ) B ]_c,d|_Y=0 = ∑_k=0^∞^k/k!∑_l=0^k A_c,r [ Y^l ⊗ Y^k-1-l] _b,s | a,r |_Y=0 B_s,d
= A_c,rδ_b,sδ_a,r
B_s,d.
Using this rule for the previous expression of ' β, one obtains each summand for each occurrence of U^±1
and the result follows after equating the indices c=a, and b=d, which
is the initial situation in the initial identity (<ref>).
Suppose that the
plaquettes in the action S(D) = ∑_γ^ reducedg̃_γ [ γ + γ̅], intersect each either and exactly
once. Notice that this time we have rewritten it
as sum over pairs γ and γ̅ (which is always possible
since the paths are in Γ Q and the spectral action is real valued). Then
𝔼[
j =1
σ_j =+1 ^p 1/N
( U μ_1 ⋯ U^σ_j-1μ_j-1 ) 1/N (U^σ_j μ_j ⋯ U^σ_pμ_p )
-
j =1
σ_j =-1 ^p 1/N ( μ_1 ⋯ U^σ_j-1μ_j-1 ) 1/N (μ_j ⋯ U^σ_pμ_p ) ]
= γ∈ S(D)
γ=(U,α) reduced g̃_γ𝔼[
(
β· U α )
-
(
βα̅· U^* ) ] .
§.§ Graphical representation of the Makeenko-Migdal equations
We illustrate graphically the meaning of the Makeenko-Migdal equations.
Let us place and the reversed edge along a fixed axis of the picture.
To represent a Wilson loop β or a reduced
generalised plaquette γ, we choose the following notation.
In order to avoid drawings with several intersections,
for each time that γ or β walks along either or ,
we jump to the next `plane' in anti-clockwise direction around the fixed axis.
Thus each of these planes
represents abstractly the subpath μ_j ⊂β
or α_j⊂γ according to the decompositions
(<ref>)
and (<ref>).
We kept a rectangular appearance for sake of visual simplicity,
but the subpaths μ_j and α_j are arbitrary (as far as
they have positive length).
In fact, the depicted situation is still to some extent oversimplified, since the theorem
describes the more general case that μ_j or α_j
might be loops themselves (as α_3 in Fig. <ref>),
but this would render the pictures unreadable. The
representation of the Makeenko-Migdal equations reads then as follows:
1/N^2 × [diagram] = ∑_γ g_γ [diagram]
In the rhs, the very similar terms need a word of notation.
The blue arrow is executed right after the green part of the path,
while the red arrow follows only after the purple ones.
§ APPLICATIONS AND OUTLOOK
This last section has as aim to illustrate the power
of the equations derived here when combined with the positivity conditions.
This combination, sometimes known as `bootstrap',
appeared in <cit.> for
lattice gauge theory
and <cit.> in a string context (for hermitian multimatrix models).
§.§ Positivity constraints
Let v∈ Q_0 be fixed for this subsection and fix a representation R of Q
of dimension N.
Consider a complex variable z_β for each
loop β based at a fixed vertex v ∈ Q_0, z={ z_β : β∈Ω_v( Q) },
as well as the matrix
P( z ) := β∈Ω_v(Q) z_ββ, P( z ) ∈ N.
It follows that [ P(z) P(z)^* ] =
∑_β, α z_β z^*_αβ· (α)^*
=
∑_β, α∈Ω_v(Q)
z_β z^*_α( βα̅) ≥ 0
independently of the z-tuple; this is preserved by
expectation values, i.e.
∑_β, α∈Ω_v(Q)
z_β z^*_α𝔼[ ( βα̅) ] ≥ 0, for all z ∈^Ω_v(Q),
which is an equivalent way to state the positivity ℳ≽ 0 of the matrix
ℳ∈ [[N,f_0,f_1,…,f_d ]]
whose entries are given by
(ℳ )_i,j := 𝔼[ ( β_i β̅_j ) ]
for any ordering of the loops {β_1,β_2,…} at the fixed vertex v.
The positivity of ℳ is clearly independent of the way we order these loops,
as a conjugation by a permutation matrix (which is a unitary transformation) will not change the eigenvalues of ℳ.
The choice for the matrix (<ref>) is originally from <cit.>,
who pushed forward the bootstrap for lattice Yang-Mills theory.
The techniques of <cit.> were implemented for fuzzy spectral triples
for an interesting kind of hermitian matrix <cit.>
and a hermitian 2-matrix model <cit.>.
The loop equations of <cit.> have been extended here to include
arbitrary plaquettes that go along the rooted edge more than once, and Wilson loops
that are allowed to do the same.
Consider the triangle quiver Q, with vertices v_1, v_2, v_3 and directed edges v_1 → v_2, v_2 → v_3, v_3 → v_1,
with a rooted edge ,
and let ζ = μ be the only loop of length 3 starting
with (μ is the path v_2→ v_3→ v_1, of course). Consider the action S(D) = f(D) for
f(t)=f_0 + f _1 t+f_2 t^2+ f_3 t^3. This implies
S(D) =(f_0+2 f_2)N +
x [
ζ + ζ ]
where we set x =3 f_3. Since the terms in the even coefficients are just constants
that when evaluating the Wilson loops expectation values, they
disappear when dividing by the partition function
and play no role in integration we set f_0=f_2=0. Now pick a loop β= ζ^n for positive
n∈ℕ. According to the loop equations (<ref>), one has
𝔼[ ∑_k=0^n-1 (1/N⊗1/N) (ζ^k ⊗ζ^n-k)]
= x/N (𝔼ζ^n+1 -
𝔼ζ^n-1 ) .
Defining the large-N moments by m_j:= lim_N→∞𝔼 [ 1/N ζ^j] for each j∈ℤ,
this means
∑_l=0^n-1
m_l · m_n-l
=
x (m_n+1
-m_n-1 ) , (N→∞)
since large-N factorisation holds, N^-2𝔼[ ζ^i ζ^j] →
m_i · m_j, as N→∞.
For the loop β= ζ^-n
with n∈, one has
-∑_j=0^n-1
m_-(n-j)· m_-j
=x ( m_-(n-1) - m_-(n+1)) ,
Finally, going through the derivation of the MM
equations for the constant Wilson loop, one obtains the vanishing of the lhs, so 0= x (m_1 - m_-1),
hence m̅_1 = 𝔼 [ ζ ] =
𝔼 [ ζ̅]
= 𝔼 [ ζ ] =m_-1 = m_1, so m_1 is real (this can be derived
by other means, but the loop equations yield this explicitly). Together
with the last equation this implies m_-j = m_j for all j=1,2,… and the
moments can be arranged in the following (Toeplitz) matrix.
ℳ =
[ 1 m_1 m_2 m_3 … ; m_-1 1 m_1 m_2 … ; m_-2 m_-1 1 m_1 … ; m_-3 m_-2 m_-1 1 … ; ⋮ ⋮ ⋮ ⋮ ⋱ ]=
[ 1 m_1 m_2 m_3 … ; m_1 1 m_1 m_2 … ; m_2 m_ 1 1 m_1 … ; m_3 m_2 m_ 1 1 … ; ⋮ ⋮ ⋮ ⋮ ⋱ ]
Thanks to Theorem <ref>, ℳ can be computed recursively in terms of y:=m_1 and
the coupling x,
m_1 = y m_4 = 4 y/x + 3 y^2/x^2 + 1/x^2 + y/x^3 + 1
m_2 = y/x + 1
m_5 = y + 3 y^2/x + 2 y^3/x^2 + 3/x + 9 y/x^2 + 6 y^2/x^3 + 1/x^3 + y/x^4
m_3 = y + y^2/x + 1/x + y/x^2 m_6 =9 y/x + 18 y^2/x^2 + 10 y^3/x^3 + 6/x^2 + 16 y/x^3 + 10 y^2/x^4 + 1/x^4 + y/x^5 + 1 .
The positivity condition
ℳ(x,y) ≽ 0
can be plotted on the first-moment–coupling plane
in terms of the simultaneous positivity of the minors.
The situation observed for a large class of hermitian matrix integrals
is a set of tight constraints on the first moment
in terms of the coupling, which, by increasing
the size of the minors, typically
determine a curve y=y(x)—and by the respective loop equations, all
the moments and thus the solution of the model.
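A small numerical sketch of this bootstrap step (illustrative only; the truncation size, the grid, and the tolerance are arbitrary choices) generates the moments from the recursion above, assembles the truncated Toeplitz matrix, and tests its positive semidefiniteness:

# Sketch of the bootstrap step described above: generate m_1, m_2, ... from the
# large-N recursion  x * (m_{n+1} - m_{n-1}) = sum_{l=0}^{n-1} m_l * m_{n-l}
# (with m_0 = 1, m_1 = y, m_{-j} = m_j), build the truncated Toeplitz matrix and
# test whether it is positive semidefinite for a given coupling x and trial moment y.
import numpy as np

def moments(x, y, K):
    m = [1.0, y]                                         # m_0, m_1
    for n in range(1, K):
        conv = sum(m[l] * m[n - l] for l in range(n))    # sum_{l=0}^{n-1} m_l m_{n-l}
        m.append(m[n - 1] + conv / x)                    # m_{n+1} = m_{n-1} + conv / x
    return m

def toeplitz_psd(x, y, K=6, tol=1e-9):
    m = moments(x, y, K)
    M = np.array([[m[abs(i - j)] for j in range(K)] for i in range(K)])
    return np.linalg.eigvalsh(M).min() >= -tol           # admissible (x, y) if M is PSD

# scan a grid of trial values of the first moment for a fixed coupling
admissible = [y for y in np.linspace(-2, 2, 401) if toeplitz_psd(x=2.0, y=y)]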
In this letter we do not claim convergence, but
the results of this academic example encourage us to
explore this combination in future works, including also a hermitian
scalar (`Higgs') field
that self-loops yield.
§ ACKNOWLEDGEMENTS
This work was mainly
supported by the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation program (grant agreement
No818066) and also by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) under Germany’s Excellence Strategy
EXC-2181/1-390900948 (the Heidelberg Structures Cluster of
Excellence). I thank the
Erwin Schrödinger International Institute for Mathematics and Physics
(ESI) Vienna, where this article was finished, for optimal working conditions
and hospitality.
I acknowledge the kind answers
by the group of Masoud Khalkhali at Western U., specially by Nathan Pagliaroli,
on a question about bootstrapping.
[AK17]Kruczenski
Peter D. Anderson and Martin Kruczenski.
Loop Equations and bootstrap methods in the lattice.
Nucl. Phys. B, 921:702–726, 2017.
[AK24]TRNCG
Shahab Azarfar and Masoud Khalkhali.
Random finite noncommutative geometries and topological recursion.
Ann. Inst. Henri Poincaré D, Comb. Phys. Interact.,
11(3):409–451, 2024.
[Bar15]Barrett:2015naa
John W. Barrett.
Matrix geometries and fuzzy spaces as finite spectral triples.
J. Math. Phys., 56(8):082301, 2015.
[BG16]Barrett:2015foa
John W. Barrett and Lisa Glaser.
Monte Carlo simulations of random non-commutative geometries.
J. Phys. A, 49(24):245001, 2016.
[BHGW22]zbMATH07650768
Johannes Branahl, Alexander Hock, Harald Grosse, and Raimar Wulkenhaar.
From scalar fields on quantum spaces to blobbed topological
recursion.
J. Phys. A, Math. Theor., 55(42):30, 2022.
Id/No 423001.
[CC97]Chamseddine:1996zu
Ali H. Chamseddine and Alain Connes.
The Spectral action principle.
Commun. Math. Phys., 186:731–750, 1997.
[CC06]Connes:2006qj
Alain Connes and Ali H. Chamseddine.
Inner fluctuations of the spectral action.
J. Geom. Phys., 57:1–21, 2006.
[CGL24]Collins:2020iri
Benoît Collins, Razvan Gurau, and Luca Lionni.
The tensor Harish-Chandra–Itzykson–Zuber
integral I: Weingarten calculus and a generalization of monotone Hurwitz
numbers.
J. Eur. Math. Soc., 26(5):1851–1897, 2024.
[CM08]CMbook
Alain Connes and Matilde Marcolli.
Noncommutative geometry, quantum fields and motives, volume 47
of Texts Read. Math.
New Delhi: Hindustan Book Agency, 2008.
[Col03]Collins:2003ncs
Benoît Collins.
Moments and cumulants of polynomial random variables on
unitary groups, the Itzykson-Zuber integral, and free probability.
Int. Math. Res. Not., 2003(17):953–982, 2003.
[Conn94]ConnesNCGbook
Alain Connes.
Noncommutative geometry.
1994.
[Conn13]Connes:2008vs
Alain Connes.
On the spectral characterization of manifolds.
J. Noncommut. Geom., 7:1–82, 2013.
[CPS23]Cao:2023uqm
Sky Cao, Minjae Park, and Scott Sheffield.
Random surfaces and lattice Yang-Mills.
7 2023.
arXiv:2307.06790.
[CS06]Collins:2006jgn
Benoît Collins and Piotr Śniady.
Integration with Respect to the Haar Measure on Unitary, Orthogonal
and Symplectic Group.
Commun. Math. Phys., 264(3):773–795, 2006.
[CvS21]Connes:2020ifm
Alain Connes and Walter D. van Suijlekom.
Spectral truncations in noncommutative geometry and operator
systems.
Commun. Math. Phys., 383(3):2021–2067, 2021.
[CvS22]Tolerance
Alain Connes and Walter D. van Suijlekom.
Tolerance relations and operator systems.
Acta Sci. Math., 88(1-2):101–129, 2022.
[DGHK17]zbMATH06721414
Bruce K. Driver, Franck Gabriel, Brian C. Hall, and Todd Kemp.
The Makeenko-Migdal equation for Yang-Mills theory on compact
surfaces.
Commun. Math. Phys., 352(3):967–978, 2017.
[DLL22]Tolerance_Lizzi
Francesco D'Andrea, Giovanni Landi, and Fedele Lizzi.
Tolerance relations and quantization.
Lett. Math. Phys., 112(4):28, 2022.
Id/No 65.
[DW17]bookQuivRep
Harm Derksen and Jerzy Weyman.
An introduction to quiver representations, volume 184 of Grad. Stud. Math.
Providence, RI: American Mathematical Society (AMS), 2017.
[GHW20]zbMATH07208520
Harald Grosse, Alexander Hock, and Raimar Wulkenhaar.
Solution of the self-dual Φ^4 QFT-model on
four-dimensional Moyal space.
J. High Energy Phys., 2020(1):17, 2020.
Id/No 81.
[GNS22]Gaunt:2022elo
James Gaunt, Hans Nguyen, and Alexander Schenkel.
BV quantization of dynamical fuzzy spectral triples.
J. Phys. A, 55(47):474004, 2022.
[GW14]GW12
Harald Grosse and Raimar Wulkenhaar.
Self-dual noncommutative ϕ^4-theory in four dimensions is
a non-perturbatively solvable and non-trivial quantum field theory.
Commun. Math. Phys., 329(3):1069–1130, 2014.
[HKP22]Hessam:2021byc
Hamed Hessam, Masoud Khalkhali, and Nathan Pagliaroli.
Bootstrapping Dirac ensembles.
J. Phys. A, 55(33):335204, 2022.
[HKPV22]Hessam:2022gaw
Hamed Hessam, Masoud Khalkhali, Nathan Pagliaroli, and Luuk S. Verhoeven.
From noncommutative geometry to random matrix theory.
J. Phys. A, 55(41):413002, 2022.
[IvS17]Iseppi:2016olv
Roberta A. Iseppi and Walter D. van Suijlekom.
Noncommutative geometry and the BV formalism: application to a
matrix model.
J. Geom. Phys., 120:129–141, 2017.
[Konts92]KontsevichModel
Maxim Kontsevich.
Intersection theory on the moduli space of curves and the matrix
Airy function.
Commun. Math. Phys., 147(1):1–23, 1992.
[KP21]Khalkhali:2020djp
Masoud Khalkhali and Nathan Pagliaroli.
Phase Transition in Random Noncommutative Geometries.
J. Phys. A, 54(3):035202, 2021.
[KP24]Khalkhali:2023onm
Masoud Khalkhali and Nathan Pagliaroli.
Coloured combinatorial maps and quartic bi-tracial 2-matrix
ensembles from noncommutative geometry.
JHEP, 05:186, 2024.
[KPV24]Khalkhali:2024tyl
Masoud Khalkhali, Nathan Pagliaroli, and Luuk S. Verhoeven.
Large N limit of fuzzy geometries coupled to fermions.
5 2024.
arXiv:2405.05056.
[KZ24]Kazakov:2024ool
Vladimir Kazakov and Zechuan Zheng.
Bootstrap for Finite N Lattice Yang-Mills Theory.
4 2024.
arXiv:2404.16925.
[Lév17]zbMATH06731252
Thierry Lévy.
The master field on the plane, volume 388 of Astérisque.
Paris: Société Mathématique de France (SMF), 2017.
[Lin20]LinBootstrap
Henry W. Lin.
Bootstraps to strings: solving random matrix models with
positivity.
JHEP, 06:090, 2020.
[MM79]MakeenkoMigdal
Yu. M. Makeenko and Alexander A. Migdal.
Exact Equation for the Loop Average in Multicolor QCD.
Phys. Lett. B, 88:135, 1979.
[Erratum: Phys.Lett.B 89, 437 (1980)].
[MvS14]MvS
Matilde Marcolli and Walter D. van Suijlekom.
Gauge networks in noncommutative geometry.
J. Geom. Phys., 75:71–91, 2014.
[NSS21]Nguyen:2021rsa
Hans Nguyen, Alexander Schenkel, and Richard J. Szabo.
Batalin-Vilkovisky quantization of fuzzy field theories.
Lett. Math. Phys., 111:149, 2021.
[Pér21]Perez-Sanchez:2020kgq
Carlos I. Pérez-Sánchez.
On Multimatrix Models Motivated by Random Noncommutative Geometry I:
The Functional Renormalization Group as a Flow in the Free Algebra.
Annales Henri Poincare, 22(9):3095–3148, 2021.
[Pér22a]SAfuzzy
Carlos I. Pérez-Sánchez.
Computing the spectral action for fuzzy geometries: from random
noncommutative geometry to bi-tracial multimatrix models.
J. Noncommut. Geom., 16(4):1137–1178, 2022.
[Pér22b]Perez-Sanchez:2021vpf
Carlos I. Pérez-Sánchez.
On Multimatrix Models Motivated by Random Noncommutative Geometry
II: A Yang-Mills-Higgs Matrix Model.
Annales Henri Poincare, 23(6):1979–2023, 2022.
[Pér24]NCGquivers
Carlos I. Pérez-Sánchez.
The Spectral Action on quivers.
2024.
arXiv:2401.03705.
[Rie10]Rieffel:2007hv
Marc A. Rieffel.
Leibniz seminorms for ”Matrix algebras converge to the sphere”.
Clay Math. Proc., 11:543–578, 2010.
[Rie23]Rieffel:2021ykh
Marc A. Rieffel.
Dirac Operators for Matrix Algebras Converging to Coadjoint Orbits.
Commun. Math. Phys., 401(2):1951–2009, 2023.
[StSz08]Steinacker:2007iq
Harold Steinacker and Richard J. Szabo.
Localization for Yang-Mills theory on the fuzzy sphere.
Commun. Math. Phys., 278:193–252, 2008.
[vNvS21]vanNuland:2021otn
Teun D. H. van Nuland and Walter D. van Suijlekom.
Cyclic cocycles in the spectral action.
J. Noncommut. Geom., 16(3):1103–1135, 2021.
[vS11]vanSuijlekom:PertOpTrace
Walter D. van Suijlekom.
Perturbations and operator trace functions.
J. Funct. Anal., 260(8):2483–2496, 2011.
[vS15]WvSbook
Walter D. van Suijlekom.
Noncommutative geometry and particle physics.
Mathematical Physics Studies. Springer, Dordrecht, 2015.
|
http://arxiv.org/abs/2409.03071v1 | 20240904204726 | Minimizing Cost Rather Than Maximizing Reward in Restless Multi-Armed Bandits | [
"R. Teal Witter",
"Lisa Hellerstein"
] | cs.DS | [
"cs.DS"
] |
Minimizing Cost Rather Than Maximizing Reward in Restless Multi-Armed Bandits
R. Teal Witter, Lisa Hellerstein
September 9, 2024
==========================================================================================================================================================================================================================================================
§ ABSTRACT
Restless Multi-Armed Bandits (RMABs) offer a powerful framework for solving resource constrained maximization problems. However, the formulation can be inappropriate for settings where the limiting constraint is a reward threshold rather than a budget. We introduce a constrained minimization problem for RMABs that balances the goal of achieving a reward threshold while minimizing total cost. We show that even a bi-criteria approximate version of the problem is PSPACE-hard. Motivated by the hardness result, we define a decoupled problem, indexability and a Whittle index for the minimization problem, mirroring the corresponding concepts for the maximization problem. Further, we show that the Whittle index for the minimization problem can easily be computed from the Whittle index for the maximization problem. Consequently, Whittle index results on RMAB instances for the maximization problem give Whittle index results for the minimization problem. Despite the similarities between the minimization and maximization problems, solving the minimization problem is not as simple as taking direct analogs of the heuristics for the maximization problem. We give an example of an RMAB for which the greedy Whittle index heuristic achieves the optimal solution for the maximization problem, while the analogous heuristic yields the worst possible solution for the minimization problem. In light of this, we present and compare several heuristics for solving the minimization problem on real and synthetic data. Our work suggests the importance of continued investigation into the minimization problem.
§ INTRODUCTION
Restless Multi-Armed Bandits (RMABs) are a powerful
tool for modeling sequential decision-making problems
under resource constraints.
RMABs model a setting where an agent must choose some number of actions at each time step.
Each action incurs a cost and yields a stochastic reward depending on the state of the environment.
Traditionally, the agent's goal is to maximize the reward subject to a budget constraint.
Since exactly solving RMABs is computationally hard <cit.>, a common heuristic is to assign a value—called the Whittle index—for each possible action.
Then the heuristic greedily selects actions with the largest Whittle indices.
This Whittle index heuristic has been successfully applied in a variety of domains including healthcare engagement <cit.>, anti-poaching <cit.>, and sustainable energy <cit.>.
A limitation of the maximization formulation of RMABs is that it may not be appropriate for settings with a variable budget where the primary goal is to achieve a certain amount of reward.
We give several examples of such settings:
Wildlife Conservation
A nonprofit organization seeks to rescue an endangered species by re-introducing captive-bred individuals into wild areas.
Each area has a different set of environmental conditions such as food availability, predator abundance, human proximity, and habitat quality.
There is a cost associated with re-introducing the species into each area and a stochastic reward associated with the number of subsequent offspring born from the re-introduced individuals.
The goal is to re-introduce the species into several areas at minimum cost so that a certain number of individuals are born in the wild.
A solution to the maximization problem may require abandoning the project once the budget is exceeded even if the species is still critically endangered.
Energy Management
A company seeks to reduce their carbon footprint by cutting their energy use.
There are several energy-saving measures they can take such as installing solar panels, upgrading to smart appliances, using motion-sensing lights, and moderating temperature settings.
Each measure has a different cost and a stochastic reward depending on the reliability of the measure and user behavior.
The goal is to reduce energy consumption by a certain amount at minimum cost.
A solution to the maximization problem may require abandoning energy-saving measures once the budget is exceeded even if energy use remains high.
Healthcare
A medical team seeks to provide care to a sick patient.
There are different treatments they can provide such as surgery, medication, physical therapy, and counseling.
Each treatment has a different cost and a stochastic reward depending on the patient's condition and the treatment's effectiveness and reliability.
The goal is to provide care at minimum cost so that the patient stabilizes to a healthy state.
A solution to the maximization problem may require abandoning treatment once the budget is exceeded even if the patient is still sick.
In <cit.>, the authors consider a more flexible version of the maximization problem where the budget is aggregated over multiple time steps.
While their work is a step in the right direction, it still necessitates a hard budget constraint.
In fact, if their aggregated budget is exceeded early, the results may be even worse since they are unable to take actions for the rest of the aggregated period.
The solution we propose is the minimization problem:
The goal of the minimization problem is to achieve a certain amount of reward while minimizing total cost.
In the wildlife conservation example, the minimization problem requires re-introducing individuals at minimum cost until a certain number of offspring are born in the wild.
In the energy management example, the minimization problem requires becoming more energy efficient at minimum cost until energy usage is reduced by a certain amount.
In the healthcare example, the minimization problem requires providing care at minimum cost until the patient recovers to their previous baseline health.
While not applicable to all settings, the minimization problem is more appropriate than the maximization problem when the primary goal is to achieve a certain amount of reward.
§.§ Our Contributions
Our first contribution is the formulation of the
minimization problem for RMABs.
Since solving the minimization problem exactly is computationally intractable, we introduce a bi-criteria approximation problem.
We then show that even finding a bi-criteria approximation within any approximation factor is PSPACE-hard.
As a result, if PSPACE ≠ P, there are no polynomial-time algorithms which can provably solve the bi-criteria approximation problem within any approximation factor.
Given the computational hardness result, our second contribution is the decoupling of the minimization problem.
Analogous to the maximization problem, we introduce a decoupled problem, a notion of indexability, and a Whittle index for the minimization problem.
Our third contribution is a comparison between the minimization and maximization problems.
We show that the indexability of the maximization problem implies the indexability of the minimization problem, and vice versa.
Further, we show a simple relationship between the Whittle index of the maximization problem and the Whittle index of the minimization problem.
It then follows that existing results on the Whittle index for the maximization problem give the Whittle index for the minimization problem.
While the minimization and maximization problems are similar in many ways, algorithms designed for the maximization problem do not necessarily perform well on the minimization problem:
We present an RMAB instance where the standard heuristic for the maximization problem gives the optimal strategy but the analogous heuristic for the minimization problem gives the worst possible strategy.
Inspired by the need for new heuristics, our fourth contribution is the development of two heuristics for the minimization problem, inspired by prior work: an increasing budget heuristic and a truncated reward heuristic.
We compare the heuristics on anonymized patient data from the National Inpatient Sample and synthetic data generated from a well-studied hidden two-state RMAB for which the Whittle index is known exactly in closed form <cit.>.
§.§ Related Work
Restless Multi-Armed Bandits
A line of recent work has studied RMABs
for the problem of promoting patient engagement.
In <cit.>, the authors present the results of
a field study on the impact of an RMAB solution to
maternal and child health.
Several papers have also studied the problem of approximating
the Whittle index for RMABs when the transition function is
not known in advance <cit.>.
In <cit.>, they extend the RMAB instance
to the case where there are more than two possible actions.
In <cit.>, they study the maximization problem within the more general setting of non-negative costs.
We similarly consider the setting with non-negative costs but in the context of the minimization problem we introduce.
RMABs have also been studied in the context of
anti-poaching <cit.>
and sustainable energy <cit.>.
Restless Bandits and Exact Whittle Index
There is a large body of work on the Whittle index for
various RMAB instances.
For (special cases of) the following RMAB instances,
the Whittle index is known in closed form:
a hidden two-state Markov chain where the state
is only learned if the chain is activated
<cit.>,
a two-state Markov chain where the state is always unknown
<cit.>,
several variants of the age of information problems
<cit.>,
collapsing bandits <cit.>,
and crawling websites for ephemeral content
<cit.>.
Q-Learning Whittle Index
Often, the transition function is not known in advance
or is too complex to compute exactly.
In this case,
establishing indexability and deriving closed form equations
for the Whittle index can be challenging.
A line of recent work has
used Q-learning to approximate Whittle indices
<cit.>.
Q-learning is a model-free reinforcement learning algorithm that learns the value of taking an action in a particular state.
We use Q-learning in our real data experiments to approximate the Whittle index.
General Guarantees of the Whittle Index
While there are no general guarantees for the optimality
of the Whittle index strategy,
there is a line of work showing that the strategy is optimal
in a limited asymptotic sense <cit.>
and in special cases <cit.>.
In <cit.>, the authors study general conditions
under which a problem is indexable.
§ RMAB DEFINITION AND NOTATION
The Restless Multi-Armed Bandit (RMAB) problem
is built on top of n independent
Markov Decision Processes (MDPs).
Consider a particular MDP i ∈ [n] with states
𝒮_i and actions 𝒜_i.
Let τ_i be the transition function that stochastically
maps each state-action pair to a state.
Let r_i be a reward function that stochastically
maps each state-action pair to a real-valued reward.
Finally, let c_i be a deterministic cost function that maps actions to non-negative real costs.
At time step t, the agent observes the state s_i^(t)∈𝒮_i, selects an action a_i^(t)∈𝒜_i, incurs a cost c_i(a_i^(t)), and receives a reward r_i(s_i^(t), a_i^(t)).
The MDP then transitions to a new state s_i^(t+1) according to the transition function τ_i(s_i^(t), a_i^(t)).
Let 𝒮^n = 𝒮_1 ×…×𝒮_n be the combined state space and 𝒜^n = 𝒜_1 ×…×𝒜_n be the combined action space.
We use τ, r, and c to denote the transition, reward, and cost functions on the combined state and action spaces.
The problem can be formulated for a general set
of actions but, in order to define the Whittle index, we assume each action space 𝒜_i is binary.
In this case, action 1 corresponds to choosing MDP i and action 0 (also called the passive action) corresponds to not choosing MDP i.
The cost function is generally restricted so that c(1) = 1 and c(0)=0.
However, we consider a more general setting where c(1) could be any non-negative number.
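As a concrete illustration of the model just described, the following minimal sketch simulates one time step of an RMAB with binary actions and non-negative costs; the transition matrices, rewards, and costs are illustrative placeholders of our own, and the reward is simplified to be zero under the passive action.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                                  # number of MDPs (arms)
P = rng.dirichlet(np.ones(2), size=(n, 2, 2))          # P[i, a, s] = next-state distribution (placeholder)
r = np.array([[0.0, 0.0], [1.0, 2.0], [0.5, 1.5]])     # r[i, s]: reward of playing arm i in state s (placeholder)
c = np.array([1.0, 2.0, 0.5])                          # c[i]: cost of the active action; the passive action is free

def rmab_step(states, actions):
    """One RMAB time step: incur costs, collect rewards, transition every MDP."""
    cost = float(np.sum(c * actions))
    reward = float(np.sum(r[np.arange(n), states] * actions))
    next_states = np.array([rng.choice(2, p=P[i, actions[i], states[i]]) for i in range(n)])
    return cost, reward, next_states

states = np.zeros(n, dtype=int)
cost, reward, states = rmab_step(states, actions=np.array([1, 0, 1]))
```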
There are several ways to define the objective function of the RMAB problem.
In this work, we consider the discounted expected reward with discount factor β∈ (0,1), over an infinite horizon.
The RMAB maximization problem, generalized to non-negative costs, is as follows:
[Exact Maximization]
Consider a budget C.
The optimal solution to the maximization problem is a policy
π^+ : 𝒮^n →𝒜^n
with maximum expected discounted reward
𝔼_{𝐚^(t), 𝐬^(t)}_t=1^∞∼π, τ[ ∑_t=1^∞β^t-1∑_i=1^n r_i(s_i^(t), a_i^(t)) ]
subject to the constraint that
∑_i=1^n c_i(a_i^(t)) ≤ C
for all t.
Since evaluating the optimal strategy for the maximization problem
is PSPACE-hard <cit.>,
the classical approach is to make a series of relaxations
to get a decoupled problem for each MDP.
When we consider a single MDP i, we typically drop the subscript for notational brevity.
The decoupled problem considers each MDP in isolation and assigns a value to each action.
Let
V_max(s,a,λ) = 𝔼[ r(s, a) - λ c(a) ] + 𝔼_s' ∼τ(s,a)[βmax_a' V_max(s', a', λ) ].
The Whittle index gives a way to compare the value of taking the active action and the passive action for each MDP in each state.
(To define the Whittle index, we need a technical condition called indexability, the details of which appear in the technical appendix due to space constraints.)
The Maximization Whittle Index for an MDP in state s
is the smallest value for which it is optimal to take
the passive action.
Formally, the Whittle Index is given by
λ^+ = inf{λ: V_max(s,0,λ) > V_max(s,1,λ)}.
Because a larger value indicates the active action is more valuable than the passive action, the Whittle index suggests a natural measure that we can use to compare the value of taking an action in different MDPs.
In the classical RMAB maximization problem, the constraint is to choose a fixed number m of actions in every time step to be active.
A standard heuristic is to choose the m bandits with the highest Whittle indices.
A simple generalization of this heuristic to arbitrary costs is given in Algorithm <ref>.
In each step, it chooses the MDP with highest Whittle index, from all MDPs that fit the remaining budget.
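Algorithm <ref> itself is not reproduced in this text, so the following is only a rough sketch of the rule as described above, with the Whittle indices and costs passed in as plain arrays; the actual algorithm may differ in bookkeeping details.

```python
import numpy as np

def greedy_max(whittle, costs, budget):
    """Greedy maximization heuristic (sketch): repeatedly pick the MDP with the
    largest Whittle index among those whose cost still fits the remaining budget."""
    chosen, remaining = [], budget
    for i in np.argsort(-np.asarray(whittle)):
        if costs[i] <= remaining:
            chosen.append(int(i))
            remaining -= costs[i]
    return chosen

# Toy call: indices (0.9, 0.5, 0.8), costs (2, 1, 2), budget 3 -> arms 0 and 1 are played.
print(greedy_max([0.9, 0.5, 0.8], np.array([2.0, 1.0, 2.0]), budget=3.0))
```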
§ THE MINIMIZATION PROBLEM
We introduce the RMAB minimization problem for settings where a fixed amount of reward must be met.
[Exact Minimization]
Consider a reward threshold R.
The solution to the minimization problem is a policy
π^- : 𝒮^n →𝒜^n
with minimum expected discounted cost
𝔼_{𝐚^(t), 𝐬^(t)}_t=1^∞∼π, τ[ ∑_t=1^∞β^t-1∑_i=1^n c_i(a_i^(t)) ]
subject to the constraint that
∑_i=1^n r_i(s_i^(t), a_i^(t)) ≥ R
for all t.
It is easy to satisfy the budget constraint of the maximization problem by simply limiting the number of actions selected.
In contrast, it is difficult to satisfy the reward constraint of the minimization problem because the rewards from actions are stochastic.
As a result, the challenge of the minimization problem stems
from both minimizing the objective and satisfying the constraint.
We therefore consider a bi-criteria approximation problem.
[Approximate Minimization]
Consider an approximation factor α≥ 1
and a success probability ρ∈ (0,1].
A policy π is an (α, ρ)-approximation
to the minimization problem if its expected
discounted cost (Equation <ref>)
is within a factor of α of the optimal policy π^-
and
ℙ(∑_i=1^n r_i(s_i^(t), a_i^(t)) ≥ R ) ≥ρ
for all t.
Note that the approximate minimization problem is allowed
to violate the reward constraint with probability 1-ρ
but only with respect to the randomness of the reward function.
Our main theoretical result is that even the approximate minimization problem is computationally hard.
Theorem.
Fix α≥ 1 and ρ > 0.
Finding an (α,ρ)-approximate strategy
for the minimization problem is PSPACE-hard even when costs are binary.
Our reduction is from the generic problem of determining whether a polynomial-space Turing machine halts.
The reduction constructs an RMAB minimization instance which has an MDP corresponding to each cell of the Turing machine tape.
A policy for the RMAB instance can either simulate the Turing machine, or not.
If it simulates the Turing machine, it incurs cost at least 2α^2 if the Turing machine halts and no cost otherwise.
If it does not simulate the Turing machine, it always incurs cost α.
Therefore making the optimal choice requires determining whether the Turing machine halts and making the wrong choice gives a worse than α approximation (to satisfy the reward constraint with any non-zero probability since the reward function is deterministic).
Because of its length, we delay the full proof of the theorem to the technical appendix.
In the proof, we show that an algorithm which (even approximately) solves the minimization problem can be used to determine whether a polynomial-space Turing machine halts.
Because determining whether a polynomial-space Turing machine halts is PSPACE-complete by definition, it immediately follows that the approximate minimization problem is PSPACE-hard.
§ DECOUPLING THE MINIMIZATION PROBLEM
Given the hardness of the approximate minimization problem, there are no efficient algorithms with provable guarantees unless P = PSPACE.
Instead, we turn to heuristics.
Following the approach of <cit.>, we decouple the minimization problem to consider each MDP in isolation.
In particular, we relax the constraint in the minimization problem and then apply Lagrange multipliers.
The resulting decoupled problem for a particular MDP in the minimization problem appears below.
For notational brevity, we drop the subscript i.
[Decoupled Minimization]
Consider a particular MDP in the RMAB instance.
Fix λ≥ 0.
The decoupled minimization problem for that MDP is to find the policy
max_π : 𝒮→𝒜 𝔼[ ∑_t=1^∞β^t-1 (λ· r(s^(t), a^(t)) - c(a^(t))) ].
We now show how to convert the exact minimization problem
into the decoupled minimization problem.
The process is analogous to the maximization case in the literature <cit.>.
The idea is to turn the constrained optimization problem
into an unconstrained optimization problem.
We will accomplish this by applying Lagrange multipliers.
However, the first step is to relax the constraint so that
the objective and constraint are similar.
In particular, we will relax the constraint
in the exact minimization problem to hold on average:
(1-β) 𝔼[ ∑_t=1^∞β^t-1∑_i=1^n r_i(s_i^(t), a_i^(t)) ] ≥ R.
The multiplicative normalization factor 1-β
is chosen so that if the strict constraint is satisfied
for all t,
then the relaxed constraint is also satisfied.
We apply Lagrange multipliers to the constrained problem under the relaxed constraint and reach the Lagrangian function given by:
𝔼[ ∑_t=1^∞β^t-1∑_i=1^n ( c_i(a_i^(t)) - λ· r_i(s_i^(t), a_i^(t)) ) ] + λ R/(1-β)
where λ≥ 0 is a Lagrange multiplier.
The next step is to decouple the unconstrained problem.
We can interchange the summations since n is finite.
Then we consider the problem for a fixed λ≥ 0.
The resulting problem is
min_π 𝔼[ ∑_i=1^n ∑_t=1^∞β^t-1 (c_i(a_i^(t)) - λ· r_i(s_i^(t), a_i^(t))) ] + λ R/(1-β).
The problem is now decoupled since the policy for each
MDP is optimized in isolation.
With the observations that the final term is constant
for fixed λ and that minimization is equivalent
to maximization after a sign flip,
the decoupled minimization problem follows.
The difficulty of the RMAB problem lies in the complicated interactions between MDPs.
By considering each MDP separately, the decoupled problem lets us characterize the `value' of selecting a particular MDP.
Then the RMAB problem can be solved by a heuristic for choosing MDPs that relies on their value.
Analogous to the maximization problem, we introduce the Whittle index for the minimization problem to characterize the value of choosing each MDP.
We will first define the value function for the minimization problem.
V_min(s,a,λ) = 𝔼[ λ· r(s, a) - c(a) ] + 𝔼_s' ∼τ(s,a)[βmax_a' V_min(s', a', λ) ].
For the Whittle index to be defined, the MDP needs to meet a technical condition known as indexability.
An MDP is indexable if for all states s ∈𝒮
and real numbers λ' ≤λ,
V_min(s,0,λ) > V_min(s,1,λ) ⟹ V_min(s,0,λ') > V_min(s,1,λ').
In words, indexability is the following property: if it is optimal to take the passive action under
a subsidy λ in state s,
then it must also be optimal to take the passive
action under a smaller subsidy λ' in state s.
Then the Whittle index for the minimization problem is naturally the following:
The minimization Whittle Index for an MDP in state s
is the largest value for which it is optimal to take
the passive action.
Formally, the index is given by
λ^- = sup{λ: V_min(s,0,λ) > V_min(s,1,λ)}.
With the Whittle index in hand, we can develop a heuristic for the minimization problem.
The heuristic is analogous to the greedy maximization heuristic except that the stopping condition is different.
Instead of stopping when the budget is exceeded, the minimization heuristic stops when the reward constraint is probabilistically satisfied.
Algorithm <ref> presents this strategy in pseudocode.
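Since the pseudocode of Algorithm <ref> is not included here, the sketch below gives one reading of the stopping rule; `prob_satisfied` is a user-supplied routine that estimates the probability of meeting the reward threshold for a candidate set of arms (one possible implementation is sketched after the remark below).

```python
import numpy as np

def greedy_min(whittle_min, costs, prob_satisfied, rho):
    """Greedy minimization heuristic (sketch): add MDPs in increasing order of the
    minimization Whittle index (equivalently, decreasing maximization index) and
    stop once the reward constraint holds with probability at least rho."""
    chosen, total_cost = [], 0.0
    for i in np.argsort(np.asarray(whittle_min)):
        if prob_satisfied(chosen) >= rho:
            break
        chosen.append(int(i))
        total_cost += costs[i]
    return chosen, total_cost
```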
Remark: It is not obvious how to determine the probability of satisfying the constraint.
If the reward function is known in closed form,
then the probability can be computed exactly.
However, in many cases, the reward function is not known in closed form or computing the exact probability is computationally intensive (i.e., because there are many possibilities on the combined outcome space of the actions).
Another option is to use a concentration inequality
specialized to sums of random variables such as
Bernstein's or Hoeffding's inequalities <cit.>.
However, the concentration inequality may be quite loose depending on the RMAB reward function and so the heuristic could be overly conservative.
The option we recommend is to simulate a small number of realizations of the reward function.
This approach is computationally efficient and can be made arbitrarily accurate by increasing the number of simulations.
However, simulations may not be possible in all settings of the problem.
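As one concrete version of the simulation option, the sketch below estimates the probability that a candidate set of active arms meets the threshold R; `sample_reward(i, rng)` is a hypothetical routine drawing one realization of arm i's reward and is not part of the paper.

```python
import numpy as np

def estimate_prob(chosen, sample_reward, R, n_sims=1000, rng=None):
    """Monte Carlo estimate of P(sum of realized rewards of the chosen arms >= R)."""
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(n_sims):
        total = sum(sample_reward(i, rng) for i in chosen)
        hits += total >= R
    return hits / n_sims
```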
§ MAXIMIZATION VS MINIMIZATION PROBLEMS
In this section, we show the close connection between the Whittle indices for the maximization and minimization problems.
However, despite this connection, we also show that heuristics analogous to those used for the maximization problem can perform arbitrarily poorly for the minimization problem.
In general, indexability does not hold for all
RMAB instances <cit.>.
So the first step of using any Whittle index-based heuristic is establishing that indexability holds.
Unfortunately, it can be quite difficult for a particular RMAB instance.
Fortunately, we show that if indexability
holds for the maximization problem
then it also holds for the minimization problem.
Due to space constraints, the proofs of Corollaries <ref> and <ref> appear in the technical appendix.
Corollary.
The decoupled maximization problem
is indexable if and only if the
decoupled minimization problem is indexable.
A similar result holds for the Whittle index.
If the Whittle index is known for either the
maximization or minimization problem then it
is also known for the other problem.
Corollary.
Suppose the decoupled maximization
and minimization problems are indexable.
Let λ^+ be the Whittle index for
the decoupled maximization problem
and λ^- be the index for the decoupled
minimization problem.
If λ^+, λ^- > 0, then λ^+ = 1/λ^-.
Instead of deriving the Whittle index for the minimization problem and the maximization problem, Corollary <ref> tells us how to find the Whittle index for the minimization problem if the Whittle index for the maximization problem is already known.
The following are a selection of RMABS where the Whittle index is known in closed form for the maximization problem: age of information* <cit.>, partially hidden two-state <cit.>, completely hidden two-state <cit.>, collapsing bandits <cit.>, crawling content* <cit.>, and controlled resets <cit.>.
An asterisk indicates the problem is formulated to maximize the expected (rather than discounted) reward.
By Corollaries <ref> and <ref>, the minimization problem is indexable for each of these instances and the Whittle index for the minimization problem can be easily computed.
So far, it seems that the maximization
and minimization problems are morally the same.
However, we will show that algorithms adapted from the maximization problem can perform arbitrarily poorly for the minimization problem.
Let n be the number of MDPs and
ρ > 0 be a success probability.
There is a simple RMAB instance with unit costs where
Algorithm <ref> is optimal but Algorithm <ref> has expected cost n times the optimal strategy in order to satisfy the constraint with probability ρ.
Notice that the performance of Algorithm <ref> is the worst possible for the unit cost case:
Always choosing every MDP in each time step
will trivially give an n-approximation.
Consider the following RMAB instance.
For every MDP i ∈ [n], there is a single state s.
If the active action is selected, then
with probability 0 < p_i < 1, reward r_i > 0 is received.
The cost of selecting the active action from s is 1.
If the passive action is selected,
then no reward is received and no cost is incurred.
Observe that for MDP i, we have
V_max(s,1,λ) = p_i r_i - λ + βmax_a V_max(s,a,λ),
V_max(s,0,λ) = 0 + βmax_a V_max(s,a,λ).
The decoupled maximization problem is clearly indexable
and the Whittle index is λ_i^+ = p_i r_i.
Algorithm <ref> selects MDPs
with the largest Whittle indices first.
For the maximization problem, this strategy is optimal
(a simple interchange argument shows why).
By Corollary <ref>
and Corollary <ref>,
the decoupled minimization problem is also indexable
and the Whittle index for the minimization problem is
λ_i^- = 1 / (p_i r_i).
Algorithm <ref> selects MDPs
with the smallest minimization Whittle indices
(i.e., largest maximization Whittle indices) first.
We now exhibit a choice of parameters where the strategy
fails for the minimization problem.
For i < n, let p_i = log_1/(2e)(1-ρ)/(n-1)
and r_i=10 R /p_i
where R is the reward threshold.
Then the Whittle index for the minimization problem is λ_i^- = 1/(10R).
Let p_n = 1 and r_n = R.
Then the Whittle index for the minimization problem is λ_n^- = 1/R.
Algorithm <ref> selects
the MDPs with i<n first.
Even if the algorithm selects all MDPs with i<n,
the probability of not satisfying the constraint is
(1 - log_1/(2e)(1-ρ)/(n-1))^(n-1) > (1/(2e))^log_1/(2e)(1-ρ) = 1-ρ
where the inequality holds for sufficiently large n.
Therefore, the probability of satisfying the constraint
is strictly less than ρ.
In contrast, choosing the nth MDP deterministically
achieves reward R with cost 1.
Therefore, Algorithm <ref>
has expected cost n
times the optimal strategy in order to satisfy the constraint
with probability ρ.
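The gap in the proof above can also be checked numerically; the sketch below plugs in n = 100 and ρ = 0.9 (illustrative values of our own) and confirms that playing all n-1 unreliable arms still violates the reward constraint too often, while the single reliable arm meets it at cost 1.

```python
import numpy as np

n, rho, R = 100, 0.9, 1.0
a = np.log(1 - rho) / np.log(1 / (2 * np.e))   # log base 1/(2e) of (1 - rho), about 1.36
p = a / (n - 1)                                # success probability of each unreliable arm
r_big = 10 * R / p                             # reward of an unreliable arm when it pays off

p_none = (1 - p) ** (n - 1)                    # probability that no unreliable arm pays off
print(p_none, 1 - rho)                         # about 0.25 > 0.1, so the constraint fails too often
# The reliable arm (p_n = 1, r_n = R) satisfies the threshold deterministically at cost 1.
```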
§ HEURISTICS FOR THE MINIMIZATION PROBLEM
The example in Claim <ref> shows that the greedy minimization heuristic described in Algorithm <ref> can fail badly.
As a result, we need alternative algorithms for the minimization problem.
Because the approximate minimization problem is PSPACE-hard, there is no polynomial-time algorithm that can provably approximate the minimization problem unless P = PSPACE.
Instead, we can at most hope for heuristic algorithms without provable guarantees that perform well in practice.
In this section, we present and discuss two such algorithms we generalize from prior work: a standard increasing budget heuristic described in Algorithm <ref> and a more specialized truncated reward heuristic described in Algorithm <ref>.
A slightly simpler version of Algorithm <ref> has been theoretically analyzed in special cases of our problem in prior work <cit.>.
The first algorithm we consider is an increasing budget heuristic.
At each phase, the heuristic greedily selects MDPs until it exhausts the current budget.
The budget grows exponentially with a multiplicative factor m.
In this way, the heuristic takes low cost actions first.
If the low cost actions satisfy the reward constraint, then we've satisfied the constraint at minimum cost.
If the low cost actions do not satisfy the constraint and we need to keep going, then we haven't paid too much more than the optimal strategy because the actions are low cost and the budget grows exponentially.
By Corollary <ref>, the largest Whittle indices for the maximization problem are the smallest Whittle indices for the minimization problem. We use this observation to simplify the pseudocode by calling GreedyMax.
We say a scale is poor if all the remaining MDPs that were not selected have a Whittle index of at most 1/b.
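The sketch below is one way to read the increasing-budget idea for a single selection step, reusing the `greedy_max` sketch given earlier; the phase bookkeeping of the actual Algorithm <ref> (including the notion of a poor scale) may differ.

```python
def increasing_budget(whittle_max, costs, prob_satisfied, rho, m=2.0, b0=1.0, max_phases=30):
    """Increasing-budget heuristic (sketch): rerun the greedy maximization selection
    with an exponentially growing budget until the reward constraint is
    probabilistically satisfied."""
    chosen, b = [], b0
    for _ in range(max_phases):
        chosen = greedy_max(whittle_max, costs, b)   # greedy sketch from the maximization section
        if prob_satisfied(chosen) >= rho:
            break
        b *= m
    return chosen
```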
While it takes low cost actions first, Algorithm <ref> can choose actions which are desirable in expectation but only because their reward is large enough to balance out their small probability of having reward.
Notice these actions are desirable for the maximization problem because the goal is to maximize expected reward.
However, for the minimization problem, these actions are not desirable because they have a low probability of satisfying the reward threshold.
The second algorithm we consider addresses this problem by truncating rewards at different levels.
Just as Algorithm <ref> initially only considers actions with low cost, Algorithm <ref> initially only considers actions with low reward.
The advantage is that the actions selected have high probability of outputting reward which is helpful for solving the minimization problem.
The algorithm still performs well when high reward actions are better because the truncation factor exponentially increases.
Since we want to keep the increasing budget property, we repeat the truncation factor search for each size of the budget.
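A sketch of the truncated-reward idea is below; `whittle_of(tau)` stands for a hypothetical routine that recomputes the maximization Whittle indices with all rewards capped at the truncation level tau, and the nested search mirrors the description above rather than the exact pseudocode of Algorithm <ref>.

```python
def truncated_reward(whittle_of, costs, prob_satisfied, rho, b0=1.0, tau0=1.0, m=2.0, max_phases=20):
    """Truncated-reward heuristic (sketch): for each budget scale, search over
    exponentially growing truncation levels, ranking arms by Whittle indices
    recomputed from rewards capped at that level."""
    chosen, b = [], b0
    for _ in range(max_phases):
        tau = tau0
        for _ in range(max_phases):
            chosen = greedy_max(whittle_of(tau), costs, b)   # reuses the earlier greedy sketch
            if prob_satisfied(chosen) >= rho:
                return chosen
            tau *= m
        b *= m
    return chosen
```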
§ EXPERIMENTS
We test our algorithms on real and synthetic data sets, with deterministic costs and stochastic rewards.
On each data set, we compare the discounted cost as the reward threshold, number of MDPs, and success probability vary.
We set the discount factor to β=.9 and run each simulation for 10 time steps, repeating 10 times.
We report the mean (lines) and standard deviations (shaded regions) in the plots.
The code is available in the supplementary material and will be accessible online after publication.
§.§ National Inpatient Sample
Figure <ref> shows the performance of algorithms for selecting patient care in the National Inpatient Sample data set <cit.>.
The cost is the (normalized) dollar cost of treatment and the reward is the improvement in a patient's medical condition as measured by a four-level severity index <cit.>.
Because the costs and rewards are well-concentrated in the data, the increasing budget and truncating reward techniques have little effect on the discounted cost.
The random baseline of uniformly selecting new actions is slow because it needs to select many more actions before (probabilistically) satisfying the reward constraint.
Since Whittle indices cannot be computed for the data set in closed form, we use a Q-learning approach to approximate the Whittle indices <cit.>.
Additional details are available in the supplementary material.
§.§ Partially Observable MDPs
We also test the algorithms on synthetic data sets where the Whittle indices can be computed in closed form.
Each MDP is in either a reward-producing state or a non-reward-producing state.
The MDP transitions between states at each time step and the current state can only be observed if the MDP is selected.
The goal is to select MDPs that are likely to give large reward.
Figure <ref> shows the performance of algorithms on adversarial instances where half the MDPs reliably give a small reward and half the MDPs unreliably give a large reward.
Algorithms <ref> and <ref> (the algorithms are actually the same because the costs are all 1) select the MDPs with unreliable reward first and perform poorly.
In contrast, the truncated reward heuristic quickly gives solutions with lower cost because it truncates the large rewards of the second group and selects the MDPs with reliable reward instead.
Figure <ref> shows the performance of algorithms on uniform instances where the probabilities are selected randomly while the rewards are chosen so that all MDPs have roughly equal expected reward.
Since the probabilities and therefore rewards are similar, Algorithm <ref> gives the best performance.
§ CONCLUSION
We introduce the minimization problem for RMABs, designed specifically for applications where a certain amount of reward must be achieved.
We show that even approximating the minimization problem is PSPACE-hard.
Without provably accurate algorithms, we turn to heuristics for solving the minimization problem, defining minimization Whittle indices and presenting two heuristics.
While algorithms adapted from the maximization problem perform well when rewards are well-concentrated, we find that specialized algorithms are needed for more complex problems.
We believe our work suggests the importance of continued research into the minimization problem for RMABs.
§ ACKNOWLEDGEMENTS
R. Teal Witter was supported by the NSF Graduate Research Fellowship under Grant No. DGE-2234660.
Lisa Hellerstein was supported in part by NSF Award IIS-1909335.
§ PROOF OF PSPACE-HARDNESS
Let M be a 1-tape Turing Machine that on any input x, runs in space p(|x|), for some polynomial p (where |x| is the length of x). We reduce the problem of determining whether M halts on a given input x to the problem of determining an approximately optimal policy for a given RMAB minimization instance.
The reduction is as follows.
Given x, let n=p(|x|),
the maximum number of tape cells that will be used by M when run on input x. We assume without loss of generality that the tape of M consists of n cells, numbered 1 through n, that x appears at the start of the tape, and that the head of M only accesses cells 1 through n when run on input x.
Let
Γ be the tape alphabet of M,
Σ⊆Γ be the input alphabet,
Q be the set of states of M,
q_0 ∈ Q be the initial state of M,
q_accept, q_reject∈ Q be the halting states,
and δ: Q ×Γ→ Q ×Γ×{R,L} be the transition function.
Let T = |Q| n^|Γ|+2, which is an upper bound on the number of steps that M could perform on input x if M halts on x.
We now describe the RMAB minimization instance.
We choose β so that the discounted costs will be between 1/2 and 1 for each step of the RMAB.
In particular, we set β = exp(log(.5)/(T+5α^2)).
The problem is to find a strategy for the RMAB that gives an (α, ρ)-approximation to the minimization problem with reward threshold R.
We will describe the instance of the RMAB minimization problem
for which solving the approximate minimization problem
requires determining whether Turing Machine M halts in T steps.
Interestingly, the MDPs in the RMAB are deterministic.
In the RMAB, there are n+1 MDPs, a “special” MDP, and n “cell” MDPs, one for each cell of the tape of M.
The operation of the RMAB can be described as having two phases: first a warm up phase, and then a simulation phase, in which the cell MDPs can simulate the operation of Turing Machine M on input x.
To perform the simulation of the Turing Machine, each cell MDP has states that can keep track of
the current Turing machine state (if the head is at that cell, or about to move to that cell), the current symbol contained in the tape cell,
and whether or not the Turing Machine's head is currently at that cell.
During the warm up phase of the RMAB, there are a series of deterministic transitions between the cell MDP states, with zero cost and reward, that end in the proper initialization of the cell MDPs to represent the Turing machine's initial tape contents, head position, and state.
For MDP i∈ [n] at time step t ∈ [T] of the Turing Machine's simulation,
the state of the MDP is represented by a 7-tuple:
(i, TMst_i^(t), symbol_i^(t),
current_i^(t), next_i^(t), j^(t), k^(t)).
When clear from context, we drop the subscript and superscript.
Here TMst∈ Q ∪{0}
represents the current state of the Turing Machine if the head is currently at the cell (or about to move there), or 0 otherwise. Additionally,
symbol∈Γ represents the current symbol in the cell,
current∈{, } represents
whether the cell currently has the head, and
next∈ [n] represents the cell that the head will move to next.
(This value is irrelevant when the cell does not contain the head.)
The indices j ∈ [n] and k ∈ [2|Q|] are artifacts of the reduction
used for passing the head and its current state between cells.
We can think of these indices as a central `clock' which allows for communication between MDPs even though they are independent. This approach was used in proving PSPACE hardness of the RMAB maximization problem <cit.>.
After the warm up phase described below, the indices j and k of the cell MDPs are initialized at 0.
At each time step of the RMAB during the simulation phase,
each cell MDP updates indices j and k as follows:
j^(t) = (j^(t-1) + 1) mod n
and, if j^(t-1) = 0,
k^(t) = (k^(t-1) + 1) mod (2|Q|+1).
In this way, the indices j are incremented n times for each time
k is incremented.
The remaining values stay fixed unless otherwise noted.
If the simulation ends (the Turing machine halts), the MDP corresponding to the cell containing the head of the Turing Machine enters a trapping state that incurs cost 1 and provides reward R.
Before describing the simulation phase in more detail, we first describe the warm up phase. It has 2α time steps. Both the cell MDPs and the special MDP use their states to keep track of the number of time steps spent so far in this phase. The n cell MDPs incur zero cost and earn zero reward throughout the steps of the warm up phase, no matter whether or not they are played.
At the start of the warm up phase, the RMAB policy needs to decide whether or not to play the special MDP. If it does, the special MDP enters a series of states
where for the next 2α steps, it incurs unit cost and reward R if it is played, and zero cost and zero reward if it is not played. Beyond that, it enters a trapping stage with zero cost and zero reward.
If the special MDP is not played at the first step, it enters a series of 2α states where it receives reward R at zero cost whether or not it is played.
After the 2α states, the special MDP then transitions to a trapping state that incurs zero cost and receives zero reward.
Since the constraint in the RMAB minimization problem requires at least R reward to be received in every time step,
if the RMAB policy plays the special MDP at the first step of the initialization phase, it must play the
special MDP throughout the phase, since that is the only way to satisfy the constraint.
Hence by the choice of β,
if the special MDP is played in the first step,
the discounted cost incurred in the initialization phase is between α and 2 α, and no further cost needs to be incurred.
If the special MDP is not played in the first step, then the only way to receive reward following the initialization phase (and to satisfy the reward constraint) is to simulate the Turing Machine by playing the cell MDPs.
We describe below how the simulation works, but at this point what is relevant is simply that during the simulation, no cost is incurred. If and when the Turing Machine halts, it must do so within T steps; the RMAB simulation will then end, and the RMAB will incur a (discounted) cost of at least
2α^2.
Therefore the optimal policy is to play the special MDP if and only if the Turing machine halts on input x.
Every other strategy is at least a multiplicative factor of α worse than the optimal strategy.
Then we have that any (α, ρ)-approximation to the optimal strategy requires determining whether the Turing machine halts in T steps.
Since the RMAB is deterministic, the statement holds even for ρ arbitrarily close to 0.
We now describe the transition, reward, and cost functions for the cell MDPs that ensure that the Turing Machine is simulated properly during the simulation phase.
Each step of the Turing machine is simulated by a full round of updates
to the indices j and k (that is, n (2 |Q|+1) steps of the MDPs).
The difficulty lies in correctly copying the information about the Turing Machine state from the cell MDP for current head position
only to the cell MDP for the next head position.
The index j corresponds to the possible current position of the head (and hence which cell MDP should be copied), while the index k corresponds to the possible current state of the Turing Machine.
The simulation of a single step of the Turing Machine corresponds to multiple steps of the RMAB, consisting of a
transition phase, copy phase, and a validation phase.
In the transition phase, the cell currently with the head is wiped and the next cell with the head is initialized.
The initialization includes computing the new state of the Turing Machine, the new symbol, and the next head position.
In what follows, we give pseudocode
describing the operation of
cell MDP i, corresponding to the ith cell of the Turing Machine tape.
Each time step of the RMAB corresponds to a single value of j and k.
The cell RMABs all have the same values for j and k at each time step,
so that there are essentially n copies of each phase running in parallel, one for each cell MDP.
For most values of j and k, the reward for a cell MDP is the same whether or not the MDP is played. (The cost is always 0.) In this case, we simply give the reward value. If the reward is received only if the MDP is played, we indicate that explicitly
(otherwise, assume there is no reward).
We use H={q_accept,q_reject}
to denote the halting states.
Recall that the RMAB minimization problem constraint requires that a reward of at least R in total be obtained from the MDPs in each time step. Thus in a given time step, if one MDP receives a negative reward, another MDP must receive a positive reward to compensate.
In the copy phase, we copy the state of the Turing Machine to the MDP for the next cell that will contain the head.
The MDP for the current cell receives R reward unless the indices align with the next head and the correct state of the Turing Machine.
Therefore, the correct next cell must be selected and the correct state copied.
Unfortunately, other cells or other states could be selected and copied so we need a validation phase.
In the validation phase, we ensure that the head and state information was only copied to the correct cell MDP.
If it is copied to a cell MDP,
then that MDP receives -R reward
on the indices j and k corresponding to its index and stored Turing Machine state.
The only way to still satisfy the constraint is if the policy receives 2R reward during that time step, which only happens if the indices j and k align with the correct next head and Turing Machine state.
Therefore, the only way to satisfy the reward constraint in every step
of the copy and validation phases is if only the correct head and Turing Machine state are copied.
Therefore, we have reduced the problem of determining whether the Turing Machine M halts on a given input x to the problem of determining an approximately optimal policy for a given RMAB minimization instance.
§ PROOF OF COROLLARIES
We first observe that
V_max(s,a,λ) = λ· V_min(s,a,1/λ).
Now suppose indexability holds for the decoupled maximization problem.
Consider a state s ∈𝒮 and real numbers 0 < λ' ≤λ.
By the indexability assumption, we have
V_max(s,0,1/λ) > V_max(s,1,1/λ) ⟹ V_max(s,0,1/λ') > V_max(s,1,1/λ')
since 1/λ' ≥ 1/λ.
By the observation above, this is equivalent to
V_min(s,0,λ) > V_min(s,1,λ) ⟹ V_min(s,0,λ') > V_min(s,1,λ').
By a similar argument, we can show that if indexability holds
for the decoupled minimization problem then it also holds
for the decoupled maximization problem.
The statement follows.
Let λ' ≤λ^+ ≤λ''.
By indexability and the definition of the Whittle index,
we have
V_max(s,0,λ') ≤ V_max(s,1,λ') and V_max(s,0,λ'') > V_max(s,1,λ'').
By the observation relating the decoupled maximization
and minimization problems, this is equivalent to
V_min(s,0,1/λ') ≤ V_min(s,1,1/λ') and V_min(s,0,1/λ'') > V_min(s,1,1/λ'').
Since the inequality holds for all 1/λ' ≥ 1/λ^+ ≥ 1/λ'',
we have that 1/λ^+ = sup{λ: V_min(s,0,λ) > V_min(s,1,λ)} = λ^-.
By a similar argument, we can show that if the minimization
index is λ^- then the Whittle index is 1/λ^-.
The statement follows.
§ DATA DESCRIPTION
§.§ National Inpatient Sample
The first data set consists of real hospital visits from the National Inpatient Sample <cit.>.
Each action corresponds to selecting a patient for elective care.
We build the problem by sampling n anonymized patients from the hospital data.
The cost of a patient's elective care is the total amount they were charged for a hospital visit.
The reward of a patient's elective care is how much they needed care as measured by the severity of their condition <cit.>.
In order to make the reward function stochastic, we draw the severity of a patient's condition from a distribution over similar patients.
For each patient, we compute the probability similar patients are admitted for emergency care.
For each week a patient does not receive elective care, we compound the probability they need emergency care.
With this probability, the patient needs emergency care and the total reward from the week is decreased by the severity of their condition.
Since the problem is based on real data and does not have an analytical form, the Whittle indices cannot be computed in closed form.
Instead, we approximate the Whittle indices using Q-learning <cit.>.
Since a tabular approach is infeasible given the number of patients in the data set,
we train a shallow neural network to learn the Q-value (in our notation, V_min(·)) of each patient and action.
In particular, we train the neural network f_θ with parameters θ to minimize the loss
ℒ(θ) = (f_θ(s,a) - (r(s,a) - c(a) + βmax_a' f_θ(s',a') ))^2
over instances from simulations of states s, selected actions a, stochastic next states s', and next actions a'.
Then we approximate the Whittle indices by the value of λ that would set f_θ(s,0) = f_θ(s,1) + λ c(1).
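A minimal PyTorch-style sketch of this approximation is given below; the network width, learning rate, and the way transitions (s, a, r, cost, s') are sampled are placeholders of our own rather than the configuration used in the experiments.

```python
import torch
import torch.nn as nn

state_dim, beta, c1 = 4, 0.9, 1.0   # placeholder dimensions and constants
q_net = nn.Sequential(nn.Linear(state_dim + 1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def q_value(s, a):
    """Q-value of action a (0 or 1) in state s (a 1-D float tensor of length state_dim)."""
    return q_net(torch.cat([s, torch.tensor([float(a)])])).squeeze()

def td_update(s, a, r, cost, s_next):
    """One gradient step on the squared temporal-difference loss displayed above."""
    with torch.no_grad():
        target = r - cost + beta * torch.maximum(q_value(s_next, 0), q_value(s_next, 1))
    loss = (q_value(s, a) - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

def approx_whittle(s):
    """The lambda solving f(s, 0) = f(s, 1) + lambda * c(1) for the learned network."""
    with torch.no_grad():
        return float((q_value(s, 0) - q_value(s, 1)) / c1)
```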
We make all of our code and model weights available in the supplementary material.
Our use of the real data set—the National Inpatient Sample—is governed by a data sharing agreement with the Healthcare Cost and Utilization Project <cit.> which prevents us from sharing the data.
However, researchers may apply to access the data by following the process and training outlined on the HCUP website.
§.§ Hidden Two-State MDP
The second data set consists of synthetic data from a well-studied two-state hidden MDP problem <cit.>.
We choose this synthetic data set because we can compute the Whittle index in closed form and compare the algorithms when the rewards are well-spread.
There are n MDPs, each in state 0 or state 1.
The ith MDP transitions from state x to y with probability
p^xy_i ∈ (0,1) for x,y ∈{0,1}.
The state of an MDP is hidden unless we select it;
if we select an MDP and the MDP is in state 1,
we receive positive reward r_i ∈ℝ_>0.
The challenge is that we want a strategy to be exploiting MDPs that are likely in the 1 state while exploring new MDPs to learn if they are in the 1 state.
We consider the problem with unit costs because we are not aware of results for the Whittle index in the more general non-negative cost setting.
We make the first group of reliable MDPs have reward r_i=1 and long-term expected reward 1.
We make the second group of unreliable MDPs have reward r_i≈ 10n^2 and long-term expected reward 2.
We make all our code and data available in the supplementary material.
§ MAXIMIZATION OVERVIEW
It is computationally hard to solve the maximization problem exactly in general so prior work considers an approximate version of the problem <cit.>.
[Approximate Maximization]
Consider an approximation factor α≥ 1.
A policy π is an α-approximation to the maximization
problem if the expected discounted reward
(Equation <ref>)
is within a factor of α of the optimal policy π^+
and the budget constraint is satisfied.
Since evaluating the optimal strategy for the maximization problem
is PSPACE-hard <cit.>,
the classical approach is to make a series of relaxations
to get a decoupled problem for each MDP.
When we consider a single MDP i, we typically drop the subscript
for notational brevity.
[Decoupled Maximization <cit.>]
Consider a particular MDP in the RMAB instance.
Fix λ≥ 0.
The decoupled maximization problem for a particular MDP
with parameter λ is to find the policy
max_π : 𝒮→𝒜 𝔼_{a^(t), s^(t)}_t=1^∞[ ∑_t=1^∞β^t-1 (r(s^(t), a^(t)) - λ c(a^(t))) ].
Before we define indexability, we will introduce notation
to describe the expected value of taking an action
in a state and then following the optimal policy.
Let
V_max(s,a,λ) = 𝔼[ r(s, a) - λ c(a) ] + 𝔼_s' ∼τ(s,a)[βmax_a' V_max(s', a', λ) ].
An MDP is indexable if for all states s ∈𝒮
and real numbers λ' ≥λ,
V_max(s,0,λ) > V_max(s,1,λ) ⟹ V_max(s,0,λ') > V_max(s,1,λ').
In words, if it is optimal to take the passive action under
a subsidy λ in state s,
then it must also be optimal to take the passive
action under a larger subsidy λ' in state s.
|
http://arxiv.org/abs/2409.03410v1 | 20240905105215 | Error bounds of Median-of-means estimators with VC-dimension | [
"Yuxuan Wang",
"Yiming Chen",
"Hanchao Wang",
"Lixin Zhang"
] | math.ST | [
"math.ST",
"stat.TH"
] |
Error bounds of Median-of-means estimators with VC-dimension

[1] Yuxuan Wang, [email protected]
[2] Yiming Chen, [email protected]
[2] Hanchao Wang, [email protected]
[1,3] Lixin Zhang, [email protected]

[1] School of Mathematical Sciences, Zhejiang University, Hangzhou, 310027, China
[2] Institute for Financial Studies, Shandong University, Jinan, 250100, China
[3] School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou, 310018, China
We obtain the upper error bounds of robust estimators for mean vector, using the median-of-means (MOM) method. The method is designed to handle data with heavy tails and contamination, with only a finite second moment, which is weaker than many others, relying on the VC dimension rather than the Rademacher complexity to measure statistical complexity. This allows us to implement MOM in covariance estimation, without imposing conditions such as L-sub-Gaussian or L_4-L_2 norm equivalence. In particular, we derive a new robust estimator, the MOM version of the halfspace depth, along with error bounds for mean estimation in any norm.
§ INTRODUCTION
Inspired by applications in machine learning and data science, there has been growing interest in recent years in constructing estimators of the mean μ. As the most basic method of estimation, the sample mean μ̅_N=1/N∑_i=1^N X_i of a sample (X_1, …, X_N) of N independent and identically distributed random variables possesses favorable statistical properties established by the central limit theorem. However, these asymptotic properties often require a large sample size in practical applications, which significantly increases the difficulty in practice. Alternatively, non-asymptotic estimators with faster convergence rates have emerged, implying that fewer samples are needed.
Simultaneously, in situations where the distribution exhibits heavy-tailed characteristics or outliers present in the data, the empirical mean may no longer be sufficient to meet the requirements. There is an urgent need to enhance the quality of mean estimation, particularly in a non-asymptotic context. It is worth mentioning that in mean estimation, statistical optimality, including convergence rates and computational complexity, are crucial factors. We will primarily focus on estimators that maintain high accuracy while providing substantial confidence in mean estimation.
Unlike asymptotic estimators, we consider a non-asymptotic notion known as the "L-sub-Gaussian" estimator, defined as follows. Let μ_N be a mean estimator and σ^2 the variance; μ_N is L-sub-Gaussian if there exists a constant L > 0 such that, for any δ∈(0,1) and any sufficiently large sample size N, the following inequality holds with probability at least 1 - δ:
|μ_N-μ| ≤L σ√(log (2 / δ))/√(N).
This definition comes from the accuracy of empirical mean estimation in the context of sub-Gaussian distributions. Many well-known robust mean estimators exhibit this property, with the most common being the median-of-means (MOM) estimator. MOM method has experienced rapid development, as evidenced by works such as <cit.>, <cit.> and <cit.>.
The estimation of vector means and real-valued sample means are fundamentally different, as is illustrated in <cit.>. The former involves transforming into a problem of concentration inequalities for the upper bounds of a stochastic process indexed by vectors in ℝ^d. This represents an essential distinction and also marks a departure from previous work.
For the estimation of the mean in the multivariate sub-Gaussian case, the empirical mean of independent and identically distributed samples with mean μ and covariance matrix Σ satisfies, with at least 1-δ probability:
‖∑_i=1^N X_i/N - μ‖≤√(Tr(Σ)/N)+ √(‖Σ‖log(1/δ)/N),
where Tr(Σ) is the trace of Σ and Σ is the operator norm of Σ. A mean estimator is considered sub-Gaussian, as defined in <cit.>, if it satisfies an inequality of the form above (with possibly different constant factors). <cit.> used the MOM estimator and obtained near-optimal confidence bounds for mean estimation under general heavy-tailed conditions, specifically when only the second moment is finite. This means that for δ∈ (0,1), the optimal confidence upper bound holds with at least a probability of 1-δ:
‖μ_N-μ‖≤c/√(N)(max{𝔼‖Y_N‖, 𝔼‖G‖+R √(log (2 / δ))}),
where c is an absolute constant and
R=sup _x^* ∈ℬ^∘(𝔼(x^*(X-μ))^2)^1 / 2,
Y_N=1/√(N)∑_i=1^N ε_i(X_i-μ),
where ℬ^∘ is the unit ball of the dual space to (ℝ^d, ‖·‖), (ε_i)_i=1^N are i.i.d. symmetric {-1,1}-valued random variables that are also independent of (X_i)_i=1^N, and 𝔼‖Y_N‖/√(N) is called the Rademacher complexity. Also, observe that by the central limit theorem, Y_N tends, in distribution, to the centered Gaussian random vector G that has the same covariance as X.
<cit.> furthered this line of research by constructing estimators for the covariance matrix of random vectors that are robust to heavy tails and outliers. However, their approach imposes stricter conditions, such as the L_4-L_2 norm equivalence.
Some other covariance estimators (<cit.>) adopt a two-step approach: firstly, estimating μ by MOM or other methods, and then employing truncation techniques to estimate the covariance, aiming to mitigate the influence of heavy-tails. This two-step method is often used to improve the robustness and accuracy of covariance matrix estimation in the presence of heavy-tailed distributions or outliers.
As one of the earliest studies of sub-Gaussian mean estimators, <cit.> introduced a sharp example for distributions with known variances and distributions with finite fourth moments and known kurtosis upper bounds. Utilizing a specific function ϕ, the M-estimator, also known as the Catoni estimator, provides a well-performing confidence interval and extends naturally to mean estimation of heavy-tailed random vectors (<cit.>) and covariance matrix estimation (<cit.>). However, the construction of the Catoni estimator inevitably requires the variance σ^2 or the covariance Σ of the distribution to be known.
In computer science and logic design, Boolean functions are basic, representing functions with binary outputs. In machine learning, they are used to model simple classification problems. The VC dimension, first introduced by Vapnik and Chervonenkis (<cit.>), measures the maximum complexity a model's hypothesis space can handle, especially in classification tasks. While it is primarily used for assessing classifier complexity, directly computing the VC dimension can be challenging, especially in high-dimensional and complex models. It serves as a theoretical guide for understanding a model's learning and generalization abilities.
<cit.> introduced a novel general approach to constrain the estimation error of MOM estimators. The author applied VC dimension instead of Rademacher complexity to measure statistical complexity, which does not take into account
the unknown structure of the covariance matrix, but is related only to the dimension
of the dual space.
In the context of multivariate analysis, the varying definitions of the median give rise to distinct MOM estimators. Among these, the geometric median was proved by <cit.> to be suitable for constructing robust MOM estimators.
<cit.> then discussed the construction of sub-Gaussian estimators of a mean vector by VC dimension, using the MOM versions of the Stahel-Donoho outlyingness (SDO) and Median Absolute Deviation (MAD) functions.
Inspired by the groundbreaking work of Depersin, our objective is to establish the median-of-means (MOM) procedure to attain a sub-Gaussian rate in estimating mean vectors. This MOM approach holds the promise of being implemented for covariance estimation directly, bypassing the need for L-sub-Gaussian or L_4-L_2 norm equivalence conditions. Moreover, we aim to extend its utility to practical applications such as Principal Component Analysis (PCA).
Furthermore, ever since <cit.> introduced the concept of data depth (also known as halfspace depth), it has emerged as a fundamental tool for assessing the centrality of data points in multivariate datasets. Consequently, in this paper, we endeavor to delve into the error bounds achievable by a novel estimator: the MOM adaptation of Tukey's median. We anticipate exploring the bounds within certain confidence levels, leveraging the VC dimension as a guiding framework. Through this exploration, we aim to shed light on the robustness and efficacy of MOM estimators in practical data analysis scenarios.
The structure of this paper is as follows: In Section <ref>, we will provide necessary symbol explanations and introduce the definitions and lemmas; in Section <ref> and Section <ref>, we present error bounds of mean estimation introduced by <cit.> and covariance estimation, respectively; finally, in Section <ref>, we will give a Tukey MOM estimator and discuss the applications of this technique in regression problems.
§ PRELIMINARY
§.§ Notation
In this paper, we assume that the covariance matrix of interest is non-degenerate. We use ||·|| to represent a norm on ℝ^d, and assume the existence of an inner product ⟨·, ·⟩ that induces this norm. ||·||_* denotes its dual norm. ℬ represents the unit ball of the norm ||·||, and ℬ^* represents the unit ball of the norm ||·||_*. ℬ_0^* is defined as the set of extreme vectors of ℬ^*. We also introduce an operator norm ||A||=sup_u ∈ℬ^* ||Au||_2, where ||·||_2 is the Euclidean norm on ℝ^d. In particular, for a vector u = (u_i), ||u||_2 = √(∑_i u_i^2) represents the ℓ^2 norm. The set S^d-1 = {u ∈ℝ^d : ||u|| = 1} is the unit sphere in ℝ^d. For a matrix A = (A_ij), when A = A^T ∈ℝ^p× p is symmetric, we use λ_j(A) to denote its j-th largest singular value. The operator norm of A is represented as ||A||_op = λ_1(A), and the Frobenius norm is denoted as ||A||_F = √(∑_ij A^2_ij).
Given an integer d and a, b ∈ℝ, we use [d] to denote the set {1, 2, …, d} and write a ∨ b = max(a, b) and a∧ b = min(a, b). For two non-negative sequences {a_n}, {b_n}, for some constant C > 0 independent of n, a_n≲ b_n means a_n≤ C b_n, and a_n≳ b_n means a_n≥ C b_n. Throughout the entire paper, C, c and their variations, whose specific values may vary, represent universal constants independent of n. Additionally, the indicator function 1_B(·) is defined as
1_B(u)= 1, if u∈ B,
0, if u ∉ B .
§.§ Median of mean
Recall the definition of the classic median-of-means (MOM). First, we randomly divide the data into K equally sized blocks B_1, …, B_K (if K does not divide N evenly, we discard some data). Then, we calculate the empirical mean within each block. For k=1, …, K,
X̅_k=1/m∑_i ∈ B_k X_i, where m=⌊ N/K ⌋ is the size of each block.
In the one-dimensional case, for x_1,…,x_n∈ℝ, Med(x_k)=x_i, such that
#{j ∈[n]: x_j ≤ x_i}≥n/2 and #{j ∈[n]: x_j ≥ x_i}≥n/2,
where #(·) denotes the cardinality of the set, and if there are multiple i satisfying this condition, the median is defined as the smallest among them.
Let the MOM estimator be μ̃_0 := Med(X̅_k : k∈[K]); then it can be shown that, under suitable second-moment conditions, μ̃_0 is a sub-Gaussian estimator.
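For reference, a minimal sketch of this one-dimensional estimator is shown below; the sequential split of the sample into blocks plays the role of the random partition (equivalent for i.i.d. data), and numpy's median averages the two middle block means when K is even, a harmless deviation from the definition above.

```python
import numpy as np

def mom_1d(x, K):
    """Median-of-means: split into K blocks, average within blocks, take the median."""
    x = np.asarray(x, dtype=float)
    m = len(x) // K                       # block size; leftover points are discarded
    block_means = x[: m * K].reshape(K, m).mean(axis=1)
    return float(np.median(block_means))

rng = np.random.default_rng(0)
sample = rng.standard_t(df=2.5, size=10_000)   # heavy-tailed with mean 0 and finite variance
print(mom_1d(sample, K=51))
```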
Now, let Y_1, …, Y_N denote N independent and identically distributed random vectors in ℝ^d. Our goal is to estimate 𝔼Y_1 = μ, assuming that Y_1 has finite second moments. Define Σ = 𝔼((Y_1-μ)(Y_1-μ)^T), sometimes also denoted as 𝔼((Y_1-μ) ⊗(Y_1-μ)), to represent the unknown covariance matrix of Y_1.
§.§ Contamination model
In practice, we sometimes cannot directly observe the vectors Y_1, …, Y_N. Instead, the observed dataset may already be contaminated or corrupted. One of the most famous examples is the so-called Huber's contamination model. In this setting, instead of observing samples directly from the true distribution P, we observe samples drawn from P_ε, which for an arbitrary distribution Q is defined as a mixture model,
P_ε = (1-ε)P +ε Q.
This setting is called the ε-contamination model, first proposed in a path-breaking paper by <cit.>.
More generally, our problem is that the contamination may be adversarial (<cit.>). This means that when an ε fraction of all observed values is maliciously tampered with by an adversary, who is aware of both the "clean" samples and our estimators, there exists a (possibly random) set 𝒪 such that for any i ∈𝒪^c, X_i=Y_i. Here, the size of 𝒪 satisfies |𝒪| ≤⌊ε N ⌋. Thus, the dataset we observe is {X_i: i=1, …, N}, and this model is commonly referred to as a strong contamination model. The contaminated samples {X_i: i=1, …, N} will be called ε-contaminated samples. Furthermore, our task is to recover μ and Σ.
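For simulation purposes, an ε-contaminated sample in Huber's sense can be generated as in the sketch below; the routine and its arguments are our own illustration, and the strong (adversarial) model, in which the corrupted indices may be chosen after seeing the data and the estimator, is not simulated here.

```python
import numpy as np

def huber_contaminate(clean, eps, outlier_sampler, rng=None):
    """Replace each observation, independently with probability eps, by a draw from Q."""
    rng = rng or np.random.default_rng()
    out = np.array(clean, dtype=float, copy=True)
    for i in range(len(out)):
        if rng.random() < eps:
            out[i] = outlier_sampler(rng)
    return out

rng = np.random.default_rng(1)
clean = rng.standard_normal((1000, 2))                                   # N(0, I_2) sample
dirty = huber_contaminate(clean, 0.05, lambda r: r.standard_normal(2) + 50.0, rng)
```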
§.§ VC dimension
Boolean classes ℱ arise in the problem of classification, where ℱ can be taken to consist of all functions
f of the form 1_{g(X) ≠ Y} for mappings g. VC dimension was first studied by Vapnik and Chervonenkis
in the 1970s, and let us recall the classical definitions.
Every f ∈ℱ, taking values in {0, 1}, is called a Boolean function, and ℱ is called a Boolean class of functions.
Let 𝒞 be a class of subsets of any set 𝒳. We say that 𝒞 picks out a certain subset from {x_1, . . . , x_n} if this can be formed as a set of the form C ∩{x_1, . . . , x_n} for some C∈𝒞. The collection 𝒞 is said to shatter {x_1, . . . , x_n} if each of its 2^n subsets can be picked out by 𝒞. The VC dimension VC(𝒞) is the largest cardinality of a set shattered by 𝒞, more formally,
VC (𝒞) = sup{ n : max_x_1 ,...,x_n∈𝒳#{C∩{x_1, . . . , x_n}:C∈𝒞}= 2^n} ,
and in particular, VC(𝒞)=-1 if 𝒞 is empty.
The definition of VC dimension can be easily extended to a function class ℱ in which every
function f is binary-valued, taking the values within {0, 1}. In this case, we define
VC (ℱ) = sup{ n : max_x_1, . . . , x_n∈𝒳#{( f(x_1) ,...,f(x_n)) :f∈ℱ}= 2^n}.
In particular, for a set C contained in a Euclidean space E, we obtain the equivalent definition of the VC dimension of the set of half-spaces generated by the vectors of C,
VC(C)=VC( { x∈ E→1_⟨ x,v⟩≥ 0: v∈ C}) .
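As a small numerical illustration of this half-space definition, the following Monte-Carlo sketch counts the distinct labelings that homogeneous half-spaces produce on small point sets in the plane: two generic points can be shattered, while three cannot. Since the direction set is only a random sample, the reported count is a lower bound on the number of achievable patterns.

```python
import numpy as np

def patterns(points, directions):
    """Distinct 0/1 labelings of `points` produced by the half-spaces
    {x -> 1_{<x, v> >= 0}} with v ranging over `directions`."""
    labels = (points @ directions.T >= 0).astype(int)   # shape (n_points, n_dirs)
    return {tuple(col) for col in labels.T}

rng = np.random.default_rng(0)
dirs = rng.normal(size=(20000, 2))                      # Monte-Carlo direction sample
for n in (2, 3):
    pts = rng.normal(size=(n, 2))                       # n generic points in the plane
    print(n, len(patterns(pts, dirs)), 2 ** n)          # n=2: all 4 patterns; n=3: fewer than 8
```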
There are some basic facts about VC dimension from Section 7 in <cit.>.
* VC(ℝ^d)=d+1. If F is a set of real-valued functions in a k-dimensional linear space, then Pos(F):={x →1_f(x) ≥ 0, f ∈ F} has VC dimension k+1 .
* For a function g: 𝒴→𝒳, if we note ℱ∘ g={f ∘ g | f ∈ℱ}, then we have VC(ℱ∘ g) ≤VC(ℱ).
* For any r>0, VC({x ∈ E →1_⟨ x, v⟩≥ r, v ∈ C}) ≤VC(C-C) ≲VC(C).
The following lemma can be found in <cit.>:
If VC(𝒞_i)=V_i , i=1,…,m,
let V ≡∑_j=1^m V_j, and let
⊔_j=1^m 𝒞_j ≡{∪_j=1^m C_j: C_j ∈𝒞_j, j=1, …, m},
⊓_j=1^m 𝒞_j ≡{∩_j=1^m C_j: C_j ∈𝒞_j, j=1, …, m},
⊠_j=1^m 𝒞_j ≡{C_1 ×…× C_m: C_j ∈𝒞_j, j=1, …, m}.
Then the following bounds hold:
max{ VC(⊔_j=1^m 𝒞_j), VC(⊓_j=1^m 𝒞_j), VC(⊠_j=1^m 𝒞_j) }≤ c_1 V log(c_2 m),
where c_1 = e/((e-1) log 2)≈ 2.28231.
The following is a classical result of Vapnik-Chervonenkis theory (one can see more details in <cit.>), which shows the connection between the VC dimension and empirical processes.
Let ℱ be a class of Boolean functions on a probability space (Ω, 𝒜, ℙ) with finite VC dimension VC(ℱ) ≥ 1. Let X, X_1, X_2, …, X_N be independent random points in Ω distributed according to the law ℙ. Then
𝔼sup _f ∈ℱ|1/N∑_i=1^N f(X_i)-𝔼 f(X)| ≤ C √(VC(ℱ)/N) .
§ MEAN ESTIMATION
For the mean estimation of a multi-dimensional random vector, we have the following class of median-of-means (MOM) estimators:
For ϵ>0, the sample (X_i)_i=1^N can be partitioned into K blocks B_k, each of size m=N / K. Let X̅_k=1/m∑_i ∈ B_k X_i.
For each x^* ∈ℬ^*, we obtain the set
S_x^*={y ∈ℝ^d: | Med(x^*(X̅_k) : k∈ [K])-x^*(y) | ≤ϵ} .
Let 𝕊(ϵ)=⋂_x^* ∈ℬ_0^* S_x^*, and μ_K(ϵ, δ) can be taken as any point in 𝕊(ϵ).
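The construction above is set-valued and therefore not directly computable. The following NumPy sketch forms the block means, uses the coordinate-wise median of the block means as a cheap candidate point (a heuristic, not part of the definition), and reports the smallest ε for which the candidate lies in S_{x^*} for every direction in a finite test set.

```python
import numpy as np

def mom_block_means(X, K):
    """Block means Xbar_k of an (N, d) sample; K blocks of size m = N // K."""
    N, d = X.shape
    m = N // K
    return X[: m * K].reshape(K, m, d).mean(axis=1)

def directional_mom_check(X, K, candidate, directions):
    """Smallest eps such that `candidate` lies in S_{x*} for every x* in
    `directions`, i.e. max_v | Med_k <v, Xbar_k> - <v, candidate> |."""
    bm = mom_block_means(X, K)
    med = np.median(bm @ directions.T, axis=0)
    return np.max(np.abs(med - directions @ candidate))

# usage: heavy-tailed sample, coordinate-wise median of block means as candidate
rng = np.random.default_rng(0)
X = rng.standard_t(df=2.5, size=(20000, 5))
bm = mom_block_means(X, K=100)
y = np.median(bm, axis=0)
dirs = rng.normal(size=(500, 5))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(y, directional_mom_check(X, 100, y, dirs))
```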
It can be shown that the proposed estimator satisfies the following.
For any δ∈[e^-c N, 1 / 2], there exists an estimator μ_δ such that, with probability at least 1-δ,
‖μ_δ-μ‖≲ R(√(VC(ℬ_0^*)/N)+√(128log (1 / δ)/N)+√(ε)) ,
where R^2=sup _v ∈ℬ_0^*𝔼(⟨ Y_1-μ, v⟩^2).
We will show that, in fact, we obtain the concentration inequality about the MOM estimator
ℙ(‖μ_K-μ‖≥ 8R√(K/N)) ≤exp(-K/128),
whenever K≥ C(VC(ℬ_0^*)∨|𝒪|), where C is a universal constant. Hence, by taking K≥ 128log (1/δ), we get the upper bound of the error.
Let ℱ={(x_i)_i ≤ m→1_⟨1/m∑_i x_i -μ, v⟩≥ r_K, v ∈ℬ_0^*}, where
r_K=4 sup _v ∈ℬ_0^*𝔼(⟨ Y_1-μ, v⟩^2)^1 / 2√(K/N).
For any k ∈ [K], let 𝐗_k:=(X_i)_i ∈ B_k and 𝐘_k:=(Y_i)_i ∈ B_k. The functions f ∈ℱ are compositions of the function x →1/m∑_i x_i-μ and of the functions x →1_⟨ x, v⟩≥ r_K for v ∈ℬ_0^*. The VC-dimension of the set of these compositions is smaller than the VC-dimension of the set of indicator functions indexed by ℬ_0^*. We just get VC(ℱ) ≤ c_0 VC(ℬ_0^*) for some constant c_0.
Notice that, the definition in (<ref>) is equivalent to
S_x^*={y ∈ℝ^d:|x^*(X̅_k)-x^*(y)| ≤ϵ for more than K/2 blocks } .
For any f∈ℱ, or, for the corresponding v ∈ℬ_0^*, there exist at least (K-∑_k=1^K f(𝐗_k)) blocks B_k, where
|⟨X̅_k-μ,v⟩| ≤ r_K.
So it is sufficient to compute the sum of f(𝐗_k). Now we write
sup _f ∈ℱ∑_k=1^K f(𝐘_k)= [ sup _f ∈ℱ∑_k=1^K f(𝐘_k)-𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k))]
+𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k)) .
Let f_j(𝐘)=sup _f ∈ℱ( ∑_k ≠ j f(𝐘_k)+f(𝐘)); since f is binary-valued, we have | f_j(𝐘)-f_j(𝐘')|≤ 1 for any j∈ [K] and 𝐘, 𝐘'∈ℝ^d× m. By the bounded difference inequality (see <cit.>),
ℙ(sup _f ∈ℱ∑_k=1^K f(𝐘_k)-𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k)) ≥ t) ≤exp(-2 t^2/K).
Therefore, by taking t=K/16, we can derive that, with probability at least 1-exp (-K/128) , the first term in (<ref>) is bounded above by K/16.
For the second term, we further write
𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k)) ≤𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k)-K 𝔼(f(𝐘_k)))+sup _f ∈ℱ K 𝔼(f(𝐘_k)) .
By Markov's inequality, for any v ∈ℬ_0^*,
ℙ(|⟨1/m∑_i ∈ B_1 Y_i-μ, v⟩| ≥ r_K) ≤𝔼(∑_i ∈ B_1⟨ Y_i-μ, v⟩^2)/m^2 r_K^2≤1/16.
Then, sup _f ∈ℱ K 𝔼(f(𝐘_k)) ≤ K/16. And by Lemma <ref>, there exists a universal constant C^' such that
𝔼(sup _f ∈ℱ1/K∑_k=1^K f(𝐘_k)- 𝔼(f(𝐘_k))) ≤ C^'√(VC(ℱ)/K) .
Hence, if K ≥ 256 (C')^2 VC(ℱ),
𝔼(sup _f ∈ℱ∑_k=1^K f(𝐘_k)-K 𝔼(f(𝐘_k))) ≤K/16 .
Thus, the second term in (<ref>) is no more than K/16+K/16=K/8.
Returning to the contamination model, if C ≥ 16, then K ≥ 16|𝒪| and one can deduce that
sup _f ∈ℱ∑_k=1^K f(𝐗_k) ≤sup _f ∈ℱ∑_k=1^K f(𝐘_k)+ K /16.
Putting everything together, we derive that, if C ≥ 256 (C')^2∨ 16, the following event ℰ has probability ℙ(ℰ) ≥ 1-exp (-K / 128): for all f ∈ℱ,
∑_k=1^K f(𝐗_k) ≤K /16+K /8+K /16=K /4.
Hence, we have proved that, for any x^* ∈ℬ_0^*,
Med( |x^*(X̅_k-μ)| : k∈[K] ) ≤ r_K,
whenever K≥ C(VC(ℬ_0^*)∨|𝒪|).
We conclude that taking ϵ=r_K, 𝕊(ϵ) is nonempty as it contains μ, at least on ℰ. By definition, μ_K∈𝕊(ϵ) for this choice of ϵ. Observe that for every x^* ∈ℬ_0^* there is some index j such that
|x^*(X̅_j)-x^*(μ_K)| ≤ r_K and |x^*(X̅_j)-x^*(μ)| ≤ r_K,
because both conditions hold for more than half of the indices j. Thus,
|x^*(μ_K)-x^*(μ)| ≤|x^*(X̅_j)-x^*(μ_K)|+|x^*(X̅_j)-x^*(μ)| ≤ 2 r_K .
Finally, recalling that ‖v‖=sup _x^* ∈ℬ_0^* x^*(v), one has that
‖μ_K-μ‖=sup _x^* ∈ℬ_0^*|x^*(μ_K)-x^*(μ)| ≤ 2 r_K.
Thus, for μ_K∈𝕊(r_K), and K≥ C(VC(ℬ_0^*)∨|𝒪|∨ 128log (1 / δ)), we obtain that with probability at least 1-δ,
‖μ_K-μ‖≤ 8sup _v ∈ℬ_0^*𝔼(⟨ Y_1-μ, v⟩^2)^1 / 2√(K/N).
Note that the construction of our estimator μ_K depends on the number of splitting blocks K, which in turn depends on the confidence level δ. A further drawback is that this estimator is more theoretical than practical, since it is not convenient to construct a set such as 𝕊(ϵ). As shown in <cit.>, for every ϵ>0, the sets 𝕊(ϵ) are compact and nested, and they are nonempty for sufficiently large ϵ. Therefore, the set
𝕊=⋂_ϵ>0: 𝕊(ϵ)≠∅𝕊(ϵ)
is not empty. We can define the mean estimator as any element μ∈𝕊.
Since the result holds for any norm induced by an inner product, specializing to the Euclidean norm shows that, with probability at least 1-δ, there exists an estimator such that
‖μ-μ_δ‖_2≲‖Σ‖^1 / 2(√(VC(ℬ_0^*)/N)+√(128log (1 / δ)/N)+√(ε)) .
We refer to the first two terms on the right-hand side of the above inequality as the strong and weak terms, respectively. The strong term is a global component of order √(λ_1(Σ)d)/√(N); compared with the error of the sub-Gaussian empirical mean in (<ref>), where the corresponding term is √(Tr(Σ))/√(N), the two match only when Σ≃λI_d. The weak term carries directional information and corresponds to the largest variance of a one-dimensional marginal of Y_1, that is, sup _v ∈ℬ_0^*σ(v):=sup _v ∈ℬ_0^*√(𝔼(⟨ Y_1-μ, v⟩^2)); the third term reflects the corruption level. Strong-weak inequalities are an important notion in high-dimensional probability, and for further improvements one can see <cit.>, which constructs an estimator that, up to the optimal strong term, performs robustly in every direction.
§ COVARIANCE ESTIMATION
Let 0<δ<1 and consider the given sample X_1, …, X_N. Again, the sample (X_i)_i=1^N can be partitioned into K blocks B_k, each of size m=N / K.
Set M_k=1/m∑_i ∈ B_kX_i ⊗X_i. Recall the well-known fact that the dual norm of the operator norm is the nuclear norm, and that a linear functional z acts on a matrix x via trace duality, that is, z(x)=[z, x]:=Tr(z^T x). It follows that
T={u⊗ u| u ∈ℬ_0^*(ℝ^d)} is the set of extreme points of the corresponding dual unit ball B^∘. For ϵ>0 and a fixed u∈ℬ_0^*, let U=u⊗ u, and
S_u(ϵ)={Y ∈ℝ^d × d:|[ M_k-Y, U] | ≤ε for more than K / 2 blocks } .
Set
S(ϵ)=⋂_U ∈ T S_u(ϵ) .
The estimator Σ_δ is taken to be any point in S(ϵ). Again, we derive a bound for the robust covariance estimator.
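Analogously to the mean case, the following sketch forms the block matrices M_k, takes the entrywise median as a cheap candidate Σ̂ (a heuristic; the definition only requires membership in S(ϵ)), and measures the worst violation of the defining condition over a finite set of rank-one test directions. The sample is assumed centred, as in the definition of M_k.

```python
import numpy as np

def block_second_moments(X, K):
    """Block matrices M_k = (1/m) sum_{i in B_k} X_i X_i^T for an (N, d) sample."""
    N, d = X.shape
    m = N // K
    blocks = X[: m * K].reshape(K, m, d)
    return np.einsum("kmi,kmj->kij", blocks, blocks) / m

def covariance_mom_check(X, K, Sigma_hat, test_vectors):
    """Worst violation max_u | Med_k [M_k - Sigma_hat, u u^T] | over unit test
    vectors u; the trace pairing [A, u u^T] reduces to u^T A u."""
    Mk = block_second_moments(X, K)
    quad = np.einsum("ui,kij,uj->ku", test_vectors, Mk, test_vectors)   # u^T M_k u
    med = np.median(quad, axis=0)
    ref = np.einsum("ui,ij,uj->u", test_vectors, Sigma_hat, test_vectors)
    return np.max(np.abs(med - ref))

# usage: entrywise median of the M_k as a cheap candidate for a centred t-sample
rng = np.random.default_rng(0)
X = rng.standard_t(df=4.5, size=(50000, 4))
Sigma_hat = np.median(block_second_moments(X, K=200), axis=0)
u = rng.normal(size=(400, 4)); u /= np.linalg.norm(u, axis=1, keepdims=True)
print(covariance_mom_check(X, 200, Sigma_hat, u))
```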
For any δ∈[e^-c N, 1 / 2], there exists an estimator Σ_δ∈ S(ϵ) such that
‖Σ-Σ_δ‖≲σ(√(VC(ℬ_0^*)/N)+√(128log (1 / δ)/N)+√(ε)) ,
where σ^2=sup _u ∈ℬ_2𝔼(⟨ u,(Σ-Y_1 Y_1^T) u⟩^2)<∞.
In comparison with the similar estimator in <cit.>, our estimator does not require a two-step estimation for the trace and truncation level. Additionally, it imposes fewer assumptions. For example, L-sub-Gaussian or L_4-L_2 norm equivalence for the sample distribution are not necessary any more.
The proof of Theorem <ref> is similar to that of Theorem <ref>, and we just give the first part where we adjust the dual space and some coefficients.
Let
ℱ={(x_i)_i ≤ m→1_[1/m∑_i x_i⊗ x_i -Σ, U] ≥ r_K, u ∈ℬ_0^*},
where
r_K=4 σ√(K/N). The functions f ∈ℱ are compositions of the function x →1/m∑_i x_i⊗ x_i -Σ and of the functions x →1_[ x, U] ≥ r_K for U ∈ T. The VC-dimension of the set of these compositions is smaller than the VC-dimension of the set of indicator functions indexed by T. We just get VC(ℱ)≤VC(T) ≤VC(ℬ_0^*⊗ℬ_0^*), which is, by Lemma <ref>, bounded by c_0 VC(ℬ_0^*) for some constant c_0.
By Markov's inequality, for any u ∈ℬ_0^*, that is, U∈ T,
ℙ(|[1/m∑_i Y_i⊗ Y_i -Σ, U]| ≥ r_K) ≤𝔼(∑_i ∈ B_1[Y_i⊗ Y_i -Σ, U]^2)/m^2 r_K^2≤1/16 .
The rest of this proof is the same as that of the preceding theorem.
There exists an absolute constant C such that, if K ≥ C(d ∨|𝒪|), then, with probability larger than 1-exp (-K / 128),
‖Σ_δ-Σ‖≤ 8 σ√(K/N) .
Similarly, we can immediately obtain expressions regarding the Frobenius norm as follows:
For any δ∈[e^-c N, 1 / 2], there exists an estimator Σ_δ such that
‖Σ-Σ_δ‖_F≲σ(√(VC(ℬ_0^*)/N)+√(128log (1 / δ)/N)+√(ε)) ,
where σ is the same as that in Theorem <ref>.
Another immediate corollary of Theorem <ref> is the quantitative result for the performance of PCA based on the estimator Σ_δ. Let Proj_k be the orthogonal projector on a subspace corresponding to the k largest positive eigenvalues λ_1, …, λ_k of Σ (here, we assume for simplicity that all the eigenvalues are distinct), and P̂roj_k be the orthogonal projector of the same rank as Proj_k corresponding to the k largest eigenvalues of Σ_δ. The following bound follows from the Davis-Kahan perturbation theorem in <cit.>.
Let Δ_k=λ_k-λ_k+1, and assume that Δ_k ≥ 16 σ√(K/N). Then
‖P̂roj_k-Proj_k‖≤8/Δ_kσ√(K/N),
with probability at least (1-exp (-K / 128)).
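A short self-contained sketch of this PCA corollary is given below: the spectral projector computed from a robust candidate covariance estimate is compared with the true projector; by the Davis-Kahan bound the spectral-norm error is controlled by σ√(K/N)/Δ_k. The entrywise-median candidate is again only a heuristic.

```python
import numpy as np

def top_k_projector(S, k):
    """Orthogonal projector onto the span of the k leading eigenvectors of the
    symmetric matrix S."""
    w, V = np.linalg.eigh(S)
    Vk = V[:, np.argsort(w)[::-1][:k]]
    return Vk @ Vk.T

rng = np.random.default_rng(0)
A = np.diag([3.0, 2.0, 0.3, 0.1])                       # covariance proportional to A^2
X = rng.standard_t(df=5, size=(40000, 4)) @ A
K, m = 200, 40000 // 200
Mk = np.einsum("kmi,kmj->kij", X.reshape(K, m, 4), X.reshape(K, m, 4)) / m
Sigma_hat = np.median(Mk, axis=0)                       # cheap robust candidate estimate
Proj = top_k_projector(A @ A, k=2)                      # true leading eigenspace
Proj_hat = top_k_projector(Sigma_hat, k=2)
print(np.linalg.norm(Proj - Proj_hat, ord=2))           # small when Delta_k dominates sigma*sqrt(K/N)
```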
§ TUKEY'S MEDIAN
We consider a special type of median of means, where we take Tukey's median (see <cit.>) as a robust estimator. First, we need to introduce the halfspace depth function. For any η∈ℝ^d and a distribution ℙ on ℝ^d, the halfspace depth of η with respect to ℙ is defined as
𝒟(η, ℙ)=inf _u ∈ S^d-1ℙ{u^T X ≤ u^T η} where X ∼ℙ .
Given i.i.d. observations {X_i}_i=1^N, the halfspace depth of η with respect to the observations {X_i}_i=1^N is defined as
𝒟(η,{X_i}_i=1^N)=𝒟(η, ℙ_N)=min _u ∈ S^d-11/N∑_i=1^N 1_{u^T X_i ≤ u^T η},
where ℙ_N=1/N∑_i=1^N δ_X_i is the empirical distribution. Then Tukey's median is defined to be the deepest point with respect to the observations, that is,
θ̂=argmax _η∈ℝ^d𝒟(η,{X_i}_i=1^N) .
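The halfspace depth involves an infimum over all directions. A standard computational surrogate (cf. the random Tukey depth mentioned in the concluding remarks) replaces it by a minimum over finitely many random unit directions, as in the following sketch.

```python
import numpy as np

def halfspace_depth(eta, X, directions):
    """Monte-Carlo approximation of the empirical halfspace depth
    D(eta, {X_i}) = min_u (1/N) #{i : <u, X_i> <= <u, eta>},
    with the minimum over the unit sphere replaced by a finite random set."""
    proj_X = X @ directions.T                  # (N, n_dirs)
    proj_eta = directions @ eta                # (n_dirs,)
    return np.min(np.mean(proj_X <= proj_eta, axis=0))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
dirs = rng.normal(size=(500, 2)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(halfspace_depth(np.zeros(2), X, dirs))            # close to 1/2 for a symmetric sample
print(halfspace_depth(np.array([3.0, 0.0]), X, dirs))   # small for an outlying point
```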
§.§ Halfspace depth
In the matter of the coherence with the preceding expressions, we consider the use of its equivalent definition
θ̂=argmin _η∈ℝ^dsup_v ∈ℬ_0^*∑_i=1^N1_⟨ X_i-η,v⟩ > 0,
when (<ref>) has multiple maxima, θ̂ is understood as any vector that attains the deepest level. As is known from <cit.>, the maximum depth N𝒟(θ̂,{X_i}_i=1^N) is bounded below by ⌈ N /(d+1)⌉. Because of the natural Boolean form of the functions involved, it is natural to use the VC-dimension technique. From now on, we take our estimator, the Tukey median of means, as
μ̂=argmin _η∈ℝ^dsup_v ∈ℬ_0^*∑_k=1^K1_⟨X̅_k-η,v⟩ > 0.
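In the same spirit as the depth sketch above, the following code evaluates the Tukey median-of-means on the block means; restricting the candidate set to the block means themselves is a computational shortcut, not part of the definition.

```python
import numpy as np

def tukey_mom(X, K, directions, candidates=None):
    """Sketch of the Tukey median-of-means: form block means and return the
    candidate of maximal (random-direction) halfspace depth w.r.t. them."""
    N, d = X.shape
    m = N // K
    bm = X[: m * K].reshape(K, m, d).mean(axis=1)
    if candidates is None:
        candidates = bm                                   # shortcut: search over block means
    proj = bm @ directions.T                              # (K, n_dirs)
    depth = [np.min(np.mean(proj <= directions @ c, axis=0)) for c in candidates]
    return candidates[int(np.argmax(depth))]

rng = np.random.default_rng(0)
X = rng.standard_t(df=2.5, size=(20000, 3))
X[:200] += 40.0                                           # adversarially shifted points
dirs = rng.normal(size=(300, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(tukey_mom(X, K=200, directions=dirs))               # close to the true mean 0
```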
Then we obtain the following dimension dependent bound:
There exists a universal constant C such that if K ≥ C(VC(ℬ_0^*) ∨|𝒪|), then, with probability larger than 1-exp (-K/32d^2),
‖μ̂_K-μ‖≤√(8d)sup _v ∈ℬ_0^*𝔼(⟨ Y_1-μ, v⟩^2)^1 / 2√(K/N) .
In particular, for any δ∈[e^-c N, 1 / 2], and if we use the Euclidean distance, there exists an estimator μ̂ such that
‖μ̂-μ‖≲√(d)‖Σ‖^1 / 2(√(VC(ℬ_0^*)/N)+d√(log (1 / δ)/N)+√(ε)).
Before providing the proof of the theorem, for ease of manipulation, we first provide a lemma in the general form without proof, which can in fact be derived from the proof process of Theorem <ref>.
For any k ∈ [K], let 𝐗_k:=(X_i)_i ∈ B_k and 𝐘_k:=(Y_i)_i ∈ B_k. Let ℱ be a Boolean class of functions satisfying the following two assumptions:
* For all f ∈ℱ, ℙ(f(Y_1)=1) ≤ 1/(4α),
* K ≥ C(VC(ℱ) ∨|𝒪|) where C is a universal constant,
where α>1 can be any constant. Then, with probability at least 1-exp (-K/8α^2), for all f ∈ℱ, there are at least ((α-1)/α) K blocks B_k on which f(𝐗_k)=0.
Let K ≥ C(VC(ℱ) ∨|𝒪|) with C the universal constant from Lemma <ref>, and let ℱ={(x_i)_i ≤ m→1_⟨1/m∑_i x_i -μ, v⟩≥ r_K,d, v ∈ℬ_0^*}, where
r_K,d=√(8d)sup _v ∈ℬ_0^*𝔼(⟨ Y_1-μ, v⟩^2)^1 / 2√(K/N).
Based on the analysis in the previous Section <ref>, VC(ℱ) ≤ c_0 VC(ℬ_0^*) for some constant c_0.
By Markov's inequality, for any v ∈ℬ_0^*,
ℙ(|⟨1/m∑_i ∈ B_1 Y_i-μ, v⟩| ≥ r_K,d) ≤𝔼(∑_i ∈ B_1⟨ Y_i-μ, v⟩^2)/m^2 r_K,d^2≤1/8d .
By Lemma <ref>, applied with ℱ and α=2d, we derive that, with probability ≥ 1-exp (-K/32d^2), there are at least ((2d-1)/(2d)) K blocks B_k on which the following event occurs:
sup _v ∈ℬ_0^*|⟨1/m∑_i ∈ B_k X_i-μ, v⟩| ≤ r_K,d.
We claim that, for any v ∈ℬ_0^*,
⟨μ-μ̂, v⟩≤ r_K,d.
Indeed, for any η∈ℝ^d that attains the deepest level, if there exists v^* ∈ℬ_0^* such that ⟨μ-η, v^*⟩> r_K,d, then, on the above event,
⟨X̅_k-η, v^*⟩ =⟨X̅_k-μ, v^*⟩+⟨μ-η, v^*⟩
>⟨X̅_k-μ, v^*⟩+r_K,d
≥ 0
holds for at least ((2d-1)/(2d)) K blocks B_k, which means that
sup_v ∈ℬ_0^*∑_k=1^K1_⟨X̅_k-η,v⟩ > 0≥∑_k=1^K1_⟨X̅_k-η,v^*⟩ > 0≥ d K / (d+1).
Therefore η≠μ̂. Hence, we have proved the claim, ⟨μ-μ̂, v⟩≤ r_K,d. Taking the supremum over v ∈ℬ_0^*, and since sup _v ∈ℬ_0^*⟨μ-μ̂, v⟩=‖μ-μ̂‖, we conclude the proof.
The results obtained from Theorem <ref> in inequality (<ref>) are not entirely satisfactory due to the influence of the dimension d. In comparison to the traditional empirical mean, it provides a better approximation rate only for the contamination model. However, this upper bound is not ideal for high-dimensional heavy-tailed data, especially when d≥ N.
As pointed out in <cit.>, the depth of Tukey's median lies between ⌈ N /(d+1)⌉ and ⌈ N / 2⌉, and if the dataset is nearly symmetric (about some point x_0), the maximum depth is much larger than N /(d+1), in fact approximately N / 2. According to the above proof, it can be inferred that, under the assumption of a sufficiently symmetric distribution and at confidence level 1-δ, the Tukey MOM estimator achieves the same error as our initially proposed estimator, namely ‖Σ‖^1 / 2(√(VC(ℬ_0^*)/N)+√(log (1 / δ)/N)+√(ε)) .
§.§ Discussion on regression
The problem of regression function estimation involves estimating conditional expectations, making it a natural extension of the mean estimation concepts discussed in this paper. This section explores recent advancements in regression problems driven by uniform median-of-means estimators.
The standard framework for regression function estimation is as follows. Consider a pair of random variables (Y, V), where Y takes values in the set 𝒳 and V is real-valued. In a class ℱ comprising real-valued functions defined on 𝒳, the goal is to identify f ∈ℱ such that f(Y) serves as a reliable prediction of V. The efficacy of a predictor f ∈ℱ is measured through the mean-squared error 𝔼(f(Y)-V)^2, known as the risk. The optimal performance within the class is achieved by the risk minimizer
f^*=argmin_f ∈ℱ𝔼(f(Y)-V)^2.
The joint distribution of (Y, V) is usually unknown. Instead, an i.i.d. sample 𝒟_N=(Y_i, V_i)_i=1^N is provided, distributed according to the joint distribution of Y and V. Given a sample size N, a learning procedure is a mapping assigning to each sample 𝒟_N a function in ℱ, denoted as f̂.
The effectiveness of f̂ is assessed based on the trade-off between accuracy ϵ and confidence δ in which f̂ achieves that accuracy. In other words, one seeks f̂ that satisfies the condition
ℙ(𝔼((f̂(Y)-V)^2 |𝒟_N) ≤inf _f ∈ℱ𝔼(f(Y)-V)^2+ϵ) ≥ 1-δ
for values of ϵ and δ as small as possible. The exploration of this accuracy/confidence trade-off has been the focus of extensive research(see <cit.>, <cit.>, and <cit.>).
Consider the standard linear regression setting, where, ℱ={⟨β, Y⟩: β∈ℝ^d}, and
β^*=argmin_β∈ℝ^d l(β)=argmin_β∈ℝ^d𝔼(V_1-⟨β, Y_1⟩)^2 .
Since for all β,
l(β)-l(β^*)=2 𝔼(ξ_1⟨β-β^*, Y_1⟩)+(β-β^*)^T Σ(β-β^*) ≤(β-β^*)^T Σ(β-β^*),
the key to controlling the excess risk is to bound ‖β-β^*‖_Σ= √((β-β^*)^T Σ(β-β^*)). The natural decomposition into a quadratic component and a multiplier component (see <cit.>) leads to applying the small-ball method to learning problems. This suggests that the Tukey MOM could be introduced together with the VC dimension.
The advantage of using halfspace depth is that it offers rich geometric properties related to depth, which are valuable for exploration. Additionally, there are existing algorithms that simplify the computation process. We expect that there will be favorable properties for depth functions related to linear regression problems. These properties are crucial for obtaining robust estimates of regression coefficients β, thereby constraining the excess risk in regression (learning) problems.
§ CONCLUDING REMARKS
As noted earlier, the method we present may be implemented in mean estimation, covariance estimation, and other learning problems. There has been a substantial amount of prior work on MOM estimation related to depth; in this context, we present an estimation method based on the Tukey MOM. Moreover, <cit.> offers a theoretical framework for random approximations of Tukey depth, establishing consistency and convergence rates with respect to the theoretical depths under general probability measures; the computation of Tukey depth is itself an active research topic, as discussed in <cit.>.
The introduction of the VC dimension has provided new ideas for estimating the upper bounds of empirical processes. It must be reiterated that we have circumvented Rademacher complexity and, correspondingly, replaced it with a VC dimension term. The critical issue in the proof of the error bounds is to find the proper Boolean function class and to use Lemma <ref>. Consequently, in the estimation of the error bound, the focus is more on the dimensional structure of the set/space, thereby mitigating the impact of heavy tails and contamination in the samples. At the same time, we have solely addressed the statistical convergence rates in the estimation problem, without considering efficiency concerns in computation.
<cit.> shows that, using the generic chaining method, Catoni estimator can also be linked to the upper bounds of random processes, which can be used to analyze and control the upper bounds of risks in the empirical risk minimization process. Moreover, in the multivariate setting, different definitions of the median lead to different MOM estimators. Apart from Tukey median,
many other types of median have been developed, such as the coordinate-wise median, the geometric (or spatial) median, the Oja median, and the Liu median, among others; see <cit.> for a survey. Therefore, one can also consider similar results of the Catoni estimator and other types of MOM estimators with VC dimension.
§ ACKNOWLEDGEMENT
Hanchao Wang was supported by the National Natural Science Foundation of China (No. 12071257 and No. 11971267); National Key R&D Program of China (No. 2018YFA0703900 and No. 2022YFA1006104); Shandong Provincial Natural Science Foundation (No. ZR2019ZD41).
Lixin Zhang was supported by grants from the NSF of China (Grant Nos. U23A2064 and 12031005).
|
http://arxiv.org/abs/2409.03416v1 | 20240905110602 | A thermo-flow-mechanics-fracture model coupling a phase-field interface approach and thermo-fluid-structure interaction | [ "Sanghyun Lee", "Henry von Wahl", "Thomas Wick" ] | math.NA | [ "math.NA", "cs.NA" ] |
Sanghyun Lee ([email protected]), Department of Mathematics, Florida State University, 1017 Academic Way, Tallahassee, FL 32306-4510, USA
Henry von Wahl ([email protected]), Friedrich-Schiller-Universität Jena, Fakultät für Mathematik und Informatik, Ernst-Abbe-Platz 2, 07743 Jena, Germany
Thomas Wick ([email protected]), Leibniz Universität Hannover, Institut für Angewandte Mathematik, Welfengarten 1, 30167 Hannover, Germany
§ ABSTRACT
Geothermal energy, a promising renewable source, relies on efficiently utilizing geothermal reservoirs, especially in Enhanced Geothermal Systems (EGS), where fractures in hot rock formations enhance permeability. Understanding fracture behavior, influenced by temperature changes, is crucial for optimizing energy extraction. To address this, we propose a novel high-accuracy phase-field interface model integrating temperature dynamics into a comprehensive hydraulic-mechanical approach, aiming for a thermo-fluid-structure interaction representation. Therein, the key technical development is a four-step algorithm. This consists of computing the fracture width, reconstructing the sharp interface geometry, solving the thermo-fluid-structure interaction (TFSI) problem, and employing a phase-field approach coupled to the temperature and pressure from the TFSI problem. By coupling temperature-hydraulic-mechanical processes with our newly proposed high-accuracy phase-field interface approach, we investigate how temperature impacts fracture width values, which are crucial for permeability in EGS reservoirs. Through this model and three different numerical simulations, we aim to provide an approach to deepen understanding of the complex interplay between temperature, mechanical deformation, and permeability evolution. Therein, we substantiate our formulations and algorithms through mesh convergence results of crack width and total crack volumes for static fractures, and crack lengths in the case of propagating fractures.
phase-field fracture, thermo-hydro-mechanics, thermo-fluid-structure interaction, smeared interface, sharp interface
§ INTRODUCTION
Geothermal energy is a promising renewable energy source, offering sustainable power generation with minimal environmental impact. Understanding the behavior of geothermal reservoirs is crucial for efficient and sustainable exploitation of this resource. In addition, Enhanced Geothermal Systems (EGS) represent a frontier in geothermal energy development, where the creation and stimulation of fractures within hot rock formations are essential for enhancing permeability and facilitating fluid circulation <cit.>. In such systems, understanding the behavior of fractures and their response to changes in temperature is essential to optimize reservoir performance and energy extraction efficiency <cit.>.
The crack opening displacement or fracture width values within these fractured reservoirs serve as critical permeability indicators, directly influencing fluid flow rates and heat transfer capabilities <cit.>. However, accurately predicting these fracture width values requires comprehensive models that account for the coupled effects of temperature, hydraulic processes, mechanical deformation, and fracture propagation.
In numerous works over the last two decades, the phase-field fracture method has been shown to be one of the most effective approaches due to the existence of the diffusive zone <cit.>. Furthermore, monographs and extended papers of phase-field methods for fracture propagation with multi-physics applications include <cit.>. These methods are in contrast to sharp interface approaches such as XFEM <cit.>. However, coupling different physical phenomena may be of interest near or across the interface. The challenge with the phase-field approach lies in the accurate modeling of fundamental physical principles, due to the inherently diffusive nature of the fracture zone, which complicates precise crack boundary localization for modeling interface-related physics. As a result, the phase-field method, while powerful for propagation and representing fracture patterns in two and three dimensions, requires careful consideration when applied to problems where complex physics at the interface between the fluid-filled fracture and the surrounding solid are critical to the fracture width <cit.>. This is particularly the case when interactions between the fluid and the surrounding solid and temperature effects need to be included. This results in a classical fluid-structure interaction problem with thermal effects in our case, where a sharp interface is required. As a result, the phase-field method, while powerful, requires careful consideration when applied to problems where the exact fracture width plays a critical role.
To address this issue, we build upon a split approach introduced in <cit.>. This approach combines a diffusive interface phase-field model for fracture dynamics and an interface resolving fluid-structure interaction problem. To combine these two somewhat opposing approaches, we consider a geometry reconstruction approach <cit.>. To reconstruct the geometry of the open fluid-filled fracture, we use the crack opening displacements or the fracture width to describe the interface between the fluid and the solid. This gives a flexible method for switching between the interface-capturing phase-field method and the interface-tracking fluid-structure interaction approach.
In this paper, we extend the algorithm to consider temperature dynamics into a comprehensive hydraulic-mechanical model using a phase-field approach. By coupling the temperature-hydraulic-mechanical (THM) processes with the phase-field approach, we aim to provide a more realistic representation of fluid-structure interaction processes within geothermal reservoirs, particularly in the context of EGS. Other works considering THM related to phase-field and porous media, include <cit.>. Specifically, we model the flow-temperature part through a Boussinesq approximation <cit.>. The temperature then enters into the solid equation via the stress tensor; see, e.g., <cit.>, <cit.> and <cit.>. The THM process is modeled and simulated in the reconstructed domain, which fully resolves the fluid reservoir. However, this differs from the domain in which classical phase-field fractures are considered. The latter is a problem usually posed using a smeared zone of a very thin fracture. This contrasts with the resolved fluid-filled reservoir considered for the THM problem. Therefore, to take quantities, such as the fluid pressure and the temperature from the THM problem, and couple these to a phase-field model, we must derive a novel phase-field fracture model. Consequently, this allows us to switch back to the phase-fracture model on the reconstructed geometry and directly use information from the THM model in the phase-field model.
With this new model at hand, we present a detailed investigation into the influence of temperature on fracture width values using our coupled THM-phase-field model. We hypothesize that temperature variations play a significant role in altering the mechanical properties of the reservoir rock, thereby influencing the opening and closure of fractures and, consequently, permeability enhancement. By incorporating temperature effects into our model, we anticipate gaining deeper insights into the thermal behavior of geothermal reservoirs and its implications for production optimization and reservoir management in EGS.
The remainder of this paper is structured as follows. First, we introduce the system of equations that model the coupled non-isothermal fluid-solid system in <Ref>. Then, we propose our novel algorithm based on a coupled iteration between the thermo-fluid-structure interaction problem and a phase-field fracture approach. In <Ref>, we then present a detailed derivation of our fluid-filled fracture phase-field model, which incorporates both pressure and temperature effects at the fracture interface. We then describe the details of how we recover the geometric information from the diffusive phase-field model to reconstruct the sharp interface geometry in <Ref>. Furthermore, we present the weak formulation of the thermo-fluid-structure-interaction problem in this section. In <Ref>, we then compute several numerical examples to validate and demonstrate the capabilities of the proposed algorithm. Finally, we give some concluding remarks in <Ref>.
§ GOVERNING EQUATIONS AND OVERALL CONCEPT
§.§ Modeling Overview
Consider the computational domain Ω⊂ℝ^d (d∈{2,3}).
We assume that this is subdivided into the fluid domain Ω_f and the solid domain Ω_s, respectively, as shown in <Ref>. In addition, the solid domain Ω_s is considered a porous medium, while the fluid domain Ω_f is considered a fracture. The concept of our model is to deal with a sharp interface between Ω_f and Ω_s, while that interface is moved with the help of a phase-field approach.
§.§.§ Thermo-Fluid-Structure-Interaction problem
In our domain, we start with a thermo-fluid-structure-interaction (TFSI) problem. For the fluid and its temperature within the fluid domain Ω_f, we employ the Boussinesq equation. This is a widely adopted approximation to address nonisothermal flow phenomena such as natural convection, circumventing the need to solve the complete compressible formulation of the Navier-Stokes equations. This approximation holds true when density fluctuations are minor, thereby diminishing the problem's nonlinearity. It assumes that density fluctuations minimally influence the flow field, except for their contribution to buoyancy forces.
The displacement is modeled by linear elasticity in the solid domain Ω_s while accounting for thermal effects. Consequently, the temperature is solved in the fluid and the solid domain Ω_s ∪Ω_f. In total, we search for the vector-valued velocity v: Ω_f→ℝ^d, the vector-valued displacement u : Ω_s →ℝ^d, the scalar-valued fluid pressure p : Ω_f →ℝ, and the scalar-valued temperature θ : Ω→ℝ. These are determined through the following set of equations: Find the velocity, pressure, temperature, and displacement (v,p,u,θ), such that
ρ (v·∇) v - ∇·σ_Bou - α_θ (θ-θ_0) g_f = 0 in Ω_f,
∇·v = 0 in Ω_f ,
-∇·σ_R(u,p,θ) = f_s in Ω_s,
-∇· (κ∇θ) + v·∇θ = f_θ in Ω_f∪Ω_s.
Equations (<ref>), (<ref>), and (<ref>) correspond to a Boussinesq approximation <cit.> with the stress tensor
σ_Bou := ρν (∇v + ∇v^T) - (p-p_0)I,
where the fluid density is given as ρ >0 and the fluid viscosity is ν>0. Here, α_θ is the thermal expansion coefficient, and the external force per unit of mass is given as g_f.
To consider non-isothermal effects in porous media in the solid domain Ω_s, we assume thermo-poroelasticity <cit.> with the effective solid stress tensor σ_R defined as
σ_R := 2 μ e(u) + λ tr(e(u)) I
- α_B (p-p_0)I
- 3α_θ K_dr (θ- θ_0)I,
with the Lamé parameters μ,λ>0, the identity matrix
I∈ℝ^d× d and the linearized strain tensor
e(u) := 1/2 (∇u + ∇u^T).
Moreover, α_B∈ [0,1] is Biot's coefficient, K_dr = 2/3μ + λ is the bulk modulus, and f_s is an external forcing source term. The values p_0 and θ_0 are reference values for the pressure and the temperature, for instance obtained at some initial time or a background state. We remark that specifically, θ_0 will play an important role: θ - θ_0 < 0 means that we inject colder fluid than the existing fluid in the porous media, and θ - θ_0 > 0 means that we inject warmer fluid. The reference value p_0 plays a relatively less important role in this work (since we only consider the injection) and is set to p_0 = 0.
Finally, for the temperature, we consider steady state convection-diffusion heat transfer in the entire domain Ω where κ is the heat conductivity coefficient, with κ_| Ω_f = κ_f and κ_| Ω_s =κ_s, and f_θ is an external forcing source term.
§.§.§ Interface and Boundary Conditions
The coupling of Ω_s and Ω_f involves integrating the solid domain equations and the fluid domain equations on the interface , resulting in a thermo-fluid-structure interaction (TFSI) problem. Here, the sharp interface between Ω_s and Ω_f is often referred as the fracture interface in our setup and defined as
Ω_f ∩Ω_s.
First, we define the following notations
p_s = p|_Ω_s, p_f = p|_Ω_f, and θ_s = θ|_Ω_s, θ_f = θ|_Ω_f,
to specify the pressure and the temperature values for each subdomains. Furthermore, we propose
continuity of the temperature and pressure, as well as continuity of the normal stresses interface conditions
p_s = p_f, θ_s = θ_f, κ_s ∇θ_s ·n = κ_f ∇θ_f ·n , σ_R n = σ_F n,
where the normal vector n points into the fluid domain Ω_f (fracture region).
We note that the stress tensor σ_F in Ω_f is defined as
σ_F := -(p-p_0)I - α_θ(θ-θ_0)I,
where we neglect the displacement in the fluid (fracture) domain. The phase-field formulations of the interface conditions are further discussed in <Ref>.
Finally, we assume the domain boundary ∂Ω∂Ω_s ∖ contained in the solid boundary, and the system is supplemented by the following (outer) boundary conditions
u = u_D, p = p_D, θ = θ_D on ∂Ω,
κ_s ∇θ·n = 0 on ∂Ω,
where u_D, p_D, θ_D are corresponding Dirichlet boundary conditions for the displacement, the pressure, and the temperature, respectively.
§.§.§ Phase-Field Fracture Problem
As we consider the domain Ω_f as a propagating fracture, we need a method
to compute its changing width and changing length.
In this work, we account for dynamic changes in the subdomains Ω_s and Ω_f by considering fracture propagation due to variations in pressure and temperature.
Here, we employ the phase-field fracture (PFF) approach to track the propagation of fractures. Consequently, the fracture (fluid) domain Ω_f evolves due to the propagation of fractures and the variation in their width and length; see <Ref> for an illustration.
The PFF problem not only help to propagate the fracture and tracks the change of Ω_f, but also provides the fracture width (or crack opening displacement) values to create a sharp geometry representation of the fracture. Fracture width, also known as fracture aperture or crack opening displacement (COD), refers to the perpendicular distance between the two opposing faces of a fracture. In the context of geological formations and fluid-structure interactions, it is a critical parameter that influences the flow of fluids through the fracture. The width of a fracture can change over time due to various factors such as stress, pressure/temperature changes, and the propagation of the fracture itself. In modeling and simulations, accurately determining the fracture width is essential for predicting the behavior of fluids within the fractured medium and for understanding the mechanical properties of the fractured solid.
Thus, one of the main goal of this work is to couple the phase-field approach with the thermo-fluid-structure interaction approach to accurately assess the fracture width, the deformation of the fracture, and related physics across the fracture interface.
We reconstruct the fractured domain Ω_f from the phase-field variable φ to fully resolve the fracture interface between the fluid and the intact solid.
Here, the scalar-valued phase-field function, φ: Ω→ [0,1], acts as an indicator function. For example, the fracture domain is defined where φ = 0, and the intact domain is defined where φ = 1. The sharp fracture interface becomes a diffusive area/volume domain because the phase-field has a diffusive zone where φ∈ (0,1) with a characteristic length scale ε; see <Ref> (a). The phase-field approach solves the fracture problem to propagate the fracture by tracking the phase field values to simulate the fracture propagation as illustrated in <Ref> (b).
§.§.§ Representation of Fractures
For the coupling of the TFSI and PFF problems, careful definitions of the fracture are required. The TFSI problems involve a sharp interface between Ω_f and Ω_s, whereas the PFF problems consider a diffusive interface.
First, the classical diffusive phase-field fracture (PFF) is illustrated in <Ref>. This is the diffusive fracture obtained by solving the classical PFF problem. We note the diffusive zone where φ∈ (0,1) around the thin fracture zone (where φ = 0).
Secondly, <Ref> presents the sharp interface, ellipse-shaped fracture. This fracture is obtained by computing the COD values from the PFF shown in <Ref>, and it has no diffusive zone. We utilize this fracture to solve the TFSI problem.
Finally, the sharp interface ellipse fracture is converted back to a diffusive interface ellipse fracture by employing the PFF problem.
§.§ Overall Coupled Algorithm
In this section, we discuss our proposed algorithm, which considers coupling the non-isothermal TFSI problem to the classical phase-field fracture (PFF) problem. The overall concept can be summarized as follows <cit.>:
* Initialization. This step is only done for the initial time, where we obtain the classical diffusive phase-field fracture φ^0 (initial phase field) by solving the PFF problem with given initial pressure p^0 and temperature θ^0.
* Step 1. Compute the fracture width (crack opening displacement (COD)) values to create a sharp geometry representation of the sharp interface ellipse fracture.
* Step 2. Reconstruct the geometry of the open fluid-filled fracture Ω_s and Ω_f, based on the previously computed COD values.
* Step 3. Solve coupled thermo-fluid-structure interaction (TFSI) problem to get (v, p, θ) in Ω_f and (u, θ) in Ω_s.
* Step 4. Given the pressure p and the temperature θ, solve the PFF problem to obtain the displacement and the phase field (u, φ). This step is considered to be the prediction of the phase-field fracture, and provides the new fracture domain Ω_f.
The difference between the initialization and Step 4 is that the phase-field fracture representation in Step 4 is a diffusive interface ellipse fracture, whereas the initialization uses the thin classical PFF.
A sketch of the algorithm can be seen in <Ref>.
The above algorithm considers multiple couplings between the different variables. First, the temperature and velocity are coupled through the convective and buoyancy terms. Secondly the temperature couples to the solid stress. Finally, the displacement couples to the fluid and temperature equations through the boundary between the fluid and the solid domains.
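The following Python sketch summarises the four-step coupling as a driver loop; all routine names (solve_phase_field, compute_cods, reconstruct_mesh, solve_tfsi) are placeholders standing for the solvers described in the following sections, not existing library calls.

```python
# A minimal driver sketch of the four-step coupling described above.
# All called routines are placeholders for the phase-field, COD, meshing and
# thermo-fluid-structure solvers discussed in Sections 3-5.

def coupled_pff_tfsi(mesh0, p0, theta0, n_steps):
    # Initialization: classical diffusive phase-field fracture for given p0, theta0
    u, phi = solve_phase_field(mesh0, pressure=p0, temperature=theta0)
    for n in range(n_steps):
        cod = compute_cods(u, phi)                        # Step 1: fracture widths
        fsi_mesh = reconstruct_mesh(cod)                  # Step 2: fitted fluid/solid mesh
        v, p, theta, u_s = solve_tfsi(fsi_mesh)           # Step 3: thermo-FSI problem
        u, phi = solve_phase_field(fsi_mesh,              # Step 4: predict new fracture
                                   pressure=p, temperature=theta,
                                   interface=True)        # interface-driven PFF model
    return u, phi, v, p, theta
```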
§ GOVERNING SYSTEM FOR A PHASE-FIELD FRACTURE MODEL
In the previous section, we introduced our overall solution concept. Here, we derive the phase-field fracture approach (PFF problem) with the governing equations used in
this work to model the fluid-filled non-isothermal fracture (Initialization and Step 4).
This PFF problem solves the system to obtain the unknown vector-valued displacement field u and the scalar-valued phase-field function φ.
In this section, we begin with a classical formulation and then present the linearized and regularized formulation used in our implementation.
Crucially, the latter takes into account that the fluid-pressure driving the fracture is only available inside the open crack from an FSI problem. In the following, starting from quasi-static phase-field fracture modeling based on the original work from <cit.>, we recapitulate how pressure interface conditions are included such
that a pressurized phase-field fracture model is obtained <cit.>.
Let us assume we have a fracture 𝒞⊂ℝ^d-1 in the domain Ω, and note that then Ω_s = Ω\𝒞. To get the correct contributions from traction boundary forces, it is convenient to start from the energy level. Here, traction forces can be described as
∫_∂Ω_sσ_R(u,p,θ) n · w ds.
These forces then form the starting point to derive the driving contributions to the phase-field model.
For the following, we note that 𝒞⊂Ω_f in our current setting. However, Ω_f ⊂ℝ^d will later approximate 𝒞 by utilizing the phase-field function. Thus, Γ = ∂Ω_f will also approximate 𝒞, and we obtain 𝒞≈Γ in the global formulation (<Ref>) with the phase-field function.
§.§ Interface Conditions
The boundary conditions on ∂Ω may be chosen appropriately for a given problem under consideration; for example, we assume homogeneous Dirichlet boundary conditions for u. However, care must be taken to obtain the correct interface conditions on Γ, since 𝒞≈Γ, and Γ = ∂Ω_s ∩∂Ω_f becomes the interface between the fluid (fracture) and the solid (intact) domain. We follow <cit.> to model the pressure interface conditions between the surrounding medium and the fracture, and we refer to <cit.> and <cit.> regarding the interface conditions for both the pressure and temperature.
conditions between the surrounding medium and the fracture, and we refer to <cit.> and <cit.> regarding the interface conditions for both the pressure and temperature. The resulting model is a non-isothermal, pressurized phase-field fracture approach. In the following, we provide the mathematical details from prescribing the integral interface conditions and their equivalent formulation as domain integrals.
To include the traction forces (<ref>) in the phase-field model, we assume continuity of the normal stresses on Γ. It then follows that <cit.>:
- ∫_𝒞σ_R n· w ds
= -∫_𝒞σ_F n· w ds
= ∫_𝒞 (p-p_0) n· w ds
+ ∫_𝒞α_θ (θ-θ_0) n· w ds
= ∫_Ω_s∇· ((p-p_0) w) dx
-∫_∂Ω (p-p_0) n· w ds
+∫_Ω_sα_θ∇· ((θ-θ_0) w) dx
-∫_∂Ωα_θ(θ-θ_0) n· w ds
= ∫_Ω_s (∇ (p-p_0)· w + (p-p_0)∇· w) dx
+ ∫_Ω_sα_θ (∇ (θ-θ_0)· w
+ (θ-θ_0)∇· w) dx,
where Gauss' divergence theorem is applied in (<ref>), and the
homogeneous Dirichlet conditions w = 0 on ∂Ω are employed
in (<ref>).
§.§ Phase-Field Formulation with Domain Integrals
To transform integrals from subdomains to the global domain Ω, we follow the standard technique in phase-field fracture, and introduces a degradation function, given by
g(φ) := (1-κ) φ^2 + κ,
with the (small) bulk regularization parameter κ>0. We note that g(φ)≈ 0 in the fracture domain (i.e., Ω_f) and g(φ)≈ 1 in the intact domain (i.e., Ω_s).
Next, phase-field models start from lower-dimensional fractures, described by
the Hausdorff measure of the fracture 𝒞. This is then approximated by an Ambrosio-Tortorelli type functional <cit.>:
G_c H^d-1(𝒞) ≈ G_c ∫_Ω1/2ε (1-φ)^2 + ε/2(∇φ)^2 dx,
with the critical energy release rate G_c> 0. Moreover, the classical phase-field fracture model does not allow an open crack to reseal. Thus, the phase field is subject to the crack irreversibility constraint ∂_t φ≤ 0. In our phase-field model, this continuous irreversibility constraint is approximated through a difference quotient by φ≤φ^old. Thus, the phase-field fracture problem is often referred to be in a quasi-static regime. Particularly, the formulation does not contain any time derivatives. Nevertheless, temporal dependence may enter the system through factors such as time-dependent pressure and temperature, and to satisfy the irreversibility constraint.
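For illustration, the following NumPy sketch evaluates the degradation function and the Ambrosio-Tortorelli crack surface energy for a phase field sampled on a uniform grid (finite differences). For the optimal 1D transition profile the energy is roughly G_c times the crack length, up to tip and discretisation effects; the manufactured profile below is only an assumption for the demonstration.

```python
import numpy as np

def degradation(phi, kappa=1e-10):
    """g(phi) = (1 - kappa) * phi^2 + kappa."""
    return (1.0 - kappa) * phi ** 2 + kappa

def at_crack_energy(phi, h, eps, Gc):
    """Ambrosio-Tortorelli approximation of Gc * H^{d-1}(C) for a phase field
    sampled on a uniform 2D grid with spacing h."""
    gx, gy = np.gradient(phi, h)
    density = (1.0 - phi) ** 2 / (2.0 * eps) + 0.5 * eps * (gx ** 2 + gy ** 2)
    return Gc * np.sum(density) * h ** 2

# usage: the 1D-like profile phi = 1 - exp(-|y|/eps) along a crack of length 0.5
# gives an energy close to Gc * 0.5, up to crack-tip and grid effects
h, eps, Gc = 0.01, 0.04, 1.0
x, y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h), indexing="ij")
phi = 1.0 - np.exp(-np.abs(y) / eps) * (np.abs(x) < 0.25)
print(at_crack_energy(phi, h, eps, Gc))
```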
Due to the quasi-static nature of this problem formulation, we apply some further approximations to arrive at the time-discretized problem.
Let the iteration steps be denoted by τ^m, with index m∈ℕ_0. Then, we have φ^m ≈φ(τ^m). In other works, this iteration is also known as incremental steps, pseudo-time steps, or time steps. We choose our nomenclature to distinguish this from (time) steps t^n with index n used to advance the overall coupled system of phase-field mesh reconstruction and fluid-structure interaction.
Finally, to formulate the weak form of the global phase-field formulation,
we consider the function spaces W := H^1(Ω),
V := [H^1_0(Ω)]^d and the convex set
K := {w∈ H^1(Ω) | w≤φ^old≤ 1 a.e. on Ω}∩ L^∞(Ω).
Then, our proposed non-isothermal, pressurized phase-field fracture problem is given by the following definition.
Let the pressure p∈ W^1,∞(Ω), the temperature θ∈ W^1,∞(Ω),
Dirichlet boundary data u_D on ∂Ω, and the initial condition
φ(0) := φ^0 be given. Furthermore, let the phase-field
regularization parameter ε>0 and the critical energy release rate G_c> 0
be given. We define the interface driven coupled thermo-phase-field fracture
problem as follows. For the iteration steps m=1,2,3,…,M, find
(u,φ) := (u^m,φ^m) ∈{u_D + V}× K,
such that
(g(φ) σ_R(u), e(w))_Ω
+ (g(φ) (p-p_0), ∇·w)_Ω
+ (g(φ) ∇ (p-p_0), w)_Ω
+ (α_θ g(φ) (θ-θ_0), ∇·w)_Ω
+ (α_θ g(φ) ∇(θ-θ_0), w)_Ω = 0 ∀ w∈ V,

(1-κ) (φ σ_R(u):e(u), ψ-φ)_Ω
+ 2(1-κ) (φ (p-p_0) ∇·u,ψ-φ)_Ω
+ 2(1-κ) (φ ∇ (p-p_0)·u,ψ-φ)_Ω
+ 2(1-κ)(α_θφ (θ-θ_0) ∇·u, ψ-φ)_Ω
+ 2(1-κ)(α_θφ ∇(θ-θ_0)·u, ψ-φ)_Ω
+ G_c ((∇φ, ∇ (ψ - φ))_Ω
-1/ε(1-φ,ψ-φ)_Ω)
≥ 0 ∀ψ∈ K.
This formulation uses the above interface law formulated as a domain integral
using the Gauss divergence theorem <cit.>, as derived in
<Ref>.
§.§ Phase-Field Formulation with Interface Integrals
Now, the above formulation assumes that the temperature and pressure are given in Ω∖𝒞 (= Ω_s). However, our aim is to couple the temperature and pressure from a thermo-fluid-structure interaction problem to this phase-field model. Consequently, the pressure will only be available in Ω_f and the temperature will be defined in Ω=Ω_f∪Ω_s. Therefore, we need to derive a formulation involving the fracture boundary Γ=∂Ω_f∩∂Ω_s.
To this end, we recall σ_R from (<ref>), and split it into
σ_R = σ_s - α_B (p-p_0)I - 3α_θ K_dr (θ - θ_0)I in Ω_s,
where σ_s = 2 μ e(u) + λ tr(e(u)) I is the linear elasticity part with the displacement u.
As in (<ref>), we do not consider the stress
contributions, such that only the pressure and temperature components interact
from Ω_s to Γ, i.e.,
- α_B (p-p_0)I - 3α_θ K_dr (θ - θ_0)I in Ω_s.
As shown from (<ref>) to (<ref>), which
transforms the interface integrals to the domain integrals, we perform the similar procedure.
Here, we transform (<ref>) into an interface integral.
To this end, we work again on the energy level; going backwards along the chain, we obtain
- α_B ∫_Γ (p-p_0) n·u ds - 3α_θ K_dr∫_Γ (θ- θ_0) n·u ds.
We note that we are now employing Γ instead of 𝒞 due to the
given phase-field fracture domain. To transition from the sharp
interface to the diffusive phase-field representation, we have to include the
phase-field variable in (<ref>). As for the case φ = 0 the entire integral would vanish,
we add the regularization κ >0 such that the discrete system matrices remain well-posed.
We then have
- α_B ∫_Γ g(φ) (p-p_0) n·u ds - 3α_θ K_dr∫_Γ g(φ)(θ- θ_0) n·u ds.
Differentiating in u in the direction w and in φ in the direction ψ yields
- α_B ∫_Γ g(φ) (p-p_0) n·w ds - 3α_θ K_dr∫_Γ g(φ)(θ- θ_0) n·w ds,
and
- 2(1-κ)α_B ∫_Γφ (p-p_0) n·u ψ ds
- 2(1-κ) 3α_θ K_dr∫_Γφ(θ- θ_0) n·u ψ ds,
respectively. We notice that σ_s remains as a domain integral. Thus,
the pressure and temperature contributions from σ_R enter the formulation
as interface integrals on Γ and the solid stress σ_s enters as a domain
integral contribution. With the above derivation, we finally have the following
non-isothermal, pressurized interface phase-field problem.
Let the data from <Ref> be given, and let n denote the unit normal
vector pointing into the crack. We define the semi-linearized
interface driven coupled thermo-phase-field problem as follows. For the
iteration steps m=1,2,3,…,M, find (u,φ) := (u^m,φ^m)
∈{u_D + V}× W, such that
(g(φ^m-1) σ_s(u), e(w))_Ω+∫_Γ(1 - α_B)(p - p_0) n·w ds+∫_Γ(α_θ - 3 α_θ K_dr) (θ - θ_0) n·w ds = 0
∀ w∈ V,

(1-κ) (φ σ_s(u):e(u), ψ)_Ω
+ G_c ((∇φ, ∇ψ)_Ω -1/ε(1-φ,ψ)_Ω)
+ (γ(φ - φ^m-1)^+,ψ)_Ω
+ 2 (1-κ)∫_Γ(1 - α_B)(p - p_0) n·u ψ ds
+ 2 (1-κ) ∫_Γ(α_θ - 3 α_θ K_dr) (θ - θ_0) n·u ψ ds =0 ∀ψ∈ W.
To the best of our knowledge, a phase-field formulation based on
interface couplings, such as <Ref> has only been used in
<cit.>. This is because the interface formulation seems to contradict
the phase-field concept, where the interface is not known exactly.
In this paper, we extend the idea in <cit.> to consider thermal effects.
In <Ref> we have relaxed the non-linear behavior in the
first term in (<ref>) by using
the approximation φ≈φ^m-1, i.e.,
g(φ) σ_R(u) ↦ g(φ^m-1) σ_R(u).
This follows the extrapolation introduced in <cit.> and is numerically
justified for slowly growing fractures <cit.>.
In the case of fast-growing fractures, this is known to fail <cit.> due
to the time lagging errors. In addition, this approximation could introduce the
fix point iteration error which vanishes with the number of iterations.
Alternatively, fully monolithic schemes <cit.> or an additional
iteration <cit.> must be introduced, to avoid this error.
The second approximation in <Ref> addresses
irreversibility constraint. We relax this inequality
constraint by considering a simple penalization, see <cit.>
or <cit.>), i.e.,
φ≤φ^m-1 ↦ γ(φ - φ^m-1)^+.
Here, (x)^+ = x for x>0 and (x)^+ = 0 for x≤ 0,
and where γ>0 is a penalty parameter.
Moreover, we have the following relation for the temperature
interface terms. We have K_dr>1/3, and so it holds
(α_θ - 3 α_θ K_dr) < 0.
This means that the injection of colder fluid than the existing fluid in the
porous media, i.e., (θ - θ_0)<0, will cause the fracture to
increase in width and length. On the other hand, for warmer water injection,
i.e., (θ - θ_0)>0, the fracture width will decrease <cit.>.
In this work, we assume α_B = 0. If the full coupling in poroelasticity
with α_B =1 is assumed, one needs to solve for the pressure by utilizing
the poroelastic coupling, then the interface integrals
∫_Γ(1 - α_B)(p - p_0) n·w ds
and 2(1-κ) ∫_Γ (1 - α_B)(p - p_0) n·u ψ ds
vanish. However, in our derivation, we still have the pressure contributions
(from the TFSI problem) at the interface as p still contributes to _R.
§ INTERFACE RECONSTRUCTION, REMESHING, AND COUPLED PFF-FSI FRAMEWORK
With the phase-field fracture model presented above, we may compute a fracture
if the pressure and temperature data are given. However, to obtain these
quantities from the considered TFSI problem (<ref>), we must first
obtain the geometry in which the problem is posed.
Following our previous work in <cit.>, we use the geometry
reconstruction approach of the fluid-solid interface presented in <cit.>.
In this approach, we construct a fitted mesh of the open crack and the
surrounding solid are then able to pose our TFSI
problem on this geometry using an interface tracking approach discussed next.
§.§ Step 1 and Step 2: Computing COD and Remeshing
The geometry reconstruction is based on the crack opening displacement (COD), or aperture
of the crack <cit.>. This can be computed by
COD(x) = [u·n](x) ≃∫_ℓ^x,v u(s) ·∇φ(s) ds,
where ℓ^x,v is a line through x along the vector v
<cit.>. See also <cit.> for a
simplification of the formula when the crack is aligned with a Cartesian axis.
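A minimal sketch of this line integral with the composite trapezoidal rule is given below; `u` and `grad_phi` stand for callables evaluating the displacement and phase-field gradient (in the actual implementation these are finite element functions evaluated along a level-set line with ngsxfem), and the manufactured fields are an assumption chosen so that the integral reproduces a prescribed width.

```python
import numpy as np

def cod_line_integral(x0, v, u, grad_phi, half_length, n_quad=400):
    """Approximate COD(x0) = int over the line through x0 along v of
    u(s) . grad(phi)(s) ds with the composite trapezoidal rule."""
    t = np.linspace(-half_length, half_length, n_quad)
    pts = x0[None, :] + t[:, None] * v[None, :]
    f = np.einsum("ij,ij->i", u(pts), grad_phi(pts))      # pointwise u . grad(phi)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))    # trapezoidal rule

# usage with manufactured fields: phi has a linear transition of width eps across
# y = 0, and u opens the crack by w/2 on each side, so the integral returns w
eps, w = 0.05, 1e-3
u = lambda p: np.stack([np.zeros(len(p)), 0.5 * w * np.sign(p[:, 1])], axis=1)
grad_phi = lambda p: np.stack(
    [np.zeros(len(p)), np.sign(p[:, 1]) / eps * (np.abs(p[:, 1]) < eps)], axis=1)
print(cod_line_integral(np.array([0.0, 0.0]), np.array([0.0, 1.0]), u, grad_phi, 0.2))
```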
We assume that the centerline of the crack is known, i.e., the line such that
half the fracture width lies on either side of this line, c.f. the dashed line
on the left of <Ref>. With the knowledge of
this line, the crack opening displacements give a set of points on the boundary
of the open crack domain, c.f the blue points on the left of <Ref>.
These can be connected by line segments or higher-order splines, c.f.
the green line segments on the left of <Ref>. This then forms an
approximation of the crack interface. This geometry can then be remeshed
using an automated meshing tool, resulting in an appropriate mesh for the
a finite element based fluid-structure interaction solver, c.f. the right of
<Ref>.
We further assume that the number of CODs computed is sufficiently large, such
that the geometry of the open crack is sufficiently well resolved. This can be
achieved by computing O(h^-1) crack opening displacements
along the centerline of the crack.
This re-meshing approach has the advantage
that we can consider propagating and merging cracks, while driving the
crack using accurate quantities from a fluid-structure interaction model
with a resolved interface between the solid and the fluid-filled crack.
Computing the COD can become numerically unstable near the tips of the crack, c.f. <Ref> below. This is especially the case when iterating between the phase-field fracture and thermo-fluid-structure interaction problems. To avoid this issue, it can become necessary to preprocess the COD values to smooth out oscillations in the COD. Details of this are given below in <Ref>.
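A generic preprocessing sketch along these lines is shown below (clipping negative openings and applying a centred moving average, with the tip values pinned to zero); the concrete smoothing used in practice may differ.

```python
import numpy as np

def smooth_cods(cod, window=5, clip_negative=True):
    """Pre-process COD values sampled along the crack centerline: clip spurious
    negative openings and damp near-tip oscillations with a moving average."""
    w = np.asarray(cod, dtype=float)
    if clip_negative:
        w = np.maximum(w, 0.0)
    kernel = np.ones(window) / window
    w_sm = np.convolve(w, kernel, mode="same")
    w_sm[[0, -1]] = 0.0            # enforce closed crack tips
    return w_sm

# usage: noisy elliptical opening profile with oscillations near the tips
s = np.linspace(0.0, 1.0, 101)
true = 1e-3 * np.sqrt(np.maximum(0.0, 1.0 - (2 * s - 1) ** 2))
noise = 5e-5 * np.random.default_rng(0).normal(size=s.size) * (np.abs(2 * s - 1) > 0.8)
print(np.max(np.abs(smooth_cods(true + noise) - true)))
```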
§.§ Step 3: Stationary Thermo-Fluid-Structure Interaction in the Reference Configuration
To obtain a weak formulation suitable for a finite element simulation,
we model the thermo-fluid-structure interaction (TFSI)
problem from <Ref>
in arbitrary Lagrangian-Eulerian (ALE) coordinates. This uses
variational monolithic coupling in a reference configuration
<cit.>.
For our model, we assume a stationary flow inside the fluid-filled crack and,
therefore, consider the stationary Navier-Stokes equations for the fluid.
This goes beyond the previously considered Stokes flow in
<cit.>.
For the TFSI problem, let the domain Ω⊂ℝ^d be divided into
a d-dimensional fluid domain Ω_f, a d-dimensional solid domain Ω_s
and a (d-1)-dimensional interface Γ between the two, such that
Ω=Ω_f∪̇Γ∪̇Ω_s. Furthermore, we also require these domains
in a reference configuration, which we denote by Ω̂, Ω̂_f, Ω̂_s, and Γ̂. Similarly, we denote by v̂, p̂, û and x̂
the velocity, pressure, deformation and coordinates in the reference
configuration.
In the present setting of a fluid-filled crack, the fluid domain
Ω_f is the interior of the crack, the solid Ω_s is the intact medium
and the interface Γ is the crack boundary.
Formulating the problem in the domains Ω̂_f and Ω̂_s leads to the
well-established formulation in ALE coordinates <cit.>. To
obtain a monolithic formulation, we need a transformation T̂_f from the
reference configuration to the physical domain in the fluid domain. This
transformation is given on the interface by the structure displacement:
T̂_f(x̂,t)|_Γ̂ = x̂ + û_s(x̂,t)|_Γ̂.
On the outer boundary of the fluid domain
∂Ω̂_f∖Γ̂, it holds T̂_f=id.
Inside Ω̂_f, the only requirement on the transformation is that it should
be as smooth and regular as possible. To this end, we use a harmonic
extension of û_s|_Γ̂ to the fluid domain and
define T̂_f := id+û_f on Ω̂_f. That is, with
id(x̂) = x̂, we have
T̂_f(x̂,t) = x̂+û_f(x̂,t),
such that
(∇̂û_f,∇̂ψ̂)_Ω̂_f = 0, û_f=û_s on Γ̂, û_f=0 on ∂Ω̂_f∖Γ̂.
Consequently, we define a continuous deformation û on all of Ω̂,
which coincides with the solid deformation in Ω̂_s and gives the
appropriate transformation in Ω̂_f. Skipping the subscripts f and s
because T̂_f coincides with the solid transformation T̂_s on the interface,
we define on the entire domain Ω̂:
T̂(x̂, t) := x̂ + û(x̂, t), F̂(x̂, t) := ∇̂T̂= I+∇̂û(x̂, t), Ĵ := det(F̂).
With this transformation into the reference configuration at hand, we
present weak formulation of the stationary thermo-fluid-structure
interaction problem, see also <cit.> for the formal derivation.
Let V̂ be a subspace of H^1(Ω̂) with trace zero on
Γ̂^D := Γ̂_f^D∪Γ̂_s^D and
L̂ := L^2(Ω̂_f)/ℝ. Furthermore, let
v̂_D, û_D∈ H^1(Ω̂) be prolongations of the Dirichlet data
for the velocity and deformation, and let a right-hand side fluid force
f̂_f ∈ L^2(Ω̂_f) be given. We define the stationary
thermo-fluid-structure interaction problem as follows.
Find v̂∈{v̂_D + V̂}, û∈{û_D+V̂}, p̂∈L̂, and θ̂∈V̂, such that
(Ĵσ̂_BouF̂^-T,∇̂ψ̂^v)_Ω̂_f
+ ρ((ĴF̂^-1v̂·∇̂) v̂,ψ̂^v)_Ω̂_f
- (Ĵα_θ (θ̂- θ̂_0) ĝ_f,ψ̂^v)_Ω̂_f
+ (Ĵσ̂_RF̂^-T,∇̂ψ̂^v)_Ω̂_s = (ρ_f Ĵf̂_f,ψ̂^v)_Ω̂_f ∀ψ̂^v ∈V̂,
- (v̂,ψ̂^u)_Ω̂_s +
(α_u ∇̂û,∇̂ψ̂^u)_Ω̂_f =0 ∀ψ̂^u ∈V̂,
( div(ĴF̂^-1v̂_f),ψ̂^p)_Ω̂_f =0 ∀ψ̂^p ∈L̂,
(Ĵσ̂_θF̂^-T,∇̂ψ̂^θ)_Ω̂
+ (ĴF̂^-1v̂·∇̂θ̂,ψ̂^θ)_Ω̂_f = (Ĵf̂_θ,ψ̂^θ)_Ω̂ ∀ψ̂^θ ∈V̂,
with the harmonic mesh extension parameter α_u>0, the stress
tensor in the solid as defined in (<ref>) and
Ĵσ̂_RF̂^-T := σ_R.
The thermal stress is σ_θ = k(θ)∇θ with
Ĵσ̂_θF̂^-T := σ_θ.
The temperature dependent ALE fluid stress tensor σ̂_Bou is given by
σ̂_Bou := -p̂_fI +ρ_f ν(∇̂v̂_f F̂^-1
+ F̂^-T∇̂v̂_f^T),
with the kinematic viscosity ν>0 and the fluid's density ρ_f>0.
Let us comment on the above system in more detail. In (<ref>),
we combined the momentum equations of the fluid and the solid into one single equation.
This is possible with variational-monolithic coupling,
in which the interface conditions v̂_f = v̂_s (Dirichlet) and
σ̂_Boun̂ = σ̂_Rn̂ (Neumann) are fulfilled in an exact fashion
on the variational level. Moreover, the geometric condition
û_f = û_s (Dirichlet) is fulfilled as well. The Dirichlet type conditions
are built into the function spaces as usual. The Neumann type condition
cancels out on the interface; see e.g., <cit.>.
In the second equation in (<ref>), the ALE mapping is realized
and, for implementational reasons by using globally defined functions, we also work
with v̂ = 0 in Ω̂_s. The third equation (<ref>)
is the mass conservation of the fluid. The last equation (<ref>)
is the weak form of the temperature equation.
§ NUMERICAL TESTS
In this section, we provide several numerical examples to validate and demonstrate the capabilities of the proposed algorithm. The numerical realisation is performed using Netgen/NGSolve[See also <https://ngsolve.org>] <cit.> and the add-on package ngsxfem <cit.>.
§.§ Numerical Approximation
In total, our algorithm requires four different steps.
Phase-field Fracture A PFF problem must be computed both during initialization and in Step 4 of our algorithm. Specifically, we need to numerically solve the weak formulation of the phase-field fracture problem as described in <Ref>. For this purpose, we use a finite element discretization, where both the phase-field and displacement field spaces are discretized using continuous piecewise linear finite elements. The pressure and temperature are provided as external parameters for this problem.
To initialize each phase-field computation, we set φ=0 in Ω_f and φ = 1 else, and then solve (<ref>) without the coupling terms, applying a homogeneous Neumann boundary condition. This ensures that the initial condition satisfies the phase-field equation, preventing artificial strength at the crack tips.
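A minimal sketch of the finite element spaces just described (P1 for both fields) could look as follows in NGSolve; the mesh, boundary flags, and initial values are placeholders and not the actual problem set-up:

from netgen.geom2d import unit_square
from ngsolve import Mesh, H1, VectorH1, GridFunction

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
Vu = VectorH1(mesh, order=1, dirichlet=".*")   # displacement: continuous piecewise linear
Vphi = H1(mesh, order=1)                       # phase field: continuous piecewise linear

u = GridFunction(Vu)       # displacement field
phi = GridFunction(Vphi)   # phase field
phi.Set(1.0)               # phi = 1 (intact material) away from the prescribed crack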
COD computation
In Step 1 of our algorithm, to compute the crack opening displacements (CODs) from the approximated phase-field function and displacement field, we use the unfitted finite element technology provided by ngsxfem to evaluate (<ref>) over an arbitrary line defined by a level set function. Notably, this level set does not need to be aligned with the mesh.
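For orientation, the following self-contained numpy sketch evaluates the usual phase-field width formula COD(x0) = ∫ u(x0,y)·∇φ(x0,y) dy along a vertical line by simple quadrature; this is only a schematic stand-in for the unfitted ngsxfem evaluation used in practice, and the toy functions u_toy and gphi_toy are invented for illustration:

import numpy as np

def cod_at(x0, u, grad_phi, y_lo=-0.5, y_hi=0.5, n=401):
    # trapezoidal rule for COD(x0) = int_y u(x0,y) . grad(phi)(x0,y) dy
    ys = np.linspace(y_lo, y_hi, n)
    vals = [np.dot(u(x0, y), grad_phi(x0, y)) for y in ys]
    return np.trapz(vals, ys)

# toy fields mimicking a crack of half-width 0.05 regularised over eps = 0.02
u_toy = lambda x, y: np.array([0.0, 0.05 * np.sign(y) * np.exp(-x ** 2)])
gphi_toy = lambda x, y: np.array([0.0, np.sign(y) * np.exp(-abs(y) / 0.02) / 0.02])

print(cod_at(0.0, u_toy, gphi_toy))   # roughly 0.1, i.e. the full opening of the toy crack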
Geometry reconstruction To construct the geometry from the computed CODs, we use Netgen's OCC (OpenCascade) interface to create a piecewise linear approximation of the interface, which is then meshed. Consequently, this step can also be viewed as an automated CAD model generation process. This is Step 2 of our algorithm.
Thermo-fluid-structure-interaction
In Step 3 of our algorithm, we numerically solve the weak formulation of the thermo-fluid-structure interaction problem in ALE coordinates as defined in <Ref>. We use the given mesh and discretize the spaces with inf-sup stable elements. Specifically, the velocity space is discretized using continuous piecewise quadratic elements, the pressure with continuous piecewise linear elements, and both the displacement and temperature with continuous piecewise quadratic elements.
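Putting the four steps together, the overall algorithm has the following rough shape; all function names below are hypothetical stand-ins (stubbed out so the sketch executes), not the actual solver interfaces:

def compute_cods(phi, u):          return [0.0]        # Step 1 placeholder
def reconstruct_and_remesh(cods):  return "mesh"       # Step 2 placeholder
def solve_tfsi(mesh):              return 0.0, 0.0     # Step 3 placeholder: (p_tfsi, theta_tfsi)
def solve_pff(p, theta):           return 1.0, 0.0     # Step 4 placeholder: (phi, u)

def coupling_loop(phi, u, p0, n_steps=8):
    p, theta = p0, 0.0
    for n in range(n_steps):
        cods = compute_cods(phi, u)              # Step 1: crack opening displacements
        mesh = reconstruct_and_remesh(cods)      # Step 2: geometry reconstruction and meshing
        p_tfsi, theta_tfsi = solve_tfsi(mesh)    # Step 3: TFSI on the reconstructed domain
        p, theta = p0 + p_tfsi, theta_tfsi       # feedback used in the next PFF solve
        phi, u = solve_pff(p, theta)             # Step 4: phase-field fracture problem
    return phi, u, p, theta

print(coupling_loop(phi=1.0, u=0.0, p0=4e-2))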
§.§ Example 1: Temperature Sensitivity Study
As a first example, we consider a basic test inspired by Sneddon's test <cit.>. Here, we do not consider the full model, but only the phase-field problem given in <Ref>, to validate our model as used in Step 0 of our algorithm.
Set-up
The set-up for this is as follows. We consider Ω=(-2,2)^2, and the initial crack (where φ^0=0) is given by (-0.2, 0.2)×(-h, h). The material parameters are E=1, ν_s=0.3, G_c=1. The Lamé parameters are then obtained by μ = E/2(1+ν_s) and λ = ν_s E/(1+ν_s)(1-2ν_s).
The pressure is p=4×10^-2 and the reference pressure is p_0=0. Both the temperature and reference temperature are θ=θ_0=0. The discretization parameters are κ=10^-10, ε= 2 h, where h is the local mesh size, and γ = 100 h^-2.
Convergence Results
We consider this set-up over a series of eight meshes, constructed such that the local mesh size in the crack region is one hundredth of the global mesh size. On the coarsest mesh level, the global mesh size is 1.28. Each subsequent mesh is constructed by halving both mesh parameters. We compute the crack opening displacement (COD) in the center of the crack and the total crack volume (TCV), and compare the results with those obtained on the finest mesh, which serves as the reference solution. This reference solution was computed with a total of 2.3× 10^6 degrees of freedom, and is therefore close to the limit of the direct solver used to solve the resulting linear systems within each Newton step. The results can be seen in <Ref>. We see that both the COD and TCV converge with a rate between 1 and 1.5.
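The reported rates can be obtained from the errors on consecutive refinement levels in the usual way; the following tiny helper (with invented error values) shows the computation:

import numpy as np

def observed_rates(errors):
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])      # rate_k = log2(e_k / e_{k+1}) for halved mesh sizes

fake_cod_errors = [4.1e-2, 1.6e-2, 6.2e-3, 2.3e-3]   # illustrative numbers only
print(observed_rates(fake_cod_errors))                # values between 1 and 1.5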
Temperature effects
To study the effect of the temperature difference on the crack opening displacement, we consider the previous set up on mesh level six with temperatures θ∈{240, 160, 80, 0, -80, -160, -240}. Note that with our chosen material parameters we obtain K_dr = 5/6 and α_θ - 3α_θ K_dr = - 1.5× 10^-5. Therefore, as we consider both constant pressure and temperature in this example, an increase in temperature of 2/3× 10^5 corresponds to a pressure decrease of 1.
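For the reader's convenience, the equivalence can be spelled out explicitly (a worked restatement of the preceding sentence; Δp_eff is our shorthand, not the paper's notation, for the induced change of the effective driving pressure):
Δp_eff = (α_θ - 3α_θ K_dr) Δθ = -1.5×10^-5 Δθ, so Δθ = 2/3×10^5 gives Δp_eff = -1.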
We present the results for the change of temperature in <Ref>. As expected, an increase in temperature relative to the reference temperature causes the crack width to shrink, while a decrease in temperature causes the crack to open further <cit.>.
§.§ Example 2: Fully Coupled Stationary Test Case
As our second example, we consider the fully coupled algorithm in a stationary setting.
Set-up
The basic set-up is the same as in Example 1.
That is, we consider Ω=(-2,2)^2, the initial crack is (-0.2, 0.2)×(-h, h), the material parameters are E=1, ν_s=0.3, G_c=1, the pressure is p=p^0=4×10^-2, the reference pressure is p_0=0, and both the initial temperature and the reference temperature are θ=θ_0=0. For the thermo-fluid-structure interaction problem we have κ_f = 0.01, κ_s=1.0, and the force per unit mass is 1 ê_z. The fluid is driven by the body force f̂ = 0.2 exp(-1000(x^2+y^2)) ê_x and the temperature is driven by the external forcing term f̂_θ = 100 exp(-10(x^2+y^2)). The maximal mesh size chosen is h=0.16 and the crack has a local mesh size of 0.0016. This corresponds to mesh level 3 in the previous example. The remaining discretisation parameters are as in Example 1.
In Step 4, the temperature from the TFSI problem is then used as the temperature in the PFF problem, i.e., θ^n = θ^n, TFSI. As the pressure in the TFSI problem is normalized to be mean zero, we add this to the initial pressure as the driving pressure in the PFF problem, i.e., p^n= p^0 + p^n, TFSI.
Results
The resulting crack opening displacements for eight iterations between the phase-field-fracture and thermo-fluid-structure-interaction problem can be seen in <Ref>, and the resulting total crack volume for every iteration in <Ref>. A visualization of the phase-field, temperature and pressure after the final iteration can be seen in <Ref>. As the FSI temperature is positive (relative to the reference temperature), we have the expected reduction in the fluid/crack volume as seen through the COD plot and TCV values. Furthermore, the FSI pressure is negative in the left half of the domain and positive in the right half of the crack, resulting in the observed skewness of the crack. Furthermore, we see that after eight iterations, there is very little change in the COD.
Domain reconstruction
For fine meshes, and near the singularity at the crack tip, the computation of the COD becomes numerically unstable. Since the number of CODs to be computed grows as the mesh size decreases, this causes a rough boundary of the approximated crack, and the effect is amplified with each iteration of the coupled loop. To smooth out the crack boundary for the FSI (and subsequent PFF) computations, we process the COD data before the domain is reconstructed. To this end, we compute a least-squares polynomial approximation of the crack boundary. The resulting polynomial values at the O(h^-1) points where the COD was originally computed, plus the roots of this polynomial, are then used to define the crack boundary.
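A minimal numpy version of this smoothing step (our own sketch; the polynomial degree and the toy data are arbitrary) reads:

import numpy as np

def smooth_cod(xs, cods, degree=6):
    poly = np.polynomial.Polynomial.fit(xs, cods, deg=degree)   # least-squares fit
    smoothed = poly(np.asarray(xs))                              # re-evaluate at the COD sample points
    real_roots = sorted(r.real for r in poly.roots() if abs(r.imag) < 1e-10)
    return smoothed, real_roots                                  # the outermost roots approximate the tips

xs = np.linspace(-0.2, 0.2, 41)
noisy = 0.05 * np.sqrt(np.maximum(0.04 - xs ** 2, 0.0)) + 1e-4 * np.random.randn(xs.size)
smoothed, roots = smooth_cod(xs, noisy)
print(roots)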
In <Ref>, we illustrate the process on the COD data resulting after four iterations of our coupling loop without smoothing, using a mesh with a global mesh size of 0.04 and a local mesh size of 0.0004 in the crack region. Here, we see that the visible oscillations towards the crack tip are smoothed out effectively while the shape of the crack is maintained.
§.§ Example 3: Propagating Crack
We test our novel phase-field model in the case of a propagating crack in two situations: first with a spatially constant, increasing pressure, and then with a time-dependent temperature and pressure resulting from a TFSI problem. The first example aims to study pressure-driven crack propagation for our novel interface phase-field approach, while the second studies the effect of the temperature coupling on the crack's propagation.
§.§.§ Example 3a: Pressure driven propagation without TFSI coupling
To test the present interface phase-field model regarding crack propagation, we consider a series of loading steps (pseudo time-steps) and apply an increasing pressure in each iteration, rather than taking the pressure and temperature from a TFSI problem. As the temperature and pressure enter the phase-field model through the same interface integrals, it is sufficient to just consider the pressure in this example.
Set-up
The basic set-up is as before. We have Ω=(-2,2)^2, the initial crack is (-0.2, 0.2)×(-h, h), the material parameters are E=1, ν_s=0.3, G_c=1, the initial pressure is p=4×10^-2, and the temperature is θ=0. The pressure in each iteration of our loop is chosen as p^n = 4× 10^-2 + n × 10^-4, and we consider a total of 100 loading steps.
We consider a series of three meshes. The coarsest mesh is constructed with a global mesh size of 0.5 and a local mesh size of 0.005 in the crack region. Each subsequent mesh is constructed by halving both mesh parameters. The penalization parameter is again γ = 100 h^-2, but the phase-field regularization parameter is fixed to ε=0.01 for all meshes. The latter corresponds to the previous choice on the coarsest mesh in this example.
Results
The resulting horizontal position of the crack tips and the total crack volume from each iteration are shown in <Ref>. We can see the consistent propagation of the crack over each of the four meshes, thereby showing that crack propagation is also feasible for our phase-field model. In <Ref>, we show the phase-field at initialization and for four iterations. Here, we see that the phase-field is well behaved and that the crack increases both in the horizontal and vertical directions.
§.§.§ Example 3b: Propagation with TFSI coupling
In this example, we consider the fully coupled scheme.
Set-up
The initialization step is as in <Ref>. In the reconstructed domain, the TFSI problem is driven by the fluid and temperature forcing terms
f̂ = 0.02 exp( -400(x-0.1)^2 ) ê_x and f̂_θ= -800 exp(-(x^2+y^2)).
The heat conductivity parameters are κ_f = 0.005, κ_s=1.0, and the force per unit mass is 0.5 ê_z. The remaining material parameters are as in Example 3a.
In each iteration of our coupling loop, the phase-field is then driven by the pressure p^n = 4× 10^-2 + 5n×10^-5 + p^n,TFSI and temperature θ^n=θ^n, TFSI. We again consider 100 loading steps. The remaining phase-field parameters are again chosen as in Example 3a.
Results
We consider a series of three meshes. First, we see in <Ref> the resulting temperature and pressure from the TFSI computation in the first iteration. Here we see that the temperature is negative and that the TFSI pressure is positive at the right tip of the crack and negative at the left tip of the crack. Due to the cooling of the medium and the higher pressure at the right tip of the crack, we expect the fracture to grow faster than in Example 3a. Furthermore, we expect it to grow faster towards the right than towards the left.
In <Ref>, we see the position of the left and right tips of the crack for each considered mesh. First, we note that the crack does indeed grow faster than in Example 3a and that the results are consistent over the series of meshes. Furthermore, we see that on finer meshes, the trajectory becomes smoother. After twenty iterations, the left and right positions of the crack tips are -0.2591, 0.2656, -0.2609, 0.2651, and -0.2571, 0.2598 for the three meshes, respectively. We therefore initially observe the expected faster growth toward the right than toward the left. However, this difference is not very large, and after 100 iterations the left tip has moved further than the right in some cases. We attribute this to numerical error, since this inconsistency occurs earlier on coarser meshes, which we observe to be less stable in <Ref>.
§.§ Example 4: Two orthogonal cracks with TFSI coupling
In this final example, we consider two orthogonal, connected cracks, illustrating that this approach is applicable to multiple cracks. While this situation is not challenging for phase-field computations, it is more involved with respect to the geometry reconstruction.
To study the effects of including the temperature in our model, we compare two cases here. First, we only couple the TFSI pressure back to the phase-field computation, and denote the resulting phase-field fracture deformation by u_h^p. Secondly, we couple both the temperature and pressure back to the phase-field computation, and denote the resulting phase-field fracture deformation by u_h^{p,θ}.
The following example is an extension of the example presented in <cit.>.
Set-up
The background domain is Ω=(-2,2)^2, and as before, we consider homogeneous Dirichlet boundary conditions for the displacement and homogeneous Neumann conditions for the phase-field. The initial phase field is given by a flipped "T", i.e.,
(-0.2, 0.2)×(-h, h)∪(0.2-h, 0.2+h)×(-0.2, 0.2).
As before, the material parameters are as above E=1, ν_s=0.3, and G_c=1, while the initialization pressure and temperature are p=4× 10^-2 and θ=0, respectively.
The thermo-fluid-structure interaction problem in the reconstructed domain is then driven by the forcing terms
= 0.2exp(-1000( (x - 0.2)^2 + y^2))_x and f̂_θ= 100 exp(-10 (x-0.2)^2 -5 y^2).
The reference temperature is set as θ_0 = 0, the heat conductivity parameters are κ_f = 0.01, κ_s=1.0, and the force per unit mass is 1 ê_y.
Results
The resulting TFSI temperature in the reconstructed domain can be seen in <Ref>. This temperature is positive and larger inside the crack than in the surrounding material. We therefore expect the crack to open less when the temperature is coupled to the phase-field fracture model
in addition to the pressure. Furthermore, we note that while the maximum temperature gets smaller with each smaller mesh size, the results are overall comparable and consistent. This difference appears to be driven by the slightly different domains produced by the geometry reconstruction, as the computed CODs change on each mesh.
The difference between the resulting deformations u_h^p - u_h^{p,θ} is shown in <Ref>. We see that this difference points away from the crack, indicating that the deformation u_h^p resulting from just the pressure coupling is indeed larger than the deformation u_h^{p,θ} from both pressure and temperature coupling.
§ CONCLUSIONS
In this work, both a new mathematical model and a new numerical approach for thermo-flow-mechanics-fracture are derived. The key idea is to utilize a phase-field approach for fracture opening and fracture propagation. Having the fracture subdomain at hand, a geometry reconstruction approach is employed. This then results in a mesh which resolves the boundary between the fluid-filled crack and the intact solid domain. This allows us to formulate sharp interface problems for the fracture subdomain and the surrounding medium. The resulting framework is a mixture of interface-capturing and interface-tracking approaches that are conveniently combined. It is substantiated for thermo-flow-mechanics-fracture, which couples thermo-flow-mechanics (THM) phase-field fracture with thermo-fluid-structure interaction (TFSI). The latter is prescribed on moving domains (as the fracture moves) for which the arbitrary Lagrangian-Eulerian (ALE) technique is employed. The governing physics are newly developed, specifically the interface conditions for the phase-field sub-problem. These ingredients yield an overall coupling algorithm with four principal steps after an initialization step: fracture width computation (step 1), re-meshing of the reconstructed subdomains (step 2), solving the TFSI problem (step 3), solving the THM phase-field fracture problem (step 4). The algorithmic details and resulting sub-problems are carefully worked out.
In order to substantiate our new model and new algorithms, we conducted several numerical experiments. Therein, a key component is the mesh refinement studies, in which computational robustness is investigated. This was done for the total crack volume (TCV) and the crack opening displacements (COD), as well as the fracture length. All yielded satisfactory findings in view of the complexity of the problem statement. It should be mentioned that specifically the re-meshing of the crack tips required additional algorithmic developments, and obtaining computational convergence was a challenge. From the physics point of view, we emphasized the temperature's influence, where there is agreement in the literature that fractures open due to cold water injection and close due to warm water injection. A single fracture as well as two joining fractures were considered.
The beauty of our overall framework is that different physics can be easily exchanged as long as the interface conditions are correctly modeled. Therefore, our framework presents the opportunity for future extensions, which could include, for example, two-phase flows, or more complicated mechanics. Furthermore, the extension to three spatial dimensions will remain a challenge due to the geometry reconstruction and requires more extensive future work.
§ DATA AVAILABILITY STATEMENT
The code used to realize the presented results, as well as the resulting raw data, is freely available on github under <https://github.com/hvonwah/repro-tfsi-pff> and is archived on zenodo <https://doi.org/10.5281/zenodo.13685486>.
§ ACKNOWLEDGEMENTS
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while HvW was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Numerical PDEs: Analysis, Algorithms, and Data Challenges program.
SL acknowledges support within his research stay in May 2024 at the Leibniz University Hannover.
The work of S. Lee was partially supported by the U.S.
National Science Foundation Grant DMS-2208402 and by the U.S. Department of Energy, Office of Science,
Energy Earthshots Initiatives under Award Number DE-SC-0024703.
moore2019utah
J. Moore, J. McLennan, R. Allis, K. Pankow, S. Simmons, R. Podgorney,
P. Wannamaker, J. Bartley, C. Jones, W. Rickard, The Utah Frontier
Observatory for Research in Geothermal Energy (FORGE): An international
laboratory for enhanced geothermal system technology development, in: 44th
Workshop on Geothermal Reservoir Engineering, Stanford University, 2019, pp.
11–13.
olasolo2016enhanced
P. Olasolo, M. C. Juárez, M. P. Morales, I. A. Liarte, et al., Enhanced
geothermal systems (EGS): A review, Renewable Sustainable Energy Rev. 56
(2016) 133–144.
https://doi.org/10.1016/j.rser.2015.11.031
doi:10.1016/j.rser.2015.11.031.
lu2018global
S.-M. Lu, A global review of enhanced geothermal system (EGS), Renewable
Sustainable Energy Rev. 81 (2018) 2902–2921.
https://doi.org/10.1016/j.rser.2017.06.097
doi:10.1016/j.rser.2017.06.097.
mcclure2014investigation
M. W. McClure, R. N. Horne, An investigation of stimulation mechanisms in
Enhanced Geothermal Systems, Int. J. Rock Mech. Min. 72 (2014)
242–260.
https://doi.org/10.1016/j.ijrmms.2014.07.011
doi:10.1016/j.ijrmms.2014.07.011.
caulk2016experimental
R. A. Caulk, E. Ghazanfari, J. N. Perdrial, N. Perdrial, Experimental
investigation of fracture aperture and permeability change within Enhanced
Geothermal Systems, Geothermics 62 (2016) 12–21.
https://doi.org/10.1016/j.geothermics.2016.02.003
doi:10.1016/j.geothermics.2016.02.003.
zhang2022thermal
X. Zhang, Z. Li, X. Wang, H. Wang, B. Li, Y. Niu, Thermal effect on the
fracture behavior of granite using acoustic emission and digital image
correlation: An experimental investigation, Theor. Appl. Fract. Mec. 121
(2022) 103540.
https://doi.org/10.1016/j.tafmec.2022.103540
doi:10.1016/j.tafmec.2022.103540.
donahue1972crack
R. J. Donahue, H. M. Clark, P. Atanmo, R. Kumble, A. J. McEvily, Crack opening
displacement and the rate of fatigue crack growth, Int. J. Fract. Mech. 8
(1972) 209–219.
https://doi.org/10.1007/BF00703882
doi:10.1007/BF00703882.
burdekin1966crack
F. M. Burdekin, D. E. W. Stone, The crack opening displacement approach to
fracture mechanics in yielding materials, J. Strain Anal. Eng. Des. 1 (2)
(1966) 145–153.
https://doi.org/10.1243/03093247V012145
doi:10.1243/03093247V012145.
zhou2022thermal
L. Zhou, W. Gao, L. Yu, Z. Zhu, J. Chen, X. Wang, Thermal effects on fracture
toughness of cracked straight-through Brazilian disk green sandstone and
granite, J. Rock Mech. Geotech. Eng. 14 (5) (2022) 1447–1460.
https://doi.org/10.1016/j.jrmge.2022.02.016
doi:10.1016/j.jrmge.2022.02.016.
BourFraMar00
B. Bourdin, G. A. Francfort, J.-J. Marigo, Numerical experiments in revisited
brittle fracture, J. Mech. Phys. Solids 48 (4) (2000) 797–826.
https://doi.org/10.1016/S0022-5096(99)00028-9
doi:10.1016/S0022-5096(99)00028-9.
KuMue10
C. Kuhn, R. Müller, A continuum phase field model for fracture, Eng. Fract.
Mech. 77 (18) (2010) 3625–3634, Computational Mechanics in Fracture and
Damage: A Special Issue in Honor of Prof. Gross.
https://doi.org/10.1016/j.engfracmech.2010.08.009
doi:10.1016/j.engfracmech.2010.08.009.
MieWelHof10a
C. Miehe, F. Welschinger, M. Hofacker, Thermodynamically consistent phase-field
models of fracture: Variational principles and multi-field FE
implementations, Internat. J. Numer. Methods Engrg. 83 (10) (2010)
1273–1311.
https://doi.org/10.1002/nme.2861
doi:10.1002/nme.2861.
MieWelHof10b
C. Miehe, M. Hofacker, F. Welschinger, A phase field model for rate-independent
crack propagation: Robust algorithmic implementation based on operator
splits, Comput. Methods Appl. Mech. Engrg. 199 (2010) 2765–2778.
https://doi.org/10.1016/j.cma.2010.04.011
doi:10.1016/j.cma.2010.04.011.
BoVeScoHuLa12
M. J. Borden, C. V. Verhoosel, M. A. Scott, T. J. R. Hughes, C. M. Landis, A
phase-field description of dynamic brittle fracture, Comput. Methods Appl.
Mech. Engrg. 217 (2012) 77–95.
https://doi.org/10.1016/j.cma.2012.01.008
doi:10.1016/j.cma.2012.01.008.
AmGeraLoren15
M. Ambati, T. Gerasimov, L. De Lorenzis, A review on phase-field models of
brittle fracture and a new fast hybrid formulation, Comput. Mech. 55 (2)
(2015) 383–405.
https://doi.org/10.1007/s00466-014-1109-y
doi:10.1007/s00466-014-1109-y.
ARRIAGA201833
M. Arriaga, H. Waisman, Stability analysis of the phase-field method for
fracture with a general degradation function and plasticity induced crack
generation, Mech. Mater. 116 (2018) 33–48, iUTAM Symposium on Dynamic
Instabilities in Solids.
https://doi.org/10.1016/j.mechmat.2017.04.003
doi:10.1016/j.mechmat.2017.04.003.
SARGADO2018458
J. M. Sargado, E. Keilegavlen, I. Berre, J. M. Nordbotten, High-accuracy
phase-field models for brittle fracture based on a new family of degradation
functions, J. Mech. Phys. Solids 111 (2018) 458–489.
https://doi.org/10.1016/j.jmps.2017.10.015
doi:10.1016/j.jmps.2017.10.015.
WheWiLee20
M. F. Wheeler, T. Wick, S. Lee, IPACS: Integrated Phase-Field Advanced Crack
Propagation Simulator. An adaptive, parallel, physics-based-discretization
phase-field framework for fracture propagation in porous media, Comput.
Methods Appl. Mech. Engrg. 367 (2020) 113124.
https://doi.org/10.1016/j.cma.2020.113124
doi:10.1016/j.cma.2020.113124.
BourFraMar08
B. Bourdin, G. A. Francfort, J.-J. Marigo, The variational approach to
fracture, J. Elasticity 91 (1–3) (2008) 1–148.
https://doi.org/10.1007/s10659-007-9107-3
doi:10.1007/s10659-007-9107-3.
WNN20
J.-Y. Wu, V. P. Nguyen, C. T. Nguyen, D. Sutula, S. Sinaie, S. P. A. Bordas,
Phase-field modeling of fracture, Advances in Applied Mechanics, Elsevier,
2020, Ch. 1, pp. 1–183.
https://doi.org/10.1016/bs.aams.2019.08.001
doi:10.1016/bs.aams.2019.08.001.
Wi20
T. Wick, Multiphysics Phase-Field Fracture, Vol. 28 of Radon Series on
Computational and Applied Mathematics, De Gruyter, Berlin, Boston, 2020.
https://doi.org/10.1515/9783110497397
doi:10.1515/9783110497397.
HEIDER2021107881
Y. Heider, A review on phase-field modeling of hydraulic fracturing, Eng.
Fract. Mech. 253 (2021) 107881.
https://doi.org/10.1016/j.engfracmech.2021.107881
doi:10.1016/j.engfracmech.2021.107881.
DiLiWiTy22
P. Diehl, R. Lipton, T. Wick, M. Tyagi, A comparative review of peridynamics
and phase-field models for engineering fracture mechanics, Comput. Mech. 69
(2022) 1259–1293.
https://doi.org/10.1007/s00466-022-02147-0
doi:10.1007/s00466-022-02147-0.
LeeWheWi16
S. Lee, M. F. Wheeler, T. Wick, Pressure and fluid-driven fracture propagation
in porous media using an adaptive finite element phase field model, Comput.
Methods Appl. Mech. Engrg. 305 (2016) 111–132.
https://doi.org/10.1016/j.cma.2016.02.037
doi:10.1016/j.cma.2016.02.037.
lee2016phase
S. Lee, A. Mikelić, M. F. Wheeler, T. Wick, Phase-field modeling of
proppant-filled fractures in a poroelastic medium, Comput. Methods Appl.
Mech. Engrg. 312 (2016) 509–541.
https://doi.org/10.1016/j.cma.2016.02.008
doi:10.1016/j.cma.2016.02.008.
MDB99
N. Moës, J. Dolbow, T. Belytschko, A finite element method for crack growth
without remeshing, Internat. J. Numer. Methods Engrg. 46 (1) (1999) 131–150.
https://doi.org/10.1002/(sici)1097-0207(19990910)46:1<131::aid-nme726>3.0.co;2-j
doi:10.1002/(sici)1097-0207(19990910)46:1<131::aid-nme726>3.0.co;2-j.
zi2003new
G. Zi, T. Belytschko, New crack-tip elements for XFEM and applications to
cohesive cracks, Internat. J. Numer. Methods Engrg. 57 (15) (2003)
2221–2240.
https://doi.org/10.1002/nme.849 doi:10.1002/nme.849.
duarte2001generalized
C. A. Duarte, O. N. Hamzeh, T. J. Liszka, W. W. Tworzydlo, A generalized finite
element method for the simulation of three-dimensional dynamic crack
propagation, Comput. Methods Appl. Mech. Engrg. 190 (15-17) (2001)
2227–2262.
https://doi.org/10.1016/S0045-7825(00)00233-4
doi:10.1016/S0045-7825(00)00233-4.
FB10
T.-P. Fries, T. Belytschko, The extended/generalized finite element method:
An overview of the method and its applications, Internat. J. Numer. Methods
Engrg. 84 (3) (2010) 253–304.
https://doi.org/10.1002/nme.2914
doi:10.1002/nme.2914.
LWW17
S. Lee, M. F. Wheeler, T. Wick, Iterative coupling of flow, geomechanics and
adaptive phase-field fracture including level-set crack width approaches, J.
Comput. Appl. Math. 314 (2017) 40–60.
https://doi.org/10.1016/j.cam.2016.10.022
doi:10.1016/j.cam.2016.10.022.
YNK20
K. Yoshioka, D. Naumov, O. Kolditz, On crack opening computation in variational
phase-field models for fracture, Comput. Methods Appl. Mech. Engrg. 369
(2020) 113210.
https://doi.org/10.1016/j.cma.2020.113210
doi:10.1016/j.cma.2020.113210.
WaWi24_RINAM
H. von Wahl, T. Wick, A coupled high-accuracy phase-field fluid–structure
interaction framework for stokes fluid-filled fracture surrounded by an
elastic medium, Results Appl. Math. 22 (2024) 100455.
https://doi.org/10.1016/j.rinam.2024.100455
doi:10.1016/j.rinam.2024.100455.
WaWi23_CMAME
H. von Wahl, T. Wick, A high-accuracy framework for phase-field fracture
interface reconstructions with application to Stokes fluid-filled fracture
surrounded by an elastic medium, Comput. Methods Appl. Mech. Engrg. 415
(2023) 116202.
https://doi.org/10.1016/j.cma.2023.116202
doi:10.1016/j.cma.2023.116202.
NoiiWi19
N. Noii, T. Wick, A phase-field description for pressurized and non-isothermal
propagating fractures, Comput. Methods Appl. Mech. Engrg. 351 (2019)
860–890.
https://doi.org/10.1016/j.cma.2019.03.058
doi:10.1016/j.cma.2019.03.058.
HEIDER2018116
Y. Heider, S. Reiche, P. Siebert, B. Markert, Modeling of hydraulic fracturing
using a porous-media phase-field approach with reference to experimental
data, Eng. Fract. Mech. 202 (2018) 116–134.
https://doi.org/10.1016/j.engfracmech.2018.09.010
doi:10.1016/j.engfracmech.2018.09.010.
NgHeiMa23
C.-L. Nguyen, Y. Heider, B. Markert, A non-isothermal phase-field hydraulic
fracture modeling in saturated porous media with convection-dominated heat
transport, Acta Geotech. 50 (6) (2023) 821–833.
https://doi.org/10.1007/s11440-023-01905-5
doi:10.1007/s11440-023-01905-5.
SUH2021114182
H. S. Suh, W. Sun, Asynchronous phase field fracture model for porous media
with thermally non-equilibrated constituents, Comput. Methods Appl. Mech.
Engrg. 387 (2021) 114182.
https://doi.org/10.1016/j.cma.2021.114182
doi:10.1016/j.cma.2021.114182.
Dai2024
Y. Dai, B. Hou, S. Lee, T. Wick, A thermal–hydraulic–mechanical–chemical
coupling model for acid fracture propagation based on a phase-field method,
Rock Mech. Rock Eng. (2024).
https://doi.org/10.1007/s00603-024-03769-x
doi:10.1007/s00603-024-03769-x.
LIU2024117165
Y. Liu, K. Yoshioka, T. You, H. Li, F. Zhang, A phase-field fracture model in
thermo-poro-elastic media with micromechanical strain energy degradation,
Comput. Methods Appl. Mech. Engrg. 429 (2024) 117165.
https://doi.org/10.1016/j.cma.2024.117165
doi:10.1016/j.cma.2024.117165.
lee4920845phase
S. Lee, M. Wheeler, T. Wick, A
phase-field diffraction model for thermo-hydro-mechanical propagating
fractures (Aug. 2024).
<https://ssrn.com/abstract=4920845>
LoBo96
S. A. Lorca, J. L. Boldrini, Stationary solutions for generalized Boussinesq
models, J. Differ. Equ. 124 (2) (1996) 389–406.
https://doi.org/10.1006/jdeq.1996.0016
doi:10.1006/jdeq.1996.0016.
FARHAT1991349
C. Farhat, K. C. Park, Y. Dubois-Pelerin, An unconditionally stable staggered
algorithm for transient finite element analysis of coupled thermoelastic
problems, Comput. Methods Appl. Mech. Engrg. 85 (3) (1991) 349–365.
https://doi.org/10.1016/0045-7825(91)90102-C
doi:10.1016/0045-7825(91)90102-C.
Coussy2004
O. Coussy, Poromechanics, Wiley, 2004.
https://doi.org/10.1002/0470092718
doi:10.1002/0470092718.
mayeli2021buoyancy
P. Mayeli, G. J. Sheard, Buoyancy-driven flows beyond the Boussinesq
approximation: A brief review, Int. Commun. Heat Mass 125 (2021) 105316.
https://doi.org/10.1016/j.icheatmasstransfer.2021.105316
doi:10.1016/j.icheatmasstransfer.2021.105316.
MiWheWi13a
A. Mikelić, M. F. Wheeler, T. Wick,
A phase-field approach
to the fluid filled fracture surrounded by a poroelastic medium, ICES Report
13-15 (Jun. 2013).
<www.oden.utexas.edu/media/reports/2013/1315.pdf>
MWW19
A. Mikelić, M. F. Wheeler, T. Wick, Phase-field modeling through
iterative splitting of hydraulic fractures in a poroelastic medium, GEM -
Int. J. Geomath. 10 (1) (Jan. 2019).
https://doi.org/10.1007/s13137-019-0113-y
doi:10.1007/s13137-019-0113-y.
AmTo90
L. Ambrosio, V. M. Tortorelli, Approximation of functionals depending on jumps
by elliptic functionals via γ-convergence, Comm. Pure Appl. Math.
43 (8) (1990) 999–1036.
https://doi.org/10.1002/cpa.3160430805
doi:10.1002/cpa.3160430805.
AmTo92
L. Ambrosio, V. M. Tortorelli, On the approximation of free discontinuity
problems, Boll. Un. Mat. Ital. 6 (1992) 105–123.
HWW15
T. Heister, M. F. Wheeler, T. Wick, A primal-dual active set method and
predictor-corrector mesh adaptivity for computing fracture propagation using
a phase-field approach, Comput. Methods Appl. Mech. Engrg. 290 (2015)
466–495.
https://doi.org/10.1016/j.cma.2015.03.009
doi:10.1016/j.cma.2015.03.009.
Wic17
T. Wick, An error-oriented Newton/inexact augmented Lagrangian approach for
fully monolithic phase-field fracture propagation, SIAM J. Sci. Comput.
39 (4) (2017) B589–B617.
https://doi.org/10.1137/16m1063873
doi:10.1137/16m1063873.
KMW23
L. Kolditz, K. Mang, T. Wick, A modified combined active-set Newton method
for solving phase-field fracture into the monolithic limit, Comput. Methods
Appl. Mech. Engrg. 414 (2023) 116170.
https://doi.org/10.1016/j.cma.2023.116170
doi:10.1016/j.cma.2023.116170.
MWT15
A. Mikelić, M. F. Wheeler, T. Wick, A quasi-static phase-field approach
to pressurized fractures, Nonlinearity 28 (5) (2015) 1371–1399.
https://doi.org/10.1088/0951-7715/28/5/1371
doi:10.1088/0951-7715/28/5/1371.
TrSeNg13
D. Tran, A. T. Settari, L. Nghiem, Predicting growth and decay of
hydraulic-fracture width in porous media subjected to isothermal and
nonisothermal flow, SPE J. 18 (4) (2013) 781–794.
https://doi.org/10.2118/162651-PA
doi:10.2118/162651-PA.
CHUKWUDOZIE2019957
C. Chukwudozie, B. Bourdin, K. Yoshioka, A variational phase-field model for
hydraulic fracturing in porous media, Comput. Methods Appl. Mech. Engrg. 347
(2019) 957–982.
https://doi.org/10.1016/j.cma.2018.12.037
doi:10.1016/j.cma.2018.12.037.
HrTu06a
J. Hron, S. Turek, A monolithic FEM/Multigrid solver for ALE formulation of
fluid structure with application in biomechanics, Vol. 53, Springer, Berlin,
Heidelberg, 2006, pp. 146–170.
https://doi.org/10.1007/3-540-34596-5_7
doi:10.1007/3-540-34596-5_7.
Du07
T. Dunne, Adaptive finite element approximation of fluid-structure interaction
based on Eulerian and arbitrary Lagrangian-Eulerian variational
formulations, Ph.D. thesis, University of Heidelberg (2007).
https://doi.org/10.11588/heidok.00007944
doi:10.11588/heidok.00007944.
Wi11_phd
T. Wick, Adaptive Finite Element Simulation of Fluid-Structure
Interaction with Application to Heart-Valve Dynamics, Ph.D. thesis,
University of Heidelberg (2011).
https://doi.org/10.11588/heidok.00012992
doi:10.11588/heidok.00012992.
Ri17_fsi
T. Richter, Fluid-structure interactions: Models, analysis, and finite
elements, Springer, Cham, 2017.
https://doi.org/10.1007/978-3-319-63970-3
doi:10.1007/978-3-319-63970-3.
HuLiZi81
T. J. R. Hughes, W. K. Liu, T. Zimmermann, Lagrangian-Eulerian finite
element formulation for incompressible viscous flows, Comput. Methods Appl.
Mech. Engrg. 29 (1981) 329–349.
https://doi.org/10.1016/0045-7825(81)90049-9
doi:10.1016/0045-7825(81)90049-9.
DoGiuHa82
J. Donea, S. Giuliani, J. P. Halleux, An arbitrary Lagrangian-Eulerian
finite element method for transient dynamic fluid-structure interactions,
Comput. Methods Appl. Mech. Engrg. 33 (1982) 689–723.
https://doi.org/10.1016/0045-7825(82)90128-1
doi:10.1016/0045-7825(82)90128-1.
RiWi10
T. Richter, T. Wick, Finite elements for fluid-structure interaction in ALE
and fully Eulerian coordinates, Comput. Methods Appl. Mech. Engrg. 199
(2010) 2633–2642.
https://doi.org/10.1016/j.cma.2010.04.016
doi:10.1016/j.cma.2010.04.016.
Sch97
J. Schöberl, NETGEN an advancing front 2D/3D-mesh generator based on
abstract rules, Comput. Vis. Sci. 1 (1) (1997) 41–52.
https://doi.org/10.1007/s007910050004
doi:10.1007/s007910050004.
Sch14
J. Schöberl, C++11 implementation of finite elements in NGSolve, Tech.
Rep. ASC Report No. 30/2014 (Sep. 2014).
LHPvW21
C. Lehrenfeld, F. Heimann, J. Preuß, H. von Wahl,
ngsxfem: Add-on to NGSolve for
geometrically unfitted finite element discretizations, J. Open Source Softw.
6 (64) (2021) 3237.
https://doi.org/10.21105/joss.03237
doi:10.21105/joss.03237.
<github.com/ngsxfem/ngsxfem>
Sne46
I. N. Sneddon, The distribution of stress in the neighbourhood of a crack in an
elastic solid, Proc. R. Soc. A 187 (1009) (1946) 229–260.
https://doi.org/10.1098/rspa.1946.0077
doi:10.1098/rspa.1946.0077.
SneddLow69
I. N. Sneddon, M. Lowengrub, Crack problems in the classical theory of
elasticity, SIAM series in Applied Mathematics, John Wiley and Sons,
Philadelphia, 1969.
morrow2001permeability
C. A. Morrow, D. E. Moore, D. A. Lockner, Permeability reduction in granite
under hydrothermal conditions, J. Geophys. Res. Solid Earth 106 (B12) (2001)
30551–30560.
https://doi.org/10.1029/2000JB000010
doi:10.1029/2000JB000010.
yasuhara2006evolution
H. Yasuhara, A. Polak, Y. Mitani, A. S. Grader, P. M. Halleck, D. Elsworth,
Evolution of fracture permeability through fluid–rock reaction under
hydrothermal conditions, Earth Planet. Sc. Lett. 244 (1-2) (2006) 186–200.
https://doi.org/10.1016/j.epsl.2006.01.046
doi:10.1016/j.epsl.2006.01.046.
hardin1982measuring
E. Hardin, N. Barton, M. Voegele, M. Board, R. Lingle, H. Pratt, W. Ubbes,
Measuring the thermomechanical and transport properties of a rockmass using
the heated block test, in: ARMA US Rock Mechanics/Geomechanics Symposium,
ARMA, 1982, pp. ARMA–82.
rutqvist2008analysis
J. Rutqvist, B. Freifeld, K.-B. Min, D. Elsworth, Y. Tsang, Analysis of
thermally induced changes in fractured rock permeability during 8 years of
heating and cooling at the yucca mountain drift scale test, Int. J. Rock
Mech. Min. 45 (8) (2008) 1373–1389.
https://doi.org/10.1016/j.ijrmms.2008.01.016
doi:10.1016/j.ijrmms.2008.01.016.
Forcing as a Local Method of Accessing Small Extensions
Desmond Lau
September 9, 2024
========================================================
§ ABSTRACT
Fix a set-theoretic universe V. We look at small extensions of V as generalised degrees of computability over V. We also formalise and investigate the complexity of certain methods one can use to define, in V, subclasses of degrees over V. Finally, we give a nice characterisation of the complexity of forcing within this framework.
§ INTRODUCTION
Within the set-theoretic universe V, there is (based on the author's work to appear) an intuitive notion of computation that canonically partitions sets into their degrees of constructibility. This lends credence to the belief that generative power over a model of set theory is a surrogate for computational power. When dealing with degrees of constructibility, the relevant model of set theory is L. Switching out L for larger inner models makes sense for coarser degree structures.
What if we swop L for V itself? Doing so will obviously result in degrees that are not subclasses of V. What then do they comprise? With meta-theoretic assumptions mildly stronger than 𝖹𝖥𝖢, we can view V as a countable transitive model of 𝖹𝖥𝖢, from which such degrees can be naturally defined as degrees of small extensions. Vaguely, each degree of small extensions is associated with (or rather, represented by) an outer model W of V generated by a set in W over V: here W is called a small extension of V. These degrees, together with the theory of their ordering, seek to capture the spirit of higher-order computations relative to V, the way higher recursion theory do for computations on sets beyond the domain of classical recursion theory. Figure <ref> illustrates this parallel.
Next, we wish to examine (necessarily non-constructive) methods of definably “accessing” small extensions of V within V, or local methods in short. Set forcing is one such method, and a very well-studied one at that. In an application of set forcing, we pick a partially ordered set — also known as a forcing notion — ℙ∈ V, and use a filter meeting all dense subsets of ℙ in V — termed a ℙ-generic filter over V — to generate an extension of V. So the small extensions of V set forcing brings about via ℙ are precisely those in
{V[g] : g is a ℙ-generic filter over V},
a set definable outside V with ℙ and its dense subsets in V as parameters. Consequently, one can view set forcing as a recipe in V for generating small extensions of V based only on parameters in V. The formal treatment of set forcing inspires a list of desiderata for a local method:
* it should be definable in V,
* it should map parameters to descriptions of how those parameters are used to define generators of small extensions, and
* the generators it produces should depend locally on the parameters used to define them.
A convenient realisation at this juncture is that a recipe and its parameters (or equivalently, the two components of <ref>) can be bundled up into a theory with constraints in interpretation (TCI). TCIs are basically first-order theories endowed with set constraints that may not be first-order expressible. Like standard first-order theories, TCIs admit models, and whether a set X is a model of a TCI 𝔗 depends locally on X and 𝔗. Defining a local method through the language of TCIs and their models thus provides immediate guarantee of <ref>, and is appealing in both its brevity and robustness.
Accompanying the formalisation of local methods, ought to be a notion of relative complexity, a measure which can be utilised to check if one local method is “more complex” than another. Akin to relative computability, we want to define relative complexity as a transitive binary relation on the class of all local methods. There is actually a straightforward way to do this: we say method Y is more complex than method X iff the small extensions of V picked out by Y are a non-trivial refinement of those picked out by X. Connecting the first-order portion of a TCI with the relative complexity relation we defined, leads to the formulation of a complexity hierarchy — the local method hierarchy — very much in line with more notable hierarchies in theoretical computer science (e.g. the arithmetical and polynomial hierarchies).
Leveraging on a novel forcing framework developed in <cit.>, we are able to show that the method of set forcing is exactly Σ_1 (or equivalently, as we shall see, Π_2) in the local method hierarchy. This is the main takeaway of our work presented here. We follow it up with the analysis of certain witnesses to set forcing being more complex than Π_2.
By applying an analogue of the Cantor-Bendixson derivative on a specific class of forcing notions, we prove that every TCI 𝔗∈Π_2 either singles out V or picks out continuum-many (as evaluated in the meta-theory) small extensions of V. The same is long known to be true for forcing notions: a trivial forcing notion gives V as its sole generic extension, whereas a non-trivial one generates continuum-many generic extensions.
One can think of this work as a rigorous foundation for some of the main ideas found in Section 5 of <cit.>. In fact, consolidated here are many results in said section, having been weaved together into a more philosophically compelling and coherent package. For self-containment, we reproduce the more concise proofs from <cit.>, of as many of these results as possible.
§ DEGREES OF SMALL EXTENSIONS
Extending a structure by adjoining “new” objects is commonplace in mathematics. Here, “new” just means “existing outside of the structure in question”. For example, a standard course in algebra would discuss field extensions such as ℚ[√(2)]. In set theory, the subjects of study are models of set theory, often models of 𝖹𝖥𝖢. For convenience, we usually assume such models are countable and transitive. If a countable transitive model of 𝖹𝖥𝖢 (henceforth, CTM) exists, then extensions of it exist, but due to the complicated closure properties required of a model of 𝖹𝖥𝖢, the proof of their existence is much hairier than that of field extensions.
It turns out that, whenever U is a CTM and W is an extension of U, we can always find an extension of W that is generated over U by a “small set”. Methods of generating such “small extensions” include, but are not limited to, set forcing. In this section, we compare the multiverse of small extensions with the generic multiverse born from set forcing, under the assumption that both multiverses have the same centre. We also introduce the idea of theories with constraints in interpretation (TCIs) to set things up for the next section.
§.§ Small Extensions as Degrees
We avoid the usual meta-theoretic concerns regarding forcing and the set-theoretic multiverse by working in the theory
𝖹𝖥𝖢 + “there is a transitive model of 𝖹𝖥𝖢”.
The existence of CTMs can be proven in this theory.
Given U and W, we say U is an inner model of W (or equivalently, W is an outer model of U) iff
* U and W are CTMs,
* U ⊂ W, and
* ORD^U = ORD^W.
Let W be an outer model of U. Then W is a small extension of U iff for some x ∈ W, W is the smallest CTM W' satisfying
* U ⊂ W', and
* x ∈ W'.
In this case, we say, equivalently,
* x generates W from U, or
* W is a small extension of U generated by x, or
* W = U[x].
The following observation is trivial.
The binary relations
* “being an outer model of”, and
* “being a small extension of”
are both transitive.
We know that certain sets in outer models can always be used to generate small extensions.
Let W be an outer model of U, and x ∈𝒫(y) ∩ W for some y ∈ U. Then there is a smallest CTM W' satisfying
* U ⊂ W', and
* x ∈ W'.
In other words, U[x] exists.
There is a simple and useful characterisation of small extensions of a CTM.
Let M be a transitive model of 𝖹𝖥𝖢 and X ∈ M. Then there is a set of ordinals c ∈ M such that if N is any transitive model of 𝖹𝖥𝖢 containing c, then X ∈ N.
Let Y' be the transitive closure of X (under the membership relation ∈) and set Y := Y' ∪{X}. Then Y is ∈-transitive. Choose a bijection f from a cardinal κ onto Y. Use ∈' to denote the unique binary relation R on κ such that
R(α, β) ⟺ f(α) ∈ f(β) .
Now apply Gödel's pairing function to code ∈' as a (necessarily unbounded) subset c of κ.
To recover X from c, first apply the inverse of the pairing function followed by the Mostowski collapse to get Y. Then X is definable from Y as the unique ∈-maximal element of Y. This decoding process is absolute for transitive models of 𝖹𝖥𝖢 because all its components are.
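As a toy, finite illustration of the coding in this proof (our own example; the Gödel pairing on arbitrary ordinals behaves analogously), one can code a binary relation on a natural number as a set of natural numbers via the Cantor pairing function and decode it again:

def pair(a, b):                       # Cantor pairing: a bijection between N x N and N
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):                        # inverse of the pairing function
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)
    b = n - w * (w + 1) // 2
    return w - b, b

E = {(0, 1), (0, 2), (1, 2)}          # the membership relation on the ordinal 3
code = {pair(a, b) for (a, b) in E}   # E coded as a set of natural numbers
assert {unpair(c) for c in code} == E
print(sorted(code))                   # [2, 5, 8]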
If W is a small extension of U, then for some ordinal κ∈ U, W is generated from U by an unbounded subset of κ. Furthermore, we can choose κ such that
(U; ∈) ⊨ “κ is a cardinal”.
Let x ∈ W be such that W = U[x]. By the proof of Proposition <ref>, for some cardinal κ∈ W there is an unbounded subset c of κ in W coding x. Since U is an inner model of W, κ is also a cardinal in U. By Fact <ref>, U[c] exists. Now, U[c] ⊂ W as c ∈ W; but also W ⊂ U[c] because c can be decoded in U[c] to give x and W = U[x].
Let U be a CTM. The outward multiverse centred at U is the set
𝐌(U) := {W : W is an outer model of U}.
The small outward multiverse centred at U is the set
𝐌_S(U) := {W : W is a small extension of U}.
Clearly, 𝐌_S(U) ⊂𝐌(U). By Jensen's remarkable result on “coding the universe” into a real, 𝐌_S(U) is not that much smaller than 𝐌(U).
[Jensen, <cit.>]
Every CTM has an outer model satisfying
“ V = L[r] for some r ⊂ω”.
Given a CTM U, (𝐌_S(U), ⊂) is a cofinal subposet of (𝐌(U), ⊂).
Let W ∈𝐌(U). Then by Fact <ref>, there is W' ∈𝐌(U) such that W ⊂ W' and W' satisfies
“ V = L[r] for some r ⊂ω”.
Since L^W' = L^U ⊂ U and indeed W' = L^W'[r] for some real r ∈ W' from the outside, necessarily W' = U[r]. But this means W' ∈𝐌_S(U).
We can characterise members 𝐌_S(U) in a way that is conducive to the discussion of relative computability.
Let U be a CTM. Then
𝐌_S(U) = {U[x] : x ∈⋃𝐌(U) ∩𝒫(U)}
= {U[x] : x ∈⋃𝐌(U) ∩𝒫(ORD^U)}.
By Fact <ref> and Proposition <ref>.
Proposition <ref> gives us a natural reducibility relation on
𝐍(U) := ⋃𝐌(U) ∩𝒫(ORD^U)
given a CTM U.
Let U be a CTM. Define the binary relation ≤_U on 𝐍(U) as follows: for x, y ∈𝐍(U),
x ≤_U y ⟺ U[x] ⊂ U[y] .
Given x, y ∈𝐍(U), write x ≡_U y iff x ≤_U y and y ≤_U x.
One can easily check that ≤_U is a preorder, so taking its quotient by ≡_U results in a partial order we shall denote as (𝒟(U), ≤_𝒟(U)).
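The passage from the preorder ≤_U to its degrees is the usual quotient construction; the following small, purely illustrative Python sketch performs it for a finite toy preorder (strings ordered by inclusion of their letter sets):

def degrees(elements, leq):
    classes = []                            # each class is one "degree"
    for x in elements:
        for cls in classes:
            y = cls[0]
            if leq(x, y) and leq(y, x):     # x and y are equivalent
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

elems = ["", "a", "b", "ab", "ba"]
print(degrees(elems, lambda x, y: set(x) <= set(y)))   # [[''], ['a'], ['b'], ['ab', 'ba']]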
The case of (𝒟(U), ≤_𝒟(U)) parallels that of the constructibility degrees, in the sense that the former partial order, like the latter one, is isomorphic to a class of set-theoretic universes under inclusion. Specifically,
(𝒟(U), ≤_𝒟(U)) ≅ (𝐌_S(U), ⊂) .
This motivates viewing (𝒟(U), ≤_𝒟(U)) as degrees of computability over U. These degrees are necessarily non-constructible (and indeed, non-constructive) if U is not a model of “V = L”. Whereas the “constructible in” relation partially orders a partition of an ∈-model of 𝖹𝖥𝖢 and is definable within said model, the field of (𝒟(U), ≤_𝒟(U)) may not be realisable as a partition of any such model. We thus expect the structure of (𝒟(U), ≤_𝒟(U)) to be much more varied and dependent on U, compared to the structure of the constructibility degrees evaluated in U. Nevertheless, we will attempt to stratify (𝒟(U), ≤_𝒟(U)).
Hereon, we shall analyse and reason about (𝒟(U), ≤_𝒟(U)) by moving to (𝐌_S(U), ⊂), so that we can apply set-theoretic arguments and leverage on set-theoretic techniques.
§.§ Forcing and the Generic Multiverse
Forcing is a technique invented by Cohen in <cit.> to prove that the continuum hypothesis is independent of 𝖹𝖥𝖢. It has since taken on a life of its own, becoming an indispensable tool in set theory, and even in other branches of logic. The modern treatment of forcing is largely due to Scott, Solovay, Silver, and Rowbottom, as communicated by Shoenfield in <cit.>. We shall give a very brief and high-level introduction to forcing, following the layout found in Section 2.4 of <cit.>.
In a typical application of forcing, we start with a CTM, called the ground model. The usual forcing argument can be rewritten to occur entirely in the ground model with respect to a forcing notion that lives therein. Exactly because of this, we often forget the fact that our ground model is a CTM, or at least we eschew mentioning it. This is also why our ground model is conventionally taken to be V itself.
Forcing parlance dictates a forcing notion to just be a partial order. The crux of forcing is the analysis of generic filters (which may not exist in V) of a forcing notion ℙ∈ V via the forcing relation ⊩_ℙ defined on ℙ in V. Forcing relations are the trick to reasoning about extensions of V without needing to step out of V.
Let ℙ = (P, ≤_ℙ) be a forcing notion, D ⊂ P and A be any set. We say a subset g of P meets D with respect to ℙ in A iff
g ∩{p ∈ P : p ∈ D or ∀ q (q ≤_ℙ p → q ∉D)}∩ A ≠∅.
We say g meets D with respect to ℙ iff g meets D with respect to ℙ in V.
Let ℙ = (P, ≤_ℙ) be a forcing notion and 𝔄 = (A; ∈, X) be a structure in a possibly expanded language of set theory. We say a subset g of P is ℙ-generic over 𝔄 (or g is a ℙ-generic subset over 𝔄) iff g meets D with respect to ℙ in A for all D such that
* D ⊂ P
* D is dense in ℙ, and
* D is definable over 𝔄 with parameters in A.
If in addition, g is a filter on ℙ, then we call g a ℙ-generic filter over 𝔄.
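To illustrate the mechanism of meeting dense sets in a very small, finitary setting (our own toy example; over a CTM one enumerates all dense subsets lying in the model, which is possible from the outside because the model is countable), take conditions to be finite binary strings ordered by end-extension and extend a condition into finitely many dense sets one after another:

def extend_into(p, in_D):
    # density guarantees that *some* extension of p lies in D; for the
    # particular sets below, appending zeros happens to suffice
    q = p
    while not in_D(q):
        q += "0"
    return q

dense = [lambda s: len(s) >= 3,       # "the condition has length at least 3"
         lambda s: s.endswith("0"),   # "the condition ends in 0"
         lambda s: len(s) >= 5]       # "the condition has length at least 5"

p = ""
for D in dense:
    p = extend_into(p, D)

g = {p[:k] for k in range(len(p) + 1)}    # the filter generated by the final condition
print(p, sorted(g, key=len))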
Let ℙ be a forcing notion and 𝔄 be a structure. A (ℙ, 𝔄)-generic object is a set definable from a ℙ-generic filter over 𝔄, with parameters from 𝔄.
Let ℙ = (P, ≤_ℙ) be a forcing notion and X be any set. Then there is a structure 𝔄 = (A; ∈) ∈ V such that in every outer model of V,
x is a (ℙ, 𝔄)-generic object ⟺ x is a (ℙ, V)-generic object
for all x ⊂ X. In fact, we can choose A to be H(κ) for any κ > |trcl({P, X})|.
Given a forcing notion ℙ = (P, ≤_ℙ) in V, the forcing relation ⊩_ℙ^V is a binary relation that relates members of P with formulas parametrised by members of V^ℙ, where V^ℙ is the class of ℙ-names in V. Both V^ℙ and ⊩_ℙ^V are uniformly definable in V over the class of all forcing notions ℙ. ℙ-names in V are “evaluated at" a ℙ-generic filter g over V to obtain the ℙ-generic extension V[g], which is necessarily a CTM, and thus is a small extension of V. In more formal writing, if g is a ℙ-generic filter over V, then
V ⊂ V[g] := {ẋ[g] : ẋ∈ V^ℙ},
where ẋ[g] means “x evaluated at g". Of course, this evaluation procedure is done outside V because any such non-trivial g would not exist in V. Even so, the ingenuity of forcing as a technique lies in the amount of knowledge we can deduce about V[g] in V through examining ⊩_ℙ^V alone.
When it is clear that the background universe is V, we suppress mention of V when writing forcing relations in V. This means that given a forcing notion ℙ in V, ⊩_ℙ is used interchangeably with ⊩_ℙ^V.
We call W a forcing extension (or a generic extension) of V iff there exists a forcing notion ℙ in V and a ℙ-generic filter g over V, such that W = V[g].
We write “⊩_ℙϕ" to mean
“∀ p (p ∈ℙ→ p ⊩_ℙϕ)”.
The next theorem is important enough to be stated here in full, but not relevant enough to the spirit of this section to warrant a reproduction of its proof.
If ℙ is a forcing notion in V, p ∈ℙ, ϕ is a formula with n free variables, and ẋ_1, ..., ẋ_n are ℙ-names in V, then
*
p ⊩_ℙϕ(ẋ_1, …, ẋ_n)
⟺∀ g ((g is ℙ-generic over V and p ∈ g)
→ V[g] ⊨ϕ(ẋ_1[g], …, ẋ_n[g])), and
*
∀ g ( (g is ℙ-generic over V and V[g] ⊨ϕ(ẋ_1[g], …, ẋ_n[g]))
→∃ q (q ⊩_ℙϕ(ẋ_1, …, ẋ_n) and q ∈ g)) .
Theorem <ref> intricately connects the forcing relation ⊩_ℙ^V with truth in ℙ-generic extensions and is fundamental to forcing as a technique. Colloquially known as the forcing theorem, it enables us to reason about truth in generic extensions from within the ground model, and often reduces the argument from one about semantic entailment to one pertaining to combinatorial properties of partial orders. For more details about forcing and the proof of the forcing theorem, the reader is encouraged to read Chapter IV of <cit.>.
Define the relation ≤_F on the set of CTMs as follows:
M ≤_F N ⟺ N is a forcing extension of M .
A (full) generic multiverse is any set of CTMs closed under ≤_F.
Let V be a CTM. The (forcing) grounds of V is the set
{W : V is a forcing extension of W}.
Let V be a CTM. The outward generic multiverse centred at V is the set
𝐌_F(V) := {W : W is a forcing extension of V}.
The study of generic multiverses, including the coining of the term itself, arguably begins with Woodin in <cit.>. Since then, much has been studied about the structure of standard generic multiverses under ≤_F, with a particularly strong focus on the forcing grounds of fixed CTMs. On the other hand, there has been less interest in the structure of outward generic multiverses under ≤_F, perhaps due to the dearth of low-hanging fruits — a large part of what is known about this structure are essentially theorems about forcing in the traditional sense.
A careful reader might have noticed the overloading of the notation · [·] to represent both small extensions and forcing extensions. This is intentional, for the latter class is subsumed under the former.
For some ℙ∈ V, let g be a ℙ-generic filter over a CTM V. Then V[g] is the smallest CTM W for which
* V ⊂ W, and
* g ∈ W.
As a consequence, 𝐌_F(V) ⊂𝐌_S(V).
It turns out that forcing extensions of a CTM V are downward-closed in 𝐌(V) (and thus, also in 𝐌_S(V)). This is just a rephrasing of the fact below.
Let V ⊂ U ⊂ W be CTMs such that
* W is a forcing extension of V,
* U is an outer model of V, and
* U is an inner model of W.
Then
* U is a forcing extension of V, and
* W is a forcing extension of U.
An immediate follow-up question to the previous two facts is,
“Must 𝐌_F(V) always equal 𝐌_S(V)?”
There is an easy argument for the answer being “no”, if we assume a sufficiently strong large cardinal axiom in addition to 𝖹𝖥𝖢.
Let
𝖳 := 𝖹𝖥𝖢 + “0^♯ exists”.
Assume 𝖹𝖥𝖢 + “there is a transitive model of T ”. Then 𝐌_F(V) ⊊𝐌_S(V) for some CTM V.
Given the hypothesis of the proposition, there is a CTM W satisfying 𝖳. Define
V := L^W
U := L[I]^W ,
where I is an uncountable set of Silver indiscernibles in W witnessing the fact that 0^♯ exists. Then U is a small extension of V generated by I ∈ U but not a forcing extension of V.
By Proposition <ref>, it is consistent that 𝐌_S(V) ∖𝐌_F(V) is non-trivial — and includes at least a cone of (𝐌_S(V), ⊂) — under strong enough assumptions. However, the small outward multiverse example exhibited by the proof of the proposition is undesirable because it is centred at a universe that is by many measures, “too small” (e.g. it has a trivial theory of constructibility degrees).
A much stronger and much more useful statement would be
“𝐌_F(V) ≠𝐌_S(V) for all V ”.
Let us sketch how this can be true. We start with a universe V, force (with a proper class forcing notion) to an outer model V[G] of V satisfying “V[G] is not a set forcing extension of V”, then apply Fact <ref> to V[G]. The end result is an outer model W of V[G] such that W = L^V[r] for some real r ∈ W. This can even be arranged such that V is a definable class in W. Now if W is a set forcing extension of V, then so are all intermediate outer models of V, including V[G]. But we have just ensured V[G] is not a set forcing extension of V.
This argument, which includes a proof of Fact <ref>, can be formalised in a conservative second order extension of 𝖹𝖥𝖢 (the second-order portion is needed for proper class forcing), so our meta-theory suffices when V is a CTM. We have thus established — albeit sketchily so — the following.
(<ref>) holds.
In actuality, we can switch V for any U ∈𝐌_S(V) in the argument above and obtain a stronger conclusion.
Given a CTM V,
(𝐌_S(V) ∖𝐌_F(V), ⊂)
is a cofinal subposet of (𝐌_S(V), ⊂).
Intuitively, Facts <ref> and <ref> tell us that there are many objects inaccessible by forcing. Do these objects have “local first-order properties” not shared by any set in any forcing extension? Much of the rest of this paper aims for a partial answer to the aforementioned question.
§.§ Theories with Constraints in Interpretation
Theories with constraints in interpretation (henceforth, TCIs) were conceived in <cit.> as a convenient means of looking at generic objects produced by set-theoretic forcing.
(Lau, <cit.>)
A first-order theory with constraints in interpretation (first-order TCI) — henceforth, just theory with constraints in interpretation (TCI) — is a tuple (T, σ, 𝒰̇, ϑ), where
* T is a first order theory with signature σ,
* 𝒰̇ is a unary relation symbol not in σ,
* ϑ is a function (the interpretation constraint map) with domain σ∪{𝒰̇},
* if x ∈ ran(ϑ), then there is y such that
* either x = (y, 0) or x = (y, 1), and
* if ϑ(𝒰̇) = (z, a), then y ⊂ z^n for some n < ω, and
* if ϑ(𝒰̇) = (z, a), then
* z ∩ z^n = ∅ whenever 1 < n < ω, and
* z^m ∩ z^n = ∅ whenever 1 < m < n < ω.
We call members of the interpretation constraint map interpretation constraints.
For simplicity's sake, we always assume members of T are in prenex normal form.
(Lau, <cit.>)
Let (T, σ, 𝒰̇, ϑ) be a TCI. We say
ℳ := (U; ℐ) ^* (T, σ, 𝒰̇, ϑ)
— or ℳ models (T, σ, 𝒰̇, ϑ) — iff all of the following holds:
* ℳ is a structure,
* σ is the signature of ℳ,
* ℳ T,
* if ϑ(𝒰̇) = (y, 0), then U ⊂ y,
* if ϑ(𝒰̇) = (y, 1), then U = y, and
* for Ẋ∈σ,
* if Ẋ is a constant symbol and ϑ(Ẋ) = (y, z), then ℐ(Ẋ) ∈ y ∩ U,
* if Ẋ is an n-ary relation symbol and ϑ(Ẋ) = (y, 0), then ℐ(Ẋ) ⊂ y ∩ U^n,
* if Ẋ is an n-ary relation symbol and ϑ(Ẋ) = (y, 1), then ℐ(Ẋ) = y ∩ U^n,
* if Ẋ is an n-ary function symbol and ϑ(Ẋ) = (y, 0), then
{z ∈ U^n+1 : ℐ(Ẋ)(z _n) = z(n)}⊂ y ∩ U^n+1, and
* if Ẋ is an n-ary function symbol and ϑ(Ẋ) = (y, 1), then
{z ∈ U^n+1 : ℐ(Ẋ)(z _n) = z(n)} = y ∩ U^n+1.
We say (T, σ, 𝒰̇, ϑ) has a model if there exists ℳ for which ℳ^* (T, σ, 𝒰̇, ϑ).
We can define a notion of complexity on TCIs. This will help us subsequently classify method definitions according to their complexity.
Let ϕ be a first-order formula over a signature σ. We inductively define what it means for ϕ to be Π_n or Σ_n as n ranges over the natural numbers.
* If n = 0, then ϕ is Π_n iff ϕ is Σ_n iff ϕ is quantifier-free.
* If n = m + 1 for some m < ω, then
* ϕ is Π_n iff there is a Σ_m formula φ, a number k < ω, and variable symbols x_1, …, x_k such that
ϕ = ⌜∀ x_1 …∀ x_k φ⌝, and
* ϕ is Σ_n iff there is a Π_m formula φ, a number k < ω, and variable symbols x_1, …, x_k such that
ϕ = ⌜∃ x_1 …∃ x_k φ⌝.
Note that if k = 0 in <ref><ref> and <ref><ref>, then ϕ is Σ_m and Π_m respectively.
[Lau, <cit.>]
A TCI (T, σ, 𝒰̇, ϑ) is Π_n iff T contains only Π_n sentences.
A TCI (T, σ, 𝒰̇, ϑ) is Σ_n iff T contains only Σ_n sentences.
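As a quick illustration (an added example, not part of the original text), the filter axiom used later to build the TCI of a forcing notion is Π_2 in this sense: its matrix is quantifier-free, so the existential block yields a Σ_1 formula, and the universal prefix lifts it to Π_2.

```latex
% Added example: a \Pi_2 sentence in the sense of the definition above.
\forall p\,\forall q\,\exists r\,
  \bigl( (\dot G(p) \wedge \dot G(q)) \rightarrow
         (\dot G(r) \wedge \dot\le(r,p) \wedge \dot\le(r,q)) \bigr)
```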
As we shall see in Proposition <ref>, the existence of models of a TCI need not be absolute between transitive models of 𝖹𝖥𝖢. There is thus a fundamental difference between model existence of TCIs and that of first-order theories. This should reflect in our definition of what it means for a TCI to be consistent.
[Lau, <cit.>]
A TCI (T, σ, 𝒰̇, ϑ) is consistent iff (T, σ, 𝒰̇, ϑ) has a model in some outer model of V.
It might seem, at first glance, that the consistency of a TCI is not a first-order property in the language of set theory, since it involves quantifying over all outer models. This is not a real problem because said definition is semantically equivalent to a first-order sentence in V with parameters in V.
Let 𝔗 = (T, σ, 𝒰̇, ϑ) be a TCI. Then 𝔗 is consistent iff
⊩_Col(ω, λ)∃ℳ (“ℳ^* 𝔗"),
where λ≥ |H(|trcl(𝔗)|^+)|.
This is Lemma 5.25 of <cit.>.
Lemma <ref> gives us a uniform way of checking in V if a TCI is consistent, by appealing to a suitable forcing relation. For ease of expression and reading, we define the abbreviation
𝔄_𝔗 := (H(|trcl(𝔗)|^+); ∈) .
It is easily verifiable from the definitions of ℒ_𝔗 and 𝔄_𝔗 that ℒ_𝔗∈𝔄_𝔗, so we have [ℒ_𝔗]^< ω∈𝔄_𝔗 as well.
Morally speaking, the consistency of a theory — however it is defined — should be absolute in a strong enough sense. This is the case for first-order theories, the consistency of any of which is absolute for transitive models of set theory. The following lemma establishes a similar absoluteness property with regard to the consistency of a TCI.
Let 𝔗 = (T, σ, 𝒰̇, ϑ) be a TCI. Then 𝔗 being consistent is absolute for transitive models of 𝖹𝖥𝖢 sharing the same ordinals.
Let V' and W be transitive models of 𝖹𝖥𝖢 with ORD^V' = ORD^W and 𝔗∈ V' ⊂ W. If 𝔗 is consistent in W, then 𝔗 has a model in some outer model of W. Said outer model is also an outer model of V', so 𝔗 is consistent in V' as well.
Now assume 𝔗 is consistent in V'. Letting
λ := |H(|trcl(𝔗)|^+)|
evaluated in V', Lemma <ref> gives us
⊩_Col(ω, λ)∃ℳ (“ℳ^* 𝔗")
in V'. Note that
ℙ := Col(ω, λ)^V'
remains a forcing notion in W, so consider g a ℙ-generic filter over W. Necessarily, g is also ℙ-generic over V', and further, V'[g] ⊂ W[g]. In V'[g], 𝔗 is forced to have a model — call it ℳ. Being a model of 𝔗 is absolute for transitive models of 𝖹𝖥𝖢, so ℳ^* 𝔗 holds in W[g] too. Since W[g] is an outer model of W, 𝔗 must be consistent in W.
A triple (𝔗, 𝔄, ℙ) is generically sensible iff
* 𝔗 is a TCI,
* 𝔄 = (A; ∈, R) is a structure expanding on a transitive model of 𝖹𝖥𝖢 - 𝖯𝗈𝗐𝖾𝗋𝗌𝖾𝗍,
* ℙ is a forcing notion, and
* {𝔗, ℙ}⊂ A.
[Lau, <cit.>]
Given a generically sensible triple (𝔗, 𝔄, ℙ), a (ℙ, 𝔄)-generic model of 𝔗 is a model ℳ of 𝔗 satisfying
Σ(𝔗, ℳ) = (⋃ g) ∩ℒ_𝔗
for some ℙ-generic filter g over 𝔄. In this case, we say g witnesses ℳ is a (ℙ, 𝔄)-generic model of 𝔗. We say g witnesses a (ℙ, 𝔄)-generic model of 𝔗 iff for some (necessarily unique) ℳ, g witnesses ℳ is a (ℙ, 𝔄)-generic model of 𝔗.
We call ℳ a 𝔄-generic model of 𝔗 iff for some ℙ, (𝔗, 𝔄, ℙ) is generically sensible and ℳ is a (ℙ, 𝔄)-generic model of 𝔗.
We call ℳ a generic model of 𝔗 iff for some 𝔄 and ℙ, (𝔗, 𝔄, ℙ) is generically sensible and ℳ is a (ℙ, 𝔄)-generic model of 𝔗.
[Lau, <cit.>]
* If g witnesses ℳ is a (ℙ, V)-generic model of 𝔗, and ⋃ g ⊂ℒ_𝔗, then V[g] = V[ℳ].
* In the same vein as Observation <ref>, we see that given any consistent Π_2 TCI 𝔗,
∀ x ( x is a (ℙ(𝔗), 𝔄_𝔗) -generic model of 𝔗
x is a (ℙ(𝔗), V) -generic model of 𝔗)
in every outer model of V. As a result, we can safely talk about (ℙ(𝔗), V)-generic models of 𝔗 without the need to quantify over all sets.
Our definition of a generic model is actually born from a nice characterisation of what we expect a generic model to be.
Let 𝔗 be a TCI. If ℳ is a model of 𝔗 in some forcing extension of V, then ℳ is a V-generic model of 𝔗.
This is Lemma 5.30 of <cit.>.
§ LOCAL METHOD DEFINITIONS
If we look at forcing as a way to uniformly describe by intension members of a subset of 𝐍(V) for some CTM V, where the evaluation of intension is done outside V, then we quickly realise that said description can be very simple. We essentially just
* shortlist a class of structures in V — forcing notions augmented with predicates for their dense subsets — and
* describe how we pick substructures — generic filters — of these structures outside V.
Note also that the description of each substructure 𝔄' involves only information about its associated superstructure 𝔄, so we expect every reasonable universe containing 𝔄 to see that 𝔄' fits the description. This is analogous to (albeit stronger than) the kind of local definability one would expect the state transition function of a typical machine model of computation to satisfy.
§.§ Locally Definable Methods of Small Extensions
This idea of generating new structures outside the universe based locally on recipes defined in the universe can be formalised rather conveniently using the language of TCIs.
Let V be a CTM. A set X ⊂ V is a 𝐌_S(V) method definition of small extensions (henceforth, 𝐌_S(V) method definition) iff it is non-empty and contains only TCIs.
If V is a CTM and 𝔗∈ V is a TCI, then the evaluation of 𝔗, denoted Eval^V(𝔗), is the set
{V[ℳ] : ∃ W ∃ℳ (W ∈𝐌(V) ∧ℳ∈ W ∧ℳ^* 𝔗)}.
Let V be a CTM. A set X ⊂ V is a 𝐌_S(V) local method definition of small extensions (henceforth, 𝐌_S(V) local method definition) iff it is a 𝐌_S(V) method definition and X is definable (possibly as a proper class) in V with parameters in V.
It might not be immediately obvious that whenever there are W and ℳ for which ℳ∈ W ∈𝐌(V) and ℳ^* 𝔗 for some TCI 𝔗∈ V, V[ℳ] must exist. The next proposition shows that one can translate between models of a TCI 𝔗 and subsets of a set associated with 𝔗, in an absolute and uniform manner.
There are formulas ϕ and ψ in the language of set theory with the following properties:
* ϕ and ψ have two and three free variables respectively,
* ϕ defines a function from the class of all TCIs into the universe,
* ψ defines a function from the class
{(𝔗, ℳ) : 𝔗 is a TCI and ℳ^* 𝔗}
into the universe,
* ϕ and ψ are absolute for transitive models of 𝖹𝖥𝖢,
* whenever 𝔗 is a TCI, the relation
R_𝔗 := {(ℳ, x) : ψ(𝔗, ℳ, x)}
is one-one,
* for all 𝔗, ℳ and x, ψ(𝔗, ℳ, x) implies there is ℒ for which ϕ(𝔗, ℒ) and x ⊂ℒ, and
Let 𝔗 be a TCI. Using only information about 𝔗, we will constructively define a set ℒ. Set
σ' := σ∪{𝒰̇}, and
U := the unique y for which there exists z such that ϑ(𝒰̇) = (y, z).
For Ẋ∈σ', define ℒ(Ẋ) as follows:
* if Ẋ is a constant symbol and ϑ(Ẋ) = (y, z), then
ℒ(Ẋ) := {⌜Ẋ = x ⌝ : x ∈ y ∩ U},
* if Ẋ is an n-ary relation symbol and ϑ(Ẋ) = (y, z), then
ℒ(Ẋ) := {⌜Ẋ(x) ⌝ : x ∈ y ∩ U^n}, and
* if Ẋ is an n-ary function symbol and ϑ(Ẋ) = (y, z), then
ℒ(Ẋ) := {⌜Ẋ(x _n) = x(n) ⌝ : x ∈ y ∩ U^n+1}.
Then
ℒ' := ⋃{ℒ(Ẋ) : Ẋ∈σ'}, and
ℒ := the closure of ℒ' under negation.
This construction is Δ_0-definable in the language of set theory with a single parameter, 𝔗, so it is absolute for transitive models of 𝖹𝖥𝖢. Use ϕ to denote this way of defining ℒ from 𝔗.
Next let ℳ = (M; ℐ) be a model of 𝔗. Set
U(ℳ) := {⌜𝒰̇(x) ⌝ : x ∈ M}∪{⌜¬𝒰̇(x) ⌝ : x ∈ U ∖ M}.
Now define ψ as follows:
ψ(𝔗, ℳ, Σ) ( 𝔗 is a TCI and ℳ^* 𝔗 and
Σ = (U(ℳ) ∪ Diag(ℳ)) ∩ℒ_𝔗) ,
where Diag(ℳ) is the atomic diagram of ℳ and ℒ_𝔗 is the unique ℒ for which ϕ(𝔗, ℒ). Verily, ψ is a Δ_1 formula, because the binary relation ^* is Δ_1-definable and the set ℒ_𝔗 is Δ_1-definable with parameter 𝔗. As such, ψ must be absolute for transitive models of 𝖹𝖥𝖢. That <ref> holds is straightforward based on the definition of ψ.
In the presence of Fact <ref>, Proposition <ref> gives validity to Definition <ref>: the function Eval^V taking TCIs in V to subsets of 𝐌_S(V) actually exists. Fix formulas ϕ and ψ as in Proposition <ref>. Let
ℒ_𝔗 := the unique ℒ for which ϕ(𝔗, ℒ) , and
Σ(𝔗, ℳ) := the unique x for which ψ(𝔗, ℳ, x) .
Given a TCI 𝔗 = (T, σ, 𝒰̇, ϑ) and sets y, z with ϑ(𝒰̇) = (y, z), we should think of ℒ_𝔗 as containing all the possible atomic sentences over σ that involve members of y as parameters. Then for any model ℳ of 𝔗, Σ(𝔗, ℳ) (⊂ℒ_𝔗, by <ref> of Proposition <ref>) can be thought of as the “𝔗-specific atomic diagram” of ℳ. According to <ref> of Proposition <ref>, we can always recover ℳ in a transitive model of 𝖹𝖥𝖢 from 𝔗 and Σ(𝔗, ℳ) alone. As a consequence,
V[ℳ] = V[x] if V is a CTM and ψ(𝔗, ℳ, x) for some TCI 𝔗∈ V .
Next, observe that we can “cover” the entire small outward multiverse with a local method definition.
Let V be a CTM. There is a 𝐌_S(V) local method definition X containing only Π_0 (= Σ_0) TCIs such that
⋃ ((Eval^V)" X) = 𝐌_S(V) .
Fix a distinguished unary relation symbol 𝒰̇ and let s ∈ V. Define
ϑ_s := {(𝒰̇, (s, 0))}
𝔗_s := (∅, ∅, 𝒰̇, ϑ_s) .
Then 𝔗_s is a Π_0 TCI, and its models in any outer model W of V are exactly the subsets of s in W. We are done by Fact <ref> if we set X := {𝔗_s : s ∈ V}.
We end off this subsection by demonstrating that forcing can be regarded as a local method definition.
Unless stated otherwise, we work within a fixed CTM V for the rest of this section, so that all mentions of 𝐌_S(V) in (local) method definitions can be conveniently suppressed.
Let ℙ = (P, ≤_ℙ) be a partial order. Then there is a TCI 𝔗(ℙ) = (T, σ, 𝒰̇, ϑ) such that for a fixed unary relation symbol Ẋ∈σ, if ℳ is any set in an outer model of V, then
ℳ^* 𝔗(ℙ) {p : ℳẊ(p)} is a ℙ-generic filter over V .
This is found in the proof of Theorem 5.34 of <cit.>, but we reproduce it the key portions here.
Choose 𝒰̇, ≤̇ and Ġ to be distinct relation symbols of arities 1, 2 and 1 respectively. For each dense subset D of ℙ, choose a fresh unary relation symbol Ḋ. Set σ to be
{≤̇, Ġ}∪{Ḋ : D is a dense subset of ℙ}.
We define ϑ on {𝒰̇}∪σ as follows:
ϑ(𝒰̇) := (P, 1)
ϑ(≤̇) := (≤_ℙ, 1)
ϑ(Ġ) := (P, 0)
ϑ(Ḋ) := (D, 1) for each dense subset D of ℙ.
Now, have T contain only the sentences
⌜∀ p ∀ q ∃ r ((Ġ(p) ∧Ġ(q)) (Ġ(r) ∧≤̇(r, p) ∧≤̇(r, q))) ⌝,
⌜∀ p ∀ q ((≤̇(p, q) ∧Ġ(p)) Ġ(q)) ⌝, as well as all members of
{⌜∃ p (Ġ(p) ∧Ḋ(p)) ⌝ : D is a dense subset of ℙ}.
Let 𝔗(ℙ) := (T, σ, 𝒰̇, ϑ). It is clear from our definition of 𝔗(ℙ) that whenever ℳ^* 𝔗(ℙ), the set {p : ℳẊ(p)} is a ℙ-generic filter over V, and vice versa.
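For concreteness (an illustration added here, assuming Cohen forcing is taken as the input), instantiating the construction above with ℙ_c = (2^{<ω}, ⊇) gives the interpretation constraints below; in any outer model of V, a model of 𝔗(ℙ_c) is then precisely a structure whose interpretation of Ġ is a Cohen-generic filter over V.

```latex
% Added illustration: the TCI of Cohen forcing.
\vartheta(\dot{\mathcal U}) = (2^{<\omega}, 1), \qquad
\vartheta(\dot\le) = (\{(s,t) \in 2^{<\omega}\times 2^{<\omega} : s \supseteq t\}, 1), \qquad
\vartheta(\dot G) = (2^{<\omega}, 0),
\vartheta(\dot D) = (D, 1) \quad \text{for each dense } D \subseteq 2^{<\omega}.
```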
Forcing is expressible as a local method definition.
The proof of Proposition <ref> is constructive, and can be made uniform across all possible ℙ by choosing the symbols 𝒰̇, ≤̇ and Ġ in advance. This allows the function
(F : {forcing notions}⟶ V) [ℙ↦𝔗(ℙ)]
to be definable in V. Obviously, dom(F) is definable in V, so ran(F) must be as well.
Let us choose a definable function F as in the proof of Corollary <ref> and name it 𝔗(·), for later use and reference. Also, we shall use 𝖥𝗀 to denote the local method definition of forcing; in other words, 𝖥𝗀 := ran(𝔗(·)).
§.§ Complexity of Local Method Definitions
What does it mean for a definition to be complex? Long, overwrought, convoluted. These are just some synonyms that may come to mind. In general, a more complex definition places more requirements on the object, or the class of objects, it defines. In the former scenario, it makes the object it defines a priori less likely to exist; in the latter one, it defines a comparatively smaller class of objects. Following this intuition, we formalise a way of comparing two local method definitions.
Let X, Y be local method definitions of a CTM V. We use
* X ≤^M_w Y to denote the statement
“for each consistent 𝔗∈ X there is 𝔗' ∈ Y such that
∅≠Eval^V(𝔗') ⊂Eval^V(𝔗) ”,
and
* X ≤^M Y to denote the statement
“ there is a function F : X ⟶ Y definable in V such that
∅≠Eval^V(F(𝔗)) ⊂Eval^V(𝔗) for all consistent 𝔗∈ X ”.
When said statement is true, we say F witnesses X ≤^M Y.
Let X, Y be local method definitions. We say
* X ≡^M_w Y iff X ≤^M_w Y and Y ≤^M_w X,
* X ≡^M Y iff X ≤^M Y and Y ≤^M X,
* X <^M_w Y iff X ≤^M_w Y and Y ≰^M_w X, and
* X <^M Y iff X ≤^M Y and Y ≰^M X.
* ≤^M_w and ≤^M are transitive relations, so ≡^M_w and ≡^M are equivalence relations.
* ≤^M_w and ≤^M, as subclasses of V, are only definable outside V, for their definitions require quantification over proper subclasses of V.
* Obviously, X ≤^M Y always implies X ≤^M_w Y, so ≤^M_w is weaker than ≤^M.
Intuitively, Y is more complex than X as a definition when X ≤^M_w Y or X ≤^M Y, because Y both refines and extends X. Refinement occurs because no matter which set a description of X picks out, Y contains a description that picks out a smaller non-empty set. Extension occurs because Y may have a description pick out a set that is not covered by any description of X. The difference between the two relations then boils down to whether a witness to said refinement and extension exists in V.
It would be good if V could decide (albeit not uniformly) whether X ≤^M Y for arbitrary local method definitions X and Y. Unfortunately, there seems to be no straightforward indication of that: it is not clear if V is always privy to a proof of X ≤^M Y. For certain pairs (X, Y) though, X ≤^M Y is provable in 𝖹𝖥𝖢, and so V must know it is true.
Let X, Y be local method definitions. If X ⊂ Y, then X ≤^M Y.
The identity map on X is definable in V and witnesses X ≤^M Y.
There is a greatest local method definition with respect to ≤^M.
Let Y be the class of all TCIs. Clearly Y is a local method definition. Moreover, X ⊂ Y for every local method definition X. By Proposition <ref>, X ≤^M Y for every local method definition X.
A local method definition is not smallest with respect to ≤^M iff it contains a consistent TCI.
Clearly, every local method definition containing no consistent TCIs is smallest with respect to ≤^M. For the converse, by Proposition <ref>, it suffices to show that for every x ∈ V, there is a small extension of V not generated by a subset of x. But this is implied by the forcing notion Col(|x|^+, |x|^+)^V adding no subsets of x over V.
We now define a natural hierarchy on the class of TCIs.
For n < ω, we have the following local method definitions:
Π^𝖬_𝗇 := {𝔗 : 𝔗 is a Π_n TCI}
Σ^𝖬_𝗇 := {𝔗 : 𝔗 is a Σ_n TCI}.
Let n < ω. Then
* Π^𝖬_𝗇≤^M Π^𝖬_𝗇+1,
* Σ^𝖬_𝗇≤^M Σ^𝖬_𝗇+1,
* Π^𝖬_𝗇≤^M Σ^𝖬_𝗇+1, and
* Σ^𝖬_𝗇≤^M Π^𝖬_𝗇+1.
By Proposition <ref>.
By Proposition <ref>,
{Π^𝖬_𝗇 : n < ω}∪{Σ^𝖬_𝗇 : n < ω}
forms a hierarchy of local method definitions with ≤^M-predecessor sets that grow with n. We shall call this the local method hierarchy.
Mathematics and computer science are replete with hierarchies similar to the local method hierarchy, where syntactic forms of defining formulas are used to categorise sets. Examples include the projective, arithmetical and polynomial hierarchies. If we think of TCIs as augmentations of first-order theories with added constraints that are bounded but not first-order definable, then the local method hierarchy segregates TCIs based only on their first-order parts.
It turns out that most of the Π^𝖬_𝗇's are unnecessary in this hierarchy.
Let n satisfy 1 ≤ n < ω. For every 𝔗∈Π^𝖬_𝗇+1 there are
* 𝔗' ∈Σ^𝖬_𝗇, and
* a formula θ with two free variables,
such that
* θ is absolute for outer models of V, and
* in every outer model of V, θ defines a bijection from the set of models of 𝔗 into the set of models of 𝔗'.
Let
* 1 ≤ n < ω,
* 𝔗 = (T, σ, 𝒰̇, ϑ) ∈Π^𝖬_𝗇+1, and
* ϑ(𝒰̇) = (y, z).
We shall first construct 𝔗' ∈Σ^𝖬_𝗇 from 𝔗. Expand the signature σ to σ' by adding
* a unary relation symbol Ṫ, as well as
* a constant symbol ċ for each c ∈ y,
all of which are new to σ and distinct from one another. Define ϑ' point-wise as follows:
ϑ'(𝒰̇) := (y, 1)
ϑ'(Ẋ) := (y', 0) whenever Ẋ∈σ and ϑ(Ẋ) = (y', z')
ϑ'(Ṫ) := (y, z)
ϑ'(ċ) := ({c}, 0) for all c ∈ y .
Next, initialise T^* to be
T ∪{⌜∀ x_1 …∀ x_k ∃ x_k+1 (Ḟ(x_1, …, x_k) = x_k+1) ⌝
: Ḟ∈σ is a k-ary function symbol}.
For each Ẋ∈σ, so that ϑ(Ẋ) is of the form (y', z'), do the following.
* Ẋ is a k-ary function symbol. Without loss of generality, we can assume y' ⊂ y^k+1. Add to T^* every member of the set
{ ⌜Ẋ(ċ_1, …, ċ_k) = ċ_k+1Ṫ(ċ_1) ∧…∧Ṫ(ċ_k+1) ⌝
: (c_1, …, c_k+1) ∈ y'}.
If in addition, z' = 1, then also add to T^* every member of the set
{ ⌜Ṫ(ċ_1) ∧…∧Ṫ(ċ_k+1) Ẋ(ċ_1, …, ċ_k) = ċ_k+1⌝
: (c_1, …, c_k+1) ∈ y'}.
* Ẋ is a k-ary relation symbol. Without loss of generality, we can assume y' ⊂ y^k. Add to T^* every member of the set
{ ⌜Ẋ(ċ_1, …, ċ_k) Ṫ(ċ_1) ∧…∧Ṫ(ċ_k) ⌝
: (c_1, …, c_k) ∈ y'}.
If in addition, z' = 1, then also add to T^* every member of the set
{ ⌜Ṫ(ċ_1) ∧…∧Ṫ(ċ_k) Ẋ(ċ_1, …, ċ_k) ⌝
: (c_1, …, c_k) ∈ y'}.
* Ẋ is a constant symbol. Without loss of generality, we can assume y' ⊂ y. Add to T^* the sentence
⌜Ṫ(Ẋ) ⌝.
Now T^*, like T, contains only Π_n+1 sentences.
Fix any formula ϕ∈ T^*. Then ϕ is of the form
⌜∀ x_1 …∀ x_k φ⌝,
where k < ω and φ is a Σ_n formula of which leading quantifier — should it exist — is not a universal quantifier. Note that such k and φ are unique for ϕ. If a∈^k{ċ : c ∈ y}, use ϕ_a to denote the result of running the following procedure on ϕ.
* For each subformula ψ containing at least one quantifier, in descending order of length (which is necessarily linear due to ϕ being in prenex normal form), do as per the cases below.
* ψ = ⌜∀ x ψ' ⌝ for some x and ψ'. In this case, replace ψ' with the string
⌜ (Ṫ(x) ψ') ⌝.
* ψ = ⌜∃ x ψ' ⌝ for some x and ψ'. In this case, replace ψ' with the string
⌜ (Ṫ(x) ∧ψ') ⌝.
* For each i such that 1 ≤ i ≤ k, remove all instances of the string ⌜∀ x_i ⌝.
* Substitute a(i - 1) for every instance of x_i whenever 1 ≤ i ≤ k.
It is not hard to verify that the aforementioned procedure is well-defined and produces a Σ_n sentence over σ'. As a result,
T_ϕ := {ϕ_a : a∈^k{ċ : c ∈ y}}
is a set of Σ_n sentences over σ'. We set
T' := ⋃{T_ϕ : ϕ∈ T^*}.
so that 𝔗' := (T', σ', 𝒰̇, ϑ') ∈Σ^𝖬_𝗇. Then the following hold true in every outer model of V.
* Given any model ℳ of 𝔗',
(Ṫ^ℳ; σ^ℳ)
is a model of 𝔗.
* Every model of 𝔗 can be uniquely and constructively extended and expanded to a model of 𝔗'.
It is clear that the transformations involved in <ref> and <ref> are absolute for outer models of V.
An important upshot of Lemma <ref> is the corollary below.
Π^𝖬_𝗇+1≤^M Σ^𝖬_𝗇 for all n satisfying 1 ≤ n < ω.
We are interested in how 𝖥𝗀 might fit into the local method hierarchy. To that end, let us first make a simple observation.
𝖥𝗀≤^M Σ^𝖬_1.
For all forcing notions ℙ, the TCI 𝔗(ℙ) is always a member of Π^𝖬_2 by the proof of Proposition <ref>. That 𝔗(·) is definable in V makes it a witness to 𝖥𝗀≤^M Π^𝖬_2. Lemma <ref> then implies 𝖥𝗀≤^M Σ^𝖬_1.
Let 𝔗∈ V be a consistent TCI and ℙ = (P, ≤_ℙ) be a forcing notion such that
* ≤_ℙ = ⊃∩ P,
* every member of P is a finite set, and
* ⊩_ℙ^V“⋃Ġ⊂ℒ_𝔗 and Ġ witnesses a (ℙ, V̇)-generic model of 𝔗”,
where Ġ and V̇ are the canonical names for the generic filter on ℙ and for the ground model, respectively. Then ∅≠Eval^V(𝔗(ℙ)) ⊂Eval^V(𝔗).
First, Eval^V(𝔗(ℙ)) is exactly the set of all ℙ-generic extensions of V, so ∅≠Eval^V(𝔗(ℙ)).
Let U ∈Eval^V(𝔗(ℙ)), so that U = V[g] for some ℙ-generic filter g over V. By <ref>, there is ℳ∈ U such that ℳ^* 𝔗, which implies V[ℳ] ⊂ U and V[ℳ] ∈Eval^V(𝔗). That
⊩_ℙ^V “⋃Ġ⊂ℒ_𝔗”
means ⋃ g is definable from ℳ in any transitive model of 𝖹𝖥𝖢, so ⋃ g ∈ V[ℳ]. To show U ⊂ V[ℳ], it suffices to show g is recoverable from ⋃ g in V[ℳ] with the help of parameters in V.
We claim g = [⋃ g]^< ω∩ P. Clearly g ⊂ [⋃ g]^< ω∩ P due to <ref>. Next assume p ∈ [⋃ g]^< ω∩ P. Then for each x ∈ p, there must be some q_x ∈ g containing x. As p is a finite set and g is a filter on ℙ,
S := {q_x : x ∈ p}
has a common extension, say q, in g. By <ref>, p ⊂⋃ S ⊂ q, so also p ∈ g. This proves our claim as well as the lemma.
Lemma <ref> provides a direction towards proving Σ^𝖬_1≤^M 𝖥𝗀: we can try to define a function F on Σ^𝖬_1 such that whenever 𝔗∈Σ^𝖬_1 is consistent, F(𝔗) = 𝔗(ℙ) for some forcing notion ℙ satisfying the hypothesis of Lemma <ref> in conjunction with 𝔗.
Putting aside 𝖥𝗀 for a moment, let us consider the local method hierarchy in and of itself. Notice that we have neither proven nor disproven anything about the size of
{Π^𝖬_𝗇 : n < ω}∪{Σ^𝖬_𝗇 : n < ω} modulo ≡^M ,
or equivalently,
{Π^𝖬_1}∪{Σ^𝖬_𝗇 : n < ω} modulo ≡^M ,
besides the obvious fact that it is countable and non-zero. Indeed, there seems to be no easy way of separating the rungs of the hierarchy as yet. This appears in stark contrast with the more renowned arithmetical and projective hierarchies, where separation happens “everywhere”. However, by no means is this a reason to dismiss (our definition of) the hierarchy, or discourage the study thereof. One need not look far to find a well-studied hierarchy of the same ilk with the same “issue”: the polynomial hierarchy, in which separation of any kind is equivalent to 𝖯≠𝖭𝖯.
* Are there m, n < ω for which Σ^𝖬_𝗆≢^M Σ^𝖬_𝗇?
* Is it true that Π^𝖬_1≤^M Σ^𝖬_0?
* Is there a TCI 𝔗 such that {𝔗}≰^M 𝖥𝗀?
§ CATEGORISING FORCING
In this section, we complete what we started in Subsection <ref>, and associate set forcing with a rung of the local method hierarchy. Additionally, we will study different witnesses to the fact that Π^𝖬_2≤^M 𝖥𝗀.
§.§ Forcing is Σ_1 (is Π_2)
This subsection is dedicated to showing 𝖥𝗀≡^M Σ^𝖬_1. One direction of the proof is done in Proposition <ref>. For the other direction, we will identify a witness to Π^𝖬_2≤^M 𝖥𝗀. This witness can be defined without referencing any witness to Π^𝖬_2≤^M Σ^𝖬_1.
Let 𝔗∈ V be a TCI. Define
P(𝔗) := {p ∈ [ℒ_𝔗]^< ω : ∃ W ∈𝐌(V) ∃ℳ (ℳ∈ W , ℳ^* 𝔗 and p ⊂Σ(𝔗, ℳ))}
It may not be clear that P(𝔗) is a member of V for arbitrary 𝔗∈ V. We prove this in the next lemma.
Let 𝔗∈ V be a TCI. Define
P'(𝔗) := {p ∈ [ℒ_𝔗]^< ω : ⊩^V_Col(ω, |𝔄_𝔗|)∃ℳ (“ℳ^* 𝔗 and p ⊂Σ(𝔗, ℳ) ”)}
Let 𝔗∈ V be a TCI. Then P(𝔗) = P'(𝔗), so there is a definition of P(𝔗) uniform over all TCIs 𝔗 in V.
Noting that |𝔄_𝔗| = |trcl(𝔄_𝔗)|, this is essentially the proof of Lemma 3.35 of <cit.> with different nomenclature.
For each TCI 𝔗∈ V, set
ℙ(𝔗) := (P(𝔗), ⊃∩ P(𝔗)) .
Now that ℙ(𝔗) has been defined for all TCIs 𝔗, it is clear from Lemma <ref> and Proposition <ref> that Question 5.70 of <cit.> can be answered in the affirmative. In other words, letting
𝒞 := {𝔗 : 𝔗 is a Σ_1 TCI},
we can conclude the following:
* 𝒞⊊{𝔗 : 𝔗 is a Π_2 TCI}, and
* for each forcing notion ℙ, there is 𝔗∈𝒞 for which
* ℙ and ℙ(𝔗) are forcing equivalent, and
*
{V[ℳ] : ℳ^* 𝔗 in an outer model of V} = {V[g] : g is ℙ-generic over V},
so that also
{V[ℳ] : ℳ^* 𝔗 in an outer model of V}
= {V[g] : g is ℙ(𝔗) -generic over V}.
The definable function 𝔗↦𝔗(ℙ(𝔗)), restricted to Π^𝖬_2, will be our witness to Π^𝖬_2≤^M 𝖥𝗀. A trivial observation is that <ref> and <ref> of Proposition <ref> hold for ℙ(𝔗) and P(𝔗) in place of ℙ and P respectively. Furthermore, it is always true that
⊩_ℙ(𝔗)⋃Ġ⊂ℒ_𝔗,
so we are left to prove
⊩_ℙ(𝔗)“Ġ witnesses a (ℙ(𝔗), V̇) -generic model of 𝔗”
for every consistent Π_2 TCI 𝔗. In <cit.> this is done by appealing to the more general framework of forcing with language fragments. Fix a consistent Π_2 TCI 𝔗. Let us hereby briefly outline the proof of (<ref>).
In the aforementioned forcing framework, we allow potentially any set to be interpreted as a language, by extending the negation operator from classical first order logic to all sets. In other words, we define a canonical negation function on V as follows:
¬ x :=
y if x = ⌜¬ y ⌝ for some y
⌜¬ x ⌝ otherwise.
A set ℒ is closed under negation iff for each ϕ∈ℒ, ¬ϕ∈ℒ. The aim of the framework is to study certain definable subsets of a set closed under negation from the perspective of a larger structure.
A structure 𝔄 is ℒ-suitable iff it expands on a model of a sufficiently strong set theory (𝖹𝖥𝖢 - 𝖯𝗈𝗐𝖾𝗋𝗌𝖾𝗍 is more than enough) and ℒ is a definable class in 𝔄. We define the language ℒ^*_𝔄 by enlarging the signature of 𝔄 with members of its base sets as constants and a fresh unary relation symbol ⌜ E ⌝. Morally, ⌜ E ⌝ is to be interpreted as a subset of ℒ when 𝔄 is ℒ-suitable. ℒ^*_𝔄 thus enables us to impose first-order requirements on subsets of ℒ.
We say “Σ Γ(ℒ, 𝔄)-certifies p” iff
* ℒ is closed under negation and 𝔄 is ℒ-suitable,
* 𝔄, augmented with the predicate Σ that interprets ⌜ E ⌝, satisfies the theory Γ⊂ℒ^*_𝔄, and
* p ⊂Σ.
Syntactically, it makes sense to talk about Π_n and Σ_n formulas and sentences in ℒ^*_𝔄, although these classes are defined a little differently from their counterparts in a typical set-theoretic context. Such a syntactic classification turns out to have interesting implications on forcing constructions.
There are a few lemmas, in varying degrees of generality, that connect genericity to relations akin to “Γ(ℒ, 𝔄)-certification”. The following is the most relevant to our intended use case.
Let W, λ, 𝔄, ℒ, Γ, P, ℙ and g be such that
* Γ contains only Π_2 ℒ^*_𝔄 sentences,
* |trcl(𝔄)| ≤λ,
* P = {p ∈ [ℒ]^< ω : ⊩_Col(ω, λ)∃Σ (“Σ Γ(ℒ, 𝔄) -certifies p")}≠∅,
* ℙ = (P, ⊃∩ P),
* ℙ is a definable class in 𝔄,
* W is an outer model of V, and
* g ∈ W is a ℙ-generic filter over 𝔄.
Then ⋃ g Γ(ℒ, 𝔄)-certifies ∅.
In particular, if g is ℙ-generic over V, then ⋃ g Γ(ℒ, 𝔄)-certifies ∅ in V[g] = V[⋃ g].
This is (implied by) Lemma 3.39 of <cit.>.
The next lemma allows us to transform an arbitrary Π_2 TCIs into a form amenable with our forcing framework, so that Lemma <ref> can be applied.
For each Π_2 TCI 𝔗 there is Γ_𝔗 such that
* Γ_𝔗 contains only Π_2 (ℒ_𝔗)^*_𝔄_𝔗 sentences, and
* for every set x,
∃ℳ (ℳ^* 𝔗 and Σ(𝔗, ℳ) = x) x Γ_𝔗 (ℒ_𝔗, 𝔄_𝔗)-certifies ∅.
This is implied by Lemmas 5.17 and 5.22 of <cit.> (cf. Proposition <ref>).
We can now derive (<ref>) from Lemmas <ref> and <ref>.
Let 𝔗 = (T, σ, 𝒰̇, ϑ) be a consistent Π_2 TCI. Then every ℙ(𝔗)-generic filter over V witnesses a (ℙ(𝔗), V)-generic model of 𝔗, or equivalently, (<ref>) holds.
We first apply Lemma <ref> to obtain a Γ_𝔗 such that
* Γ_𝔗 contains only Π_2 (ℒ_𝔗)^*_𝔄_𝔗 sentences, and
* for every set x,
∃ℳ (ℳ^* 𝔗 and Σ(𝔗, ℳ) = x) x Γ_𝔗 (ℒ_𝔗, 𝔄_𝔗)-certifies ∅.
Next, note that
* Observation <ref> holds,
* |𝔄_𝔗| = |trcl(𝔄_𝔗)|, and
* for all x and p,
x Γ_𝔗 (ℒ_𝔗, 𝔄_𝔗)-certifies p (x Γ_𝔗 (ℒ_𝔗, 𝔄_𝔗)-certifies ∅ and p ⊂ x) .
Let g be a ℙ(𝔗)-generic filter over V. Then the theorem follows directly from Lemma <ref>, as the hypotheses of said lemma are satisfied with
* V[g] in place of W,
* |𝔄_𝔗| in place of λ,
* 𝔄_𝔗 in place of 𝔄,
* ℒ_𝔗 in place of ℒ,
* Γ_𝔗 in place of Γ,
* P(𝔗) in place of P, and
* ℙ(𝔗) in place of ℙ,
bearing in mind <ref> to <ref>.
It should be emphasised that the proof of Lemma <ref> provides a uniform way of constructing Γ_𝔗 from any TCI 𝔗, such that <ref> of the lemma is satisfied. If in addition, 𝔗 is Π_2, then the Γ_𝔗 constructed also satisfies <ref> of Lemma <ref>. We shall hereby have Γ_𝔗 denote the result of the aforementioned construction with 𝔗 as its starting point.
As a corollary, we observe a rather strong failure of the converse of Proposition <ref>.
There are local method definitions X and Y such that X <^M Y and
⋃ ((Eval^V)" Y) ⊊⋃ ((Eval^V)" X) .
Let 𝖲𝗍 := {𝔗_s : s ∈ V} be as in Proposition <ref>. By (<ref>),
⋃ ((Eval^V)" 𝖥𝗀) ⊊⋃ ((Eval^V)" 𝖲𝗍) .
Since 𝖲𝗍⊂Π^𝖬_0, by Proposition <ref> and Theorem <ref>, 𝖲𝗍≤^M 𝖥𝗀. We are left to show 𝖥𝗀≰^M 𝖲𝗍.
Choose any forcing notion ℙ satisfying V ∉Eval^V(𝔗(ℙ)). If s is finite, then Eval^V(𝔗_s) = {V}⊄Eval^V(𝔗(ℙ)). Now assume s is infinite, and let f be a bijection from |s| into s. Apply an argument similar to that through which (<ref>) was justified, to obtain some r ⊂ω such that L^V[r] is an outer model of V, but no outer model of L^V[r] is a forcing extension of V. Then V[f" r] ∈Eval^V(𝔗_s) is an outer model of L^V[r], so Eval^V(𝔗_s) ⊄Eval^V(𝔗(ℙ)). We have thus proved that 𝖥𝗀≰^M_w 𝖲𝗍, and this completes the proof.
The proof of Corollary <ref> intimates that for TCIs with very simple theories, we can always construct a non-generic model. We cannot do the same for all Π_2 TCIs because of Proposition <ref>. Together, they make us wonder if a clear line can be drawn in V. Let
NG := {𝔗∈ V : 𝔗 is a Π_2 TCI and
∃ W ∃ℳ∈ W ∀ x ∈ W (W is an outer model of V and
ℳ^* 𝔗 and x ≇ℳ whenever x is a V -generic model of 𝔗)}.
[Lau, <cit.>]
Is NG definable in V?
§.§ A Strengthening
In this subsection, we build on Theorem <ref> to achieve a strengthening of the statement “Π^𝖬_2≤^M 𝖥𝗀”. This stronger statement appears in the form of Theorem <ref>. To start, let us recall some definitions and facts from order theory.
Let ℙ = (P, ≤_ℙ) be a forcing notion. A set Q is an upward closed subset of ℙ iff Q ⊂ P and for all p, q ∈ P,
(q ∈ Q and q ≤_ℙ p) p ∈ Q .
If ℙ = (P, ≤_ℙ) is a forcing notion and p ∈ P, we let g_p (ℙ) denote the set
{q ∈ P : p ∥_ℙ q}.
Let ℙ = (P, ≤_ℙ) be a forcing notion. A member p of P is an atom of ℙ iff
∀ q_1 ∀ q_2 ((q_1 ≤_ℙ p and q_2 ≤_ℙ p) → q_1 ∥_ℙ q_2).
If ℙ = (P, ≤_ℙ) is a forcing notion and p is an atom of ℙ, then g_p (ℙ) is a ℙ-generic filter over V.
If D is dense in ℙ, then there is q ∈ D with q ≤_ℙ p. Obviously, q ∈ g_p (ℙ). Therefore g_p (ℙ) is a ℙ-generic subset over V. To see that g_p (ℙ) is a filter, let q_1 and q_2 be members of g_p (ℙ). By the definition of g_p (ℙ), there are r_1 and r_2 such that
* r_1 ≤_ℙ q_1,
* r_1 ≤_ℙ p,
* r_2 ≤_ℙ q_2,
* r_2 ≤_ℙ p.
As p is an atom of ℙ, it must be the case that r_1 ∥_ℙ r_2, which means q_1 ∥_ℙ q_2.
A forcing notion ℙ = (P, ≤_ℙ) is atomless iff no member of P is an atom of ℙ.
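A standard example (added for illustration): Cohen forcing (2^{<ω}, ⊇) is atomless, since any condition s has two incompatible one-bit extensions.

```latex
% Added example: no condition of Cohen forcing is an atom.
s^\frown\langle 0\rangle \perp s^\frown\langle 1\rangle
  \quad \text{for every } s \in 2^{<\omega}.
```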
A non-empty atomless forcing notion gives rise to many forcing extensions.
Let V be a CTM such that
V “ℙ = (P, ≤_ℙ) is a non-empty atomless forcing notion”.
Then |Eval^V(𝔗(ℙ))| = 2^ℵ_0.
As all members of Eval^V(𝔗(ℙ)) are countable, each one of them contains only countably many ℙ-generic filters over V. By Proposition <ref>, we just need to show there are 2^ℵ_0 many ℙ-generic filters over V.
The idea is to construct a Cantor scheme differentiating the generic filters in question. Outside V, there are countably many dense subsets of ℙ, so let {D_n : n < ω} enumerate them. Define members of the set {p_s : s ∈ 2^< ω} such that
* p_∅∈ P,
* p_s ∈ D_n if |s| = n + 1,
* p_s_0≤_ℙ p_s_1 if s_1 ⊂ s_0, and
* p_s_0 ⊥_ℙ p_s_1 if s_1 ⊄s_0 and s_0 ⊄s_1.
This can be done by induction on the length of s. Choose any condition of ℙ to be p_∅. Assume next that p_s has been defined. Since p_s is not an atom of ℙ, we can find q_0 and q_1 extending p_s in ℙ such that q_0 ⊥_ℙ q_1. The density of D_|s| guarantees there are q'_0, q'_1 ∈ D_|s| extending q_0 and q_1 in ℙ, respectively. Set
p_s^⌢⟨ 0 ⟩ := q'_0
p_s^⌢⟨ 1 ⟩ := q'_1 .
It is not hard to verify <ref> to <ref> hold for the p_ss defined as such.
Given r ∈ 2^ω, use g_r to denote the set
{q ∈ P : ∃ n < ω (p_r n≤_ℙ q)}.
Now g_r is a ℙ-generic filter over V whenever r ∈ 2^ω. If r_0, r_1 ∈ 2^ω and r_0 ≠ r_1, then r_0 n ≠ r_1 n for some n < ω. By <ref> we have p_r_0 n ⊥_ℙ p_r_1 n, so g_r_0≠ g_r_1. We are done because obviously, |2^ω| = 2^ℵ_0.
Models of a TCI 𝔗 across all outer models of V can be very complicated. However, there are certain models of which atomic diagrams can be easily read off ℙ(𝔗).
[Lau, <cit.>]
Given a TCI 𝔗 and any ℳ, we say ℳ is a finitely determined model of 𝔗 iff ℳ^* 𝔗 and for some quantifier-free sentence φ with parameters in ℳ,
∀ W ∀ℳ' ( (W is an outer model of V and ℳ' ∈ W and ℳ' ^* 𝔗 and ℳ' φ)
ℳ' = ℳ).
In this case, ℳ is finitely determined by φ.
Naturally, all finite models of any TCI are finitely determined. As it turns out, if a TCI is consistent, then all its finitely determined models are already in V.
Let 𝔗 be a TCI and ℳ be a finitely determined model of 𝔗 in some outer model of V. Then for some atom p of ℙ(𝔗), Σ(𝔗, ℳ) = g_p (ℙ(𝔗)). In particular, ℳ∈ V.
Let ℳ be finitely determined by φ. Without loss of generality, we can assume φ is the conjunction of a set of literals {l_i : i < n} for some n < ω. This means
p := {⌜ E(l_i) ⌝ : i < n}
is an atom of ℙ(𝔗). Proposition <ref> tells us that g_p (ℙ(𝔗)) is ℙ(𝔗)-generic over V, so necessarily Σ(𝔗, ℳ) = g_p (ℙ(𝔗)) by Theorem <ref>. Then according to Proposition <ref>, ℳ∈ V because g_p (ℙ(𝔗)) ∈ V.
It is possible to have an analogue of Lemma <ref> for models that are “close to being finitely determined”.
[Lau, <cit.>]
Let 𝔗 be a TCI. Inductively define Γ_𝔗^(α), P(𝔗)^(α) and ℙ(𝔗)^(α) for all ordinals α≤ |[ℒ_𝔗]^< ω|^+ as follows:
Γ_𝔗^(0) := Γ_𝔗,
P(𝔗)^(0) := P(𝔗) ,
Γ_𝔗^(α) := Γ_𝔗^(α - 1)∪{⌜⋁_x ∈ p (¬ E(x)) ⌝ : p is an atom of ℙ(𝔗)^(α - 1)}
if α is a successor ordinal,
Γ_𝔗^(α) := ⋃_β < αΓ_𝔗^(β)
if α is a limit ordinal,
P(𝔗)^(α) := {p ∈ [ℒ_𝔗]^< ω : ⊩_Col(ω, |trcl(𝔄_𝔗)|)∃Σ (“Σ Γ_𝔗^(α) (ℒ_𝔗, 𝔄_𝔗)-certifies p")},
ℙ(𝔗)^(α) := (P(𝔗)^(α), ≤_ℙ(𝔗)) .
By a simple cardinality argument, there must exist some α < |[ℒ_𝔗]^< ω|^+ for which Γ_𝔗^(α) = Γ_𝔗^(α + 1), whence ℙ(𝔗)^(α) = ℙ(𝔗)^(α + 1).
[Lau, <cit.>]
Let Γ_𝔗^⊤ denote the unique Γ such that Γ = Γ_𝔗^(α) = Γ_𝔗^(α + 1) for some α < |[ℒ_𝔗]^< ω|^+. Similarly, ℙ(𝔗)^⊤ = (P(𝔗)^⊤, ≤_ℙ(𝔗)^⊤) shall denote the unique ℙ such that ℙ = ℙ(𝔗)^(α) = ℙ(𝔗)^(α + 1) for some α < |[ℒ_𝔗]^< ω|^+.
It is not hard to see that P(𝔗)^⊤ is an atomless upward closed subset of ℙ(𝔗) and Γ_𝔗⊂Γ_𝔗^⊤.
In constructing the ℙ(𝔗)^(α)'s, we are inductively removing atoms of ℙ(𝔗). These atoms correspond to isolated points in a Stone-type space generated by models of a TCI. By looking at Definition <ref> in this way, we can draw obvious parallels between ℙ(𝔗)^(α) and the α-th-order Cantor-Bendixson derivative of a set. Such parallels culminate in ℙ(𝔗)^⊤ being analogous to the “perfect core” of ℙ(𝔗).
[Lau, <cit.>]
Given a TCI 𝔗 and any ℳ, we say ℳ is an almost finitely determined model of 𝔗 iff ℳ^* 𝔗 and for some α < |[ℒ_𝔗]^< ω|^+ and an atom p of ℙ(𝔗)^(α),
p ⊂Σ(𝔗, ℳ).
We have as our next lemma, the promised analogue of Lemma <ref>.
Let 𝔗 be a TCI and ℳ be an almost finitely determined model of 𝔗 in some outer model of V. Then for some α < |[ℒ_𝔗]^< ω|^+ and some atom p of ℙ(𝔗)^(α), Σ(𝔗, ℳ) = g_p (ℙ(𝔗)^(α)). In particular, ℳ∈ V.
Choose any model ℳ of 𝔗 in an outer model of V. It suffices to prove by induction on α≤ |[ℒ_𝔗]^< ω|^+ that
∀ q ∃β≤α ∃ p ( (q is an atom of ℙ(𝔗)^(α) and q ⊂Σ(𝔗, ℳ))
(p is an atom of ℙ(𝔗)^(β) and Σ(𝔗, ℳ) = g_p (ℙ(𝔗)^(β)))) .
The base case where α = 0 is just Lemma <ref>. For the inductive case, assume 0 < α≤ |[ℒ_𝔗]^< ω|^+, and let q be an atom of ℙ(𝔗)^(α) with q ⊂Σ(𝔗, ℳ). Then by Lemma <ref> and the definition of ℙ(𝔗)^(α), either Σ(𝔗, ℳ) = g_q (ℙ(𝔗)^(α)) or there is β' < α and an atom q' of ℙ(𝔗)^(β') such that q' ⊂Σ(𝔗, ℳ). In the latter case, the inductive hypothesis gives us β≤β' and an atom p of ℙ(𝔗)^(β) for which Σ(𝔗, ℳ) = g_p (ℙ(𝔗)^(β)). Either way we are done.
The way ℙ(𝔗) and ℙ(𝔗)^⊤ are defined from a TCI 𝔗 allows us to establish a nice dichotomy on the (ℙ(𝔗), V)-generic models of 𝔗 when 𝔗 is Π_2.
Let 𝔗 be a Π_2 TCI and ℳ be a (ℙ(𝔗), V)-generic model of 𝔗. Then one of the following must hold:
* ℳ is almost finitely determined.
* ℳ is a (ℙ(𝔗)^⊤, V)-generic model of 𝔗.
Let g be a ℙ(𝔗)-generic filter over V and assume 𝒜∩ g = ∅, where
𝒜 := {p : ∃α (α < |[ℒ_𝔗]^< ω|^+ and p is an atom of ℙ(𝔗)^(α))}.
This latter assumption is equivalent to saying that the unique model ℳ of 𝔗 for which ⋃ g = Σ(𝔗, ℳ) is not almost finitely determined. By Theorem <ref>, it suffices to show that g is a ℙ(𝔗)^⊤-generic filter over V. Clearly, ⋃ g Γ_𝔗^⊤ (ℒ_𝔗, 𝔄_𝔗)-certifies every p ∈ g, so g ⊂ℙ(𝔗)^⊤. That ℙ(𝔗)^⊤ is a suborder of ℙ(𝔗) means g is a filter on ℙ(𝔗)^⊤.
To see g is ℙ(𝔗)^⊤-generic over V, let E be predense in ℙ(𝔗)^⊤. Note that if p ∈ℙ(𝔗) is incompatible in ℙ(𝔗) with every member of 𝒜, then p ∈ℙ(𝔗)^⊤. As such, E ∪𝒜 must be predense in ℙ(𝔗). But this implies E ∩ g ≠∅ because g is ℙ(𝔗)-generic and 𝒜∩ g = ∅.
The following is a stronger version of Theorem <ref>.
Let 𝔗 be a Π_2 TCI. Then one of the following must hold.
* All models of 𝔗 are almost finitely determined.
* ℙ(𝔗)^⊤ is non-empty and every ℙ(𝔗)^⊤-generic filter over 𝔄_𝔗 witnesses ℳ is a (ℙ(𝔗)^⊤, 𝔄_𝔗)-generic model of 𝔗 for some ℳ.
Assume not all models of 𝔗 are almost finitely determined, and let ℳ be a model of 𝔗 not almost finitely determined in some outer model of V. Then Σ(𝔗, ℳ) Γ_𝔗^⊤ (ℒ_𝔗, 𝔄_𝔗)-certifies ∅, so ℙ(𝔗)^⊤ is non-empty.
Let g be a ℙ(𝔗)^⊤-generic filter over V. Check that the hypotheses of Lemma <ref> are satisfied when we have
* V[g] in place of W,
* |𝔄_𝔗| in place of λ,
* 𝔄_𝔗 in place of 𝔄,
* ℒ_𝔗 in place of ℒ,
* Γ_𝔗^⊤ in place of Γ,
* P(𝔗)^⊤ in place of P, and
* ℙ(𝔗)^⊤ in place of ℙ,
A direct application of said lemma, coupled with Lemma <ref> and the knowledge that Γ_𝔗⊂Γ_𝔗^⊤, completes the proof.
The strengthening we were aiming for can now be proven.
Fix 𝔗^* ∈𝖥𝗀. Then there is F_𝔗^* witnessing Π^𝖬_2≤^M 𝖥𝗀 such that
* F_𝔗^*(𝔗) = 𝔗^* if 𝔗 is inconsistent,
* F_𝔗^*(𝔗) = 𝔗((∅, ∅)) if 𝔗 is consistent and all models of 𝔗 are almost finitely determined, and
* F_𝔗^*(𝔗) = 𝔗(ℙ) for some non-empty atomless forcing notion ℙ if 𝔗 is consistent and not all models of 𝔗 are almost finitely determined.
Define F_𝔗^* point-wise as follows:
F_𝔗^*(𝔗) :=
𝔗^* if 𝔗 is inconsistent
𝔗(ℙ(𝔗)^⊤) otherwise,
noting Lemma <ref>, Theorem <ref> and the fact that ℙ(𝔗)^⊤ = (∅, ∅) if all models of 𝔗 are almost finitely determined.
Notice that any F_𝔗^* satisfying Theorem <ref> must also satisfy
|Eval^V(𝔗)| = |Eval^V(F_𝔗^*(𝔗))|
for all 𝔗∈ dom(F_𝔗^*). As a corollary, we get a trichotomy for the number of small extensions a Π_2 TCI can pick out.
Let V be a CTM and 𝔗∈ V be a Π_2 TCI. Then
* Eval^V(𝔗) = ∅ if 𝔗 is inconsistent,
* Eval^V(𝔗) = {V} if 𝔗 is consistent and all models of 𝔗 are almost finitely determined, and
* |Eval^V(𝔗)| = 2^ℵ_0 if 𝔗 is consistent and not all models of 𝔗 are almost finitely determined.
<ref> follows from the definition of Eval^V and what it means for a TCI to be (in)consistent. <ref> follows from Lemma <ref> and <ref> from Proposition <ref> and Theorem <ref>.
§ REFERENCES
|
http://arxiv.org/abs/2409.03540v1 | 20240905140256 | Diversity in hydrogen-rich envelope mass of type II supernovae (II): SN 2023ixf as explosion of partially-stripped intermediate massive star | [
"Qiliang Fang",
"Takashi J. Moriya",
"Lucía Ferrari",
"Keiichi Maeda",
"Gaston Folatelli",
"Keila Y. Ertini",
"Hanindyo Kuncarayakti",
"Jennifer E. Andrews",
"Tatsuya Matsumoto"
] | astro-ph.HE | [
"astro-ph.HE"
] |
0000-0002-1161-9592]Qiliang Fang, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
0000-0003-1169-1954]Takashi J. Moriya
National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Graduate Institute for Advanced Studies, SOKENDAI, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia
0009-0000-6303-4169]Lucía Ferrari
Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, B1900FWA La Plata, Argentina
Instituto de Astrofísica de La Plata (IALP), CCT-CONICET-UNLP, Paseo del Bosque S/N, B1900FWA, La Plata, Argentina
0000-0003-2611-7269]Keiichi Maeda, Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
0000-0003-2611-7269]Gaston Folatelli
Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, B1900FWA La Plata, Argentina
Instituto de Astrofísica de La Plata (IALP), CCT-CONICET-UNLP, Paseo del Bosque S/N, B1900FWA, La Plata, Argentina
Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo, Kashiwa 277-8583, Chiba, Japan
0000-0001-7251-8368]Keila Y. Ertini
Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, B1900FWA La Plata, Argentina
Instituto de Astrofísica de La Plata (IALP), CCT-CONICET-UNLP, Paseo del Bosque S/N, B1900FWA, La Plata, Argentina
0000-0002-1132-1366]Hanindyo Kuncarayakti
Tuorla Observatory, Department of Physics and Astronomy, FI-20014 University of Turku, Finland; Finnish Centre for Astronomy with ESO (FINCA), FI-20014 University of Turku, Finland
0000-0003-0123-0062]Jennifer E. Andrews
Gemini Observatory/NSF's NOIRLab, 670 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0002-9350-6793]Tatsuya Matsumoto
Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
Hakubi Center, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 Japan
§ ABSTRACT
SN 2023ixf is one of the most well-observed core-collapse supernovae in recent decades, yet the inferred zero-age-main-sequence (ZAMS) mass M_ ZAMS of its progenitor remains inconsistent between methods. Direct observations of the pre-SN red supergiant (RSG) yield M_ ZAMS estimates spanning widely from 11 to 18 M_⊙. Additional constraints, including the host environment and the pulsation of the progenitor RSG, suggest a massive progenitor with M_ ZAMS > 17 M_⊙. However, analyses of the supernova itself, from light curve modeling to late phase spectroscopy, favor a relatively low mass scenario (M_ ZAMS < 15 M_⊙). In this work, we conduct a systematic analysis of SN 2023ixf, from the RSG progenitor and the plateau phase light curve to late phase spectroscopy. Using MESA + STELLA to simulate the RSG progenitors and their explosions, we find that, despite the ZAMS mass of the RSG models being varied from 12.0 to 17.5 M_⊙, they can produce light curves that match well with SN 2023ixf if the hydrogen envelope mass and the explosion energy are allowed to vary. Using late phase spectroscopy as an independent measurement, the [O I] oxygen emission line suggests an intermediate ZAMS mass (∼ 16.0 M_⊙), and the relatively weak Hα emission line indicates that the hydrogen envelope has been partially removed before the explosion. By incorporating the velocity structure derived from the light curve modeling into an axisymmetric model, we successfully generate [O I] line profiles that are consistent with the [O I] line observed in the late phase spectroscopy of SN 2023ixf. Bringing these analyses together, we conclude that SN 2023ixf is the aspherical explosion of an intermediate-mass star (M_ ZAMS = 15 - 16 M_⊙) whose hydrogen envelope was partially stripped to 4 - 5 M_⊙ prior to its explosion.
§ INTRODUCTION
SN 2023ixf is a type II supernova (SN II) discovered in the nearby galaxy M101 on May 19, 2023. After its discovery by <cit.>, SN 2023ixf attracted the attention of the community, and extensive observations were conducted, including photometry and spectroscopy covering the ultraviolet (UV), optical, and infrared (IR) bands. The explosion site has also been observed by the Hubble Space Telescope (HST), the Spitzer Space Telescope, and ground-based telescopes. These observations confirm that the progenitor of SN 2023ixf is a dusty red supergiant (RSG), surrounded by a confined circumstellar medium (CSM) ().
Being one of the most well-observed SNe II, SN 2023ixf holds significant potential for testing modern theories of stellar evolution and core-collapse. For this purpose, it is important to accurately measure the zero-age-main-sequence (ZAMS) mass of its progenitor. However, the estimates of M_ ZAMS obtained with different methods are inconsistent. Imaging of the progenitor prior to the explosion is one of the most direct methods to estimate M_ ZAMS, but the estimates, which rest on different assumptions about the dust properties and the stellar models, differ significantly: 11 ± 2 M_⊙ (); 12 - 14 M_⊙ (); 16.2 - 17.4 M_⊙ (); 17 ± 4 M_⊙ (); 18.1^+0.7_-1.4 M_⊙ (). The monitoring of the 2023ixf progenitor with the Spitzer Space Telescope and ground-based telescopes reveals mid-IR variability with a period of ∼ 1000 days. Making use of this period and the period-luminosity relation of RSGs in M31 (), M_ ZAMS is estimated to be 20 ± 4 M_⊙. Further, the analysis of the stellar population in the vicinity of the explosion site favors a massive progenitor, from 16.2 ∼ 17.4 M_⊙ () to around 22 M_⊙ ().
Hydrodynamic and radiative transfer modeling of the expelled material (ejecta) after the explosion is another useful way to constrain the properties of the progenitor.
<cit.> shows that the plateau phase light curve can be well fitted by a model with M_ ZAMS = 12 M_⊙ and explosion energy E = 1.2 × 10^51 erg (hereafter we refer to 1.0 × 10^51 erg as 1.0 foe). A progenitor model with M_ ZAMS = 15 M_⊙ cannot provide the right plateau duration and magnitude at the same time. <cit.> and <cit.> employ the RSG models from <cit.>, and they also find that the model with M_ ZAMS = 10 M_⊙ and E = 2.0 foe best matches the light curve. The late-phase (nebular) spectroscopy, obtained at 250 days after the explosion, supports the relatively low mass scenario: when the ejecta becomes transparent, the spectroscopy is dominated by forbidden emission lines. Among them, the [O I] line can be used to measure the oxygen mass in the ejecta and constrain M_ ZAMS (). <cit.> found that the oxygen yield of SN 2023ixf is more consistent with M_ ZAMS = 12 - 15 M_⊙.
Table 1 summarizes the inferred ZAMS mass of the progenitor of SN 2023ixf from different representative studies.
In this work, we aim to resolve the inconsistency among pre-SN images, light curve modeling, and nebular spectroscopy by bringing the uncertainty of pre-SN mass loss into consideration. In §2, we construct RSG models that have the same T_ eff and L as the pre-SN images from <cit.>, <cit.> and <cit.> using MESA. The hydrogen-rich envelopes of these RSG models are then artificially removed to mimic binary interaction or late stellar activities that may induce strong mass loss. The partial removal of the hydrogen-rich envelope hardly changes their positions on the Hertzsprung - Russell diagram (HRD) but can significantly affect the resulting light curves. The progenitor models are then used as the input of MESA + STELLA to trigger SN explosions and to perform the radiative transfer modeling of the light curves. In §3, the model light curves are compared with the observational data of SN 2023ixf, which reveals that progenitor models with M_ ZAMS larger than 15 M_⊙ can produce light curves that closely match the observations if their hydrogen-rich envelopes are partially removed to ∼ 4 M_⊙. Light curve modeling therefore cannot constrain M_ ZAMS without knowing the amount of the hydrogen-rich envelope. In §4, we use nebular spectroscopy as an independent constraint on M_ ZAMS. By taking γ-photon leakage into consideration, we find evidence for an intermediate-mass progenitor (M_ ZAMS ∼ 16 M_⊙) and a small hydrogen-rich envelope mass (M_ Henv⪅ 5 M_⊙). The double-peaked [O I] can be interpreted as an axisymmetric explosion. The conclusion is left to §5.
Method M_ ZAMS (M_⊙) References
Host environment 17 ∼ 19 <cit.>
∼ 22 <cit.>
11 ± 2 <cit.>
17 ± 4 <cit.>
pre-SN images 16.2 ∼ 17.4 <cit.>
18.1^+0.7_-1.4 <cit.>
12 ∼ 14 <cit.>
Pulsation 20 ± 4 <cit.>
17 ∼ 21 <cit.>
12 <cit.>
Light curve 10 <cit.>
10 <cit.>
Nebular spectroscopy < 15 <cit.>
Inferred ZAMS mass of the progenitor of SN 2023ixf in the literature.
§ NUMERICAL SETUP
We use the one-dimensional stellar evolution code Modules for Experiments in Stellar Astrophysics (MESA; ), version r23.05.1, to simulate progenitor models with varied ZAMS mass, starting from the pre-main-sequence phase to the moment when the mass fraction of carbon X_ C in the innermost cell drops below 10^-3. In this work, our light curve analysis focuses on the plateau phase, which is not affected by the late stage evolution after core carbon depletion. We employ a mixing scheme similar to <cit.>, i.e., the Ledoux criterion for convection, exponential overshooting parameters f_ ov = 0.004 and f_ ov, 0 = 0.001, semiconvection efficiency α_ sc = 0.01 (), and thermohaline mixing coefficient α_ th = 2 (). The mixing length parameter α_ mlt is varied to tune the effective temperature T_ eff of the progenitors such that the RSG models match the pre-SN images on the HRD at the end point of the calculation. Throughout the calculation, we ignore wind-driven mass loss. After core helium depletion, we use the corresponding MESA controls to artificially remove the hydrogen-rich envelope. The calculation is then carried on until core carbon depletion without mass loss.
The ZAMS mass range is selected to match the luminosity from (high mass; 16.5-18.5 M_⊙), (intermediate mass; 14.0-16.0 M_⊙) and (low mass; 11.5-13.0 M_⊙). Here our ZAMS mass estimates are slightly higher than those of <cit.>, where the ZAMS mass is proposed to be 12.0∼14.0 M_⊙. This is because, compared with their reference models, given the same ZAMS mass, our models have smaller helium cores and appear fainter on the HRD. However, we do not attempt to change f_ ov, a key parameter that controls the helium core mass for a fixed ZAMS mass (see for example ), to align our ZAMS mass estimates, because the progenitor models in this work follow the same M_ ZAMS-M_ He core relation as <cit.>, which is frequently used as the initial models for the core-collapse simulations and nebular spectroscopy modeling discussed in later sections. It is the helium core mass (or, more precisely, the carbon-oxygen core mass), rather than the ZAMS mass, that determines the amount of oxygen and the core-collapse process.
Throughout this work, the estimation of ZAMS mass is based on these progenitor models that follow a fixed M_ ZAMS-M_ He core relation (Figure <ref>).
We further note that the T_ eff from <cit.>, estimated to be 2770 K, is too cool to be reproduced by the RSG models in this work; we therefore adopt T_ eff = 3110 ∼ 3330 K, which are the lower and upper values of T_ eff of IRC-10414, a Galactic RSG analog of the SN 2023ixf progenitor ().
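For reference, the photospheric radius implied by each adopted (T_ eff, L) pair follows from the Stefan-Boltzmann relation L = 4π R^2 σ T_ eff^4; the short sketch below (an illustration added here, using the Table <ref> values and standard constants, not an output of the MESA models) makes the comparison explicit.

```python
import numpy as np

SIGMA_SB = 5.670374419e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
L_SUN = 3.828e33            # solar luminosity [erg s^-1]
R_SUN = 6.957e10            # solar radius [cm]

def rsg_radius_rsun(log_l_over_lsun, t_eff):
    """Photospheric radius (in R_sun) from L = 4 pi R^2 sigma T_eff^4."""
    lum = 10.0**log_l_over_lsun * L_SUN
    radius_cm = np.sqrt(lum / (4.0 * np.pi * SIGMA_SB * t_eff**4))
    return radius_cm / R_SUN

# (log L/L_sun, T_eff) adopted for the three progenitor sets (Table 2)
for label, log_l, t_eff in [("high mass", 5.10, 3343.0),
                            ("intermediate mass", 4.95, 3220.0),
                            ("low mass", 4.74, 3920.0)]:
    print(f"{label}: R ~ {rsg_radius_rsun(log_l, t_eff):.0f} R_sun")
```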
After core carbon depletion, we closely follow the corresponding MESA test suite to trigger the explosion of the progenitor model with varied explosion energy E_ K. After the shock wave has reached 0.05 M_⊙ below the stellar surface, we artificially deposit 0.06 M_⊙ of ^56Ni uniformly in the helium core (M_ Ni is taken from and ), and use a boxcar scheme to smooth the abundance profiles in the ejecta (). The model is then handed off to STELLA for the calculation of the light curve. A detailed description of this workflow can be found in the literature ().
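The boxcar smoothing of the ejecta abundance profiles can be sketched as a mass-weighted running average; the following is a simplified illustration only, and the window width and number of passes are placeholder values not quoted in the text.

```python
import numpy as np

def boxcar_smooth(m, dm, x, width=0.4, n_passes=4):
    """Mass-weighted running-boxcar average of an abundance profile.

    m        : enclosed-mass coordinate of each cell [M_sun]
    dm       : mass of each cell [M_sun]
    x        : mass fraction of one species on that grid
    width    : boxcar width in M_sun (placeholder value)
    n_passes : number of smoothing passes (placeholder value)
    """
    m, dm = np.asarray(m, float), np.asarray(dm, float)
    x = np.asarray(x, float).copy()
    for _ in range(n_passes):
        out = np.empty_like(x)
        for i, mi in enumerate(m):
            box = np.abs(m - mi) <= 0.5 * width
            out[i] = np.sum(dm[box] * x[box]) / np.sum(dm[box])
        x = out
    return x
```

In practice the same pass would be applied to every species, and the mass fractions in each cell could then be renormalized to sum to unity; both the width and the number of passes here are assumptions made for illustration.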
The parameters of the RSG progenitor models are listed in Table <ref>. The comparison of the models and the RSG progenitors of SN 2023ixf from pre-SN images on HRD are shown in Figure <ref>.
RSG Progenitor T_ eff(K) logL/L_⊙ α_ mlt M_ ZAMS(M_⊙) M_ Henv(M_⊙) M_ rem(M_⊙) E_ K(foe)
<cit.> 3343_ -50^ +50 5.10_ -0.05^ +0.05 2.00 17.5 3.0-7.5 1.8 0.5-1.5
<cit.> 3220_ -110^ +110 4.95_ -0.08^ +0.09 1.80 15.0 3.0-7.5 1.8 0.5-1.5
<cit.> 3920_ -160^ +200 4.74_ -0.07^ +0.07 2.80 12.0 3.0-8.0 1.5 1.0-2.5
Progenitor models in this work.
§ LIGHT CURVE ANALYSIS
The photometry data of SN 2023ixf in the BgVriz bands are collected from <cit.>. Here, we adopt a distance of 6.85±0.15 Mpc () and a total extinction of E(B-V) = 0.039 mag () with R_V = 3.1. Extinctions in the different bands are estimated from the adopted extinction law ().
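The magnitude conversion implied by the adopted distance and extinction can be written in a few lines; in this sketch (added for illustration) the band coefficient A_λ/A_V is an assumed stand-in for the adopted extinction law and defaults to the V band.

```python
import numpy as np

D_MPC = 6.85            # adopted distance [Mpc]
E_BV, R_V = 0.039, 3.1  # adopted color excess and extinction-law slope

def absolute_mag(m_app, a_lambda_over_av=1.0):
    """Apparent -> absolute magnitude, corrected for extinction.

    a_lambda_over_av : band-dependent coefficient from the adopted
    extinction law (1.0 corresponds to the V band).
    """
    dist_modulus = 5.0 * np.log10(D_MPC * 1.0e6 / 10.0)
    a_v = R_V * E_BV
    return m_app - dist_modulus - a_lambda_over_av * a_v

# e.g. an apparent plateau magnitude of m_V ~ 11.7 maps onto the
# M_V ~ -17.6 plateau level quoted below (illustrative number)
print(f"{absolute_mag(11.7):.2f}")
```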
The light curve of SN 2023ixf is characterized by a rapid rise to M_V∼ -18.4 mag, followed by a gradual decline to a plateau at M_V∼ -17.6 mag. The early phase emission indicates the presence of dense circumstellar material (CSM) that is not predicted by standard stellar evolution theory (),
whose properties have been extensively studied (see, for example, ). While the CSM around SN 2023ixf is crucial for understanding stellar evolution, it is not the focus of this work. As pointed out by <cit.> and <cit.>, CSM interaction dominates early-phase observations, but its effects on the plateau phase, especially in gVriz bands, are minimal. The plateau duration and magnitude are mainly affected by the explosion energy (E_ K) and the ejecta mass (M_ eje). Therefore, we do not include CSM in our models to avoid introducing unrelated parameters. Our analysis is restricted to t > 40 days, i.e., after the midpoint of the plateau.
For progenitor models with the same M_ ZAMS, we use M_ Henv and E_ K as free parameters to fit the multi-band light curves of SN 2023ixf. The quality of the fits is evaluated from t > 40 days, covering roughly from the midpoint of the plateau to the onset of the radioactive tail. The ranges of M_ Henv and E_ K are listed in Table <ref>, with steps of 0.25 M_⊙ and 0.1 foe, respectively. The best-fit model is determined by interpolating the model light curves to the observed epochs in the different bands and minimizing χ^2. The photospheric velocities, estimated from the Fe II absorption minimum in the early phase spectroscopy measured in (), are not included in the fitting process, but are used as independent checks of the quality of the fits. The best-fit parameters for the models with M_ ZAMS =17.5 M_⊙ (), 15.0 M_⊙ () and 12.0 M_⊙ () are shown in Figure <ref>.
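A schematic of this fitting procedure (a grid in M_ Henv and E_ K, model light curves interpolated onto the observed epochs, and χ^2 minimization over t > 40 days) is sketched below; the container layouts and variable names are hypothetical and only stand in for the actual model grid.

```python
import numpy as np
from scipy.interpolate import interp1d

def band_chi2(t_obs, m_obs, sig_obs, t_mod, m_mod, t_min=40.0):
    """Chi^2 of one model light curve against one band of data."""
    m_at_obs = interp1d(t_mod, m_mod, bounds_error=False)(t_obs)
    use = np.isfinite(m_at_obs) & (t_obs > t_min)
    return np.sum(((m_obs[use] - m_at_obs[use]) / sig_obs[use]) ** 2)

def best_fit(model_grid, data):
    """model_grid: {(M_Henv, E_K): {band: (t_mod, m_mod)}} (hypothetical layout)
    data:       {band: (t_obs, m_obs, sig_obs)}
    Returns the (M_Henv, E_K) node with the smallest total chi^2."""
    total = {params: sum(band_chi2(*data[b], *lcs[b])
                         for b in data if b in lcs)
             for params, lcs in model_grid.items()}
    return min(total, key=total.get)
```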
From the results of light curve modeling, we conclude that, if M_ Henv is allowed to vary, RSG progenitors with M_ ZAMS as massive as 15 - 18 M_⊙ can produce similar light curves to those of relatively low mass progenitors (M_ ZAMS∼ 12 M_⊙) with almost all of their hydrogen-rich envelopes attached. The artificial removal of the hydrogen-rich envelope does not significantly change the stellar radius as long as the residual M_ Henv remains larger than ∼ 3 M_⊙ (see Table 2 of or Figure 1 of ). The luminosity, primarily determined by the helium core mass, is also unaffected by this process. Therefore, partial removal of the hydrogen-rich envelope does not alter the position of the progenitor RSG on the HRD, aligning with results from pre-SN images, while it introduces diversity in light curve properties. Without knowing the mass of the residual hydrogen-rich envelope, which can be significantly influenced by the presence of a companion star or complex eruptive activity in the late phases of massive star evolution, light curve modeling cannot determine the M_ ZAMS of the SN progenitor. For this purpose, an additional independent measurement is required.
§ NEBULAR SPECTROSCOPY ANALYSIS
While the light curve during the plateau phase is largely affected by the mass of the hydrogen-rich envelope and is therefore sensitive to the uncertain mass-loss history prior to the explosion, late-phase (nebular phase) spectroscopy, taken several months to a year after the explosion when the ejecta becomes optically thin, is primarily determined by the properties of the innermost core. At this phase, the spectroscopy of the SN is dominated by emission lines, with particularly strong lines being [O I] λλ6300,6363, Hα and [Ca II] λλ7291,7323. The absolute or relative flux of [O I] is a useful proxy for the amount of oxygen in the core region, and is therefore frequently employed as an indicator of the ZAMS mass of the progenitor in both theoretical and observational studies. In this section, we conduct an analysis of the nebular spectroscopy of SN 2023ixf, taken at t = 259 days after the explosion ().
§.§ [O I] luminosity
In <cit.>, based on nebular spectroscopy analysis, the ZAMS mass of the progenitor of SN 2023ixf is proposed to be 12 - 15M_⊙, consistent with the pre-SN images of <cit.> and <cit.>. The conclusion is made based on several lines of evidence: (1) when scaled to the same distance, the [O I] flux of SN 2023ixf is relatively low compared with model spectroscopy at similar phase, taken from and ; (2) The [O I]/[Ca II] ratio, which is a useful proxy of the progenitor CO core mass, is as low as 0.51, falling between the models with M_ ZAMS =12 M_⊙ and M_ ZAMS =15 M_⊙.
While these arguments are well supported by direct comparison with model spectra, they may not fully apply to SN 2023ixf. The models employed for comparison assume massive hydrogen-rich envelopes and have been found to match well the observations of SNe 2004et and 2012aw, which have long plateaus of ∼ 120 days. In contrast, the plateau duration of SN 2023ixf is ∼ 80 days (), approximately 40 days shorter, meaning that it enters the nebular phase earlier. Additionally, the radioactive tail of SN 2023ixf declines faster than those of SNe 2004et and 2012aw. These two factors make SN 2023ixf appear ∼ 0.8 mag fainter in the R-band than the models, suggesting that the low [O I] luminosity of SN 2023ixf does not necessarily indicate a low oxygen abundance compared with the model progenitors. Instead, it is likely the result of a lower fraction of the γ-photons emitted by the radioactive decay of ^56Co being trapped in the ejecta.
In Figure <ref>, we compare the model spectra from <cit.> and <cit.> with SN 2023ixf, normalizing all spectra to the integrated fluxes from 4500 to 8000 Å. This normalization ensures that the models and SN 2023ixf have the same amount of deposited radioactive energy in this wavelength range. Consequently, the fractional flux of the emission lines reflects the relative abundance of the emitting elements in the line-forming region. SN 2023ixf shows clearly stronger [O I] emission than the model with M_ ZAMS = 12 M_⊙ (hereafter the M12 model; similarly, M15, M19 and M25 refer to the models with M_ ZAMS = 15, 19 and 25 M_⊙), while its flux lies between those of the M15 and M19 models. The fractional [O I] fluxes of the models, as a function of ZAMS mass, are compared with that of SN 2023ixf in Figure <ref>. Direct interpolation gives M_ ZAMS ∼ 16.3 M_⊙ for SN 2023ixf, close to the upper limit of <cit.> and the lower limit of <cit.>.
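For illustration, the measurement and interpolation can be sketched in a few lines of Python; the integration window for the [O I] doublet and the model fractions listed below are placeholders rather than the published values, and no continuum subtraction is included.

import numpy as np

def fractional_oi_flux(wave, flux, line=(6250.0, 6420.0), band=(4500.0, 8000.0)):
    """Integrated [O I] 6300,6363 flux divided by the 4500-8000 A integrated flux."""
    in_band = (wave >= band[0]) & (wave <= band[1])
    in_line = (wave >= line[0]) & (wave <= line[1])
    total = np.trapz(flux[in_band], wave[in_band])
    oi = np.trapz(flux[in_line], wave[in_line])
    return oi / total

m_zams_models = np.array([12.0, 15.0, 19.0, 25.0])   # M12, M15, M19, M25
frac_models = np.array([0.05, 0.08, 0.12, 0.18])      # placeholder fractional [O I] fluxes
frac_sn = 0.10                                        # placeholder value for SN 2023ixf
m_zams_est = np.interp(frac_sn, frac_models, m_zams_models)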
The above analysis is based on the assumption that the γ-photon escape probability is the same throughout the ejecta, from the dense carbon-oxygen core to the hydrogen-rich envelope. In this case, decreasing the total luminosity by 60 % (0.8 mag in the R-band) decreases the [O I] luminosity by the same amount, so the fractional flux of [O I] remains unchanged and can be used to determine M_ ZAMS. In practice, this assumption does not hold, as γ-photons escape more easily from the outermost envelope. Here we consider a limiting case, i.e., all the additional γ-photon leakage is radiated in the form of Hα from the envelope. The oxygen fluxes of the models and SN 2023ixf are then normalized to the integrated fluxes excluding the Hα line, i.e., we only consider the integrated fluxes of the metal emission lines. Similar to the above analysis, interpolation gives M_ ZAMS ∼ 15.2 M_⊙. Our final estimate of M_ ZAMS based on nebular spectroscopy therefore falls between 15.2 and 16.3 M_⊙.
While the [O I] flux of SN 2023ixf is within the range of the M12 to M25 models, its Hα emission is noticeably weaker. Given the same total energy, the Hα flux of SN 2023ixf is about half the value observed in the M15 and M19 models. Although the formation of Hα is complex and influenced by many processes such as mixing, its relative weakness compared to models with a massive hydrogen-rich envelope qualitatively supports the idea that the hydrogen-rich envelope of SN 2023ixf was partially removed. The interpretation here is limited by our current lack of nebular spectroscopy models for partially stripped SNe IIP; however, a pioneering study by <cit.> shows that for SNe IIP with low-mass hydrogen-rich envelopes, the [O I] line is not significantly affected, while the Hα flux decreases dramatically and the [Ca II] flux increases slightly (see their Fig. 9). This behavior is roughly consistent with the observations of SN 2023ixf, and explains the relatively low [O I]/[Ca II] ratio compared with the M15 model ().
In conclusion, the nebular spectroscopy analysis supports the hypothesis that SN 2023ixf is the explosion of an RSG with M_ ZAMS∼ 15.2 to 16.3 M_⊙, similar to the estimate of <cit.>, with M_ Henv⪅ 5 M_⊙ (about half the values of the M15 and M19 models), which is much lower than the prediction of single-star models evolved with standard stellar winds ().
§.§ Emission line profiles
During the nebular phase, the ejecta expand homologously, i.e., the radial expansion velocity of a fluid parcel is proportional to its radial coordinate. Additionally, the emission line widths are dominated by Doppler broadening and therefore directly reflect the spatial distributions of the emitting elements. Consequently, the emission lines observed in nebular spectroscopy provide information not only on the abundance of the different elements but also on their geometric distributions within the ejecta ().
In this work, we focus on the profile of the [O I] line. The [O I] line exhibits a horn-like (or double-peaked) profile (), characterized by a trough located at v ∼0 km s^-1, with symmetric blue- and red-shifted peaks around it. While a double-peaked [O I] profile is frequently observed in stripped-envelope supernovae (SESNe; core-collapse supernovae that have lost almost all of their hydrogen-rich envelope prior to the explosion; see ), it is rarely seen in the emission lines of hydrogen-rich SNe (with a few exceptions; see for example ), although signatures of asphericity can be detected with spectropolarimetry (). This peculiar profile is interpreted as emission from an oxygen-rich torus surrounding a bipolar calcium-rich region, viewed from the edge ().
Here, we use the velocity profiles from the models that best fit the light curves in Section 3 to synthesize the [O I] line profile with the axisymmetric model proposed in <cit.>. The model is characterized by an oxygen-rich ball excised by two detached ellipsoids, within which all the oxygen is burnt into heavy elements (see Figure <ref>; red region: explosive burning ash with X_ O = 0; blue region: oxygen-rich unburnt material with X_ O = 1, where X_ O is the mass fraction of oxygen). We further assume that the material in the helium core, including the oxygen-rich region, is fully mixed (). Consequently, the boundary velocity of the oxygen-emitting region, V_ O, is the same as the velocity at the edge of the helium core, and the density in this region is constant. Using the procedure outlined in <cit.>, the synthesized [O I] profiles, viewed from θ = 90 degrees, are shown in Figure <ref>. For all the models, despite variations in M_ ZAMS, M_ Henv and E_ K, the synthesized [O I] profiles align well with the observation. This consistency indicates that the horn-like profile of [O I] indeed originates from an oxygen-rich torus. Additionally, the [O I] line width of SN 2023ixf also requires the material in the helium core to be fully mixed: if we use the velocity at the edge of the carbon-oxygen core, which is about 1000 to 1500 km s^-1, to model the [O I] line, the synthesized [O I] lines are extremely narrow and none of the profiles provides a satisfactory match to the observation.
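For illustration, the edge-on profile of such a geometry can be synthesized with a short Monte-Carlo sketch; V_ O and the lobe parameters below are placeholders rather than the fitted values, and constant emissivity with a 3:1 doublet ratio is assumed.

import numpy as np

rng = np.random.default_rng(0)
v_o = 3500.0                                          # km/s, edge of the fully mixed He core (placeholder)
z0, a_ax, c_ax = 0.45 * v_o, 0.35 * v_o, 0.40 * v_o   # placeholder burnt bipolar lobes along the symmetry axis

pts = rng.uniform(-v_o, v_o, size=(500_000, 3))
pts = pts[np.sum(pts**2, axis=1) < v_o**2]            # uniform points inside the oxygen sphere
x, y, z = pts.T
in_lobe = (x**2 + y**2) / a_ax**2 + (np.abs(z) - z0)**2 / c_ax**2 < 1.0
v_los = pts[~in_lobe, 0]                              # line of sight along x, i.e. theta = 90 deg

c_kms = 2.99792458e5
wl_grid = np.linspace(6150.0, 6500.0, 400)
profile = np.zeros_like(wl_grid)
for rest, weight in [(6300.3, 3.0), (6363.8, 1.0)]:   # [O I] doublet, 3:1 ratio
    wl = rest * (1.0 + v_los / c_kms)
    hist, _ = np.histogram(wl, bins=wl_grid.size, range=(wl_grid[0], wl_grid[-1]))
    profile += weight * hist
profile /= profile.max()                              # double-peaked, horn-like profile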
The analysis in this section is based on the assumption that the double-peaked [O I] profile results from a geometrical effect of the ejecta. Nevertheless, unlike SESNe, where the separation of the two peaks is wide (see ), for SN 2023ixf the separation is ∼ 3000 km s^-1, i.e., close to that of the two components of the [O I] doublet. We therefore cannot reject other possibilities, such as clumping () or a unipolar oxygen-rich blob moving toward the observer (), accelerated by a neutron star kick (see for example ). However, these explanations would require that some SNe IIP exhibit horn-like [O I] profiles with both peaks redshifted, which, to our knowledge, has not been observed yet. In this work, we consider a bipolar explosion with a torus-like structure to be the more plausible interpretation, but emphasize that other scenarios, such as clumping or a unipolar blob, cannot be conclusively rejected.
§ CONCLUSION
In this work, we construct RSG models that occupy the same position on the HRD as the proposed progenitor of SN 2023ixf, based on the pre-explosion images from <cit.>, <cit.>, and <cit.>. From these progenitor models, we artificially remove part of their hydrogen-rich envelopes and trigger explosions, and compare the resulting light curves with the multi-band photometry of SN 2023ixf. Our findings indicate that, by varying the hydrogen envelope mass M_ Henv and the explosion energy E_ K, RSG models with M_ ZAMS ranging from 12.5 to 17.5 M_⊙ can produce light curves that closely match the observed data. Consequently, light curve modeling alone cannot effectively constrain M_ ZAMS because of this degeneracy.
To address this limitation, we employ nebular spectroscopy as an independent method for estimating M_ ZAMS. The fractional flux of the [O I] line suggests M_ ZAMS values between 15.2 and 16.3 M_⊙. Interestingly, the Hα line provides additional constraints: the RSG model with M_ ZAMS = 12.0 M_⊙ must retain a massive hydrogen envelope (M_ Henv = 6.5 M_⊙) to match the plateau light curve. However, this is inconsistent with the weak Hα line observed in the nebular phase, implying that a large fraction of the hydrogen-rich envelope was removed prior to the explosion, as suggested by the light curve modeling results for the RSG models with M_ ZAMS = 15.0 and 17.5 M_⊙.
Finally, we employ the axisymmetric ejecta structure from <cit.> to model the [O I] line profile of SN 2023ixf. By assuming that the maximum velocity of the [O I] emitting region corresponds to the velocity at the edge of the helium core, taken from the velocity profiles of the models that best fit the observed plateau light curve, we achieve satisfactory matches between the observed double-peaked [O I] profile of SN 2023ixf and the synthesized [O I] profiles viewed from 90 degrees. This agreement not only confirms the aspherical nature of the explosion, but also provides additional constraints on material mixing: the helium core material, including the oxygen-rich regions, must be thoroughly mixed to account for the relatively broad [O I] profile observed in the nebular phase.
Bringing these lines of evidence together, we propose that SN 2023ixf represents the aspherical explosion of a partially stripped, intermediate-mass RSG with M_ ZAMS between 15.2 and 16.3 M_⊙. We further note that stars within this mass range do not have stellar winds strong enough to strip their hydrogen-rich envelopes down to such a small amount. Other mechanisms, such as binary interaction (see for example for a recent study) or pulsation-driven mass loss (see ), among other potential candidates, must be involved to assist the removal of a significant fraction of the hydrogen-rich envelope.
During the drafting of this manuscript, <cit.> presented their analysis of SN 2023ixf. Using the RSG model grid from <cit.>, they found that, by varying the explosion energy, models with M_ Henv ∼3 M_⊙ can produce light curves that match the observations well, even as M_ ZAMS varies from 15 to 22.5 M_⊙. They further used the pre-SN variability of the progenitor to constrain its properties, and concluded that RSG models with M_ ZAMS> 17 M_⊙, R > 950 R_⊙ and M_ Henv < 3 M_⊙ can explain the pulsation period (∼ 1100 days) as well as reproduce the observed multi-band light curves of SN 2023ixf. From their Table 1, the favored models have helium core masses M_ He core > 5.5 M_⊙, or M_ ZAMS > 18.3 M_⊙ using the M_ ZAMS-M_ He core relation of RSG models (). This value is not favored by the nebular spectroscopy analysis presented in <cit.> and in this work. However, the [O I] flux is mainly determined by the oxygen mass in the ejecta. Given the same M_ He core, one code seems to predict systematically lower carbon-oxygen core masses (M_ CO core) than the other (see the comparison between and ), possibly due to different treatments of the microphysics in the two codes. Using the M_ He core-M_ CO core relation presented in <cit.>, the models from <cit.> with M_ ZAMS = 17.5 to 18.0 M_⊙ have M_ CO core = 3.65 to 3.90 M_⊙, translating into M_ ZAMS = 16.6 to 17.3 M_⊙ for the corresponding models. Given the uncertainties in the nebular spectroscopy models and in the direct interpolation, we consider this M_ ZAMS range to match our estimate. However, the other two models (20.5M_eta1.5_alpha1.5 and 21.5M_eta1.5_alpha1.5) can be ruled out. Combining with the nebular spectroscopy analysis, we narrow down the M_ ZAMS range from <cit.> to 17.5 to 18.0 M_⊙. In conclusion, the M_ ZAMS of the SN 2023ixf progenitor should be around 15.0 to 18.0 M_⊙.
Software: <cit.>; SciPy <cit.>; NumPy <cit.>; Astropy <cit.>; Matplotlib <cit.>
[Andrews et al.(2019)]andrews19 Andrews, J. E., Sand, D. J., Valenti, S., et al. 2019, , 885, 43. doi:10.3847/1538-4357/ab43e3
[Arnett(1982)]arnett82 Arnett, W. D. 1982, , 253, 785. doi:10.1086/159681
[Astropy Collaboration et al.(2013)]astropy13 Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, , 558, A33. doi:10.1051/0004-6361/201322068
[Astropy Collaboration et al.(2018)]astropy18 Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, , 156, 123. doi:10.3847/1538-3881/aabc4f
[Berger et al.(2023)]berger23 Berger, E., Keating, G. K., Margutti, R., et al. 2023, , 951, L31. doi:10.3847/2041-8213/ace0c4
[Bersten et al.(2024)]bersten24 Bersten, M. C., Orellana, M., Folatelli, G., et al. 2024, , 681, L18. doi:10.1051/0004-6361/202348183
[Bostroem et al.(2023)]bostroem23 Bostroem, K. A., Pearson, J., Shrestha, M., et al. 2023, , 956, L5. doi:10.3847/2041-8213/acf9a4
[Burrows & Vartanyan(2021)]burrows21 Burrows, A. & Vartanyan, D. 2021, , 589, 29. doi:10.1038/s41586-020-03059-w
[Burrows et al.(2024)]burrows24 Burrows, A., Wang, T., & Vartanyan, D. 2024, , 964, L16. doi:10.3847/2041-8213/ad319e
[Cardelli et al.(1989)]cardelli89 Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245. doi:10.1086/167900
[Chandra et al.(2023)]chandra23 Chandra, P., Maeda, K., Chevalier, R. A., et al. 2023, The Astronomer's Telegram, 16073
[Chugai et al.(2005)]chugai05 Chugai, N. N., Fabrika, S. N., Sholukhova, O. N., et al. 2005, Astronomy Letters, 31, 792. doi:10.1134/1.2138766
[Dessart et al.(2012)]dessart12 Dessart, L., Hillier, D. J., Li, C., et al. 2012, , 424, 2139. doi:10.1111/j.1365-2966.2012.21374.x
[Dessart et al.(2013)]dessart13 Dessart, L., Hillier, D. J., Waldman, R., et al. 2013, , 433, 1745. doi:10.1093/mnras/stt861
[Dessart & Hillier(2020)]dessart20 Dessart, L. & Hillier, D. J. 2020, , 642, A33. doi:10.1051/0004-6361/202038148
[Dong et al.(2023)]dong23 Dong, Y., Sand, D. J., Valenti, S., et al. 2023, , 957, 28. doi:10.3847/1538-4357/acef18
[Ercolino et al.(2023)]ercolino23 Ercolino, A., Jin, H., Langer, N., et al. 2023, arXiv:2308.01819. doi:10.48550/arXiv.2308.01819
[Fang et al.(2019)]fang19 Fang, Q., Maeda, K., Kuncarayakti, H., et al. 2019, Nature Astronomy, 3, 434. doi:10.1038/s41550-019-0710-6
[Fang et al.(2022)]fang22 Fang, Q., Maeda, K., Kuncarayakti, H., et al. 2022, , 928, 151. doi:10.3847/1538-4357/ac4f60
[Fang et al.(2024a)]fang24 Fang, Q., Maeda, K., Ye, H., et al. 2024, arXiv:2404.01776. doi:10.48550/arXiv.2404.01776
[Fang et al.(2024b)]fang24b Fang, Q., Maeda, K., Kuncarayakti, H., et al. 2024, Nature Astronomy, 8, 111. doi:10.1038/s41550-023-02120-8
[Farmer et al.(2016)]farmer16 Farmer, R., Fields, C. E., Petermann, I., et al. 2016, , 227, 22. doi:10.3847/1538-4365/227/2/22
[Ferrari et al.(2024)]ferrari24 Ferrari, L., Folatelli, G., Ertini, K., et al. 2024, arXiv:2406.00130. doi:10.48550/arXiv.2406.00130
[Fransson & Chevalier(1989)]fransson89 Fransson, C. & Chevalier, R. A. 1989, , 343, 323. doi:10.1086/167707
[Goldberg et al.(2019)]goldberg19 Goldberg, J. A., Bildsten, L., & Paxton, B. 2019, , 879, 3. doi:10.3847/1538-4357/ab22b6
[Grefenstette et al.(2023)]grefenstette23 Grefenstette, B. W., Brightman, M., Earnshaw, H. P., et al. 2023, , 952, L3. doi:10.3847/2041-8213/acdf4e
[Gvaramadze et al.(2014)]gvaramadze14 Gvaramadze, V. V., Menten, K. M., Kniazev, A. Y., et al. 2014, , 437, 843. doi:10.1093/mnras/stt1943
[Harris et al.(2020)]numpy Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, , 585, 357. doi:10.1038/s41586-020-2649-2
[Hiramatsu et al.(2021)]hiramatsu21 Hiramatsu, D., Howell, D. A., Moriya, T. J., et al. 2021, , 913, 55. doi:10.3847/1538-4357/abf6d6
[Hiramatsu et al.(2023)]hiramatsu23 Hiramatsu, D., Tsuna, D., Berger, E., et al. 2023, , 955, L8. doi:10.3847/2041-8213/acf299
[Hosseinzadeh et al.(2023)]hosseinzadeh23 Hosseinzadeh, G., Farah, J., Shrestha, M., et al. 2023, , 953, L16. doi:10.3847/2041-8213/ace4c4
[Hsu et al.(2024)]hsu24 Hsu, B., Smith, N., Goldberg, J. A., et al. 2024, arXiv:2408.07874. doi:10.48550/arXiv.2408.07874
[Hunter(2007)]matplotlib Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90. doi:10.1109/MCSE.2007.55
[Itagaki(2023)]itagaki Itagaki, K. 2023, Transient Name Server Discovery Report, 2023-1158
[Jacobson-Galán et al.(2023)]jacobson23 Jacobson-Galán, W. V., Dessart, L., Margutti, R., et al. 2023, , 954, L42. doi:10.3847/2041-8213/acf2ec
[Jencson et al.(2023)]jencson23 Jencson, J. E., Pearson, J., Beasor, E. R., et al. 2023, , 952, L30. doi:10.3847/2041-8213/ace618
[Jerkstrand et al.(2012)]jerkstrand12 Jerkstrand, A., Fransson, C., Maguire, K., et al. 2012, , 546, A28. doi:10.1051/0004-6361/201219528
[Jerkstrand et al.(2014)]jerkstrand14 Jerkstrand, A., Smartt, S. J., Fraser, M., et al. 2014, , 439, 3694. doi:10.1093/mnras/stu221
[Jerkstrand(2017)]jerkstrand17 Jerkstrand, A. 2017, Handbook of Supernovae, 795. doi:10.1007/978-3-319-21846-5_29
[Jermyn et al.(2023)]mesa23 Jermyn, A. S., Bauer, E. B., Schwab, J., et al. 2023, , 265, 15. doi:10.3847/1538-4365/acae8d
[Kasen & Woosley(2009)]kasen09 Kasen, D. & Woosley, S. E. 2009, , 703, 2205. doi:10.1088/0004-637X/703/2/2205
[Kilpatrick et al.(2023)]kilpatrick23 Kilpatrick, C. D., Foley, R. J., Jacobson-Galán, W. V., et al. 2023, , 952, L23. doi:10.3847/2041-8213/ace4ca
[Kippenhahn et al.(1980)]kippenhahn80 Kippenhahn, R., Ruschenplatt, G., & Thomas, H.-C. 1980, , 91, 175
[Kumar et al.(2016)]kumar16 Kumar, B., Pandey, S. B., Eswaraiah, C., et al. 2016, , 456, 3157. doi:10.1093/mnras/stv2720
[Kuncarayakti et al.(2020)]kuncarayakti20 Kuncarayakti, H., Folatelli, G., Maeda, K., et al. 2020, , 902, 139. doi:10.3847/1538-4357/abb4e7
[Leonard & Filippenko(2001)]leonard01 Leonard, D. C. & Filippenko, A. V. 2001, , 113, 920. doi:10.1086/322151
[Leonard et al.(2006)]leonard06 Leonard, D. C., Filippenko, A. V., Ganeshalingam, M., et al. 2006, , 440, 505. doi:10.1038/nature04558
[Liu et al.(2023)]liu23 Liu, C., Chen, X., Er, X., et al. 2023, , 958, L37. doi:10.3847/2041-8213/ad0da8
[Lundquist et al.(2023)]lundquist23 Lundquist, M., O'Meara, J., & Walawender, J. 2023, Transient Name Server AstroNote, 160
[Maeda et al.(2002)]maeda02 Maeda, K., Nakamura, T., Nomoto, K., et al. 2002, , 565, 405. doi:10.1086/324487
[Maeda et al.(2006)]maeda06 Maeda, K., Nomoto, K., Mazzali, P. A., et al. 2006, , 640, 854. doi:10.1086/500187
[Maeda et al.(2008)]maeda08 Maeda, K., Kawabata, K., Mazzali, P. A., et al. 2008, Science, 319, 1220. doi:10.1126/science.1149437
[Martinez et al.(2020)]martinez20 Martinez, L., Bersten, M. C., Anderson, J. P., et al. 2020, , 642, A143. doi:10.1051/0004-6361/202038393
[Martinez et al.(2024)]martinez24 Martinez, L., Bersten, M. C., Folatelli, G., et al. 2024, , 683, A154. doi:10.1051/0004-6361/202348142
[Mazzali et al.(2005)]mazzali05 Mazzali, P. A., Kawabata, K. S., Maeda, K., et al. 2005, Science, 308, 1284. doi:10.1126/science.1111384
[Mereminskiy et al.(2023)]mereminskiy23 Mereminskiy, I. A., Lutovinov, A. A., Sazonov, S. Y., et al. 2023, The Astronomer's Telegram, 16065
[Messineo & Brown(2019)]messineo19 Messineo, M. & Brown, A. G. A. 2019, , 158, 20. doi:10.3847/1538-3881/ab1cbd
[Milisavljevic et al.(2010)]milisavljevic10 Milisavljevic, D., Fesen, R. A., Gerardy, C. L., et al. 2010, , 709, 1343. doi:10.1088/0004-637X/709/2/1343
[Modjaz et al.(2008)]modjaz08 Modjaz, M., Kirshner, R. P., Blondin, S., et al. 2008, , 687, L9. doi:10.1086/593135
[Moriya et al.(2011)]moriya11 Moriya, T., Tominaga, N., Blinnikov, S. I., et al. 2011, , 415, 199. doi:10.1111/j.1365-2966.2011.18689.x
[Moriya & Singh(2024)]moriya24 Moriya, T. J. & Singh, A. 2024, arXiv:2406.00928. doi:10.48550/arXiv.2406.00928
[Morozova et al.(2015)]snec15 Morozova, V., Piro, A. L., Renzo, M., et al. 2015, , 814, 63. doi:10.1088/0004-637X/814/1/63
[Morozova et al.(2018)]morozova18 Morozova, V., Piro, A. L., & Valenti, S. 2018, , 858, 15. doi:10.3847/1538-4357/aab9a6
[Nagao et al.(2019)]nagao19 Nagao, T., Cikota, A., Patat, F., et al. 2019, , 489, L69. doi:10.1093/mnrasl/slz119
[Nagao et al.(2024a)]nagao24a Nagao, T., Patat, F., Cikota, A., et al. 2024, , 681, A11. doi:10.1051/0004-6361/202346715
[Nagao et al.(2024b)]nagao24b Nagao, T., Maeda, K., Mattila, S., et al. 2024, , 687, L17. doi:10.1051/0004-6361/202450191
[Neustadt et al.(2024)]neustadt24 Neustadt, J. M. M., Kochanek, C. S., & Smith, M. R. 2024, , 527, 5366. doi:10.1093/mnras/stad3073
[Niu et al.(2023)]niu23 Niu, Z., Sun, N.-C., Maund, J. R., et al. 2023, , 955, L15. doi:10.3847/2041-8213/acf4e3
[Panjkov et al.(2023)]panjkov23 Panjkov, S., Auchettl, K., Shappee, B. J., et al. 2023, arXiv:2308.13101. doi:10.48550/arXiv.2308.13101
[Paxton et al.(2011)]paxton11 Paxton, B., Bildsten, L., Dotter, A., et al. 2011, , 192, 3. doi:10.1088/0067-0049/192/1/3
[Paxton et al.(2013)]paxton13 Paxton, B., Cantiello, M., Arras, P., et al. 2013, , 208, 4. doi:10.1088/0067-0049/208/1/4
[Paxton et al.(2015)]paxton15 Paxton, B., Marchant, P., Schwab, J., et al. 2015, , 220, 15. doi:10.1088/0067-0049/220/1/15
[Paxton et al.(2018)]paxton18 Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, , 234, 34. doi:10.3847/1538-4365/aaa5a8
[Paxton et al.(2019)]paxton19 Paxton, B., Smolec, R., Schwab, J., et al. 2019, , 243, 10. doi:10.3847/1538-4365/ab2241
[Qin et al.(2023)]qin23 Qin, Y.-J., Zhang, K., Bloom, J., et al. 2023, arXiv:2309.10022. doi:10.48550/arXiv.2309.10022
[Riess et al.(2022)]riess22 Riess, A. G., Yuan, W., Macri, L. M., et al. 2022, , 934, L7. doi:10.3847/2041-8213/ac5c5b
[Soraisam et al.(2018)]soraisam18 Soraisam, M. D., Bildsten, L., Drout, M. R., et al. 2018, , 859, 73. doi:10.3847/1538-4357/aabc59
[Soraisam et al.(2023)]soraisam23 Soraisam, M. D., Szalai, T., Van Dyk, S. D., et al. 2023, , 957, 64. doi:10.3847/1538-4357/acef22
[Sukhbold et al.(2016)]kepler16 Sukhbold, T., Ertl, T., Woosley, S. E., et al. 2016, , 821, 38. doi:10.3847/0004-637X/821/1/38
[Schlafly & Finkbeiner(2011)]schlafly11 Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103. doi:10.1088/0004-637X/737/2/103
[Singh et al.(2024)]singh24 Singh, A., Teja, R. S., Moriya, T. J., et al. 2024, arXiv:2405.20989. doi:10.48550/arXiv.2405.20989
[Smith et al.(2023)]smith23 Smith, N., Pearson, J., Sand, D. J., et al. 2023, , 956, 46. doi:10.3847/1538-4357/acf366
[Taubenberger et al.(2009)]taubenberger09 Taubenberger, S., Valenti, S., Benetti, S., et al. 2009, , 397, 677. doi:10.1111/j.1365-2966.2009.15003.x
[Teja et al.(2023)]teja23 Teja, R. S., Singh, A., Basu, J., et al. 2023, , 954, L12. doi:10.3847/2041-8213/acef20
[Temaj et al.(2024)]temaj24 Temaj, D., Schneider, F. R. N., Laplace, E., et al. 2024, , 682, A123. doi:10.1051/0004-6361/202347434
[Utrobin & Chugai(2019)]utrobin19 Utrobin, V. P. & Chugai, N. N. 2019, , 490, 2042. doi:10.1093/mnras/stz2716
[Utrobin et al.(2021)]utrobin21 Utrobin, V. P., Chugai, N. N., Andrews, J. E., et al. 2021, , 505, 116. doi:10.1093/mnras/stab1369
[van Baal et al.(2023)]vanBall23 van Baal, B. F. A., Jerkstrand, A., Wongwathanarat, A., et al. 2023, , 523, 954. doi:10.1093/mnras/stad1488
[Van Dyk et al.(2024)]vandyk24 Van Dyk, S. D., Srinivasan, S., Andrews, J. E., et al. 2024, , 968, 27. doi:10.3847/1538-4357/ad414b
[Vasylyev et al.(2023)]vasylyev23 Vasylyev, S. S., Yang, Y., Filippenko, A. V., et al. 2023, , 955, L37. doi:10.3847/2041-8213/acf1a3
[Vasylyev et al.(2024)]vasylyev24 Vasylyev, S. S., Yang, Y., Patra, K. C., et al. 2024, , 527, 3106. doi:10.1093/mnras/stad3352
[Virtanen et al.(2020)]scipy Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261. doi:10.1038/s41592-019-0686-2
[Wang & Wheeler(2008)]wang08 Wang, L. & Wheeler, J. C. 2008, , 46, 433. doi:10.1146/annurev.astro.46.060407.145139
[Yamanaka et al.(2023)]yamanaka23 Yamanaka, M., Fujii, M., & Nagayama, T. 2023, , 75, L27. doi:10.1093/pasj/psad051
[Yoon & Cantiello(2010)]yoon10_pulse Yoon, S.-C. & Cantiello, M. 2010, , 717, L62. doi:10.1088/2041-8205/717/1/L62
|
http://arxiv.org/abs/2409.03374v1 | 20240905092504 | Toward a universal characterization methodology for conversion gain measurement of CMOS APS: application to Euclid and SVOM | [
"Jean Le Graët",
"Aurélia Secroun",
"Marie Tourneur-Silvain",
"Éric Kajfasz",
"Jean-Luc Atteia",
"Olivier Boulade",
"Alix Nouvel de la Flèche",
"Hervé Geoffray",
"William Gillard",
"Stéphanie Escoffier",
"Francis Fortin",
"Nicolas Fourmanoit",
"Smaïn Kermiche",
"Hervé Valentin",
"Julien Zoubian"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.HE"
] |
§ ABSTRACT
With the expanding integration of infrared instruments in astronomical missions, accurate per-pixel flux estimation for near-infrared hybrid detectors has become critical to the success of these missions. Based on CPPM's involvement in both the SVOM/Colibri and Euclid missions, this study introduces universally applicable methods and a framework for characterizing IR hybrid detectors and decorrelating their intrinsic properties.
The characterization framework, applied to the ALFA detector and to Euclid's H2RG, not only validates the proposed methods but also points out subtle behaviors inherent to each detector.
§ INTRODUCTION
With the advent of CMOS Active Pixel Sensor (APS) technology and the continuously improving performance of infrared hybrid detectors, more space and ground missions tend to include an infrared channel, whether photometric or spectroscopic. This is the case for the European Space Agency's (ESA) Euclid mission <cit.>, launched in 2023, and for the Sino-French mission SVOM <cit.> (Space-based multi-band astronomical Variable Objects Monitor), planned for launch on the 22nd of June. Both aim, in different ways, at understanding the evolution of our Universe, a field of physics of strong scientific interest for the Center for Particle Physics in Marseille (CPPM). Naturally, CPPM has become involved in both projects, in particular taking responsibility for characterizing the scientific performance of their infrared detectors.
With missions increasingly aiming for ambitious scientific goals, the technical requirements have similarly intensified, necessitating an unprecedented understanding of detector performance right down to the pixel level. Despite significant advances, accurate pixel-level performance assessment remains difficult. It requires not only handling millions of pixels but also accounting for interactions between pixels, in other words, considering the correlations between the various physical effects occurring within the pixels. From the perspective of detector characterization, it has become evident that we could benefit from a universal framework that considers these various factors while remaining independent of mission-specific details.
In this paper, we take the initial steps toward establishing such a framework by standardizing the conversion gain measurement of CMOS APS. Our focus is on deriving a per-pixel conversion gain that is decorrelated from nonlinearity and interpixel capacitance (IPC). The approach is validated through two series of characterizations of scientific performance: Euclid's flight H2RGs <cit.>, manufactured by Teledyne, and SVOM's Astronomical Large Format Array (ALFA) <cit.>, the fruit of a collaboration between CEA/LETI and Lynred. This framework not only ensures precise conversion gain measurements but also enhances the accuracy of deriving related parameters such as read noise, dark current, and quantum efficiency, all of which rely on a precise gain value. Furthermore, it enables effective comparison of detector performance, underscoring its significance and potential for broader adoption.
In the following, Sect. <ref> provides a brief review of the method used to measure the conversion gain. This method relies on our original “nonlinear mean-variance” method combined with the correction of the IPC-related bias. Section <ref> lays the groundwork for our characterization framework, including data selection, test bench requirements, and data processing specificities. In Sect. <ref>, we derive the conversion gain and IPC maps within our framework for the two different detectors: Euclid's H2RG and ALFA are both 2 k× 2 k MCT-based hybrid detectors working in the short-wavelength infrared range, with cutoffs at 2.3 and 2.1 μm and pixel pitches of 18 and 15 μm, respectively.
§ METHOD FOR CONVERSION GAIN
The method used in this paper to measure the conversion gain has been defined and validated previously (see Le Graët et al. <cit.> for a detailed description). It is intended to be easily applicable to any CMOS APS and to give an unbiased estimate of the conversion gain, decorrelated from the pixels' nonlinearity and corrected for the IPC bias. The main elements of this method are recalled hereafter. It may be divided into two parts: the nonlinear mean-variance method, which addresses the effect of nonlinearity on the measurement, and the IPC correction method, which provides a simple way to correct the IPC-biased gain.
Nonlinear mean-variance method
The nonlinear (NL) mean-variance method has been derived to take into account the nonlinearity of the pixel response in gain measurement. It is directly adapted from the well-known mean-variance method <cit.> that uses the relation between variance and mean of the measured signal to derive the conversion gain.
The issue is that the conversion gain, generally assumed to be constant, actually depends on the integrated charge. The primary reason for this dependence is that the pixel response is inherently nonlinear, a well-known fact. Most of this nonlinearity comes from the charge-to-voltage conversion, where the pn-junction capacitance decreases as charges accumulate in the pixel. The transistors used for voltage signal amplification and buffering also contribute to this nonlinearity, although their impact is usually much smaller (less than 1 %). To account for the measured gain's dependence on the integrated charge, our NL mean-variance method employs a nonlinear pixel response model based on a polynomial representation, rather than the linear pixel response model of the classic mean-variance approach.
In a previous paper <cit.>, we proved that using a polynomial pixel response of order 2 or 3 as follows
S = (1/g) (Q+β Q^2) or S = (1/g) (Q+β Q^2+γ Q^3) ,
leads to nonlinear mean-variance equations given by respectively
σ_S^2 ≈ (1/g) S̅ + 3β S̅^2 + σ_R^2 or σ_S^2 ≈ (1/g) S̅ + 3β S̅^2 + g( 5γ - 2β^2 ) S̅^3 + σ_R^2 .
In these equations, S̅ (ADU) is the mean output signal of a pixel that has integrated a charge Q (in e^-), g denotes the conversion gain in e^- ADU^-1, β (e^-1) and γ (e^-2) are the nonlinearity coefficients of order 2 and 3, respectively, and σ_R^2 is the readout noise. In the following sections of this paper, the method based on a second-order polynomial will be referred to as NL2, and the method based on a third-order polynomial as NL3. Fitting the curve of the variance as a function of the mean with a polynomial of order 2 or 3 then allows the conversion gain to be derived from the first-order coefficient.
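For illustration, a minimal Python sketch of this fit for one pixel or superpixel is given below; the variable names are ours and the conversion of β to physical units is omitted.

import numpy as np

def fit_gain(mean_adu, var_adu2, order=2):
    """NL2 (order=2) or NL3 (order=3) mean-variance fit for one (super)pixel."""
    coeffs = np.polyfit(mean_adu, var_adu2, order)   # highest degree first, constant ~ sigma_R^2
    c1 = coeffs[-2]                                  # linear term, ~ 1/g
    gain = 1.0 / c1                                  # conversion gain
    quad = coeffs[-3]                                # quadratic term, ~ 3*beta in the equation above
    return gain, quad / 3.0, coeffs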
IPC bias correction method
The second method proposes a correction of the bias created by IPC. IPC originates from the close proximity of pixels, leading to a superposition of the electric fields of adjacent pixels, which results in a parasitic capacitance between them <cit.>. Thus IPC induces electrical crosstalk between close pixels, creating spatial correlations that bias the estimation of the variance of the signal. The effect of IPC on signal is typically modeled <cit.> as a convolution with the 2D impulse response h of a pixel
S_ meas = S_ true * h ,
where S_ meas is the signal detected in a pixel and S_ true is the signal that would be detected if IPC were null, i.e., if h were a kernel with a central value of one and zeros elsewhere. In a previous paper <cit.>, we proposed to use a general form of the h kernel such as
h = [ α_1  α_2  α_3 ;
      α_4  1 - ∑_i=1^8 α_i  α_5 ;
      α_6  α_7  α_8 ] ,
and we demonstrated that the effect of IPC on gain estimation using the mean-variance method is given by
g = ĝ k ,
with k = 1 - 2∑_i α_i + (∑_i α_i )^2 + (∑_i α_i^2 ) ,
where ĝ is the IPC-biased gain and g the “true”, IPC-corrected gain. Consequently, a simple multiplication of the measured gain by the corrective factor k suffices to obtain an IPC-corrected gain. The calculation of this factor only requires a precise measurement of the α_i IPC coefficients.
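For illustration, this correction can be applied per pixel in a few lines of Python, assuming an array alpha of shape (8, ny, nx) holding the eight measured coefficients (array names are ours).

import numpy as np

def ipc_corrected_gain(gain_biased, alpha):
    """Apply g = g_hat * k with k = 1 - 2*sum(alpha) + sum(alpha)^2 + sum(alpha^2)."""
    s1 = alpha.sum(axis=0)            # sum_i alpha_i per pixel
    s2 = (alpha ** 2).sum(axis=0)     # sum_i alpha_i^2 per pixel
    k = 1.0 - 2.0 * s1 + s1 ** 2 + s2
    return gain_biased * k            # for superpixels, k can first be averaged per 16x16 box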
The two methods introduced here were previously validated on one of Euclid's 16 H2RG flight detectors, using data from the ground characterization campaign conducted at CPPM. Nevertheless, to apply these methods to any CMOS APS used in low-light imaging, it is essential to establish a framework that enables the construction of a consistent mean-variance curve, regardless of the detector's technology or of the observation strategy of the mission using these detectors. This framework is described in the following section.
§ DEFINITION OF A GENERAL FRAMEWORK
In order to make proper use of the method just defined and apply it to both the ALFA and H2RG detectors, a rigorous framework must be defined. This framework should take into account the concrete configuration of each detector, the test environment, and the specifics of the data taken during their characterization at CPPM.
Clearly, each mission has its own optimized observing strategy depending on the targets it aims to observe. For instance, Euclid shall survey 15 000 deg^2 of extragalactic sky, alternating photometric and spectrometric acquisitions. Integration times for spectrometry can be as long as ten minutes. Meanwhile, SVOM/Colibri's <cit.> infrared channel will offer ground follow-up observations of γ-ray burst afterglows, consisting of several consecutive short exposures.
Consequently detectors' operation must be adapted to the mission objectives. Table <ref> gives an overview of the main operating configurations for H2RG and ALFA on their respective projects.
When measuring the conversion gain, using each detector in its respective operating configuration, including pixel bias and electronic gain, could introduce biases due to these differences.
To prevent such biases, it is necessary either to ensure that the operational parameters do not bias the gain measurement or to incorporate standard parameter values into the framework. During characterization, it was observed that variations in wavelength and detector temperature (within 85–100 K) do not impact the gain measurement. Hence, acquisitions across different wavelength bands and temperatures will be used interchangeably. Details regarding the chosen standard values and the rationale behind the selections are provided below.
Data selection
To prevent biases arising from differences in detector operation, data selection criteria must be included in the framework. The first requirement for accurately measuring the conversion gain is to ensure that the noise is dominated by photon shot noise. For Euclid's H2RGs, flat-field ramps taken under fluxes between 16 and 1000 s^-1 were used. For ALFA, because its readout noise is approximately 5 times higher than that of Euclid's H2RG, ramps taken under fluxes between 200 and 1000 s^-1 were used. All the acquisitions were taken at sensor temperatures between 85 K and 100 K.
As persistence <cit.> is not yet included in our model of the pixel response, it is essential to mitigate its effects. For fluxes higher than 200 s^-1, it was observed for Euclid's H2RG and for ALFA that the effect of persistence becomes negligible once the integrated flux exceeds 10 ke^-. Therefore, for these ramps, the frames before reaching an integrated flux of 10 ke^- were excluded. For lower fluxes, in the case of Euclid, it has been demonstrated <cit.> that for ramps of 400 frames the persistence is negligible after the first 100 frames, so these first 100 frames were excluded. Additionally, it was decided to limit the integrated fluxes to 70 % of the full well to avoid effects appearing near the saturation of the pixel photodiode. Finally, to prevent the measurements from being biased by outlier pixels, several masks were applied that remove overall less than 3 % of the entire matrix, as outlined below (a short sketch of how these masks can be combined follows the list):
* disconnected pixels;
* pixels that are saturated early in the ramp;
* pixels with a high baseline to avoid ADC saturation;
* highly nonlinear pixels, including cosmic rays, thanks to a quality factor based on the goodness of a linear fit on the ramp <cit.>.
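A minimal sketch of the combination of these masks into a single map of valid pixels is given below; the thresholds and the quality-factor cut are placeholders for the mission-specific values.

import numpy as np

def valid_pixel_mask(connected, early_saturated, baseline, quality,
                     baseline_max=5000.0, quality_min=0.99):
    mask = connected & ~early_saturated      # criteria 1 and 2
    mask &= baseline < baseline_max          # criterion 3: avoid ADC saturation
    mask &= quality > quality_min            # criterion 4: linear-fit quality factor
    return mask                              # True for pixels kept in the analysis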
Test bench
Although standardized data selection helps avoid biases in the gain measurement, the test bench used to acquire the data can also introduce systematic errors in the gain estimation. Therefore, the performance of this test bench must meet minimum requirements. CPPM's test bench, dedicated to the characterization of infrared detectors <cit.>, has been designed to meet stringent specifications. Its performance has been thoroughly validated with engineering-grade detectors, proving highly efficient in minimizing systematic errors during data analysis. Table <ref> presents the main critical parameters and their specifications.
Data processing
In addition to the data selection criteria and the performance requirements of the test bench, the data processing methods used to generate the mean-variance curve and measure IPC coefficients need to be integral to the framework. The mean-variance curves are constructed using flat field acquisitions.
Typically, one mean-variance curve is obtained from M flat-field ramps taken at the same flux, using the variance and the mean across the M ramps for each pixel. To reduce the number of acquisitions required to achieve a given accuracy, it is also possible to assume both that there are no spatial correlations between pixels and that spatial gain variations are negligible at small scales. Consequently, the variance and the mean of the signal across a box of N× N pixels can be used to build the mean-variance curve. This method requires only two similar ramps (to eliminate fixed-pattern noise) to make one measurement of the conversion gain per superpixel rather than M. For the ALFA detector, 637 pairs of ramps meet all our selection criteria, while 376 pairs meet the criteria for the H2RG detector. The conversion gain estimate for each superpixel is then the average of the gains measured from each pair of ramps.
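For illustration, one mean-variance point per 16×16 superpixel can be built from a pair of CDS signals taken at the same flux as sketched below; differencing the two signals removes the fixed-pattern noise, and the factor 2 accounts for the variance of a difference (array names and shapes are ours).

import numpy as np

def superpixel_mean_variance(cds_a, cds_b, box=16):
    """One (mean, variance) point per box x box superpixel from two CDS images in ADU."""
    ny, nx = cds_a.shape
    a = cds_a.reshape(ny // box, box, nx // box, box)
    b = cds_b.reshape(ny // box, box, nx // box, box)
    mean = 0.5 * (a + b).mean(axis=(1, 3))
    var = 0.5 * (a - b).var(axis=(1, 3), ddof=1)   # FPN-free variance per superpixel
    return mean, var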
After measuring the conversion gain using the NL mean-variance method with the data outlined above, correcting the IPC bias requires measuring the IPC coefficients. For both detectors, techniques based on resetting a grid of isolated pixels and observing the effect on the signal of their neighbors were chosen. For the Euclid detector, the single pixel reset (SPR) method <cit.> was used. It consists of an acquisition under dark conditions with a reset of the full detector at a nominal bias, followed by a reset of a grid of pixels at a different bias. By observing the amount of signal detected in the neighbors of the pixels reset at a different bias, the IPC coefficients α_i can be measured. A detailed description of how SPR was applied to the Euclid detector may be found in Le Graet et al. (2022) <cit.>. The advantage of this method is that, because the acquisition is under dark conditions, there is no diffusion, and thus only the IPC is measured. Furthermore, by changing the grid of reset pixels, it is possible to measure the IPC coefficients of every pixel. Unfortunately, for ALFA, resetting a grid of pixels at a different bias is not available. However, a grid of pixels can be continuously reset during an acquisition. Using the method defined by Finger et al. <cit.>, the IPC coefficients can be measured by comparing a nominal acquisition under flux with an acquisition under the same flux in which a grid of pixels is continuously reset. How this method is applied to ALFA will be detailed in a future article.
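For illustration, the extraction of the 3×3 kernel around each reset pixel from SPR-type data can be sketched as follows; diff denotes the difference between the nominal frame and the frame with the reset grid, rows and cols are the (interior) coordinates of the reset pixels, and the normalization convention follows the kernel defined above. This is a simplified sketch; the actual analysis also accounts for nonlinearity and bias differences.

import numpy as np

def spr_ipc_kernel(diff, rows, cols):
    stamps = np.array([diff[r - 1:r + 2, c - 1:c + 2] for r, c in zip(rows, cols)])
    kernels = stamps / stamps.sum(axis=(1, 2), keepdims=True)   # each kernel sums to 1
    alpha = kernels.copy()
    alpha[:, 1, 1] = 0.0                  # the eight alpha_i are the off-center values
    return kernels.mean(axis=0), alpha    # average kernel and per-stamp coefficients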
In summary, the characterization framework for conversion gain measurement that we have just defined is the combination of the NL mean-variance method, the correction of the IPC bias on the gain measurement, flat fields acquired with a test bench of sufficient performance, and the data selection criteria. In the following section, the results of this methodology applied to ALFA and to the Euclid detector are presented.
§ CONVERSION GAIN MEASUREMENT RESULTS
The same framework has been applied to one of Euclid's flight H2RGs and to the ALFA detector. Detailed results are presented and discussed hereafter.
§.§ Application of nonlinear mean-variance methods
The three mean-variance methods presented in Section <ref> assume a gain that is constant with respect to the integrated signal (i.e., the total number of electrons accumulated by a pixel). To ascertain that the nonlinear mean-variance approach is broadly applicable, it is crucial to demonstrate that the conversion gain measured with these methods does not depend on the integrated signal. Here we have taken the opportunity to test them on two different detectors: ALFA and Euclid's H2RG. For this purpose, the ramps selected for gain measurement were divided into subsets corresponding to equal integrated signals, and the classical, NL2, and NL3 mean-variance methods were applied to each subset. For each level of integrated signal, the gain was calculated per superpixel as the average of all corresponding measurements and then averaged across the detector. Figure <ref> shows the mean conversion gain as a function of the integrated signal (determined through LED calibration) for Euclid's H2RG (left) and ALFA (right). The error bars include both statistical and systematic errors; the former are minimal due to averaging across the detector, while the latter arise from discrepancies in gain measurement when using ramps with identical integrated signals but different fluxes.
As may be seen from the blue data in Figure <ref>, it is clear that, for both detectors, the conversion gain measured by the classic mean-variance method increases with the integrated signal. Specifically, for Euclid's H2RG, the estimated gain increases by approximately 0.9 % per 10 ke^-, while for ALFA it increases by about 0.4 % per 10 ke^-. Thus, using this method to measure the gain leads to a biased estimate that depends on the integrated signal selected for the measurement. Subsequent results demonstrate that both detectors exhibit comparable nonlinearity, but the impact of nonlinearity on the gain measurement is less significant for the ALFA detector due to its higher conversion gain.
Regarding the NL2 mean-variance method, the H2RG gain remains consistent with a constant value at integrated signals below 50 ke^-, but increases at higher integrated signals. Conversely, ALFA's gain remains constant, within uncertainties, across all integrated signals. Additionally, the NL2 mean-variance method gives an estimate of the β coefficient (representative of the detector's nonlinear behavior): (-4.2±0.1) × 10^-7 e^-1 and (-1.3±0.3) × 10^-7 e^-1 for Euclid's H2RG and ALFA, respectively. Using the NL3 mean-variance method, the H2RG gain aligns with a constant value, within uncertainties, for all integrated signals, while ALFA's gain increases with the integrated signal. The β coefficients estimated for the H2RG and ALFA are (-5.6 ± 0.6) × 10^-7 e^-1 and (-2 ± 6) × 10^-8 e^-1, respectively. The β coefficients obtained from the NL2 and NL3 mean-variance methods may be compared to those derived during the detectors' characterization. Notably, the Euclid mission <cit.> and the SVOM mission <cit.> also employ models based on a nonlinear pixel response, as described in Eq. (<ref>), to fit the signal ramps and correct the impact of nonlinearity on flux measurements. The β coefficients derived from the ramp nonlinearity characterization are approximately -5 × 10^-7 e^-1 for Euclid's H2RG and roughly -4 × 10^-7 e^-1 for ALFA. These characterization values are comparable to those obtained from the NL2 and NL3 mean-variance methods for Euclid's H2RG and to the one obtained from the NL2 mean-variance method for ALFA.
These results demonstrate that for Euclid's H2RG, a third-order polynomial model most accurately describes the pixel response. At high integrated signals, the increasing gain measured by the NL2 mean-variance method indicates that a second-order polynomial fails to describe the pixel response accurately. However, the similarity of the β coefficient from the NL2 mean-variance method to the value estimated during characterization suggests that a second-order polynomial may be sufficient to describe the pixel's behavior. The discrepancy noted at high integrated signal levels could be due to variations in the data related to the length of the ramp: anomalies at the beginning and end of an acquisition significantly affect shorter ramps, such as those used for the three high-signal points (fewer than 100 frames). Future frameworks should incorporate the ramp length to mitigate these effects. For the ALFA detector, the suitability of the second-order polynomial to describe the mean-variance curve is clear. The observation that the gain from the NL3 mean-variance method increases across all integrated signals, coupled with a β coefficient significantly different from the one obtained during characterization, underscores the model's inadequacy in describing the pixel response. This discrepancy may originate from unaccounted-for effects, such as persistence or diffusion, affecting the mean-variance curve. Nonetheless, the application of a nonlinear mean-variance method within a coherent framework enables the measurement of a constant conversion gain, effectively correcting for the nonlinearity of the pixel response. The flexibility to employ either the NL2 or NL3 variant allows for tailored adaptation to the distinct behaviors of the detectors.
§.§ Correction of IPC bias on gain measurement
To apply the correction of the IPC bias on the gain measurement using the methods previously explained, the initial step is to measure the IPC coefficients of each pixel. As mentioned in Sect. <ref>, two distinct techniques were used: the SPR technique for Euclid's H2RG and the method described by Finger <cit.> for ALFA. For both methods, it was decided to limit the IPC kernel to a 3 × 3 size, as IPC is not detectable beyond this range. The eight α_i IPC coefficients of Euclid's H2RG were presented in a previous publication <cit.>, and those of ALFA will be discussed in an upcoming paper. Nevertheless, maps of the total IPC (sum of the α_i coefficients) for the Euclid H2RG and ALFA matrices are shown in Fig. <ref> and Fig. <ref>, respectively. Due to limitations in ALFA's readout-mode capabilities, it is impossible to maintain the first column of each readout channel under reset. Consequently, the corresponding IPC coefficients have not been measured. Furthermore, Table <ref> presents the median values and statistical uncertainties of the total IPC for both detectors.
The median total IPC is very similar for the two detectors, even though ALFA pixels are closer together (15 μm pitch) than the H2RG's (18 μm). Typically, closer pixel spacing increases IPC, as demonstrated by Teledyne's H4RG detectors <cit.>. This suggests that the strategies adopted by Lynred to minimize IPC have been effective. However, excluding the dark blue zone of the H2RG map (to be discussed later), the IPC is significantly more uniform for Euclid's H2RG (within ±10 % at 2σ) than for ALFA (within ±40 % at 2σ). Factors such as the spacing and the size of the indium bumps may influence this uniformity. Nevertheless, these substantial spatial variations underline the necessity of measuring IPC on a per-pixel basis, since using an average value could introduce biases of about 10 % for Euclid's H2RG and 40 % for ALFA. These biases would then propagate to the corrections made for the IPC bias in both gain measurements and PSF size estimation. Observations from Fig. <ref> reveal two distinct regions: the center (dark blue) and the surrounding areas (green-blue). The dark blue region has been identified as an epoxy void area, where the epoxy between the sensitive layer and the silicon multiplexer is missing. In this region, referred to hereafter as the “void region”, the IPC is more than twice as low as in the rest of the detector, which will be designated as the “epoxy region”. This discrepancy, previously noted by Brown <cit.>, is attributed to the epoxy's dielectric constant being approximately four times higher than that of air. Finally, the uncertainties associated with the IPC measurements suggest that the SPR method provides a more precise determination of IPC than the Finger method, primarily because measurements using the Finger method are limited by photon shot noise.
Thanks to the measurement of the IPC for each pixel of the detectors, it is now possible to calculate the corrective factor defined in Eq. (<ref>). Given that superpixels of size 16× 16 were used, the corrective factors were averaged within each superpixel. These averaged values were subsequently applied to the gains derived using the NL3 mean-variance method for Euclid's H2RG and the NL2 mean-variance method for ALFA. The gain estimate is the mean of the gains derived from each pair of ramps selected according to the criteria outlined in Sect. <ref>. The histograms of the superpixel conversion gains of the ALFA (left) and Euclid H2RG (right) detectors, both before and after correction, are illustrated in Fig. <ref>. The main outcome from these figures is that this method corrects a bias in the gain measurement of approximately 6 % for ALFA and 5 % for Euclid's H2RG. The histogram of Euclid's H2RG before IPC correction reveals a minor peak at 1.9 e^- ADU^-1, corresponding to the previously identified void region. After correction, the disparity between this peak and the one corresponding to the epoxy region has significantly decreased, from 5.5 % to 2.5 %. However, a residual difference between these two regions persists, suggesting that the epoxy void also influences the gain of the pixels. Except for this void region, the effect of correcting the IPC bias on the gain is very similar for both detectors.
§.§ Usefulness of per-superpixel unbiased conversion gain
The methods and framework previously described result in the conversion gain maps, superpixel histograms and error histograms displayed in Fig. <ref> for ALFA and Fig. <ref> for Euclid's H2RG. The error associated with the conversion gain estimate is calculated statistically using the standard deviation σ_ g across all measured gains for each superpixel, defined as err = σ_ g / √(M), where M represents the number of pairs of ramps used. For ALFA, the mean gain is about 8.12±0.06 e^- ADU^-1, while it is 1.91±0.02 e^- ADU^-1 for Euclid's H2RG. This corresponds to node capacitances of approximately 60 fF for ALFA and 30 fF for Euclid's H2RG. A previous analysis, conducted under settings akin to those applied in our measurements and using the classic mean-variance method with IPC correction, estimated ALFA's conversion gain <cit.> to be approximately 10 e^- ADU^-1, a discrepancy of 20 %. This difference emphasizes the need for a coherent framework to avoid discrepancies between measurements of the same parameter. For both detectors, the use of 16×16 superpixels achieves a gain measurement accuracy better than 1 %, meeting the objectives outlined in this study. Obviously, to increase the resolution, more statistics are required. For Euclid's H2RG, as previously mentioned, the conversion gain in the void region is smaller than in the epoxy region by about 2.5 %. Excluding this region, the conversion gain of Euclid's H2RG shows greater uniformity (within ± 2 % at 2 σ) compared to ALFA (within ± 3.2 % at 2 σ), as observed for the IPC coefficients. Such differences likely originate from the distinct manufacturing processes of Lynred and Teledyne. These spatial variations highlight the necessity of measuring the conversion gain at least on a per-superpixel basis to eliminate biases in subsequent gain-dependent measurements. Lastly, the absence of correlation between the conversion gain maps and the persistence maps of each detector demonstrates that the persistence mitigation strategies are effective.
Conversion gain is a fundamental parameter measured early during detector characterization because nearly all other critical parameters, such as quantum efficiency (QE), dark current, and readout noise, depend on a known gain value for their measurement. In the following, we demonstrate how a precise measurement of the gain, per pixel or per superpixel, within a coherent framework enables an accurate determination of the QE and allows the QE of different detectors to be compared.
The standard method to measure the QE involves comparing the pixel output of the detector, in e^- s^-1, with that of a calibrated photodiode. However, to convert the pixel flux from ADU s^-1 to e^- s^-1, it is necessary to apply a conversion gain. Typically, a mean gain value is applied across the entire detector. This approach may cause spatial variations in QE to be mixed with variations in the conversion gain. By using the conversion gain measured per superpixel, as described previously, one can instead disentangle the effects of gain and QE, thereby achieving a more precise measurement of the QE.
Since the test bench at CPPM does not include a calibrated photodiode, an absolute measurement of the QE of the two detectors is not possible. Nevertheless, the excellent homogeneity of the flat field at CPPM, better than 1 %, allows relative QE measurements. To evaluate the benefit of using a mean gain versus a per-superpixel unbiased gain, one flat-field acquisition per detector, with fluxes of some 700 photons s^-1, is used. For each acquisition, the flux in ADU s^-1 is calculated using correlated double sampling. Then, either the spatial mean gain or the previously measured per-superpixel conversion gain is used to convert these values from ADU s^-1 to e^- s^-1. The results are then normalized by their spatial mean to derive a relative QE measurement. Figures <ref> for ALFA and <ref> for Euclid's H2RG show the normalized relative QE maps obtained using the mean gain (left) and the per-superpixel gain (right).
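For illustration, the two relative-QE maps can be produced from the same flux map as sketched below; array names are ours, and the per-superpixel gain map is simply expanded back to pixel resolution before the conversion.

import numpy as np

def relative_qe(flux_adu_per_s, gain, box=16):
    """Relative QE map from a CDS flux map (ADU/s) and a scalar or per-superpixel gain."""
    if np.ndim(gain) == 0:
        gain_pix = gain                                  # single mean gain
    else:
        gain_pix = np.kron(gain, np.ones((box, box)))    # expand superpixel gain to pixels
    flux_e = flux_adu_per_s * gain_pix                   # ADU/s -> e-/s
    return flux_e / np.nanmean(flux_e)                   # normalize by the spatial mean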
For both detectors, the maps appear significantly flatter when using the per-superpixel gain; the readout channels are no longer discernible, and regions with substantial deviations from the spatial mean are attenuated. For Euclid's H2RG, spatial features are almost undetectable, indicating a remarkably uniform QE (and consequently, a uniform sensitive layer). However, for the ALFA detector, some areas still exhibit responses 10 % above the spatial mean, suggesting a less uniform sensitive layer compared to the H2RG detectors. This difference in uniformity might come from minor instabilities during the fabrication of the sensitive layer or from the differing growth techniques (LPE for ALFA versus MBE for H2RG). It is important to note that the H2RG detectors are flight models, obviously more mature than ALFA, the first prototype of a very-low-flux 2 k×2 k NIR detector developed by CEA and Lynred.
§ CONCLUSION
This study has successfully validated the methodologies and framework developed for characterizing the conversion gain of CMOS APS detectors, specifically using the infrared detectors of the Euclid and SVOM missions. By employing nonlinear mean-variance methods, we have demonstrated that the conversion gain can be measured as a constant across various levels of signal integration for both detectors, thereby underscoring the robustness of our methods in addressing diverse detector behaviors.
Our integrated framework, combining methodologies that efficiently decorrelate IPC and the nonlinearity of the pixel response from the gain measurement, significantly reduces biases in the gain estimation. By applying this approach, we have systematically eliminated biases associated with these correlations of approximately 7 %. Furthermore, the development of a stringent framework that incorporates robust data selection criteria, such as excluding frames with integrated signals below 10 ke^- to mitigate persistence and limiting measurements to 70 % of the full well capacity to avoid nonlinear effects near saturation, ensures that our gain measurements are unaffected by adverse environmental conditions. This rigorous approach has corrected previously significant discrepancies, in particular a 20 % error in previously reported gain measurements for the ALFA detector.
Moreover, the creation of accurate gain maps allows for precise measurements of pixel response parameters that are decorrelated from the conversion gain. For example, it enables a precise measurement of the quantum efficiency (QE). By applying this framework to ALFA and Euclid's H2RG, we have observed significant differences in the spatial uniformity of the QE of the two detectors. The H2RG detectors used in the Euclid mission, produced using MBE, exhibit greater uniformity, indicating a potentially more controlled manufacturing environment compared to the LPE used for ALFA. This analysis not only allows us to compare the quality of the sensitive layers but also to understand the impact of the different production techniques on detector performance.
In conclusion, the successful application of our characterization framework to multiple detector types not only ensures the accuracy of fundamental detector parameters but also provides a detailed evaluation of the fabrication processes and operational efficiencies across different detectors. As such, it serves as a valuable tool for advancing the field of detector technology and improving the data reliability of space and ground-based astronomical missions.
This work was developed within the framework of a PhD thesis funded by CNES and CNRS.
|
http://arxiv.org/abs/2409.02400v1 | 20240904025134 | Les Canards de Turing | [
"Theodore Vo",
"Arjen Doelman",
"Tasso J. Kaper"
] | math.DS | [
"math.DS",
"35B36, 34E17, 34E15, 35B25"
] |
Les Canards de Turing
Theodore Vo, Arjen Doelman, and Tasso J. Kaper
September 9, 2024
================================================================
§ ABSTRACT
In this article, we study a prototypical system of reaction-diffusion equations in which the diffusivities are widely separated. We report on the discovery of families of spatially periodic canard solutions that emerge from singular Turing bifurcations.
We show that the small-amplitude, spatially periodic solutions that emerge from the Turing bifurcations form families of spatially periodic canards that oscillate about the homogeneous equilibrium, with wavenumbers near the critical value obtained from the Turing analysis.
The emergence of these spatially periodic canards asymptotically close to the Turing bifurcations, which are reversible 1:1 resonant Hopf bifurcations in the spatial ODE system, is an analog in spatial dynamics of the emergence of limit cycle canards in the canard explosions that occur asymptotically close to Hopf bifurcations in time-dependent ODEs.
We also find families of large-amplitude, spatially periodic canards, including some with 𝒪(1) wavenumber and some with small wavenumbers.
These lie further from the homogeneous state and have a “fast-slow" spatial structure, with segments of steep gradients and segments of gradual variation.
In the full PDE system, we show that for most parameter values under study the Turing bifurcation is sub-critical, and we present the results of some direct numerical simulations showing that several of the different types of spatial canard patterns are attractors in the prototypical PDE.
To support the main numerical discoveries, we use the method of geometric desingularization and geometric singular perturbation theory on the spatial ODE system to demonstrate the existence of these families of spatially periodic canards.
Crucially, in the singular limit,
we study a novel class of reversible folded singularities of the spatial ODE system.
In particular, there are two reversible folded saddle-node bifurcations of type II (RFSN-II), each occurring asymptotically close to a Turing bifurcation. We derive analytical formulas for these singularities and show that their canards play key roles in the observed families of small-amplitude and large-amplitude spatially periodic canard solutions.
Then, for an interval of values of the bifurcation parameter further below the Turing bifurcation and RFSN-II point, the spatial ODE system also has spatially periodic canard patterns, however these are created by a reversible folded saddle (instead of the RFSN-II).
It also turns out that there is an interesting scale invariance, so that some components of some spatial canards exhibit nearly self-similar dynamics.
Key words.
folded singularities,
spatial canards,
singular Turing bifurcation,
periodic solutions,
Turing instability,
spatial dynamics,
reversible systems,
nearly self-similar dynamics,
subcritical Ginzburg-Landau
MSC codes.
35B36, 34E17, 34E15, 35B25
§ INTRODUCTION
In mathematical models of pattern formation in biology, chemistry, ecology, engineering, material science, physics, and many other fields, the Turing bifurcation <cit.> is one of the key mechanisms that generates spatially periodic patterns.
It was a pioneering discovery of Alan Turing in 1952 that diffusion can destabilize spatially homogeneous steady states, which are stable states of the associated reaction kinetics, and that this instability to plane wave perturbations results in the formation of spatially periodic patterns as the attractors. See for example <cit.>.
In this article, we report on the discovery and analysis of reaction-diffusion systems in which the spatially periodic solutions created in Turing bifurcations are spatially periodic canards.
Like the known spatially periodic patterns that emerge from Turing bifurcations,
these spatially periodic canards oscillate about the homogeneous equilibrium state.
However, unlike the known periodic patterns, they consist of canard segments generated by folded singularities.
In the systems of spatial ordinary differential equations (ODEs) that govern the time-independent states of reaction-diffusion models, Turing bifurcations correspond to reversible 1:1 resonant Hopf bifurcation points, see for example <cit.>.
We show that, asymptotically close to these reversible 1:1 Hopf bifurcations, the spatial ODE systems can have reversible folded saddle-node singularities of type II (RFSN-II) and that, together with the true and faux canards attached to them, these singularities can serve as the mechanisms responsible for the creation of the observed spatially periodic canard patterns.
We focus first on the van der Pol partial differential equation (PDE),
u_t = v - f(u) + d u_xx,
v_t = ε(a-u) + v_xx,
and later generalise to a class of activator-inhibitor systems.
The van der Pol PDE is a prototypical reaction diffusion system of activator-inhibitor type.
Here, t ≥ 0, x ∈ℝ, u represents a voltage (or more generally an activator), v represents a recovery variable (or more generally an inhibitor), the nonlinear reaction function is f(u) = 1/3u^3-u, a is a threshold parameter, ε measures the timescale separation for the underlying kinetics, and 0< d ≪ 1 is the ratio of the diffusivities.
With this small parameter, the activator diffuses more slowly than the inhibitor.
This spatial scale separation, i.e., the difference between the diffusivities of the interacting species, is an important –though not necessary– feature of the emergence of periodic patterns in spatially extended systems.
For the van der Pol PDE (<ref>), we consider all positive 𝒪(1) values of the parameter ε.
This will allow us to consider the general kinetics of the van der Pol system.
We recall that, in the regime of small ε, the van der Pol ordinary differential equation (ODE), which corresponds to the ODE satisfied by spatially-homogeneous solutions of (<ref>), is in the strongly nonlinear limit, and the limit cycles of the temporal ODE are relaxation oscillations, created in an explosion of temporal limit cycle canards.
By contrast, in the regime of large ε, the associated oscillations are in the weakly nonlinear limit, and the limit cycles of the kinetics ODE are small perturbations of circular orbits.
(We refer the reader to <cit.> for analysis of the classical van der Pol ODE, i.e. of the ODE satisfied by the spatially-independent solutions of (<ref>).)
The main outcomes of this article are the numerical and analytical demonstrations that the van der Pol PDE (<ref>) possesses several main types of spatially periodic canards.
We find small-amplitude canards with 𝒪(1) wavenumbers (and hence 𝒪(1) spatial periods), and small-amplitude canards with small wavenumbers (and hence large spatial periods).
These small-amplitude solutions emerge along branches emanating from the Turing point, and they oscillate spatially near the homogeneous state (see Fig. <ref>).
In addition, we find large-amplitude canards with 𝒪(1) wavenumbers, as well as large-amplitude canards with small wavenumbers (and hence large periods, which are also referred to as “near-homoclinic" periodic solutions).
These lie further from the homogeneous state in norm and have a distinct “fast-slow" spatial structure, with segments of steep gradients and segments of gradual variation.
The small-amplitude canards transition continuously into those with large-amplitudes.
Moreover, at each value of a in the main interval studied, one can find spatial canards of different types.
We use the method of geometric desingularization (also known as the blow up method) and analysis of folded singularities, especially of the new RFSN-II singularities, to establish the existence of the different types of spatially periodic canards.
The time-independent solutions of (<ref>) satisfy the following fourth-order system of ODEs, in which the spatial variable x is the independent variable:
δ u_x = p,
δ p_x = f(u) - v,
v_x = q,
q_x = ε(u-a),
where δ = √(d).
We show that this spatial ODE system has a two-dimensional critical manifold in the four-dimensional (u,p,v,q) phase space.
This manifold is induced by the cubic function v=f(u), and hence it has three branches.
We will see that two branches consist of saddle equilibria and one of center equilibria.
Crucially, there are fold sets that separate these branches, and the RFSN-II singularities that are responsible for creating the spatial canards for values of a near the Turing value
a_T = √(1 - 2δ√(ε))
lie on these fold sets.
Additionally, we study the desingularized reduced vector field on the critical manifold.
We will show that the desingularized reduced system has a cusp point precisely at a=1, where the RFSN-II singularity occurs in the limit δ=0.
Then, for all sufficiently small values of δ>0, we show that key center-unstable and center-stable manifolds coincide
at locally unique critical values of a.
The first of these is
a_c (δ) = 1 - 5/48δ^2 +𝒪(δ^3).
It corresponds to the locally unique parameter value at which key center-unstable and center-stable manifolds coincide.
Of all of the canards that pass through the neighborhood of the cusp point, the maximal canard has the longest segments near the stable and unstable manifolds of the cusp point (Fig. <ref>(a)).
There are also maximal canards that make a small loop about the equilibrium, as seen in the projection onto the (u,q) coordinates in phase space (Fig. <ref>(b)).
Then, under further variations in a, additional small loops develop around the equilibrium state, and the solutions exhibit nearly self-similar dynamics (Fig. <ref>(c)).
The analysis near the RFSN-II singularity will be valid for all values of a and δ such that a = 1 + 𝒪(δ^3/2).
In the spatial ODE, we also study the spatially periodic solutions for values of a further from the Turing value.
We will show that there are reversible folded saddles (RFS) on the fold sets for all 𝒪(1) values of a ∈ [0,1).
Together with their true and faux canards, these RFS points will be shown numerically to be responsible for creating the spatial canards in this interval away from the Turing bifurcation a_T.
To complement the analysis of the periodic canard solutions of the spatial ODE system, we also present the results of some direct numerical simulations of the PDE (<ref>).
These reveal that several of the large-amplitude spatial canards are stationary attractors.
Moreover, we will find that there are also parameter regimes in which the attractors are small-amplitude solutions that are periodic in both space and time, with profiles given by spatial canards of the form discovered here.
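For readers who wish to experiment with such simulations, a minimal method-of-lines sketch of (<ref>) on a periodic domain is given below. The explicit Euler update, grid, and parameter values are illustrative choices made here for brevity; they are not the scheme or the parameters used to produce the figures of this article.

import numpy as np

# Illustrative parameters: delta = sqrt(d) = 0.01, and a below a_T so that the
# Turing band is linearly unstable.
eps, d, a = 1.0, 1.0e-4, 0.95
L, N = 20.0, 800
dx = L / N
dt = 0.2 * dx**2            # explicit stability is limited by the v-diffusion
x = np.arange(N) * dx

f = lambda u: u**3 / 3.0 - u
lap = lambda w: (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx**2  # periodic BCs

# Small perturbation of the homogeneous state (u, v) = (a, f(a)).
u = a + 1e-3 * np.cos(2.0 * np.pi * x / L)
v = f(a) * np.ones(N)

for _ in range(400_000):
    u, v = (u + dt * (v - f(u) + d * lap(u)),
            v + dt * (eps * (a - u) + lap(v)))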
Overall, it turns out that the main Turing bifurcations we study (in the parameter regime in which δ^2 < 64/625) are sub-critical bifurcations (also referred to as the focusing case), in which the coefficient on the cubic term in the Ginzburg-Landau equation is positive.
We will discuss how the spatial canard patterns here constitute a new class of nonlinear attractors in the sub-critical case, and how the canards provide a new selection mechanism.
Having outlined the main ODE and PDE results in this article, we briefly highlight the motivation for this study, and we provide a short comparison of the new spatial canards to classical temporal canards in fast-slow systems of ODEs.
To highlight the motivation, we recall that Turing bifurcations in two-component reaction-diffusion systems with widely-separated spatial scales, such as (<ref>), are reversible, 1:1 resonant Hopf bifurcations in the associated spatial ODE systems
<cit.>.
In particular, the critical points that undergo Turing/1:1 Hopf bifurcations cannot lie on normally hyperbolic slow manifolds, but must instead be located on the boundaries of such manifolds.
This is precisely the case with the RFSN-II singularities that lie on the fold sets between the two-dimensional, saddle and center slow manifolds here in the spatial ODE.
Therefore, in some respect, from the point of view of spatial dynamics, there should be a natural connection between Turing bifurcations and the formation of canards.
As to the comparison, the spatial canards introduced in this article may be viewed as analogs in spatial dynamics of the temporal limit cycle canards first discovered in the van der Pol ODE <cit.>, as well as of the canards of folded saddles and folded saddle-nodes in many other (temporal) systems, see for example <cit.>.
As just recalled, the parameter values at which Turing bifurcations occur are reversible 1:1 resonant Hopf bifurcations in the spatial ODE system <cit.>.
These are the analogs in spatial dynamics of the (singular) Hopf bifurcations that occur in fast-slow ODEs.
Then, just after the Turing bifurcations, families of spatially periodic canards are created, with small-amplitudes for a close to a_T, wavenumbers close to the critical wavenumber k_T (determined by the point of marginal stability), and spatial profiles close to the plane wave e^ik_T x.
Hence, these are analogs in spatial dynamics of the small-amplitude, temporally oscillating solutions that exist close to the Hopf bifurcation in fast-slow systems of ODEs and that oscillate essentially as e^iω t, where ω is the imaginary part of the eigenvalues at the Hopf bifurcation.
Moreover, for parameter values further from a_T, the spatial canards appear to be created by folded saddles, and they appear numerically to exist over a broad range of parameter values, as is also observed for folded saddles and their temporal canards in fast-slow systems of ODEs
(see for example <cit.>).
Also, in the spatial dynamics, the maximal spatial canards act as separatrices that locally partition the phase and parameter spaces into regions of distinct spatial behavior, which is also analogous to the roles played by maximal canards in the phase spaces and parameter spaces of temporal fast-slow ODEs. Finally, in comparing, we will see that there are also key new features of the spatial canards that arise due to the dimension and geometry of the critical manifolds.
Zooming out more broadly, we suggest that the results presented here for spatially periodic canards also contribute to the growing literature about canards and bifurcation delay in spatially-extended systems.
Canards in the kinetics of a two-component model for the Belousov-Zhabotinsky reaction were demonstrated to play a crucial role in the nucleation and annihilation of trigger waves in a 1-D medium of phase waves <cit.>.
Canards arise in the traveling wave ODEs of some reaction-diffusion systems <cit.>.
Slow passage through a saddle-node bifurcation in linear and semi-linear heat equations in 1-D can lead robustly to solutions that spend long times near unstable states as shown analytically in <cit.>.
In astrophysics, folded saddles and their canards play a central role in a model of solar wind when there is a steady, spherically-symmetric outflow from the surface of a star <cit.>.
In an Amari-type neural field integral model <cit.>, temporal canards were observed in the spatial patterns of coherent structures, as were some more complex spatio-temporal patterns containing canard segments.
More recently for PDEs, canard solutions have been studied in an ODE model derived from a sub-critical, infinite-dimensional, pattern-forming system with nonlinear advection on a bounded domain
<cit.>, where they play a role in the nonlinear transitions between two primary states of the system describing the locations of stationary fronts.
Spatio-temporal canards serve as boundaries in multi-mode attractors of reaction-diffusion systems, in which different regions of the domains exhibit different modes of stable oscillation and the canards mediate the transition intervals, keeping the regions separated, see
<cit.>.
Delayed loss of stability occurs in nonlinear PDEs that undergo slow passage through Hopf bifurcations, with examples including the CGL equation, the Brusselator model, the FitzHugh-Nagumo PDE, and the Hodgkin-Huxley PDE (see <cit.>).
The solutions remain for long times near unstable states in a rich manner governed by space-time buffer curves.
A rigorous framework for the local analysis of canard solutions and other forms of bifurcation delay was developed in <cit.> for systems in which the fast variables are governed by a PDE (i.e., infinite-dimensional dynamical system) and the slow variables are governed by ODEs (i.e., by a finite dimensional dynamical system).
Slow passage through fold bifurcations has been studied in fast-slow systems of reaction-diffusion equations, using a Galerkin approach <cit.>.
Slow passage through Turing bifurcations has been studied in reaction-diffusion systems (see <cit.>), as has slow passage through pitchfork bifurcations in Allen-Cahn type equations, in the presence of quenching fronts with small spatial gradients (see <cit.>).
The article is organized as follows.
Section <ref> contains the application of classical analysis to identify the Turing bifurcations and Turing-Hopf bifurcations in (<ref>) and the application of the classical normal form analysis to show that the Turing bifurcation to spatially periodic canards is sub-critical in the main parameter regimes we study, and also to identify where it is super-critical.
In Section <ref>, we introduce the four main types of spatial canards, and present the results obtained from numerical continuation to identify regimes in the (a,k) parameter plane in which spatial canards exist.
In Section <ref>, we begin the geometric singular perturbation analysis by analyzing the fast system, also known as the layer problem. We identify the jump conditions for the fast homoclinics that generate spikes and for the sharp-interfaces (Proposition <ref>). Also, we identify the cusp of the fast system singularity that will be the organizing center for the spatial canard dynamics.
We continue the geometric singular perturbation analysis in Section <ref> by deriving the desingularized reduced vector field on the critical manifold and by studying the slow flow.
We show that there are folded saddle singularities with reversibility symmetry. Moreover, these undergo reversible folded saddle-node bifurcations of type II (RFSN-II) under variation of the system parameters.
Then, in Section <ref>, we rigorously analyze the dynamics around the RFSN-II using the blow-up technique. We determine the key parameter values for which there is an explosion of spatial canards with reversibility symmetry.
Next in Section <ref>, we analyze in detail the geometry of the small-amplitude and large-amplitude spatially periodic canard solutions where the analysis is informed by the rigorous results about the fast system, the desingularized reduced vector field, and the folded singularity with its canards
in Sections <ref>–<ref>.
In Section <ref>, we use that same information to deconstruct the bifurcation sequences along isolas of spatially periodic canards.
Then, in Section <ref>, we analyze the aspects of the spatial canard dynamics that are nearly self-similar.
Section <ref> describes how the spatial canards are analogs in spatial dynamics of the temporal limit cycle canards found in many fast-slow ODEs, such as the van der Pol ODE, FitzHugh-Nagumo ODE, the Lengyel-Epstein model, the Kaldor model, among others, and we identify important differences.
The final new results are in Section <ref>, where we present results from some direct PDE simulations that complement the analysis. Several of the different types of spatial canards discovered here are observed to be attractors in the PDE (<ref>).
We conclude the article in Section <ref> with a summary of the main results, as well as a generalization from the prototype (<ref>) to a class of activator-inhibitor systems, and discussion of future work and open problems.
The appendices contain further information about the main numerical methods we employed, as well as the proofs of some of the propositions and lemmas.
§ TURING BIFURCATION TO SPATIALLY PERIODIC SOLUTIONS
In this section, we apply the classical Turing analysis <cit.> (see also <cit.>) to derive the neutral stability curve of the homogeneous steady state (u,v)=(a,f(a)) of (<ref>).
There are Turing and Hopf bifurcations that occur asymptotically close to each other in the parameter a.
We will obtain the critical wavenumbers and system parameters for the onset of spatially periodic patterns.
In addition, we will apply the classical normal form theory <cit.> for reversible 1:1 resonant Hopf points to show that, in the main parameter regimes studied here with 0<δ≪ 1, the bifurcation to spatially periodic solutions is sub-critical; and, we also identify the conditions under which it is super-critical.
We linearize (<ref>) about the homogeneous state and then Fourier transform the system in space, so that the governing equations become
[ U̇; V̇ ] = DF [ U; V ] = [ -f^'(a)-dk^2 1; -ε -k^2 ][ U; V ].
The overdot denotes the time derivative, k is the wavenumber, and U(k,t) and V(k,t) denote the Fourier transforms of u(x,t) and v(x,t).
For each k ∈ℝ, the Jacobian has two eigenvalues λ_±(k), and the state (a,f(a)) is spectrally stable as a solution of the PDE (<ref>) if Re(λ_±(k)) < 0 for all k∈ℝ.
The trace of the Jacobian, tr DF= -f^'(a)-(d+1)k^2, is negative whenever
f'(a) = a^2 - 1 > 0, and positive for -1<a<1 (where f'(a)<0).
Hence, we recover the classical (non-spatial) Hopf bifurcation of the van der Pol equation at a=1.
See the left panel in Fig. <ref>.
Now, for a<1, there are two key dynamical features. First, a narrow interval of unstable wavenumbers exists for a<1, centered around k=0, so that the homogeneous state becomes unstable to long wavelength perturbations that oscillate in time as e^i ω_H t, with ω_H =Im(λ_+(0)). However, this instability is only a weak instability, since the rate constant (Re(λ_+(0))=√(εd)) is asymptotically small for 0<d ≪ 1 and ε=𝒪(1).
Second, a Turing instability occurs slightly below a=1.
It appears at the critical wavenumber and critical parameter determined by the conditions
det DF = 0 and ∂/∂ k^2 ( det DF) = 0
(for which λ_± (k) ∈ℝ).
For the van der Pol PDE (<ref>), we find
k_T^2 = - f^'(a_T)/(2d) = √(ε/d) and a_T = ±√(1 - 2√(εd)).
The focus in this article will be on the positive value of a_T; the results for the negative value may be obtained by using the symmetry (u,v,a) → (-u,-v,-a) of (<ref>).
Precisely at a=a_T, the real part of the dominant eigenvalue of the Jacobian is zero for wavenumbers k = ± k_T = ±(ε/d)^1/4.
See the right panel in Fig. <ref>.
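For completeness, the two defining conditions can be solved explicitly; the short computation below (a standard calculation, included here only for the reader's convenience) recovers (<ref>).
\[
\det DF = \bigl(f'(a)+dk^2\bigr)k^2 + \varepsilon, \qquad
\frac{\partial}{\partial k^2}\det DF = f'(a) + 2dk^2 = 0
\;\Longrightarrow\; k_T^2 = -\frac{f'(a_T)}{2d},
\]
\[
\det DF\Big|_{k^2=k_T^2} = -\frac{(f'(a_T))^2}{4d} + \varepsilon = 0
\;\Longrightarrow\; f'(a_T) = a_T^2 - 1 = -2\sqrt{\varepsilon d},
\]
so that $a_T=\pm\sqrt{1-2\sqrt{\varepsilon d}}$ and $k_T^2=\sqrt{\varepsilon/d}$, as in (<ref>).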
The Turing point a_T marks the boundary between two different (linear) regimes.
On one side, for a ≲ a_T, there are intervals of wavenumbers k, one about each of ± k_T, over which the homogeneous steady state is linearly unstable to plane waves e^ikx since the determinant is strictly negative there.
On the other side, for a ≳ a_T, the homogeneous state is linearly stable to plane waves with k near k_T (though not to those with k=0 if a<1 by the above).
Critically, the modes with k near k_T (which is ∼ k_m for small d) are only weakly stable for a ≳ a_T, as may be seen at a=1 in the left panel in Fig. <ref>.
There, λ_+(k_m)=-2√(εd), so that the rate constant is asymptotically small, and nonlinear terms also play a central role.
In comparison, the magnitude of this weak stability of the Turing modes (k∼ k_T) is of the same size asymptotically as the magnitude of the weak instability of the Hopf modes (k∼ 0). Therefore, it is important and useful first to study the two competing mechanisms individually and then to study their interactions.
In this article, we focus primarily on the spatial dynamics of stationary solutions of (<ref>), which turn out to capture important features of the overall dynamics of (<ref>) for a near a_T.
In addition, toward the end of the article, we identify some of the rich dynamics created by the interactions of the Turing and Hopf modes.
We study the Turing bifurcation by using the equivalent formulation obtained through the spatial ODE (<ref>), which we recall is
δ u_x = p,
δ p_x = f(u) - v,
v_x = q,
q_x = ε(u-a).
and we recall the small parameter is δ = √(d).
(We refer to <cit.> for the general theory about the equivalence of this spatial ODE formulation.)
The equilibrium at (u,p,v,q) = (a,0,f(a),0) corresponds to the homogeneous steady state. Linear stability analysis shows that the Jacobian of (<ref>) has a quartet of eigenvalues
μ = ±1/(√(2)δ)[ f'(a) ±√( (f'(a))^2 - 4εδ^2) ]^1/2,
which are symmetric about the real- and imaginary-axes in the spectral plane. See Fig. <ref>.
Exactly at the critical value a_T=√(1 - 2δ√(ε)) given by (<ref>), where f'(a)<0 and (f'(a))^2 = 4εδ^2, the quartet of eigenvalues consists of two coincident pairs of pure imaginary eigenvalues.
Hence, at a_T, the equilibrium is a reversible, 1:1 resonant Hopf bifurcation point.
Moreover, this point is non-degenerate, because the two pairs pass through this configuration transversely as a passes through a_T.
See Fig. <ref>.
Furthermore, this Turing bifurcation is singular because one pair of the eigenvalues becomes singular in the limit δ→ 0.
For -a_T < a < a_T, the quartet consists of two pairs of pure imaginary eigenvalues with different imaginary parts, while for a_T < a < √(1+2δ√(ε)) it consists of two pairs of complex conjugate eigenvalues, one with negative real parts and the other with positive real parts of equal magnitude, where the upper bound of this interval corresponds to the parameter value at which the eigenvalues become real.
(See <cit.> for other examples of the relation between a Turing bifurcation in a PDE and a reversible 1:1 resonant Hopf bifurcation in the associated spatial ODE system.)
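The quartet (<ref>) follows from a short calculation, sketched here for convenience: substituting perturbations proportional to e^{μ x} into the linearization of (<ref>) about the equilibrium gives a biquadratic characteristic equation.
\[
\delta^2\mu^2 u = f'(a)\,u - v, \qquad \mu^2 v = \varepsilon u
\;\Longrightarrow\;
\delta^2\mu^4 - f'(a)\,\mu^2 + \varepsilon = 0,
\qquad
\mu^2 = \frac{f'(a) \pm \sqrt{(f'(a))^2 - 4\varepsilon\delta^2}}{2\delta^2}.
\]
Taking square roots yields (<ref>); the two pairs collide on the imaginary axis precisely when $(f'(a))^2 = 4\varepsilon\delta^2$ with $f'(a)<0$, i.e. at $a=a_T$.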
It is also useful to study the spatial ODE system (<ref>) in the stretched spatial variable y = x/δ,
u_y = p
p_y = f(u) - v
v_y = δ q
q_y = δε(u-a).
This system will be used to analyze the steep gradients in the spatially periodic solutions that we study, while system (<ref>) will be used for the slowly-varying segments of the spatial periodic solutions.
For all δ>0, the systems of ODEs (<ref>) and (<ref>) are equivalent.
For a spatially periodic solution of (<ref>) with period T, the wavenumber with respect to the stretched y variable is k_y = 2π/T. For comparison with the wavenumbers in (<ref>), we work with the wavenumber given by k = (1/δ) k_y = 2π/(δT). Throughout the remainder of the article, we will use the wavenumber k = 2π/(δT).
System (<ref>) has a reversibility symmetry, which it inherits from the x→ -x symmetry of the system of two coupled second-order ODEs that govern time-independent solutions of (<ref>). Let
ℛ
= [
[ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 -1 ]], u = [
[ u; p; v; q ]], and F = [ [ p; 1/3u^3 - u - v; δ q; δε(u-a) ]].
Then, ℛ is a reversibility symmetry of (<ref>), because it anti-commutes with the vector field:
ℛ F ( u) = - F( ℛ u).
(See for example <cit.> for general results about reversibility in the spatial ODEs governing stationary solutions of reaction-diffusion systems.) The presence of this reversibility symmetry is also what guarantees the symmetries about the real- and imaginary-axes of the resolvent and spectrum of the operator L obtained by linearizing the vector field F at a stationary state, since u is real-valued and since the reversibility ℛL = -Lℛ implies ( μ𝕀 + L )^-1ℛ = ℛ(μ𝕀 - L )^-1.
For all ε>0, system (<ref>) (equivalently (<ref>)) has the following conserved quantity/first integral:
𝒢(u,p,v,q,a)=
(ε/2) p^2 - (1/2) q^2 - ε[f̃(u) + (a-u)v],
where f̃(u) = ∫ f(u) du = u^4/12 - u^2/2.
(It may be derived for example by setting u_t=0 in the first equation of (<ref>) and multiplying it by u_x, setting v_t=0 in the second equation in (<ref>) and multiplying it by v_x, then subtracting the second equation from the first so that all terms are perfect derivatives, and recalling the definitions of p and q given in (<ref>).)
At the stationary state, 𝒢(a,0,f(a),0,a)= (ε/12) a^2 ( 6 - a^2 ).
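For the reader's convenience, the cancellation described in the parenthetical remark above can be displayed in one line (a direct check, using the form of 𝒢 given in (<ref>) and the equations (<ref>)):
\[
\frac{d\mathcal{G}}{dx}
= \varepsilon p\,p_x - q\,q_x - \varepsilon\bigl[f(u)\,u_x - v\,u_x + (a-u)\,v_x\bigr]
= \varepsilon\,(f(u)-v)\,u_x - \varepsilon\,(u-a)\,q - \varepsilon\bigl[(f(u)-v)\,u_x + (a-u)\,q\bigr] = 0,
\]
where we used $\delta u_x = p$, $\delta p_x = f(u)-v$, $v_x = q$, and $q_x = \varepsilon(u-a)$.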
The existence of small-amplitude, spatially periodic solutions of the ODE system near the critical Turing bifurcation and with wavenumber k near k_T is established by the following proposition.
The proof is presented in Appendix <ref>, where we use the standard procedure to derive the normal form for the reversible 1:1 resonant Hopf bifurcation in the spatial ODE and then directly apply Theorem 3.21 in Chapter 4.3.3 of <cit.> to derive the conclusions about the system dynamics.
For |a-a_T| sufficiently small, where a_T=√(1-2δ√(ε)), we have:
(i)
For δ^2 < 64/625, the spatial ODE system (<ref>) has a one-parameter family of spatially periodic solutions that surround the symmetric equilibrium (a,0,f(a),0), and it has a two-parameter family of quasi-periodic solutions located on KAM tori.
The same conclusion also holds for δ^2 > 64/625 but then only when a-a_T<0.
(ii)
For a-a_T<0 and δ^2 > 64/625, there exists a one-parameter family of pairs of reversible homoclinics to periodic solutions.
(iii)
For a-a_T>0 and δ^2 < 64/625, there is a pair of reversible homoclinic orbits to the symmetric equilibrium.
(iv)
For a-a_T>0 and δ^2 > 64/625, there is just the symmetric equilibrium and no other bounded solutions.
We are most interested in the parameter values for which δ^2 < 64/625, which corresponds to the main case in part (i) and to part (iii).
These parts of the proposition establish the existence of one-parameter families of small-amplitude, spatially periodic solutions of (<ref>) near a_T, and the limiting homoclinics.
The Turing bifurcation is sub-critical, and the coefficient on the cubic term in the Ginzburg-Landau PDE is positive <cit.>.
(In contrast, for δ^2 > 64/625, the Turing bifurcation is super-critical, and the coefficient on the cubic term in the Ginzburg-Landau equation is negative.)
Some representative spatially periodic solutions are shown (in blue and red) in Fig. <ref> for parameter values a close to a_T.
These solutions exhibit small-amplitude oscillations about the homogeneous state, and their wavenumber k is close to k_T.
Moreover, the closer a is to a_T, the smaller the amplitude of the solution, the closer k is to k_T, and the closer the spatial profile is to that of the plane wave e^ik_T x.
Then, for a slightly further away, nonlinear terms have a dominant effect (black orbit), and a corner begins to form in the orbit, which will develop into a cusp.
From the point of view of Ginzburg-Landau theory for the weakly nonlinear stability of spatially periodic patterns created in Turing bifurcations, the sub-critical case that we focus on is the least studied and understood. In fact, in the sub-critical case, the Ginzburg-Landau formalism typically predicts that the amplitude of the solutions under consideration will grow beyond that of their assumed near-onset magnitude, and thus beyond the region of validity of the Ginzburg-Landau set-up.
As the analysis below will show, it turns out to be possible to go beyond this magnitude, and our results may thus be expected to shed a fundamental light on the behavior of patterns near a sub-critical Turing bifurcation (see also Section <ref>).
§ SPATIALLY PERIODIC CANARD SOLUTIONS
In this section, we present representative numerical simulations of the plethora of spatially periodic canard solutions of (<ref>) that are created asymptotically close to the Turing bifurcation. There are at least four different types of spatial canards, depending on amplitudes and wavenumbers.
§.§ Small-amplitude spatial canards with 𝒪(1) wavenumbers
Immediately beyond the parameter regime of essentially regular plane wave solutions, where the nonlinear terms become dominant, we find small-amplitude spatially periodic canard solutions with 𝒪(1) wavenumbers.
A representative spatial canard is shown in Fig. <ref>.
This solution has period T ≈ 353.366 (k ≈ 1.778), and
it lies on the level set 𝒢 = ε( 2/3 a - 1/4 ).
The spatial profile (see Fig. <ref>(a)) shows that it lies in a small neighborhood of the u=1 state over the entire period.
Moreover, on a large portion of the period, the solution slowly increases or slowly decreases. In between these slow regimes, there is a spatial interval on which the solution makes three small-amplitude oscillations around the state u=a.
During the long portion of slow increase, the orbit gradually moves up the right branch of the cubic nullcline v=f(u) (black curve), with u slowly increasing.
See the projection onto the (u,v) plane shown in Fig. <ref>(b).
The orbit then transitions away and oscillates rapidly about the middle branch of the cubic nullcline.
In the (u,v) projection, the oscillations are manifested by the zigzag upward to the upper left extremum, and then back along the same path (due to the reversibility symmetry of the solution) to the right branch, completing the oscillatory segment on the central spatial interval.
Along the remainder of the period, the solution slowly moves down along the right branch of the v=f(u) curve, to its minimum value. Along this segment, u slowly decreases.
The projections onto the (u,p) and (u,q) planes (see panels (c) and (d), respectively) also show that the solution oscillates about the equilibrium state.
We will show in Section <ref> that system (<ref>) has a folded singularity –in particular a reversible folded saddle-node of type II– which lies on the fold set L^+ and that portions of this orbit (and of the other small-amplitude solutions of this type) lie along the canards of this singularity. Hence, these are spatial canard solutions.
§.§ Small-amplitude spatial canards with small wavenumbers
The spatial ODE system (<ref>) also possesses small-amplitude spatial canards with small (𝒪(δ)) wavenumbers.
A representative solution is shown in Fig. <ref>.
The solution stays close to the equilibrium state u=a over almost the entire period, as may be seen from the spatial profile of u shown in Fig. <ref>(a).
(Here, the value of a is 𝒪(δ^2) below a=1.)
Then, on an extremely short interval in space, it exhibits a
small-amplitude excursion, with magnitude of at most 𝒪(δ) (see also the inset in Fig. <ref>(a)).
During the long spatial interval over which u is close to a, the solution slowly evolves in a neighborhood of the local minimum (see the projection of the solution onto the (u,v) plane shown in Fig. <ref>(b)).
In particular, the solution slowly moves down the right branch of the nullcline with u decreasing,
and then it passes underneath the local minimum
(right black dot in Fig. <ref>(b)), slowly spiraling in toward the equilibrium at (u,v)=(a,f(a)) (left black dot).
Subsequently, the solution slowly spirals away from the equilibrium, retracing its path (in the (u,v) projection, due to the reversibility symmetry) back underneath the local minimum and toward the right branch of the nullcline, completing the long segment of slow evolution.
On the short spatial interval that interrupts the long segment of slow evolution, the fast excursion corresponds to the jump away from the right branch to just beyond the middle branch, followed by the return along the same segment (in the projection), due to the reversibility symmetry.
We emphasize that these small-amplitude spatially periodic solutions are also canard solutions, because they also have long segments near the canards of a reversible folded saddle-node singularity, as we will show in Sections <ref> and <ref>.
For completeness, we note that, in frames (c) and (d), the fast portion of the solution on the short spatial interval corresponds to the segments located to the left of the box marking the first inset, i.e. to the left of approximately u=0.998.
The dynamics of the solution in the neighborhood of the equilibrium also exhibit a degree of self-similarity. This may be seen in the projections onto the (u,p) and (u,q) planes; see frames (c) and (d), respectively.
As u and v slowly evolve, their spatial derivatives p and q alternate in sign, and the solution exhibits nested spirals as it slowly approaches –and then slowly recedes from– the equilibrium.
Three levels of the nested, nearly self-similar spiraling are shown, and the solution exhibits additional, more-deeply nested levels (not shown).
The nearly self-similar dynamics of this representative solution (and of other small-amplitude solutions with small wavenumbers) will be shown to follow from the self-similarity in a level set of the Hamiltonian that arises naturally in the desingularized reduced system (see Section <ref>).
§.§ Large-amplitude spatial canards with moderately small wavenumbers
In this section, we turn our attention to large-amplitude spatially periodic canard solutions, and we present a class of spatial canards
that have moderately small (𝒪(√(δ))) wavenumber.
A representative spatial canard solution of this type is shown in Fig. <ref>.
Over a large portion of the spatial period, u slowly decreases monotonically from √(3) to just above 1.
Near its minimum, the u-component exhibits a small-amplitude spike (see the inset in Fig. <ref>(a)).
Subsequently, u slowly increases monotonically near 1 back up to near √(3).
Finally, there is a fast downward jump
to u=-√(3) on a narrow spatial interval, followed by a slow segment along which u slowly increases to a local maximum just above -√(3), and then a fast upward jump back to √(3), to complete the period.
The dynamics along each of the slow and fast segments of the solution observed in the spatial profile of u may be understood from the projections onto the (u,v), (u,p), and (u,q) planes, which are shown in Figs. <ref>(b)–(d), respectively.
The long portion on which u slowly decreases from √(3) (near v=0) down toward 1 corresponds to the slow evolution near the right branch of the cubic nullcline v = f(u), see Fig. <ref>(b).
The solution comes close to the equilibrium (right black dot).
Then, it exhibits a small, localized oscillation about the middle branch of the nullcline, before returning to the neighborhood of the right branch of the v = f(u) nullcline, where it slowly moves upward until it reaches the neighborhood of the v=0 state.
Subsequently, at the point where the solution reaches the { v=0 } state, a fast jump takes place, along which the solution transitions to the left branch of the v=f(u) nullcline.
It slowly flows up the left branch of the cubic nullcline until it reaches a maximum, turns around, and then heads back toward the v=0 state.
Once it reaches the v=0 state, another fast transition is initiated, and the solution returns to the right branch of the cubic.
The projections of the large-amplitude solution onto the (u,p) and (u,q) phase spaces are shown in Figs. <ref>(c) and (d), respectively.
These solutions are called spatial canards,
because they have long segments near the canards of a reversible folded saddle-node singularity, as we will show in Section <ref>.
Moreover, there are also large-amplitude spatial canards with 𝒪(1) wavenumbers, as we will see in Section <ref>.
These have no loops and only relatively short segments near the true and faux canards of the RFSN-II point.
§.§ Large-amplitude spatial canards with small wavenumbers
System (<ref>) has a fourth main family of spatially periodic canards, which are large-amplitude, small wavenumber solutions.
In Fig. <ref>, we show a representative solution with wavenumber k ≈ 0.000628 (T = 1 × 10^6).
Over most of the spatial period, the solution slowly varies near u=a
(see Fig. <ref>(a)).
There is a narrow interval in which the solution has five, symmetrically-disposed, large-amplitude spikes, with peaks near u=√(3) and troughs near u=-√(3).
This narrow interval is magnified in the main inset, so that the individual spikes are visible, including the pairs of spikes at the edges and the central spike.
Also, within the main inset, two further insets zoom onto the sharp, small spikes at the outer edges.
In the phase space, the solution is near the equilibrium during the large portion on which u near a, see the left black dot in Fig. <ref>(b).
Then, outside of this large portion of slow evolution, the solution has a narrow interval (see the main inset in Fig. <ref>(a)) in which it makes the two small and five large-scale oscillations. In particular, the small-amplitude, moderately fast oscillation about u=1 (secondary inset) corresponds in phase space to the small-amplitude excursion around the manifold of center points (near the local minimum of the nullcline, as shown in the inset of panel (b)).
Then, the orbit travels fairly rapidly up the right branch of the cubic nullcline until it reaches a neighborhood of v=0, where the profile has the first of the five large-amplitude peaks. Subsequently, it makes a fast jump toward the left branch of the cubic, oscillating once before it reaches the left branch.
This oscillation corresponds to the second spike at the left edge of the interval shown in the spatial profile (recall Fig. <ref>(a)).
Further along, the orbit travels up the left branch of the cubic until it reaches some maximum, where it exhibits a fast jump corresponding to the middle large-amplitude spike of the five in the inset in Fig. <ref>(a).
Finally, the solution travels back down near the left branch of the cubic nullcline until it reaches the neighborhood of v=0, where another fast jump is initiated, and it makes another large-amplitude oscillation on its way back to the right branch of the cubic.
These final large-scale oscillations correspond to the spikes at the right edge in Fig. <ref>(a), thereby completing the periodic solution.
This large-amplitude, small wavenumber spatial canard exhibits dynamics that are close to being self-similar in the neighborhood of the u=1 state.
This may be seen in the projections of this solution onto the (u,p) and (u,q) planes (see Figs. <ref>(c) and (d), respectively). This near self-similarity will be discussed in Section <ref>.
§.§ Numerical continuation of the families of spatial canards
Having presented four main types of spatially periodic canard solutions, we now identify regions in the (a,k) parameter space containing spatially periodic canards, as obtained using numerical continuation.
(We refer to Appendix <ref> for details about the numerical continuation.)
We began by finding a stable periodic orbit for a=0, continued this with respect to the period to find the orbits with minimal and maximal periods for a=0, and then performed two-parameter continuation on these in a and the period.
The continuation results are shown in Fig. <ref>.
For each of four values of δ, namely for 10^-2, 10^-1, 0.5, and 1, spatially periodic canard solutions exist in the region between two curves.
The upper curve corresponds to minimal period solutions (with maximal k) and the lower curve to the maximal period solutions (with minimal k).
The solutions with small k have long segments along which they are near the equilibrium, making successively smaller nested loops, which are close to being self-similar.
The solutions with 𝒪(1) or higher k have shorter or no segments near the equilibrium.
This trend is illustrated for values of a on both sides of a=1 by the representative canards shown in Figs. <ref>–<ref>.
Consider first a=0.999433, so that the equilibrium lies to the left of the local minimum.
The small-amplitude canard with small k shown in Fig. <ref> has a finite sequence of successively smaller, nearly self-similar loops, and hence it has a long segment near the equilibrium (and we note that similar canards exist also for a>1, e.g. along the rightmost blue stalk in Fig. <ref>(c)).
By contrast, small-amplitude canards with 𝒪(1) k
(one of which is shown for example below in Section <ref>) have few or no loops, and hence have only short or no segments near the equilibrium.
At this same value of a, the large-amplitude canards exhibit the same trend with respect to k, as may be seen for example by comparing the canard with small k shown in Fig. <ref> with a canard with 𝒪(1) k.
Similar trends are seen for a=1.008, as well as for values of a further from a=1 on both sides.
The lower curves in Figs. <ref> (a), (b), and (c) exhibit several sharp drops in k near the critical Turing value a_T = √(1-2δ√(ε)).
These sharp drops occur because of the birth of infinite-period (i.e., zero wavenumber), homoclinic solutions.
The existence of these homoclinics follows from part (iii) of Proposition <ref>.
Finally, as illustrated in Fig. <ref> (d), the rightmost edge of the region of spatial periodics converges to the critical Turing bifurcation as δ increases toward the threshold value separating the cases in Proposition <ref>, which is in accord with part (i) of Proposition <ref>, since spatially periodic solutions only exist for a<a_T for δ on the other side.
(For reference, we also show the linear stability boundary, δ^2 k^4 + (a^2-1) k^2 + ε = 0,
of the equilibrium state, which is obtained from det DF = 0 (<ref>), as the black dashed curve.)
A more detailed view of the bifurcation structure of the spatially periodic solutions of (<ref>) is presented in Fig. <ref>, with frames (a)–(c) showing the wavenumbers k of sizes k ≫ 1, k=𝒪(1),
and k ≪ 1, respectively.
First, the same diagram from Fig. <ref>(a) is shown in Fig. <ref>(a), now also with curves of constant 𝒢 (red curves), fixed at the special value 𝒢 = ε( 2/3 a - 1/4 ). These particular spatially periodic solutions only exist at small and 𝒪(1) wavenumbers. At the 𝒪(1) scale of wavenumbers (Fig. <ref>(b)), the fine structure of the spatially periodic solutions becomes more apparent. Each of the different colored curves (red, orange, and blue) corresponds to a different branch of spatially periodic solutions on the level set 𝒢 = ε( 2/3 a - 1/4 ). The red branch emerges from the Turing bifurcation (black marker), continues into large a, decreases to small k, and remains at small wavenumbers (horizontal segments).
The detailed structure of the nearly horizontal segments for small wavenumbers is shown in Fig. <ref>(c). It can be seen that there are even more branches of spatially periodic solutions with 𝒢 = ε( 2/3 a - 1/4 ). Some of these branches exhibit sharp drops in k and continue in toward the homoclinic solution along k=0 for a>a_T.
Each of these branches in Fig. <ref>(c) contains spatially periodic canard solutions.
Overall, the numerical continuation results show that system (<ref>) has many different types of spatially periodic canard solutions in the (a,k,d) space for a wide range of values of ε>0.
We have found these types of spatially periodic canards for other values of ε, including ε=0.8.
We add that, in Section <ref>,
the solutions and bifurcations along these branches will be studied in detail by using the results obtained in the analysis of the fast system, the desingularized reduced vector field, and the RFSN-II singularity (Sections <ref>–<ref>).
To conclude this section, we note that the regions shown in Fig. <ref> correspond to what are called `existence balloons' in <cit.>, i.e., regions in (parameter,wave number)-space in which spatially periodic patterns exist.
Naturally, the eventual goal would be to determine the Busse balloon, the sub-balloon of stable spatially periodic patterns <cit.>.
The Busse balloon plays a central role in pattern forming systems in fluid mechanics, reaction-diffusion systems, ecological models, and many other scientific and engineering problems <cit.> and is also expected to play a similar role in understanding the impact of the canard-type solutions of this work on the dynamics exhibited by PDE (<ref>).
§ FAST SYSTEM (LAYER PROBLEM)
In this section, we study the fast system (layer problem) of the spatial ODE, which is obtained by taking the limit δ→ 0 in (<ref>):
u_y = p
p_y = f(u) - v
v_y = 0
q_y = 0.
Hence, v and q are constants, and the fast system is independent of a.
Moreover, for each fixed v, the fast system is Hamiltonian,
u_y = ∂ H_ fast/∂ p,
p_y = -∂ H_ fast/∂ u,
with
H_ fast(u,p;v)= 1/2p^2 - f̃(u) + uv.
The functional form of H_ fast is equivalent to that obtained by reducing 𝒢 to the fast system and scaling by a factor of ε. We recall from (<ref>) that f̃= 1/12u^4 - 1/2 u^2.
§.§ The critical manifold of the fast system
In the four-dimensional (u,p,v,q) space, the fast system (<ref>) has a two-dimensional critical manifold,
S := { (u,p,v,q)
: u∈ℝ, p=0, v=f(u), q ∈ℝ},
which corresponds to the union of all of the fixed points of (<ref>).
Moreover, given that the cubic function f(u)=1/3u^3 - u has three segments separated by the local extrema at (u,v)=(± 1, ∓2/3), the critical manifold S has three branches:
S_s^-
= { (u,p,v,q) ∈ S : u=u_- < -1, p = 0, v=f(u_-), q ∈ℝ},
S_c
= { (u,p,v,q) ∈ S : u=u_0, -1 < u_0 < 1, p=0, v=f(u_0), q ∈ℝ},
S_s^+
= { (u,p,v,q) ∈ S : u=u_+>1, p=0, v=f(u_+), q ∈ℝ}.
S_s^± are hyperbolic saddle invariant manifolds, because the fixed points (u_±,0) are saddle fixed points of the (u,p) subsystem for each v>-2/3 and each v < 2/3, respectively. Similarly, S_c is a non-hyperbolic, elliptic invariant manifold, because the fixed point (u_0,0) of the (u,p) subsystem is a nonlinear center for each -2/3 < v < 2/3.
The three invariant manifolds are illustrated in Fig. <ref>.
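These stability types can be read off from a one-line computation (included here only for convenience): linearizing the (u,p) layer equations about a base point of S gives the normal eigenvalues.
\[
\frac{\partial\,(p,\;f(u)-v)}{\partial\,(u,p)}\bigg|_{S}
= \begin{pmatrix} 0 & 1 \\ f'(u) & 0 \end{pmatrix},
\qquad
\lambda_{\pm} = \pm\sqrt{f'(u)},
\]
so the normal eigenvalues are real and of opposite sign on $S_s^{\pm}$, where $f'(u)>0$, and purely imaginary on $S_c$, where $f'(u)<0$.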
The fold curves are
L^±=
{
(u,p,v,q) | u = ± 1, p=0,
v = ∓2/3, q ∈ℝ}.
They form the boundaries of the invariant manifolds, where S_s^± lose normal hyperbolicity.
Together with the three invariant manifolds, they provide the following natural decomposition of the critical manifold,
S = S_s^- ∪ L^- ∪ S_c ∪ L^+ ∪ S_s^+.
§.§ Homoclinics of the fast system
We prove the existence of homoclinic orbits in the fast system, which will be useful in the deconstruction of spatially periodic canard solutions.
For each fixed v ∈ (-2/3,0), there exists a homoclinic solution of (<ref>) corresponding to the intersection W^s(u_+) ∩ W^u(u_+), where u_+ ∈ S_s^+ is the root of v = f(u) for which u_+ >1.
Similarly, for each fixed v ∈ (0,2/3), there exists a homoclinic solution of (<ref>) corresponding to the intersection W^s(u_-) ∩ W^u(u_-), where u_- ∈ S_s^- is the root of v = f(u) for which u_- <-1.
We examine the fast system Hamiltonian along the cross-section { p =0 }:
H_ fast(u,0; v) = u v - ( 1/12 u^4 - 1/2 u^2 ).
For a fixed v in the interval (0,2/3), H_ fast(u,0; v) is a quartic polynomial (Fig. <ref>) in u with local maxima at u_± and a local minimum at u_0.
Recall that u_± are saddle equilibria of (<ref>) and u_0 is a center.
The Hamiltonian H_ fast(u,0; v) is monotonically increasing for all u<u_-, decreasing for u_-<u<u_0, increasing for u_0<u<u_+, and decreasing for u>u_+.
Moreover,
H_ fast(u_+,0; v) > H_ fast(u_-,0; v) > H_ fast(u_0,0; v),
so that, by the Intermediate Value Theorem, there exists a u_* ∈ (u_0,u_+) such that H_ fast(u_-,0; v) = H_ fast(u_*,0; v).
Since the only equilibrium between u_- and u_* is the nonlinear center u_0, the orbits with u ∈ [u_-,u_*] are closed.
Hence, there exists a closed orbit of (<ref>) that connects the fixed point (u_-,0) to a point (u_*,0) and then back to itself.
The proof for v ∈ (-2/3,0) is similar.
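The turning point u_* in this proof is easily located numerically; the sketch below (with the illustrative value v=0.3, and using a standard root finder) is not part of the original argument but may help in reproducing the fast homoclinics.

import numpy as np
from scipy.optimize import brentq

v = 0.3                                   # any fixed v in (0, 2/3)
f = lambda u: u**3 / 3.0 - u              # cubic nullcline
H = lambda u: u * v - (u**4 / 12.0 - u**2 / 2.0)   # H_fast(u, 0; v)

# Equilibria of the layer problem: the three roots of f(u) = v.
u_minus = brentq(lambda u: f(u) - v, -2.0, -1.0)   # saddle on S_s^-
u_zero  = brentq(lambda u: f(u) - v, -1.0,  1.0)   # center on S_c
u_plus  = brentq(lambda u: f(u) - v,  1.0,  2.0)   # saddle on S_s^+

# Turning point u_* in (u_0, u_+) on the same level set as the saddle u_-.
u_star = brentq(lambda u: H(u) - H(u_minus), u_zero, u_plus)
print(u_minus, u_star)   # the homoclinic to u_- spans u in [u_minus, u_star]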
§.§ Heteroclinics in the intersections W^u(S_s^∓) ∩ W^s(S_s^±)
In this section, we establish
In the (u,p,v,q) space, W^u(S_s^-) ∩ W^s(S_s^+) ≠ ∅ and W^u(S_s^+) ∩ W^s(S_s^-) ≠ ∅. Moreover, these intersections lie in the plane { v=0 }, and they are transverse.
The stable and unstable manifolds are shown in Fig. <ref>,
along with the heteroclinic orbits that lie in their transverse intersections.
For v=0,
the saddles (-√(3),0)
and (√(3),0)
lie on the level set
H_ fast (±√(3),0;0)=3/4.
They are connected by heteroclinic orbits
in the (u,p) plane,
as may be verified directly
from the functional form
of the Hamiltonian.
Next, for values of v near –but not equal to– zero,
the saddles (u^±,0) lie on different level sets of the Hamiltonian.
In particular, for each fixed v>0 and close to zero,
H_ fast < 3/4 at the saddle (u^-,0) on S_s^-,
and H_ fast > 3/4 at the saddle (u^+,0) on S_s^+.
Similarly, for each fixed v≲ 0 ,
H_ fast > 3/4 at the saddle (u^-,0) on S_s^-,
and H_ fast < 3/4 at the saddle (u^+,0) on S_s^+.
Therefore,
W^u(S_s^-) and W^s(S_s^+)
intersect transversely
in the plane { v=0 },
as do the invariant manifolds
W^u(S_s^+) and W^s(S_s^-).
§.§ The cusp point at v = -2/3
For v=-2/3,
the fast system (<ref>) has a saddle-node point at (u,p)=(1,0). The Jacobian at that point has a double zero eigenvalue, with
Jordan block given by
[
[ 0 1; 0 0 ]].
In the phase plane, it is a cusp point with a single-branched stable manifold
and a single-branched unstable manifold. In parametrized form, the functions whose graphs give these manifolds have the form
p = ∓ C (u-1)^3/2,
for some C>0, respectively, and they lie on the level set H_ fast(u,p)=- 1/4.
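The constant C can be computed exactly from this level set; expanding about u=1 (a short calculation added here for completeness) gives
\[
\tfrac12 p^2 = \tilde f(u) + \tfrac23 u - \tfrac14
= \tfrac13 (u-1)^3 + \tfrac1{12}(u-1)^4,
\qquad
p = \mp\sqrt{\tfrac23}\,(u-1)^{3/2}\sqrt{1 + \tfrac{u-1}{4}},
\]
so that $C=\sqrt{2/3}$ at leading order.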
This cusp point is the point we will desingularize. The stable manifold will correspond to one fixed point on the equator of the blown-up hemisphere, and the unstable manifold to another fixed point. The dynamics of the orbits near the cusp point will then be determined by studying the dynamics over the hemisphere.
§ SLOW SYSTEM ON S, WITH THE FOLDED AND ORDINARY SINGULARITIES
In this section, we study the slow system induced by the spatial ODE system on the two-dimensional critical manifold S = { u ∈ℝ, p=0, v=f(u), q ∈ℝ}, recall (<ref>). We derive the locations and stability types of the folded and ordinary singularities.
Recall that the slow manifold S consists of the fixed points of the fast system in the limit δ→ 0 (i.e., of (<ref>) with δ=0).
One obtains the slow dynamics on S in the slow variable by differentiating the constraints p=0 and v=f(u) that define S:
[
[ 0 1; f'(u) 0 ]]
[
[ u_x; p_x ]]
=
[
[ 0; v_x ]].
The adjoint operator is
adj[
[ 0 1; f'(u) 0 ]]
=
[
[ 0 -1; -f'(u) 0 ]].
Hence, left multiplying (<ref>)
by the adjoint and recalling the third and fourth components of
(<ref>), we find
-f'(u)
[
[ u_x; p_x ]]
= [
[ -q; 0 ]]
[
[ v_x; q_x ]]
= [
[ q; ε(u-a) ]].
Next, we desingularize the reduced system on S
by rescaling time with a factor of f'(u).
Let dx = f'(u) dx_d.
In this new, dynamic time variable,
the reduced system (<ref>) is equivalent to
[
[ u_x_d; p_x_d; v_x_d; q_x_d ]]
= [
[ q; 0; f'(u) q; εf'(u) (u-a) ]].
Finally, we observe that, on S, the constant solution for p is p=0.
Also, the equation for v decouples (and may be solved by quadrature).
Hence, we arrive at the desingularized reduced system
u_x_d = q
q_x_d = εf'(u) (u-a).
This is the main system studied in this section; it defines the folded and ordinary singularities.
We note that on S_s^± the direction of the flow of (<ref>) is the same as that of (<ref>), whereas on S_c the flow direction is reversed since f'(u)<0 on S_c.
Also, (<ref>) is Hamiltonian with
H_ desing(u,q;a)= -1/2q^2 + ε( 1/4u^4 - 1/3a u^3 - 1/2u^2 + au ),
where H_ desing is induced by (<ref>), i.e., H_ desing = 𝒢|_S, and u_x_d= -∂ H_ desing/∂ q and q_x_d= ∂ H_ desing/∂ u.
The ordinary and folded singularities of the desingularized reduced system (<ref>) are found as follows.
Off the fold set, both components of the vector field vanish at (u,q)=(a,0). Hence, (<ref>) has an ordinary singularity (equilibrium) at
E = { u=a, q=0 }.
Then, on the fold set where f'(u)=0, there are two folded singularities
M^± = { u = ± 1, q=0 },
where the vector field in (<ref>) vanishes.
We classify the linear stability types of the ordinary and folded singularities using the eigenvalues of the Jacobian of (<ref>),
[
[ [ 0 1; ε( f''(u) (u-a) + f'(u) ) 0 ]]
The classifications are listed in Table <ref>.
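As a quick check on the entries of Table <ref> (our computation, using f''(u)=2u), the eigenvalues at the three singularities are
\[
E:\ \lambda_\pm = \pm\sqrt{\varepsilon\,(a^2-1)}, \qquad
M^{+}:\ \lambda_\pm = \pm\sqrt{2\varepsilon\,(1-a)}, \qquad
M^{-}:\ \lambda_\pm = \pm\sqrt{2\varepsilon\,(1+a)} ,
\]
so that, for a>0, E is a center for a<1 and a saddle for a>1, M^- is a folded saddle, and M^+ changes from a folded saddle (a<1) to a folded center (a>1), with the degenerate folded saddle-node occurring at a=1.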
The phase plane of (<ref>)
is sketched for different values of a>0
in Fig. <ref>. (Those for a<0 may be obtained using the symmetry (u,q,a) → (-u,-q,-a).)
The orbits lie on the level sets
of the Hamiltonian H_ desing
given by (<ref>).
The equilibrium E lies on the level set H_ desing= (ε/12) a^2 ( 6 - a^2 ), and the folded singularities M^± lie on the level sets H_ desing= ε( ±2/3 a - 1/4 ). Furthermore, we observe that ∂ H_ desing/∂ u (u,0) = ε(u^2-1)(u-a)> 0
for u ∈ (-1,a) in the case 0<a<1 and for u ∈ (-1,1)
in the case a≥ 1.
By Fenichel theory <cit.>, the normally hyperbolic saddle sheets, S_s^±, of the critical manifold perturb to nearby invariant slow manifolds, S_s,δ^±, of (<ref>). Moreover, the flow of (<ref>) restricted to S_s,δ^± is a small 𝒪(δ) perturbation of the reduced flow on S_s^±. Canard theory can then be used to establish that the singular true and faux canards of the folded saddle persist as maximal canard solutions for 0<δ≪ 1 provided the eigenvalues of the folded saddle remain bounded away from zero <cit.>.
The saddle slow manifolds, S_s,δ^+, and associated maximal canards are shown in Fig. <ref>. The numerical method used to compute these is outlined in Appendix <ref>.
The saddle slow manifolds consist of two types of solutions. The first type are the solutions that approach the neighbourhood of the fold curve L^+ and subsequently escape via the fast dynamics. The other type are the solutions that turn away from the fold region and remain on S_s,δ^+. The separatrices that divide between the two types of solutions are the maximal canards of the folded saddle. In the RFSN-II limit, the saddle slow manifolds and the canard dynamics become complicated (see Fig. <ref>(b).)
§ DESINGULARIZATION OF THE REVERSIBLE FSN-II POINT M^+
In this section, we focus on the dynamics of the system for a values near a=1 and desingularize the reversible FSN-II point M^+ located at (u, p, v, q, a, δ) = (1, 0, -2/3, 0, 1, 0).
First, we translate coordinates so that the RFSN-II point is shifted to the origin.
Let
(u,p,v,q,a,δ)
=(1+ũ,p̃,-2/3 + ṽ, q̃,
1+ã,δ̃).
In these new coordinates,
the spatial system (<ref>) becomes
ũ_y = p̃
p̃_y = ũ^3/3 + ũ^2 - ṽ
ṽ_y = δ̃q̃
q̃_y = δ̃ (ũ-ã)
ã_y = 0
δ̃_y = 0.
Also, the conserved quantity (<ref>) may be rewritten as
𝒢(u,p,v,q,a)
= 𝒢̃(ũ,p̃,ṽ,q̃,ã) + ( 2/3ã + 5/12)
= 𝒢̃(ũ,p̃,ṽ,q̃,ã) + ( 2/3a - 1/4),
where 𝒢̃(ũ,p̃,ṽ,q̃,ã)= 1/2 (p̃^2 - q̃^2 )
- ( 1/12ũ^4 + 1/3ũ^3 + (ã-ũ)ṽ).
The point
(ũ,p̃,ṽ,q̃,ã,δ̃)
=(0,0,0,0,0,0), which corresponds to the RFSN-II point at a=1
(recall Table <ref>),
is a nilpotent point
of (<ref>).
Hence, in order to unfold the dynamics, we desingularize this point
by introducing the additional dependent variable r
and by dynamically rescaling the variables:
ũ = √() r^2 u̅, p̃ = r^3 p̅, ṽ = r^4 v̅, q̃ = ^3/2 r^3 q̅, ã = √() r^3 a̅, δ̃ = r^2 δ̅.
Here, (u̅,p̅,v̅,q̅,a̅, δ̅) ∈𝕊^5.
The factors of in (<ref>) were chosen
for symmetry in the equations for ũ and ṽ.
We will examine system
(<ref>)
in two coordinate charts:
the entry/exit chart
K_1 = {u̅=1 }
and the rescaling (or central) chart
K_2 = {δ̅=1 }.
As is customary in articles on geometric desingularization (a.k.a. blow up), in chart K_i, i=1,2,
the variable □̅
will be denoted by □_i.
In chart K_1,
the rescaling (<ref>)
is given by
ũ = √() r_1^2, p̃ = r_1^3 p_1, ṽ = r_1^4 v_1, q̃ = ^3/2 r_1^3 q_1, ã = √()r_1^3 a_1, δ̃ = r_1^2 δ_1.
Then, in chart K_2,
the rescaling (<ref>)
is given by
ũ = √() r_2^2 u_2, p̃ = r_2^3 p_2, ṽ = r_2^4 v_2, q̃ = ^3/2 r_2^3 q_2, ã = √() r_2^3 a_2, δ̃ = r_2^2.
This section is organized as follows.
In Subsection <ref>, we present the analysis in the rescaling chart K_2, identifying the key algebraic solution in this chart and the self-similarity of the level set on which it lies.
These results also justify the range of validity of the asymptotics, as stated in the Introduction, since we have ã = ^3/2δ̃^3/2 q_2 by the rescalings (<ref>) and δ̃=δ by (<ref>).
Subsection <ref> contains the analysis in the entry/exit chart K_1.
Then, the transition maps between the two charts are presented in Subsection <ref>, and these are used to track the key solution from chart K_2 into chart K_1.
Next, in Subsection <ref>, the transverse intersection of the main center-unstable and center-stable manifolds is shown, thereby establishing the formula (<ref>) for the critical value a_c(δ).
Finally, Subsection <ref> presents some analysis of the smoothness of key solutions.
§.§ Rescaling (or central) chart K_2
To obtain the governing equations
in chart K_2,
we substitute (<ref>)
into (<ref>):
du_2/dy = r_2 √() p_2
dp_2/dy = r_2 (u_2^2 - v_2) + 1/3√() r_2^3 u_2^3
dv_2/dy = r_2 √() q_2
dq_2/dy = r_2 (u_2 - r_2 a_2)
da_2/dy = 0.
Here, we used dr_2/dy=0 in K_2,
which follows since δ̅≡ 1 implies
dδ̅/dy=0.
Then, introducing the rescaled independent variable
y_2 = r_2 y
and letting prime denote
d/dy_2,
we obtain the desingularized vector field in chart K_2,
u_2' = √() p_2
p_2' = u_2^2 - v_2 + 1/3√() r_2^2 u_2^3
v_2' = √() q_2
q_2' = u_2 - r_2 a_2
a_2' = 0.
For each a_2, this system is Hamiltonian
[ u_2^'; q_2^'; p_2^'; v_2^' ] = [ ∂ H_2/∂ p_2; ∂ H_2/∂ v_2; -∂ H_2/∂ u_2; -∂ H_2/∂ q_2 ] = J ∇_(u_2,q_2,p_2,v_2) H_2
Here, J is the 4× 4 anti-symmetric matrix [ 0 𝕀; -𝕀 0 ], 𝕀 is the 2× 2 identity matrix, and H_2 is the Hamiltonian:
H_2(u_2,p_2,v_2,q_2;r_2)
= 1/2√()( p_2^2 - q_2^2 )
+ (u_2 - r_2 a_2) v_2
- ( 1/3 u_2^3 + 1/12√() r_2^2 u_2^4 ),
in which r_2 is a constant parameter.
Furthermore, this Hamiltonian H_2 is directly related through the transformation (<ref>) to the conserved quantity via:
𝒢̃ (ũ,p̃,ṽ,q̃,ã)
= ^5/2 r_2^6 H_2 (u_2, p_2, v_2, q_2; r_2),
recall (<ref>).
The hyperplane { r_2=0 }
is invariant in chart K_2.
On this hyperplane,
the system reduces to
u_2^' = √() p_2
p_2^' = u_2^2 - v_2
v_2^' = √() q_2
q_2^' = u_2.
§.§.§ The key algebraic solution in chart K_2
In this section, we establish the existence of a key algebraic solution that acts as a separatrix in phase space.
The system (<ref>) possesses an explicit algebraic solution given by
Γ_0 = (u_2(y_2), p_2(y_2), v_2(y_2), q_2(y_2)) = ( 1/12√(ε) y_2^2, 1/6 y_2, ε/144 y_2^4 - 1/6, 1/36√(ε) y_2^3 ).
Moreover, the orbit Γ_0 has the following two halves:
Γ_0^- = Γ_0 |_y_2 ≤ 0 and Γ_0^+ = Γ_0 |_y_2 ≥ 0.
The segment Γ_0^- corresponds to the true canard of the RFSN-II singularity and the segment Γ_0^+ corresponds to the faux canard of the RFSN-II.
As we will see, both halves of the orbit Γ_0 play central roles in the dynamics induced by the RFSN-II singularity and also in the singular limit of the spatially periodic canards.
It lies in the zero level set of the Hamiltonian H_2(u_2,p_2,v_2,q_2;0), as may be verified by direct substitution and observing that all y_2-dependent terms sum to zero.
For reference, it may also be parametrized by p_2: u_2 = 3 √(ε) p_2^2, v_2 = 9 ε p_2^4 - 1/6, and q_2 = 6 √(ε) p_2^3.
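Both claims can be confirmed by direct substitution; the following minimal sympy sketch does so, with the factors involving the small parameter written explicitly as powers of ε.

import sympy as sp

# Check that Gamma_0 solves the chart-K_2 system on {r_2 = 0} and lies in {H_2(.;0) = 0}.
y = sp.Symbol('y_2', real=True)
eps = sp.Symbol('varepsilon', positive=True)

u2 = sp.sqrt(eps)*y**2/12
p2 = y/6
v2 = eps*y**4/144 - sp.Rational(1, 6)
q2 = sp.sqrt(eps)*y**3/36

assert sp.simplify(sp.diff(u2, y) - sp.sqrt(eps)*p2) == 0   # u_2' = sqrt(eps)*p_2
assert sp.simplify(sp.diff(p2, y) - (u2**2 - v2)) == 0      # p_2' = u_2^2 - v_2
assert sp.simplify(sp.diff(v2, y) - sp.sqrt(eps)*q2) == 0   # v_2' = sqrt(eps)*q_2
assert sp.simplify(sp.diff(q2, y) - u2) == 0                # q_2' = u_2

H2 = sp.sqrt(eps)/2*(p2**2 - q2**2) + u2*v2 - u2**3/3
assert sp.simplify(H2) == 0                                 # Gamma_0 lies in {H_2 = 0}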
§.§.§ Geometric role of the algebraic solution Γ_0
Here, we study the unperturbed problem (<ref>) and provide a geometric interpretation of the algebraic solution Γ_0.
Recall from (<ref>) that the system (<ref>) has the Hamiltonian
H_2(u_2,p_2,v_2,q_2;0) = 12√() (p_2^2-q_2^2)+u_2 v_2 - 13 u_2^3,
and the canard solutions, Γ_0^±, lie in the zero level contour { H_2(u_2,p_2,v_2,q_2;0) = 0 }.
By using this constraint to eliminate the v_2 variable, the dynamics restricted to the { H_2(u_2,p_2,v_2,q_2;0) = 0 } level set are given by
u_2^' = √() p_2
p_2^' = 23u_2^2 + 12u_2√() (p_2^2-q_2^2)
q_2^' = u_2.
To remove the singularity at u_2 = 0, we desingularize the vector field via the transformation dy_2 = u_2 dη_2, which gives
du_2/dη_2 = √() u_2 p_2
dp_2/dη_2 = 23u_2^3 + 12√() (p_2^2-q_2^2)
dq_2/dη_2 = u_2^2.
The system (<ref>) is topologically equivalent to (<ref>) for u_2 >0 and has opposite orientation for u_2<0. Moreover, the system (<ref>) has the symmetry
(u_2,p_2,q_2,η_2) → (u_2,-p_2,-q_2,-η_2),
which it inherits from the reversibility symmetry (<ref>) of the original problem.
The system (<ref>) possesses two lines of equilibria,
ℒ_± = { u_2 = 0, p_2 = ± q_2: q_2 ∈ℝ}.
The line ℒ_- has hyperbolic and center spectra, σ_ s/u = { -√()q_2,-√()q_2} (where s/u depends on the sign of q_2) and σ_c = { 0 }, respectively, with corresponding subspaces
𝔼^ s/u( ℒ_- ) = span{[ 1; 0; 0 ], [ 0; 1; 0 ]} and 𝔼^c( ℒ_- ) = span[ 0; -1; 1 ].
Thus, the half-lines
ℒ_-^s := { (u_2,p_2,q_2) ∈ℒ_- : q_2 >0 } and ℒ_-^u := { (u_2,p_2,q_2) ∈ℒ_- : q_2 < 0 }
are center-stable and center-unstable, respectively.
Similarly, the line ℒ_+ has hyperbolic and center spectra, σ_ s/u = {√() q_2, √() q_2 } and σ_c = { 0 }, respectively, with corresponding subspaces
𝔼^ s/u( ℒ_+ ) = span{[ 1; 0; 0 ], [ 0; 1; 0 ]} and 𝔼^c ( ℒ_+ ) = span[ 0; 1; 1 ].
Thus, the half-lines
ℒ_+^s := { (u_2,p_2,q_2) ∈ℒ_+ : q_2 <0 } and ℒ_+^u := { (u_2,p_2,q_2) ∈ℒ_+ : q_2 >0 }
are center-stable and center-unstable, respectively.
In Appendix <ref>,
we prove the following proposition that establishes the key geometric properties of the algebraic solution, Γ_0, and its stable and unstable manifolds in phase space:
The restriction of the system (<ref>) to the half-space { u_2 ≥ 0} possesses three classes of heteroclinic solutions.
* A class 1 heteroclinic is a solution that connects a point (0,-q̃_2,q̃_2) ∈ℒ_-^u for some q̃_2<0 to a point (0,-q̂_2,q̂_2) ∈ℒ_-^s for some q̂_2>0.
* A class 2 heteroclinic is a solution that connects a point (0,-q̃_2,q̃_2) ∈ℒ_-^u for some q̃_2<0 to a point (0,q̂_2,q̂_2) ∈ℒ_+^s for some q̂_2<0.
* A class 3 heteroclinic is a solution that connects a point (0,q̃_2,q̃_2) ∈ℒ_+^u for some q̃_2>0 to a point (0,-q̂_2,q̂_2) ∈ℒ_-^s for some q̂_2>0.
The stable manifold, W^s(Γ_0^-), of the algebraic solution Γ_0^- is the phase-space boundary between class 1 and class 2 heteroclinics.
The unstable manifold, W^u(Γ_0^+), of the algebraic solution Γ_0^+ is the phase-space boundary between class 1 and class 3 heteroclinics.
The results of this proposition are illustrated in Fig. <ref>.
§.§ Entry/exit chart K_1
We substitute (<ref>) into (<ref>) and desingularize the vector field by transforming to a dynamic independent variable y_1= r_1 y. Hence, the system in chart K_1 is
dr_1/dy_1 = 1/2√() p_1 r_1
d δ_1/dy_1 = -√()p_1 δ_1
dp_1/dy_1 = 1-v_1-3/2√()p_1^2+1/3√()r_1^2
dv_1/dy_1 = √()(-2p_1 v_1 + δ_1 q_1 )
dq_1/dy_1 = -3/2√() p_1 q_1+δ_1 (1-r_1 a_1 )
da_1/dy_1 = -3/2√() p_1 a_1.
This system has a series of invariant hyperplanes defined by { r_1=0}, {δ_1=0}, { a_1=0}, and their intersections. In this section, we present the analysis of system (<ref>) in the hyperplane {r_1=0}∩{δ_1=0}∩{ a_1=0}, where the dynamics are the simplest. Then, we go directly to the analysis of system (<ref>) in the full (r_1,δ_1,p_1,v_1,q_1,a_1) space.
Intermediate results that are needed for this full analysis are presented in Appendix <ref>, where we analyze system (<ref>) in the following sequence of invariant hyperplanes:
{δ_1=0}∩{ a_1=0},
{ r_1=0}∩{ a_1 = 0 },
{ r_1 = 0 }∩{δ_1 = 0 },
{δ_1=0 },
and { r_1=0},
respectively.
For the remainder of this section and in Appendix <ref>, we will use an overdot to denote the derivative with respect to y_1: = d/dy_1.
In the invariant hyperplane
{r_1=0}∩{δ_1=0}∩{ a_1 = 0 },
the equations (<ref>) reduce to
ṗ_1 = 1 - v_1 - 3/2√() p_1^2
v̇_1 = -2 √() p_1 v_1
q̇_1 = -3/2√() p_1 q_1.
Hence, the q_1 equation decouples from the equations for (p_1,v_1).
This system (<ref>) has two invariant lines. The first is a line of equilibria
ℓ = { r_1=0, δ_1=0, p_1=0, v_1=1, q_1 ∈ℝ, a_1=0 }.
It is a line of saddle points,
since the eigenvalues are
λ_s = -√(2)^1/4, λ_u = √(2)^1/4, and λ_c = 0.
The associated eigenvectors are
w_s = [ [ 2√(2); 4^1/4; 3 ^1/4 q_1 ]],
w_u = [ [ -2√(2); 4^1/4; 3 ^1/4 q_1 ]],
and w_c = [ [ 0; 0; 1 ]].
See Fig. <ref>.
Therefore, by standard center manifold theory (see for example <cit.>), system (<ref>) has a one-dimensional center manifold W^c(ℓ) in the invariant hyperplane
{ r_1 = 0 }∩{δ_1 = 0 }∩{ a_1=0 }, and it has one-dimensional fast stable and unstable fibers in the transverse directions.
This manifold, which we will show is embedded in a larger center manifold in the full system, and the fast stable and unstable fibers,
will play central roles in the dynamics of the full system (<ref>).
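A short symbolic computation reproduces this spectrum; the sketch below linearizes (<ref>) about a point of ℓ, with the small parameter written explicitly as ε.

import sympy as sp

p1, v1, q1 = sp.symbols('p_1 v_1 q_1', real=True)
eps = sp.Symbol('varepsilon', positive=True)
se = sp.sqrt(eps)

# Reduced system in the hyperplane {r_1 = 0, delta_1 = 0, a_1 = 0}.
F = sp.Matrix([1 - v1 - sp.Rational(3, 2)*se*p1**2,   # p_1'
               -2*se*p1*v1,                           # v_1'
               -sp.Rational(3, 2)*se*p1*q1])          # q_1'

J = F.jacobian([p1, v1, q1]).subs({p1: 0, v1: 1})
print(J.eigenvals())   # {0: 1, sqrt(2)*eps**(1/4): 1, -sqrt(2)*eps**(1/4): 1}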
The second invariant line in the phase space of equations (<ref>) is
I = { r_1=0, δ_1=0,
p_1 ∈ℝ,
v_1=0, q_1=0, a_1=0
}.
On I, the vector field reduces to ṗ_1 = 1 - 3/2√()p_1^2, and there are two equilibria:
E_±= { r_1 =0, δ_1=0,
p_1 = ±√(23)^-1/4,
v_1=0, q_1=0, a_1=0
}.
The equilibrium E_+ is stable with eigenvalues -√(6)^1/4, -2√(2/3)^1/4, -√(3/2)^1/4,
and eigenvectors
w_1^s = [ [ 1; 0; 0 ]],
w_2^s = [ [ -√(3); √(2)^1/4; 0 ]],
w_3^s = [ [ 0; 0; 1 ]].
The equilibrium E_- is unstable with eigenvalues
√(3/2)^1/4,
2√(2/3)^1/4,
and √(6)^1/4,
and eigenvectors
w_1^u = [ [ 0; 0; 1 ]],
w_2^u = [ [ √(3); √(2)^1/4; 0 ]],
w_3^u = [ [ 1; 0; 0 ]].
The equilibria E_± are shown in the (p_1,v_1) subsystem in
Fig. <ref>(a)
and in the (p_1,v_1,q_1) system in Fig. <ref>(b).
The number of arrows on the principal components of the stable and unstable manifolds indicates the relative magnitudes of the eigenvalues.
The orbits in this phase plane are defined implicitly by the one-parameter family √(ε) p_1^2 - ( 2/3 - 2 v_1 ) - c v_1^3/2 = 0, where c denotes the parameter.
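This invariance can be confirmed symbolically: eliminating c via the level-set relation, the derivative of the family along the (p_1, v_1) subsystem vanishes identically, as in the following minimal sympy sketch (with the small parameter written as ε).

import sympy as sp

p1, c = sp.symbols('p_1 c', real=True)
v1 = sp.Symbol('v_1', positive=True)
eps = sp.Symbol('varepsilon', positive=True)
se = sp.sqrt(eps)

F = se*p1**2 - (sp.Rational(2, 3) - 2*v1) - c*v1**sp.Rational(3, 2)
p1dot = 1 - v1 - sp.Rational(3, 2)*se*p1**2
v1dot = -2*se*p1*v1

dF = sp.diff(F, p1)*p1dot + sp.diff(F, v1)*v1dot
c_on_level_set = sp.solve(sp.Eq(F, 0), c)[0]
assert sp.simplify(dF.subs(c, c_on_level_set)) == 0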
Overall, the dynamics of (<ref>) are organized by the line of equilibria ℓ and the invariant line I.
Now, we turn to the invariant sets and dynamics of the full system (<ref>), recalling that some of the intermediate results are in Appendix <ref>.
The line I given by (<ref>) is invariant for the full system (<ref>), and E_± are again fixed points on I, recall (<ref>).
The equilibrium E_+ is a hyperbolic saddle with stable and unstable spectra
σ^s_+ = { -√(6)^1/4,
-2√(2/3)^1/4,
-√(3/2)^1/4,
-√(3/2)^1/4,
-√(2/3)^1/4} and σ_+^u = {1/√(6)^1/4}.
At E_+, the stable and unstable subspaces are given by
𝔼^s = span{[ [ 0; 0; 1; 0; 0; 0 ]],
[ [ 0; 0; -√(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 0; 1 ]],
[ [ 0; ^1/4; 0; 0; √(6); 0 ]]
} and 𝔼^u = span{[ [ 1; 0; 0; 0; 0; 0 ]]
}.
The other equilibrium, E_-, is also a hyperbolic saddle with stable and unstable spectra given by
σ^s_- =
{-1/√(6)^1/4} and σ_-^u = {√(2/3)^1/4,
√(3/2)^1/4,
√(3/2)^1/4,
2√(2/3)^1/4, √(6)^1/4}.
At E_-, the stable and unstable subspaces are
𝔼^s = span{[ [ 1; 0; 0; 0; 0; 0 ]]
} and 𝔼^u = span{[ [ 0; -^1/4; 0; 0; √(6); 0 ]],
[ [ 0; 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 0; 1 ]],
[ [ 0; 0; √(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 0; 1; 0; 0; 0 ]]
}.
Thus, the blow-up transformation (<ref>) splits the non-hyperbolic equilibrium state (a,0,f(a),0) into the pair of hyperbolic fixed-points E_±. Moreover, by (<ref>) E_+ has a five-dimensional stable manifold and a one-dimensional unstable manifold, with the latter being in the r_1 direction,
and by (<ref>) E_- has a one-dimensional stable manifold (in the r_1 direction) and a five-dimensional unstable manifold.
Now, in addition to the equilibria E_± and invariant line I, the full system (<ref>) also has an important, three-dimensional manifold of equilibria:
𝒮_1 = { r_1 ∈ℝ, δ_1=0, p_1=0, v_1=1 + 1/3√() r_1^2, q_1 ∈ℝ, a_1 ∈ℝ}.
It contains the line ℓ, and it is a manifold of saddle fixed points, since the eigenvalues are
λ_s = -√(2 √() + r_1^2), λ_u = √(2 √() + r_1^2),
λ_c = 0, 0, 0, 0.
The associated stable and unstable eigenspaces are
𝔼^s = span{[ [ -√() r_1; 0; 2 √(2√() + r_1^2); 4/3√()(3 + √()r_1^2); 3 √() q_1; 3√()a_1 ]]
},
𝔼^u = span{[ [ -√() r_1; 0; - 2 √(2√() + r_1^2); 4/3√()(3 + √()r_1^2); 3 √() q_1; 3√()a_1 ]] },
and the associated center subspace is
𝔼^c = span{[ [ 0; 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 0; 1 ]],
[ [ 3; 0; 0; 2√() r_1; 0; 0 ]],
[ [ 0; 2 + √() r_1^2; q_1; 0; 0; 0 ]]
}.
Therefore, application of center manifold theory
(see for example <cit.>) establishes that (<ref>)
has a four-dimensional
center manifold
M = W^c(ℓ).
This manifold contains the surface 𝒮_1 of equilibria.
It also contains the three-dimensional center manifold N
in the invariant hyperplane { r_1=0 }, see (<ref>) in Appendix <ref>.
Moreover, the branch of N in p_1<0 and δ_1>0 is unique.
The above analysis establishes the following lemma:
The following properties hold for the system (<ref>).
* There exists a stable invariant foliation, ℱ^s, with base M and one-dimensional fibers. For any constant c>-√(2), the contraction along ℱ^s during an interval [0,X] is stronger than e^cX.
* There exists an unstable invariant foliation, ℱ^u, with base M and one-dimensional fibers. For any constant c< √(2), the expansion along ℱ^u during an interval [0,X] is stronger than e^cX.
§.§ Forward and backward asymptotics of Γ_0 in chart K_1
In this section, we study the forward and backward asymptotics (as y_2 →±∞) of the “Vo connection" Γ_0, given by (<ref>), along with its tangent vectors.
We start with the coordinate transformation maps between the two charts.
For δ_1>0, the coordinate transformation from K_1 to K_2 is
κ_12 (r_1,δ_1,p_1,v_1,q_1,a_1)
= (r_2,u_2,p_2,v_2,q_2,a_2)
= (
r_1 √(δ_1), 1/δ_1,
p_1/δ_1^3/2, v_1/δ_1^2,
q_1/δ_1^3/2,
a_1/δ_1^3/2).
For u_2>0,
the coordinate transformation from K_2 to K_1 is
κ_21 (r_2,u_2,p_2,v_2,q_2,a_2)
= (r_1,δ_1,p_1,v_1,q_1,a_1)
= (
r_2 √(u_2), 1/u_2, p_2/u_2^3/2,
v_2/u_2^2, q_2/u_2^3/2,
a_2/u_2^3/2).
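As a consistency check, the two transition maps are mutually inverse on their common domain, as the following short sympy sketch confirms.

import sympy as sp

r1, d1, p1, v1, q1, a1 = sp.symbols('r_1 delta_1 p_1 v_1 q_1 a_1', positive=True)

def kappa_12(r1, d1, p1, v1, q1, a1):
    # K_1 -> K_2, valid for delta_1 > 0
    return (r1*sp.sqrt(d1), 1/d1, p1/d1**sp.Rational(3, 2),
            v1/d1**2, q1/d1**sp.Rational(3, 2), a1/d1**sp.Rational(3, 2))

def kappa_21(r2, u2, p2, v2, q2, a2):
    # K_2 -> K_1, valid for u_2 > 0
    return (r2*sp.sqrt(u2), 1/u2, p2/u2**sp.Rational(3, 2),
            v2/u2**2, q2/u2**sp.Rational(3, 2), a2/u2**sp.Rational(3, 2))

round_trip = kappa_21(*kappa_12(r1, d1, p1, v1, q1, a1))
assert all(sp.simplify(w - z) == 0 for w, z in zip(round_trip, (r1, d1, p1, v1, q1, a1)))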
Now, for the forward asymptotics of Γ_0, we use κ_21 to find
lim_y_2 →∞κ_21(Γ_0)
= lim_y_2 →∞(
0, 12/√() y_2^2, 4√(3)/^3/4 y_2^2,
1-24/ y_2^4, 2√(3)/3^1/4, 0
)
= (
0, 0, 0, 1,
2√(3)/3^1/4, 0
).
Hence,
on the invariant hyperplane { r_1=0 },
orbits on Γ_0
are forward asymptotic
to the fixed point
(0,0,0,1,q^+,0)
with q^+=2√(3)/3^1/4,
on the line of equilibria ℓ.
Similarly, the backward asymptotics of Γ_0
are given by:
lim_y_2 → -∞κ_21(Γ_0)
= lim_y_2 → -∞(
0, 12/√() y_2^2, -4√(3)/^3/4 y_2^2,
1-24/ y_2^4, -2√(3)/3^1/4, 0
)
= (
0, 0, 0, 1,
-2√(3)/3^1/4, 0
).
Hence, on the invariant hyperplane
{r_1=0}, solutions on Γ_0
are backward asymptotic to the fixed point
(0,0,0,1,q^-,0) with
q^-=-2√(3)/3^1/4,
on the line of equilibria ℓ.
The tangent vectors of Γ_0 in chart K_1 at the equilibria (0,0,0,1,q^±,0) are given by
lim_y_2 →±∞d/dy_2κ_21(Γ_0)/‖d/dy_2κ_21(Γ_0) ‖
= lim_y_2 →±∞(0,∓ 24 √() y_2^2,-8√(3)^1/4 y_2^2,± 96,0 ,0)/8√(3)√(48+√()y_2^4(1+3√()))
= ( 0, ∓√(3)^1/4/√(1+3√()),-1/√(1+3√()), 0, 0, 0).
Hence, the tangent vectors of Γ_0 in chart K_1 at the equilibria (0,0,0,1,q^±,0) are tangent to the center manifolds M. In fact, at these equilibrium points, the tangent vectors are parallel to the generalized eigenvector of the center subspace.
§.§ Intersection of center-unstable and center-stable manifolds at Lg
In this section, we present the derivation of (<ref>):
For 0 < δ≪ 1, the center-unstable manifold W^cu(0,0,0,1,q^-,0) coincides with the center-stable manifold W^cs(0,0,0,1,q^+,0) for
a_c(δ) = 1 - 5 /48δ^2 + 𝒪(δ^3).
This lemma is proven in Appendix <ref>.
It involves a Melnikov type calculation.
An immediate consequence of the lemma is that, for all δ>0 sufficiently small, there is a maximal canard orbit exactly at a_c(δ)=1 - 5 /48δ^2 + 𝒪(δ^3).
Moreover, the orbit is unique up to translations since the ODEs are autonomous.
Of all the canard orbits in the spatial ODE system (<ref>) that pass through a neighborhood of the cusp point, the maximal canard has the longest segments near the stable and unstable manifolds of the cusp point, all the way out to where u=+√(3).
This critical value is the analog in spatial dynamics of the critical parameter value at which the slow attracting and repelling manifolds coincide to all orders, and hence the explosion of temporal limit cycle canards occurs, in planar fast-slow ODEs.
§.§ Calculation of the center-unstable and center-stable manifolds for Lg
In this section, we establish the following proposition:
The branches of the critical manifold corresponding to the level set 𝒢 = (23a-14) perturb to invariant slow manifolds. These slow manifolds are smooth to at least 𝒪(δ^2) in p and smooth to at least 𝒪(δ^3) in q, for
a_c(δ) = 1 + a_2 δ^2 + 𝒪(δ^3).
for any real a_2.
The proof uses the iterative scheme devised in <cit.> to calculate the invariant slow manifolds of (<ref>), and determine the key parameter values for which the slow manifolds intersect.
To do this, we follow the approach in <cit.> in which the parameters are carefully chosen to remove poles in the expansions of the slow manifolds.
Proof of Proposition <ref>.
We first translate the reversible FSN-II to the origin via the linear shift
ũ = u-1, p̃ = p, ṽ=v+23, q̃ = q, ã = a-1.
Dropping the tildes, the dynamics are governed by
u̇ = p
ṗ = f(u) - v
v̇ = δ q
q̇ = δ (u-a)
where f(u) = u^2+13u^3. In this formulation, the conserved quantity is
𝒢 = (u-a)v - f(u) + 12 p^2 - 12q^2,
where f(u) = 13u^3+112u^4 is an antiderivative of f, and the level set 𝒢 = (23a-14) becomes 𝒢 = 0. By eliminating the q-variable, we reduce the study of the 2-fast/2-slow system (<ref>) with constraint 𝒢=0 to the study of the 2-fast/1-slow system
u̇ = p
ṗ = f(u) - v
v̇ = √(2) δ √((u-a)v-f(u)+p^2).
Here, we focus on the positive root for q, and then obtain results for the negative root for q by using the symmetry (<ref>).
For the 2-fast/1-slow system (<ref>), the critical manifold is given by
S_0 = { p = 0 , v = f(u) }
with fold points at (u,p,v)=(0,0,0) and (u,p,v)=(-2,0,4/3), and canard points at (u,p,v,a)=(0,0,0,0) and (u,p,v,a)=(-2,0,4/3,-2). The canard point at the origin corresponds to the RFSN-II point of interest.
We assume that the invariant slow manifold, S_δ, has a graph representation in u, and that it can be expanded as a power series in δ,
S_δ = { (u,p,v,q) : p = ∑_k=0^∞ p_k(u) δ^k, v = ∑_k=0^∞ v_k(u) δ^k, q = ∑_k=0^∞ q_k(u) δ^k }
We also expand the parameter a as a power series
a = ∑_k=0^∞ a_k δ^k.
Substituting these expansions into (<ref>) and equating coefficients of like powers of δ, we find that the leading 𝒪(1) terms are given by
p_0 = 0, v_0 = f(u), and q_0 = √(2)√((u-a_0)f(u)-f(u)),
corresponding to the critical manifold. Proceeding to the next terms in the expansion, we find that the 𝒪(δ) terms are given by
p_1 = √(2)√((u-a_0)f(u)-f(u))/f^'(u), v_1 = 0, and q_1 = 0.
To remove the singularity at u=0 in p_1, we set a_0 = 0 so that every term in the argument of the square root has a factor of u^2. In that case, the coefficients become
p_1 = √(2)√(u ( 14u + 23))/2+u, v_1 = 0, and q_1 = 0.
Continuing to higher order, we find that the 𝒪(δ^2) terms are given by
p_2 = -a_1 √(23) (3+u)/√(u) (2+u) √(8+3u), v_2 = -(4+u)/3(2+u)^3, and q_2 = -√( u) 4a_2(2+u)^3(3+u)-(10+3u)/2√(6) (2+u)^3 √(8+3u).
To eliminate the singularity at u=0 in p_2, we set a_1 = 0, so that
p_2 = 0, v_2 = -(4+u)/3(2+u)^3, and q_2 = -√( u) 4a_2(2+u)^3(3+u)-(10+3u)/2√(6) (2+u)^3 √(8+3u).
The 𝒪(δ^3) terms are given by
p_3 = √()( -12a_2(2+u)^5(3+u)+(9u^3+54u^2+64u-40) )/6√(6) (2+u)^6 √(u)√(8+3u), v_3 = 0, and q_3 = 0.
Here, there is a singularity at u=0 in p_3, however, we choose not to enforce smoothness of the p-component of the slow manifold and leave a_2 free.
Therefore, the proposition is proven, since p is smooth up to and including 𝒪(δ^2) and q up to and including 𝒪(δ^3) for any choice of a_2.
We add that the 𝒪(δ^4) terms are given by
p_4 = -√(2/3) a_3 (3+u)/√(u)(2+u)√(8+3u), v_4 = P_6(u)/3(2+u)^8, and q_4 = -√(ε) P_12(u)/24√(6)(u+2)^8 u^3/2 (8+3u)^3/2,
where P_6(u) and P_12(u) are, respectively, the 6^th and 12^th order polynomials
P_6(u) = -a_2(2+u)^5(4+u) + (3u^3+18u^2+14u-34)
and
P_12(u) = 24 a_2 (9u^3+61u^2+110u+32)(u+2)^5 + 48 a_2^2 u(u+3)^2(u+2)^8 + 48 a_4 u^2(3u^2+17u+24)(u+2)^8 - ε^2(513u^5+4608u^4+12168u^3+5120u^2-12400u-2560).
To cancel one factor of u in the denominator of q_4, we must set a_2 = -5/48.
Thus, upon reverting to the original coordinates and parameters, Fraser-Roussel iteration shows that in order for the slow manifolds to be smooth to at least 𝒪(δ^2) in p and smooth to at least 𝒪(δ^3) in q, the parameters must satisfy
a = 1 - 5/48δ^2 + 𝒪(δ^3).
consistent with the values obtained from the Melnikov analysis, recall (<ref>). □
On the level set 𝒢 = (23a-14), one can alternatively get smoothness to at least 𝒪(δ^3) in p and to at least 𝒪(δ^3) in q for
a_c2(δ) = 1 - 5/144δ^2 + 𝒪( δ^3 ).
This follows from the calculations in the proof of Proposition <ref>. At 𝒪(δ^3), the pole at u=0 in p_3 is removed by seeking a value for a_2 such that the numerator in the expression for p_3 has u as a factor, namely a_2 = -5/144. The lack of smoothness in the singular limit is visible in the (u,p) and (u,q) projections for small δ, with corners and cusps (in the projections) developing. There appears to be a discrete set of critical values of a marking the transitions between solutions with different numbers of loops, and different numbers of points of nonsmoothness in the limit of small δ.
§ ANALYZING THE GEOMETRY OF THE SPATIAL CANARDS
In this section, we use the analytical results from Sections <ref>–<ref>, for the analyses of the fast system, the desingularized reduced system, and the desingularization of the RFSN-II singularity, respectively, to deconstruct and understand the spatially periodic canard solutions.
This analysis reveals how the RFSN-II point at a=a_T and its canards, as identified in Section <ref>, are responsible for the creation of these spatial canard solutions.
The geometry of small-amplitude spatial canards is analyzed in Section <ref>, and that of large-amplitude spatial canards in Section <ref>.
§.§ Geometry of the Small-Amplitude Spatial Canards
We present the geometry of small-amplitude spatially periodic canards.
For illustration, we use the solution shown in Fig. <ref>, and we recall also the representative small-amplitude canards in Figs. <ref> and <ref>.
In the singular limit, the small-amplitude orbit consists of the following three segments, all confined to the level set {𝒢 = ε( 2a/3 - 1/4 ) }:
* A short slow segment from the folded saddle at (u,p,v,q) = (1,0,-23,0) (blue ducky) that closely follows the faux canard, Γ_0^+, of the folded saddle up to some nearby v-value (near the red ducky).
* A fast segment following a homoclinic solution (see Proposition <ref>) of the layer problem until it returns to a neighborhood of S_s^+ (near the green ducky).
* A short slow segment that closely follows the true canard, Γ_0^-, of the folded saddle until it returns to the folded saddle.
In this manner, the singular solutions of small-amplitude, spatially periodic canards with 𝒪(1) wavenumbers are completely understood through the dynamics of the RFSN-II singularity, its true and faux canards, and some invariant manifold theory, following the results of the analysis in Sections <ref>–<ref> .
The geometry of other small-amplitude spatially periodic canard solutions with 𝒪(1) wavenumbers, such as that shown in Fig. <ref>, is similar.
The geometry of small-amplitude canards with asymptotically small wavenumbers may also be understood in a similar manner.
These are very close in profile to the small-amplitude canards with 𝒪(1) wavenumbers over most of the period, but they also exhibit nested, successively smaller, nearly self-similar twists (loops in certain projections) that are confined to the neighborhood of the equilibrium and the folded saddle
(recall for example the solution shown in Fig. <ref>).
These nearly self-similar loops arise for small δ due to scale invariance of the zero level of the Hamiltonian H_2 in the rescaling chart K_2.
§.§ Geometry of the Large-Amplitude Spatial Canards
In this section, we use the analytical results
derived in Sections <ref>–<ref>,
about the fast system, the desingularized reduced system, and the blowup analysis, to fully deconstruct the large-amplitude, spatially periodic canard solutions of (<ref>).
We choose the solution from Fig. <ref> for the deconstruction, and we review some of the main properties of that solution again in Fig. <ref>.
The singular limit solution is obtained by concatenating orbit segments from the fast and slow subsystems, (<ref>) and (<ref>).
The singular orbit consists of seven segments, all of which are constrained to the level set {𝒢 = ε( 2a/3 - 1/4 ) }.
For convenience, we denote the level set {𝒢 = ε( 2a/3 - 1/4 ) } restricted to orbits of the layer problem by 𝒢_fast and we denote the level set restricted to the slow flow on S by 𝒢_slow.
* Slow segment on S_s^+ along the faux canard Γ_0^+ from the folded saddle at (u,p,v,q)=(1,0,-2/3,0) (grey ducky) to the upper right corner point (red ducky) at
(u,p,v,q)=( √(3), 0, 0, √((2ε/3)(3-2a)) ).
(For a = 0.998512, the upper right corner point is located at (u,p,v,q) ≈ (1.7321,0,0,0.2586).)
* Fast jump along the heteroclinic orbit in the transverse intersection W^u(S_s^+) ∩ W^s(S_s^-) in { v = 0} (see Proposition <ref>) from the take off point at the upper right corner (red ducky) to the touch down point at the upper left corner (green ducky) at
(u,p,v,q)=( -√(3), 0, 0, √((2ε/3)(3-2a)) ).
This heteroclinic consists of three segments that occur in rapid succession, one from a neighbourhood of S_s^+ to near S_s^-, followed by another back to the neighbourhood of S_s^+, and then the third is forward asymptotic to S_s^-; it lies in a secondary intersection of the two invariant manifolds.
(For a = 0.998512, this corner point is located at (u,p,v,q) ≈ (-1.7321,0,0,0.2586).)
* Slow segment on S_s^- from the touch down point (green ducky) to the local maximum (in u) of the 𝒢_slow contour (gold ducky) at
(u,p,v,q) = ( u_-, 0, f(u_-), 0 ),
where u_- = (1/3)( 2a-3-2√(a^2+3a) ). (For a = 0.998512, this gold ducky is located at (u,p,v,q) ≈ (-1.6664,0,0.1239,0).)
* Fast jump consisting of the homoclinic orbit of the layer problem (see Proposition <ref>) from the take off point (gold ducky) to the local maximum of the homoclinic (cyan ducky)
and then to the touch down point on S_s^- at (u,p,v,q) = ( u_-,0,f(u_-),0 ) (magenta ducky).
* Slow segment on S_s^- from the touch down point (magenta ducky) until the solution reaches the { v = 0 } hyperplane at
(u,p,v,q)=( -√(3), 0, 0, -√((2ε/3)(3-2a)) )
(blue ducky).
* Fast jump in the transverse intersection W^u(S_s^-) ∩ W^s(S_s^+) in { v = 0 } (see Proposition <ref>) from the take off point (blue ducky) to the touch down point at the lower right corner (yellow ducky) on S_s^+ at
(u,p,v,q) = ( √(3), 0, 0, -√((2ε/3)(3-2a)) ).
This heteroclinic consists of three segments in rapid succession and lies in a secondary intersection of the manifolds.
(For a = 0.998512, the lower right corner point is located at (u,p,v,q) ≈ (1.7321,0,0,-0.2586).)
* Slow segment on S_s^+ along the true canard, Γ_0^-, of the RFSN-II point
from the touch down point (yellow ducky) up to the cusp (grey ducky), thus completing the singular cycle.
For the solution shown in Fig. <ref>, the connection from S_s^+ to S_s^- (and vice versa) is a secondary heteroclinic and features additional large-amplitude spikes not present in the singular limit.
We observed other large-amplitude, small wavenumber solutions without additional spikes that correspond to primary heteroclinics.
The deconstruction of such solutions is similar to that presented above.
In addition, we observed solutions that have more of these spikes along the outer edges of the orbit.
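The coordinates of the corner and turning points quoted in the deconstruction above can be reproduced with a few lines of arithmetic; the sketch below uses ε = 0.1, the value for which the quoted numerical coordinates are recovered.

import numpy as np

a, eps = 0.998512, 0.1
f = lambda u: u**3/3 - u

q_corner = np.sqrt(2*eps*(3 - 2*a)/3)            # q-value at the v = 0 corner points
u_minus = (2*a - 3 - 2*np.sqrt(a**2 + 3*a))/3    # local maximum (in u) of the slow contour on S_s^-

print(np.sqrt(3), q_corner)                      # approx 1.7321 and 0.2586 (corner duckies)
print(u_minus, f(u_minus))                       # approx -1.6664 and 0.1239 (gold ducky)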
§ ISOLAS OF CANARD-INDUCED SPATIAL PERIODICS
In this section, we employ many of the results derived in Sections <ref>-<ref>, about the fast/layer problem, the desingularized reduced vector field, and the geometry of the RFSN-II singularity, to study the bifurcations of the spatially periodic canard solutions along a representative isola in the (a,k) parameter plane (see Fig. <ref>).
The branches of the isola have been color coded (green, orange, blue, and red) according to the nature of the spatially periodic solutions that lie on each segment.
§.§ Growth of canard segments
We analyze the growth of the canard segments for solutions on the green branch of the isola (see Fig. <ref>(a)).
These solutions are four-stroke, spatial relaxation cycles.
As illustrated in Fig. <ref>(c) and (d), in the singular limit, the relaxation cycles consist of:
* A slow segment on S_s^+ that starts at the touchdown point (u,p,v,q)=(√(3),0,0,-√(2ε(1-(2/3)a))), flows down the critical manifold toward the fold set L^+ to the turning point (local minimum) at (u,p,v,q) = (u_turn,0,(1/3)u_turn^3-u_turn,0) where u_turn = (2/3)a-1+(2/3)√(a^2+3a), and then flows back up the critical manifold to the takeoff point at (u,p,v,q)=(√(3),0,0,√(2ε(1-(2/3)a))).
* A fast jump at v = 0 corresponding to the heteroclinic of the layer problem that takes the orbit from the takeoff point at (u,p,v,q)=(√(3),0,0,√(2ε(1-(2/3)a))) to the touchdown point at (u,p,v,q)=(-√(3),0,0,√(2ε(1-(2/3)a))).
* A slow segment that flows along S_s^- from the touchdown point at (u,p,v,q)=(-√(3),0,0,√(2ε(1-(2/3)a))) up to the turning point (local maximum) at (u,p,v,q) = (u_turn,0,(1/3)u_turn^3-u_turn,0) where u_turn = (2/3)a-1-(2/3)√(a^2+3a), and then down to the takeoff point at (u,p,v,q)=(-√(3),0,0,-√(2ε(1-(2/3)a))).
* Another fast jump at v = 0 corresponding to the heteroclinic of the layer problem that takes the orbit from the takeoff point at (u,p,v,q)=(-√(3),0,0,-√(2ε(1-(2/3)a))) to the touchdown point at (u,p,v,q)=(√(3),0,0,-√(2ε(1-(2/3)a))).
At the right edge of this branch, there is a
saddle-node bifurcation near a ≈ 1.5 (Fig. <ref>(a)), which marks the transition to the blue branch.
As the parameter a is decreased toward a=1, the slow segment on S_s^+ with q<0 converges to the true canard of the folded saddle-node, and the slow segment on S_s^+ with q>0 converges to the faux canard of the folded saddle-node.
Then, as a is further decreased along this branch to a=0, the slow segment on S_s^- with q>0 grows toward the true canard of the folded saddle at (u,p,v,q)=(-1,0,2/3,0), and the slow segment on S_s^- with q<0 approaches the faux canard of the folded saddle.
§.§ Spike formation
In this section, we analyze the sequence of canards that exhibit spike formation along the orange branch of the isola (see Fig. <ref> (a)).
The solutions here consist of large-amplitude cycles with spikes initiated on S_s^-. The geometry of these solutions is similar to that of the solution presented in Section <ref>.
At a=0 (left green triangle), the solution is a four-stroke, spatial relaxation cycle, with slow segments given by the true and faux canards of the FS points on L^±, and fast jumps in { v=0 }.
Then, moving left to right along the orange branch, as the value of a is increased from a=0, an extra spike forms at the local maximum of the slow segment on S_s^-.
It is given to leading order by a homoclinic of the fast subsystem.
The size (in the (u,p) projection) of the extra spike increases with a, whilst the v-height at which it occurs decreases away from v = 2/3.
At the saddle-node bifurcation at a ≈ 1.5, the v-level of the fast jump is at v = 0, the fast jump corresponds to the heteroclinic of the layer problem
(recall Section <ref>), and the slow segment on S_s^- has vanished.
We also note that at the other end, there is a saddle-node bifurcation near a ≈ -0.0002, where the orange branch transitions to the green branch.
§.§ Two-stroke relaxation cycles
In this section, we analyze the solutions along the blue branch of the isola (Fig. <ref>(a)).
This branch emanates from one of the homoclinic orbits (k=0) near a=1.
The homoclinic solution has small amplitude and is centered on the steady state. Continuation of the solution away from the homoclinic limit causes the wavenumber to increase to a value close to the critical Turing value k_T. As this occurs, the corresponding spatially periodic solution remains small amplitude and confined to a small neighborhood of the fold L^+.
As the value of a is increased along this branch, the solutions develop into two-stroke relaxation oscillations that consist of a slow segment on S_s^+ and a fast (near homoclinic) jump.
The slow segment on S_s^+ starts at a `touchdown' point at (u,p,v,q) = (u_ touchdown(v),0,v,q_ touchdown(v)), where
u_ touchdown(v) = 2^1/3/(3v+√(9v^2-4))^1/3 + (3v+√(9v^2-4))^1/3/2^1/3,
q_ touchdown(v) = -√()√(23a-14-au_ touchdown+12u_ touchdown^2+13au_ touchdown^3-14u_ touchdown^4),
for v ∈ (-2/3,0). It flows down the 𝒢 = ε( 2a/3 - 1/4 ) contour to the turning point (u,p,v,q) = (u_turn,0,v_turn,0), where
(1/3) u_turn^3 - u_turn = v_turn = (2/81)(2a-3)(2a-3+2√(a^2+3a))(2a+3+2√(a^2+3a)).
Then, it flows back up the 𝒢 = ε( 2a/3 - 1/4 ) contour to the `takeoff' point at (u,p,v,q) = (u_takeoff(v),0,v,q_takeoff(v)), where u_takeoff = u_touchdown and q_takeoff = - q_touchdown.
At the takeoff point, a fast jump is initiated which connects the takeoff point to the touchdown point via the homoclinic of the layer problem that exists at that value of v (see Proposition <ref>).
When the blue branch of the isola reaches the saddle-node bifurcation at a ≈ 1.49881, the v-level at which the fast jump occurs is v = 0, and the homoclinic jump mechanism switches to a heteroclinic jump mechanism.
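The expression for u_touchdown(v) is the Cardano form of the root of f(u) = v that lies on S_s^+; for v ∈ (-2/3, 0) the inner radicand 9v^2 - 4 is negative, the two terms are complex conjugates of modulus one, and their sum is real. A minimal numerical sketch (using the principal branch of the complex cube root):

import cmath

def u_touchdown(v):
    # principal-branch Cardano root of u^3/3 - u = v on the branch S_s^+
    z = (3*v + cmath.sqrt(9*v**2 - 4))**(1.0/3.0)
    return (2**(1.0/3.0)/z + z/2**(1.0/3.0)).real

for v in (-0.6, -0.3, -0.05):
    u = u_touchdown(v)
    print(v, u, u**3/3 - u)   # third column reproduces v; u lies in (1, sqrt(3))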
§.§ Two-stroke double-loop cycles
Finally, we analyze solutions along the red branch of the isola, which emerges from the orange branch at the saddle-node bifurcation near a ≈ 1.5.
The solutions on the red branch of the isola are two-stroke relaxation oscillations, like those in Section <ref>. However, the fast segments of these orbits consist of two near-homoclinic cycles.
As the parameter a is decreased along this branch away from the saddle-node bifurcation near a≈ 1.5, the v-level at which the fast jump occurs decreases, and the amplitudes of the double-loop homoclinics shrink.
Then, near a=1, the red branch of the isola becomes nearly vertical. Along this nearly vertical part of the isola, the amplitudes of the solutions are very small, and the solution becomes homoclinic to the equilibrium near the fold L^+ (still with small amplitude).
§ THE NEARLY SELF-SIMILAR DYNAMICS OF SOME SPATIAL CANARDS
In this section, we discuss some of the spatial canard solutions that exhibit nearly self-similar dynamics. Recall for example the canards shown in Figs. <ref> and <ref>. First, we study self-similarity in the equations in the rescaling chart, (<ref>), and then we study it in the full fourth order system (<ref>).
In the coordinate chart K_2, the nearly self-similar dynamics for 0<δ≪ 1 may be understood as follows.
For δ=0, the equations are Hamiltonian with
H_2(u_2,p_2,v_2,q_2;0)
= 1/2√()( p_2^2 - q_2^2 )
+ u_2 v_2
- 1/3 u_2^3,
where we recall (<ref>).
Examination of the level set { H_2 = 0 } reveals that it is scale-invariant.
In particular, on the hyperplane {r_2=0}, which corresponds to the singular limit of δ=0, the zero level set of the Hamiltonian H_2(u_2,p_2,v_2,q_2;0) is invariant under the scaling
(u_2,p_2,v_2,q_2) → (ζũ_2, ζ^3/2p̃_2, ζ^2 ṽ_2, ζ^3/2q̃_2) for any real number ζ.
In fact, H_2(u_2,p_2,v_2,q_2;0)=ζ^3 H_2(ũ_2,p̃_2,ṽ_2,q̃_2;0).
The projection of this level set onto the (u,q) plane has infinitely many self-crossings and nested loops. It is infinitely self-similar.
Then, for 0<δ≪ 1, this scale invariance is broken. Nevertheless, for sufficiently small values of δ, the solutions exhibit nearly self-similar dynamics.
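This scale invariance is easily confirmed symbolically (with the small parameter written as ε):

import sympy as sp

u2, p2, v2, q2, zeta = sp.symbols('u_2 p_2 v_2 q_2 zeta', positive=True)
eps = sp.Symbol('varepsilon', positive=True)

H2 = lambda u, p, v, q: sp.sqrt(eps)/2*(p**2 - q**2) + u*v - u**3/3

scaled = H2(zeta*u2, zeta**sp.Rational(3, 2)*p2, zeta**2*v2, zeta**sp.Rational(3, 2)*q2)
assert sp.simplify(scaled - zeta**3*H2(u2, p2, v2, q2)) == 0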
This nearly self-similar dynamics manifests itself in the maximal canards of the full equations with 0 < r_2 ≪ 1 in chart K_2.
A boundary value problem was set up using these equations, together with the boundary conditions
that enforce the symmetry (u_2,p_2,v_2,q_2) → (u_2,-p_2,v_2,-q_2) and an integral constraint to preserve the Hamiltonian.
(See Appendix <ref> for details.)
The continuation method starts with the algebraic solution Γ_0 at r_2=0 and a_2=0 given by (<ref>).
That solution is continued into r_2>0 until r_2=√(δ).
Then, it is continued in a_2 to yield the maximal canards for the system in chart K_2.
Throughout, the constraint is imposed to conserve the value of the Hamiltonian.
The key observation is that these maximal canards approach the infinitely self-similar structure in the limit.
The continuation of the maximal canards is shown in the plane of a_2 and the L^2 norm in Fig. <ref> (a), and orbits of the maximal canards are shown in Figs. <ref> (b)–(g) for a sequence of a_2 values.
The bifurcation curve undulates about the line a_2=0, as shown in Fig. <ref>(a).
The magnitude of the undulations decreases as the L^2 norm decreases.
Also, there appears to be a self-similarity in the curve.
For example, if one takes the lower part of the bifurcation curve shown in Fig. <ref>(a) below the second local extremum on the left, stretches that vertically and horizontally so that it has the same height as the curve shown, and overlays that stretched version onto the curve shown, then they look almost identical.
The nearly self-similar structure of the bifurcation diagram persists into the full fourth-order spatial system (<ref>), see Fig. <ref>.
The undulations decay and the branch eventually converges to a single value of a. For ε = 1 and δ = 0.05, we find that the a value to which the branch converges is a_c,num ≈ 0.99972881. Comparing this with the critical value, a_c(δ), from the Melnikov analysis (see formula (<ref>)), we have that a_c(0.05) ≈ 0.99973958. Hence,
a_c(0.05)-a_c,num = 𝒪(δ^4).
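For concreteness, this comparison amounts to the following short computation (with ε = 1 and δ = 0.05, as above):

delta = 0.05
a_c_formula = 1 - 5/48*delta**2               # leading-order critical value from the Melnikov analysis
a_c_num = 0.99972881                          # value to which the computed branch converges

print(a_c_formula)                            # 0.99973958...
print(abs(a_c_formula - a_c_num), delta**4)   # about 1.1e-5, compared with delta^4 = 6.25e-6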
§ THE SPATIAL CANARDS OF LG ARE ANALOGS IN SPATIAL DYNAMICS OF TEMPORAL LIMIT CYCLE CANARDS IN FAST-SLOW ODES
The spatially periodic canards in the PDE system (<ref>) are spatial analogs of the time-periodic canards known to exist in fast-slow systems of ODEs <cit.>.
(For reference, we recall that, for example, in the van der Pol ODE u̇=v-f(u), v̇=ε(a-u), the Hopf bifurcation occurs at a=1 and the canard explosion is centered on a_ODE ∼ 1 - (1/8)ε - (3/32)ε^2, asymptotically as ε → 0, recall <cit.>.)
First, the Turing points a_T are reversible 1:1 resonant Hopf bifurcations in the spatial ODE system, as shown in Section <ref>.
The reversible Hopf points are the analogs in spatial dynamics of the Hopf bifurcation points that occur in (temporal) fast-slow ODEs.
Second, families of small-amplitude, spatially periodic solutions are created in the Turing bifurcation.
These solutions have wavenumbers close to k_T and profiles close to the plane wave e^ik_Tx (recall the red and blue orbits in Fig. <ref>).
These spatially periodic solutions are the analogs of the small-amplitude, temporally oscillating solutions that are created in (singular) Hopf bifurcations in fast-slow systems of ODEs.
Third, the critical value a_c(δ), recall (<ref>), asymptotically close to a_T is the analog in spatial dynamics of the canard explosion value (a_ ODE for the van der Pol ODE, for example) in fast-slow oscillators, which is asymptotically close to the Hopf point.
Fourth, on each side of the maximal spatial canards, there are families of (non-maximal) spatial canards, just as there are (non-maximal) temporal canards (headless ducks and ducks with heads) on each side of maximal limit cycle canards.
In this manner, the maximal spatial canards act as separatrices that locally partition the phase space into regions of distinct behaviour, and the bifurcations of maximal canards delimit distinct modes of activity in the parameter space, just as is the case for maximal temporal canards.
Fifth, the singularity responsible for the spatial canards is an RFSN-II point.
This is the analog of the canard point in the temporal fast-slow systems, which may be viewed as an FSN-II point in the extended (u,v,a) phase space of the kinetics ODE.
Sixth, in carrying out the geometric desingularization analysis of the RFSN-II point in the spatial ODE system (<ref>), we identified a key algebraic separatrix solution Γ_0, recall (<ref>), in the rescaling chart that is the key segment of the singular limit of the spatial canards.
It is the spatial analog of the parabolic separatrix solution in the rescaling chart (see <cit.>) in the analysis of the explosion of temporal limit cycle
canards in fast-slow ODEs.
There are also interesting differences between the spatial canards and the temporal limit cycle canards, which arise due to differences between the spatial ODEs and the temporal (kinetics) ODEs, as well as due to the increased dimension of the phase space.
In the classical explosion of temporal canards, the left and right branches are one dimensional attracting slow manifolds and the middle branch a one dimensional repelling slow manifold.
Moreover, the maximal canards exist for the parameter values when these two slow manifolds coincide to all orders.
In contrast, in the spatial ODE, the left and right branches are two dimensional saddle slow manifolds, and the middle branch a two dimensional elliptic slow manifold, recall Fig. <ref>.
Further, the maximal canards exist when the single-branched stable and unstable manifolds of the cusp point, which are the true and faux canards of the RFSN-II point,
continue into each other to all orders.
§ PDE DYNAMICS
In Sections <ref>–<ref>, we focused primarily on studying the rich family of stationary, spatially periodic canard patterns created by Turing bifurcations in the singularly perturbed van der Pol PDE (<ref>) for values of a near the Turing bifurcation a_T = √(1-2δ√(ε)), as well as numerically for a∈ [0,a_T).
A natural –and necessary– next question is: which of these patterns are observable, i.e., which are stable as solutions of the PDE?
This is in general a hard and deep question.
In this section, we will scratch the surface of this challenge, presenting a number of PDE simulations that give some hints about what kind of (stability) results may be expected.
We fix (ε,δ) at (0.1,0.01), which implies that a_T = 0.996832....
Moreover, we consider two primary values of a: a=1 > a_T, so that the background state (a,f(a)) is stable as solution of (<ref>) and a=0.99 < a_T for which (a,f(a)) has undergone the Turing bifurcation.
We consider the dynamics generated by (<ref>) from the point of view of the onset of pattern formation, i.e., we consider initial conditions (u_0(x),v_0(x)) that are close (in L^1 norm) to (u,v) ≡ (a,f(a)), and we look for the a priori small (∼ close to (a,f(a))) patterns generated by (<ref>).
For a=1, |a-a_T|= 0.00316...
This implies, from the point of view of weakly nonlinear stability theory/the Ginzburg-Landau approach (see for example <cit.>), that the magnitudes of the patterns expected near the Turing bifurcation are of the order of √(|a-a_T|) = 0.0562....
Therefore, we only work with initial conditions (u_0(x),v_0(x)) that satisfy |u_0(x)-a|, |v_0(x)-f(a)| ≤ 0.10 < 2√(|a-a_T|) uniformly on the x-interval under consideration.
Now, since δ^2 < 64/625 for 𝒪(1) values of ε>0 and 0<δ≪ 1, the Ginzburg-Landau equation associated to the Turing bifurcation has a positive Landau coefficient, i.e., a positive coefficient on the cubic term (Section <ref>).
Thus, one does not expect small stable patterns for a < a_T.
In fact, one expects the amplitude of the patterns generated by the Turing bifurcation to grow beyond the validity of the Ginzburg-Landau approach <cit.>.
Then, for a > a_T, only the trivial state (corresponding to (a,f(a))) will be stable.
Moreover, its domain of attraction will be small, in the sense that solutions with initial data that is 𝒪(√(|a-a_T|)) close to (a,f(a)), but not sufficiently close, are also expected to outgrow the Ginzburg-Landau domain of validity.
Fig. <ref> shows the final attracting patterns of two numerical simulations with a=1.00 that both start with localized initial conditions at most 0.10 away from the (1,-2/3) background state, i.e., the initial perturbations of (a,f(a)) = (1,-2/3) can be considered to be sufficiently small so that the dynamics can be expected to be within the range of validity of the Ginzburg-Landau approach.
In fact, the only difference between the two simulations is the widths (in x) of the `plateaus' where u_0(x) differs from a=1.
In both cases, the initial conditions are outside the domain of attraction of (u(x,t),v(x,t)) ≡ (1,-2/3) (by construction: further reducing the amplitude of the initial conditions brings (u_0(x),v_0(x)) into the domain of attraction of (1,-2/3)).
Indeed, the dynamics generated by (<ref>) depart from the Ginzburg-Landau region near (1,-2/3), and the magnitude of the pattern grows to 𝒪(1).
In both cases, a (stationary) localized homoclinic multi-front pattern appears as attractor.
Depending on the width of the initial plateau, it is either a 1-pulse/2 front or a 2-pulse/4-front pattern, see Fig. <ref>.
In the second row of Fig. <ref>, the orbits of the final attracting states are plotted in the (u,v) phase space (parameterized by x).
This further confirms that the attractors are of the large-amplitude spatial canard types studied in Sections <ref>, <ref>, and <ref>.
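The precise PDE (<ref>), its parameters, and the numerical scheme behind these simulations are specified elsewhere in the paper. Purely as an illustrative template for experiments of this kind, the following method-of-lines sketch integrates a van der Pol type reaction-diffusion system starting from a localized plateau perturbation of the homogeneous state; the particular kinetics, diffusion coefficients, domain, and initial data used below are assumptions for illustration and are not claimed to coincide with (<ref>).

import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative model (not necessarily the exact system (<ref>)):
#   u_t = u_xx + v - f(u),   v_t = eps*delta^2*v_xx + eps*(a - u),   f(u) = u^3/3 - u,
# with homogeneous Neumann boundary conditions on a large interval.
eps, delta, a = 0.1, 0.01, 1.0
L, N = 100.0, 800
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
f = lambda u: u**3/3 - u

def lap(w):
    out = np.empty_like(w)
    out[1:-1] = (w[2:] - 2*w[1:-1] + w[:-2]) / dx**2
    out[0] = 2*(w[1] - w[0]) / dx**2      # Neumann condition at the left endpoint
    out[-1] = 2*(w[-2] - w[-1]) / dx**2   # Neumann condition at the right endpoint
    return out

def rhs(t, y):
    u, v = y[:N], y[N:]
    return np.concatenate([lap(u) + v - f(u),
                           eps*delta**2*lap(v) + eps*(a - u)])

# Plateau initial data: a perturbation of amplitude 0.1 and width 10 on top of (a, f(a)).
u0 = a + 0.1*(np.abs(x) < 5.0)
v0 = f(a)*np.ones_like(x)
sol = solve_ivp(rhs, (0.0, 200.0), np.concatenate([u0, v0]), method='BDF', t_eval=[200.0])
u_final = sol.y[:N, -1]   # plot u_final (and the v component) to see whether a localized pattern forms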
Next, we consider exactly the same setting as in the simulations of Fig. <ref> but decrease a to a=0.99 < a_T.
As a consequence, the pattern dynamics of (<ref>) must be significantly different, since the background state (u(x,t),v(x,t)) ≡ (a,f(a)) has become unstable.
In Fig. <ref>, the attracting pattern is plotted that emerges from the simulations with a=0.99 from initial conditions that are similar to those in the second column of Fig. <ref> (in the sense that the deviations from (a,f(a)) are identical in both cases).
Here also, the attractor is a stationary large-amplitude homoclinic pattern, but now not homoclinic to the homogeneous state (a,f(a)), but to a spatially periodic pattern – with a well-selected wavenumber.
This periodic pattern appears step-by-step, on a long time scale.
Up to t ≈ 380, the pattern is `transitionally stable', in the sense that it remains very similar to the attractor observed for a=1.00 in Fig. <ref>.
Then, a fast growing spike appears that evolves into a new localized 2-front pulse – or better: two of these, one each for x > 0 and for x <0 – between t = 390 and t=400 (see the first row of Fig. <ref>).
In the second row of Fig. <ref>, the same patterns are plotted, now again in (u,v)-space.
The (spatial) canard dynamics near the point (a,f(a)) ≈ (1,-2/3) seem to play a dominant role in the formation and shape of the spike.
Each `new period' of the periodic pattern to which the final state is asymptotic is formed in the same manner – with long periods of `transient stability' in which the pattern only evolves marginally and slowly.
Here, we leave it as a subject for future research to go deeper into the relation between the spatial dynamics near (1,-2/3) and the formation of the periodic pattern and especially the mechanism by which the periodic pattern –that must be an element in a one-parameter family of spatially periodic patterns– is selected.
Overall, the simulations of the van der Pol PDE (<ref>) that we have conducted so far indicate that there are substantial classes of initial conditions that are in the basins of attraction of (large-amplitude) stationary patterns of the types considered in the analysis here.
Moreover, this behavior appears to be related to the width of plateaus in both components u_0(x) and v_0(x) of the initial data.
If one keeps all other aspects as in the simulations shown, then there appears to be a critical threshold for the width of the plateaus.
In particular, data for which the plateau width is below the threshold evolve to these attractors. Whereas, for data in which the plateau width exceeds the threshold, the system no longer exhibits stationary large-amplitude fronts and pulses. Instead, the PDE exhibits attracting patterns that vary periodically in time with a small amplitude, while their spatial variation may either be uniformly small, or consist of distinct small- and large-amplitude parts (see also Fig. <ref> below).
To gain an intuitive understanding of this threshold, we recall from Section <ref> that, near the (spatial) Turing bifurcation at a=a_T, the trivial state (a,f(a)) is already unstable with respect to the long (spatial) wavelength perturbations associated to the (temporal) Hopf bifurcation at a=1, since a_T<1 by (<ref>) –see Fig. <ref>.
Hence, if the initial data consist of perturbations of (a,f(a)) for which the support is sufficiently large, then one expects that the unstable Hopf modes will be triggered.
On the other hand, if the initial data consists of sufficiently localized perturbations, i.e., has plateaus that are too narrow, then the dynamics of the PDE are predominantly driven by the Turing mode, as seen above.
Moreover, these observations also suggest that to understand the dynamics of patterns in (<ref>) that do not originate from such localized perturbations, one needs to consider both instability mechanisms, i.e. one needs to study the co-dimension 2 bifurcation in which the spatial Turing mode and the temporal Hopf mode interact.
In that case, the natural Ginzburg-Landau set-up involves two (complex) amplitudes: A(ξ,τ) that modulates the (linear) Turing mode e^i k_T x and B(ξ,τ) that modulates the Hopf mode e^i ω_T t (cf. Section <ref>). In the generic setting, the dynamics of small-amplitude solutions near this bifurcation has been studied in <cit.> (in the setting of reaction-diffusion systems).
In fact, the validity of a coupled (cubic) system of equations for A and B has been established in <cit.> for the generic situation in which
the shapes of the eigenvalue curves λ_±(k) are independent of the small parameter δ.
Here, significant additional analysis appears to be required to go beyond the generic situation, because in (<ref>) the distance between the Hopf and Turing bifurcations is 2 √(d) = 2 √(ε) δ. Thus, it is natural to take δ > 0 as the main small parameter in the Ginzburg-Landau analysis (recall also that δ = 0.01 (and ε = 0.1) in most simulations here).
Furthermore, in order to derive the (A,B)-system that governs the dynamics of small patterns 𝒪(δ) close to a_T, one needs to adapt the generic scalings of <cit.>. We also leave this analysis for future work.
Finally, regarding the PDE dynamics, there is also the question whether (some of) the stationary small-amplitude canard patterns presented in Sections <ref>, <ref>, and <ref> may also be stable as solutions of the PDE (<ref>).
In our (limited) numerical simulations to date, we have not yet encountered such patterns. As observed above, by choosing the extent of the support of the initial conditions sufficiently large, i.e., above threshold, we see that system (<ref>) may exhibit small-amplitude patterns.
In the simulation shown in the top row of Fig. <ref>, we consider localized initial data for which the spatial extent is somewhat larger than in Figs. <ref>, <ref>, and <ref> and take (again) a=1.
We observe the formation of a small-amplitude pattern that varies periodically in time and in space. Its spatial variation has a long wavelength character, which indicates that the amplitude A(ξ,τ) of the Turing mode has vanished.
Thus, the PDE dynamics seem to be dominated by the Hopf bifurcation associated to the van der Pol equation and the associated (complex, uncoupled) Ginzburg-Landau equation for B(ξ, τ).
This changes drastically if we decrease a to 0.999 (see the bottom row of Fig. <ref>).
Here, as also in the simulations shown in Figs. <ref> (second column), <ref>, and <ref>, a localized, four-front, large-amplitude spatial structure appears near x=0 (see Fig. <ref>).
However, unlike in Figs. <ref> and <ref>, the background state is now unstable with respect to the Hopf bifurcation, and the tails of the large-amplitude pattern vary periodically in time – and on the long spatial scale. (In fact, the localized structure itself also oscillates in time, but with an amplitude that is much smaller than that of the oscillations beyond the large-amplitude part of the pattern.)
Note that the attractor of Fig. <ref> has the (expected) nature of a stationary localized structure as in Fig. <ref> that is destabilized by a Hopf bifurcation stemming from its essential spectrum (cf. <cit.>).
Of course, this evidence is far from sufficient to conclude that the stationary small-amplitude canard patterns cannot be stable.
So far, our findings only indicate that in the study of potentially stable small-amplitude patterns, one necessarily must also take the spatial aspects of the Hopf bifurcation into account.
In other words, one needs to merge the (non-generic) coupled Ginzburg-Landau dynamics of the Turing mode A(ξ,τ) e^i k_T x and the Hopf mode B(ξ,τ) e^i ω_H t with the canard analysis in the present work. Moreover, a much more detailed and extensive (numerical) investigation of the PDE dynamics is crucial: our limited experiments show that the nature of the final attractor depends in a subtle way on small variations in the initial conditions and (consequentially) on the accuracy of the numerical procedure.
These questions would be interesting and relevant subjects for future work.
The types of stationary, multi-front patterns shown in Fig. <ref> have been constructed for various problems in the literature on singularly perturbed reaction-diffusion equations (see for example <cit.> and references therein).
However, in the known constructions, the asymptotic homogeneous states of the homoclinic patterns correspond to critical points on normally hyperbolic manifolds, which is of course not the case here.
§ CONCLUSIONS AND OUTLOOK
§.§ Summary
In this article, we reported on the discovery of classes of spatially periodic canard solutions that emerge from Turing bifurcations in the van der Pol PDE (<ref>) in one dimension, a phenomenon that we have dubbed “Turing's canards".
The canards that we studied analytically and numerically include classes of small-amplitude and large-amplitude spatially periodic canard solutions, with large, 𝒪(1), and small wavenumbers. (See Figs. <ref>–<ref>, as well as Figs. <ref> and <ref> for representative canards, and see Figs. <ref>, <ref>, and <ref>–<ref> for bifurcation diagrams.)
Furthermore, we observed numerically that several of these classes of spatially-periodic canards are attractors in the PDE.
The spatial ODE system, recall (<ref>), governs time-independent solutions of the PDE.
It has reversible, 1:1 resonant Hopf bifurcation points exactly at the parameter values where the PDE undergoes Turing bifurcations.
Quartets of eigenvalues merge there into two identical purely imaginary pairs, and hence hyperbolicity of the equilibrium/homogeneous state is lost,
recall Fig. <ref> and Proposition <ref> in Sec. <ref>.
We performed a complete analysis of the Turing/reversible 1:1 Hopf point at a_T=√(1-2δ√()), recall (<ref>), in the full four-dimensional phase space (and the results for the other RFSN-II points follow by symmetry).
We studied the two-dimensional fast/layer system (Sec. <ref>) and the two-dimensional slow system (Sec. <ref>).
Both are one-degree-of-freedom Hamiltonian systems, due to the reversibility symmetry of the full spatial ODE system.
Our analysis showed that the critical manifolds, which govern the slow dynamics to leading order, are two-dimensional, cubic-shaped manifolds consisting of saddle points of the fast system on the left and right branches and of center points on the middle branch.
We identified the key folded singularities, namely the reversible folded saddle-node points of type II (RFSN-II points), that lie on the fold sets between the saddle and center branches of the cubic critical manifold.
These RFSN-II points exist asymptotically close to the Turing bifurcations.
Using the method of geometric desingularization (see Section <ref>), we showed that the true and faux canards of the RFSN-II points are responsible for creating the spatially periodic canard patterns that we discovered, with the spatial canards having long segments near the true and faux canards.
The analysis of the dynamics in the coordinate charts led to the discovery that there is a special algebraic solution Γ_0 (see Section <ref>) in the rescaling chart that constitutes the core component of the maximal spatial canards in the full system.
The orbit Γ_0 consists of two branches, one corresponding to each of the true and faux canards of the RFSN-II point, that approach the cusp point from above and below.
It is the geometrically unique orbit in the rescaling chart that asymptotes to the two critical points in the entry-exit chart on the equator of the hemisphere, thereby serving as a separatrix (or “river" type solution) over the hemisphere (in analogy to the parabola in the rescaling chart of the canard explosion in the fast-slow van der Pol ODE, recall <cit.>).
Perturbation analysis of this algebraic solution then led to the calculation of a critical value a_c(δ)
(recall (<ref>)) at which this orbit persists to leading order in δ (where the asymptotic expansion was calculated using the dynamic, small-amplitude perturbation parameter r_2 in the rescaling chart).
This is the value at which the true and faux canards continue into each other.
Further analysis of key solutions focused on their smoothness in δ, and critical values of a were identified, the next one of which is a_c2(δ) (recall (<ref>)).
These calculations were performed on a fixed level set of the conserved quantity 𝒢̃, and can be generalized.
All of these canards are maximal canards that have the longest segments near the true and faux canards of the RFSN-II points, and they serve as boundaries in phase and parameter space separating spatially periodic canards of different profiles.
The dynamics of the canards change as one moves along the isolas of periodic solutions in the (a,k) parameter plane.
One elementary change along branches of isolas occurs when the length of the canard segments grows as a changes (recall the green branch in Fig. <ref>).
Next, for branches of isolas of spatially periodic solutions with canard segments along the true and faux canards of the RFSN-II point on the right slow manifold S_s^+, we studied the nucleation of interior spikes at the RFSN-II point on the left critical manifold S_s^- (recall the orange branch in Fig. <ref>).
In addition, we followed these new spikes through the transition in parameter space from small to large spikes (recall Section <ref>).
A number of further bifurcations of spatially periodic canards were also observed, including a bifurcation to canards with double loops (recall Fig. <ref>).
Self-similarity plays a central role in the spatially periodic canards.
We showed that, in the singular limit δ=0, the zero level set of the Hamiltonian of the slow system is scale invariant, and hence that it has an infinite self-similarity.
This self-similarity manifests through a sequence of crossing points and small, nested loops.
Then, we observed that for 0<δ≪ 1, the self-similarity is broken.
The spatial patterns exhibit nearly self-similar dynamics, recall Fig. <ref>.
Solutions with different numbers of nested, successively-smaller, nearly self-similar loops are shown in Figs. <ref> and <ref>.
Moreover, the wavenumber decreases as the number of successive loops increases.
The spatial canards are analogs in spatial dynamics of the situation in fast-slow ODEs with time-periodic limit cycle canards, where the explosion of temporal canards occurs near –and asymptotically close to– the singular Hopf bifurcation.
The new spatially periodic canards were found (recall Sec. <ref>) to have many features in common with the classical temporal limit cycle canards, as well as several important new features. The most interesting of these new features is that the critical manifold in the spatial ODE system is two-dimensional and consists of branches of saddle equilibria and center equilibria of the fast system, which contrasts with the one-dimensional critical manifolds of attracting and repelling equilibria that give rise to canard explosions in fast-slow ODEs.
To complement the above results for the spatial ODE system, we also studied some basic dynamical properties of the PDE (<ref>), recall Section <ref>.
We showed that the Turing bifurcation from which the spatially periodic canards emerge is sub-critical for the PDE.
The standard Ginzburg-Landau theory shows that small-amplitude perturbations should grow because the coefficients of both the linear and cubic terms are positive, but it cannot determine what the nonlinear saturation mechanisms might be.
Hence, this study also sheds new light on what attractors can exist in the sub-critical case.
The PDE simulations that we have performed so far showed that the large-amplitude, spatially periodic canards are attractors in the full PDE (<ref>). With localized initial data, which are at the homogeneous state over most of the interval and have localized tanh-shaped perturbations (`plateaus'), we observed that, when the plateaus are not too wide, the attractors are stationary, large-amplitude canard patterns that are homoclinic in space to the (stable) homogeneous state (a,f(a)) for a ≳ a_T.
Then, for a ≲ a_T, with the same initial data (plateaus not too wide), the attractors are large-amplitude canard patterns homoclinic to spatially periodic patterns, since the homogeneous state is linearly unstable here. Recall Figs. <ref>-<ref>.
By contrast, when the plateaus in the initial data are too wide, then the attractors exhibit time periodic dynamics for a<1, due to the Hopf bifurcation.
In addition, a class of small-amplitude attractors is observed in the PDE.
For 1 > a ≳ a_T, these are periodic in both time and space, and they are observed for initial data in which the support is sufficiently wide, greater than a threshold.
By contrast, for a ≲ a_T, the same initial data develops into a large-amplitude attractor whose spatial tails vary periodically in time.
Recall Fig. <ref>.
Furthermore, the simulations suggest that there is a rich interplay between the Turing and Hopf modes, especially since the real parts of the dominant eigenvalues are both of size 𝒪(δ) at the Hopf value a=1 and the Turing bifurcation a_T=√(1-2δ√()) (recall Fig. <ref>), and it is expected that the PDE dynamics are governed by coupled Ginzburg-Landau equations (one for each mode), as described in Section <ref>.
§.§ RFSN-II points and spatial canards in general reaction-diffusion systems
In this brief section, we generalize some of the main results about the classes of small-amplitude, spatially periodic canards asymptotically close to the Turing bifurcation value a_T established here for the van der Pol PDE (<ref>) to other reaction-diffusion systems with separated diffusivities, which undergo Turing bifurcations and which have reaction kinetics consisting of two or more branches separated by non-degenerate fold points.
In particular, we consider
systems of reaction-diffusion PDEs
u_t = f̂(u,v) + d u_xx,
v_t = ĝ(u,v) + v_xx,
where 0<d≪ 1.
To generalize the results about the RFSN-II points and their canards, system (<ref>) needs to have a sufficiently large open set in the (u,v) plane on which the critical set f̂(u,v)=0 has a locally unique solution v=h_0(u) with a non-degenerate quadratic fold, i.e., a point (u_0,v_0), with v_0=h_0(u_0), in the open set such that f̂(u_0,h_0(u_0))=0, ∂f̂/∂ u (u_0,h_0(u_0))=0, and ∂^2 f̂/∂ u^2(u_0,h_0(u_0)) ≠ 0.
With this assumption, there will be a critical manifold consisting of a saddle branch and a center branch that meet along a fold curve in the (u,p,v,q) space
(recall (<ref>) and Fig. <ref> for the van der Pol system).
Like all reaction-diffusion systems in one dimension, these multi-scale reaction-diffusion systems are invariant under the transformation x → -x.
Hence, the systems of spatial ODEs governing the shapes of time-independent patterns have a reversibility symmetry of the form ℛ, recall (<ref>), and the Turing bifurcation points in these systems correspond to reversible 1:1 resonant Hopf bifurcations in the spatial ODE systems.
In addition, the fast system (or layer problem)
u_y = p
p_y = -f̂(u,v)
will be Hamiltonian, with v as a parameter, recall (<ref>).
Furthermore, the desingularized reduced vector fields on the critical manifold S={ p = 0, v = h_0(u) } will be
u_x_d = q,
q_x_d = f̂_u/f̂_vĝ(u,h_0(u)).
These systems are also Hamiltonian, just as the desingularized reduced system (<ref>) is for the van der Pol system.
Due to the reversibility symmetry, the ordinary singularities (fixed points) of the desingularized vector fields must be saddles, centers, or saddle-nodes, and the folded singularities must be reversible folded saddle (RFS), reversible folded center, or reversible folded saddle-node (RFSN) points, since the eigenvalues of the Jacobians must be symmetric with respect to the real and imaginary axes.
Finally, the true and faux canards of RFSN-II and RFS points will give rise to the canard segments of the spatially periodic solutions.
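To make this recipe concrete, the short SymPy sketch below carries out the steps above for the van der Pol-type kinetics f̂(u,v) = v - (u^3/3 - u) and ĝ(u,v) = ε(a - u) (the reaction terms used in this article, with the small parameter written explicitly as epsilon in the code); the sketch itself is only our own illustration. It recovers h_0, the desingularized reduced vector field, the fold points, and the eigenvalues at the folded singularity (u,q)=(1,0); at this leading order the folded saddle-node occurs at a=1, consistent with the RFSN-II points lying asymptotically close to a_T.

```python
import sympy as sp

u, q, v, a = sp.symbols('u q v a', real=True)
eps = sp.symbols('epsilon', positive=True)

# assumed van der Pol-type reaction terms (activator f_hat, inhibitor g_hat)
f_hat = v - (u**3/3 - u)
g_hat = eps*(a - u)

# critical set f_hat = 0 solved for v: the cubic-shaped graph v = h0(u)
h0 = sp.solve(sp.Eq(f_hat, 0), v)[0]                  # h0 = u**3/3 - u

# desingularized reduced vector field:  u' = q,  q' = (f_u/f_v) * g_hat(u, h0(u))
f_u = sp.diff(f_hat, u).subs(v, h0)
f_v = sp.diff(f_hat, v).subs(v, h0)
Q = sp.expand((f_u/f_v) * g_hat.subs(v, h0))          # epsilon*(1 - u**2)*(a - u)

# fold set f_u = 0 gives u = -1 and u = +1; folded singularities sit at (u, q) = (+/-1, 0)
print("fold points:", sp.solve(sp.Eq(f_u, 0), u))

# Jacobian of the planar system (u, q) -> (q, Q) at the folded singularity (1, 0)
J = sp.Matrix([[0, 1], [sp.diff(Q, u), 0]]).subs(u, 1)
print("eigenvalues:", [sp.simplify(ev) for ev in J.eigenvals()])
# +/- sqrt(2*epsilon*(1 - a)): real for a < 1 (folded saddle), a nilpotent folded
# saddle-node at a = 1, and purely imaginary for a > 1 (folded center)
```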
§.§ Discussion
Open questions about the full PDE (<ref>) have been listed at the end of Section <ref>.
Here, we list some open questions about the spatial ODE system.
We are presently performing the geometric desingularization of the reversible folded saddle (RFS) points on the fold sets L^± in (<ref>), which were shown to exist for parameter values further from the Turing bifurcation (and hence from the RFSN-II singularity) in Section <ref>.
Simulations (see for example Figs. <ref>–<ref>) show that these RFS singularities are responsible for the creation of the spatially periodic canard solutions observed for a values sufficiently far away from a_T. For a further from a_T, the true and faux canards of the RFS points play a role in the geometric deconstruction of the spatially periodic solutions similar to that played by the canards of the RFSN-II for a near a_T. Also, the ordinary singularity E, which is a center for a<1, is located further from the fold set, and hence it would be of interest to determine how the number and structure of the loops change as a decreases.
Furthermore, in the opposite limit as a → 1, preliminary calculations indicate that the canards of the RFS points converge to the canards created by the RFSN-II.
The results here motivate a more general rigorous analysis of folded singularities in singularly perturbed systems of ODEs with reversibility symmetry, especially into the geometry of the invariant manifolds associated to RFSN-II and RFS points.
We refer to <cit.> for the theory of general (not necessarily reversible) folded saddle nodes, and to <cit.> for the theory of general folded saddles.
The three-component reaction-diffusion model studied in <cit.> consists of one activator and two inhibitors.
The kinetics of the activator and first inhibitor are essentially those of the FitzHugh-Nagumo ODE.
The kinetics of the second inhibitor are also linear.
All three species diffuse, with the diffusivity of the activator being asymptotically smaller than those of the inhibitors.
Overall, the reaction-diffusion subsystem for the activator and first inhibitor is of the general form (<ref>).
Hence, it would be of interest to study the roles of the folded singularities and their canards in the onset of the spikes in the two-component FHN system in which the inhibitor also diffuses, as well as in the full three-component reaction-diffusion system.
The formation of the spike on the left branch S_s^-, due to the nearby RFSN-II point and its canards, observed here for system (<ref>) (recall Sec. <ref>), may explain the nucleation of some spikes in the three-component model of <cit.>.
In the other direction, analysis similar to that in <cit.> of how a small spike grows into a full-fledged spike should apply here for system (<ref>).
Another question concerns the double asymptotic limit in which and δ are small. It is well-known that the asymptotic limit of small gives rise to temporal canards in the kinetics, and we have established here that the asymptotic limit of small δ gives rise to canards in the spatial dynamics.
How do the temporal limit cycle canards of the kinetics problem for small interact with spatial canards?
Acknowledgments.
The authors gratefully acknowledge Irv Epstein, Guido Schneider, and Gene Wayne for useful comments. We also thank Irv Epstein for bringing reference <cit.>, with its possible evidence of spatial canards, to our attention.
The results in this article were presented by T.V. at the conference Multiscale Systems: Theory and Applications, held July 8-12, 2024, at the Lorentz Center, Leiden University, Leiden, NL.
NSF-DMS 1616064 and NSF-DMS 1853342 provided partial support to T.K. and T.V., respectively.
The research of A.D. is supported by the ERC-Synergy project RESILIENCE (101071417).
§ NUMERICAL METHODS
§.§ Continuation of periodic solutions
Periodic solutions of the system (<ref>) with non-trivial first integral 𝒢 given by (<ref>) were computed and numerically continued using the method developed in <cit.>. More specifically, families of periodic solutions were computed by calculating solutions of the auxiliary system
u̇ = T (F( u) + η∇𝒢),
subject to periodic boundary conditions u(0) = u(1). Here,
u = (u,p,v,q), F is the vector field in (<ref>), T is the spatial period, η is a new auxiliary parameter (fixed at zero), and the overdot denotes the derivative with respect to the spatial variable rescaled by the period, ŷ = y/T.
The branches of 𝒢( u) = g, where g is a constant, were computed by appending the integral constraint
∫_0^1 𝒢( u) dy = g,
to the system (<ref>) subject to the periodic boundary conditions u(0) = u(1).
The numerical continuation was implemented using the continuation software AUTO <cit.>.
The bifurcation diagrams for the families of spatially periodic canard solutions of (<ref>) shown in Figs. <ref>, <ref>, and later figures were also computed using AUTO. We note that additional branches of periodic solutions may exist.
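For readers without AUTO at hand, the same auxiliary-system formulation can be prototyped with a general-purpose boundary-value solver. The sketch below is our own illustration, not the authors' AUTO setup: it uses scipy.integrate.solve_bvp on the spatial system written as u'=p, p'=(u^3/3-u-v)/δ^2, v'=q, q'=ε(u-a), a form suggested by the appendix below with d_u=δ^2 and d_v=1; the first integral G used here was derived by hand for exactly this form and may differ from the paper's 𝒢 in (<ref>) by normalization. The integral constraint is imposed through an auxiliary variable w with w'=G, the period T and the parameter η enter as unknown parameters, and all numerical values and the initial guess are illustrative assumptions (convergence is not guaranteed for a poor guess).

```python
import numpy as np
from scipy.integrate import solve_bvp

eps, delta, a = 0.1, 0.1, 0.95              # assumed parameter values

def f(u):
    return u**3/3 - u

def F(Y):
    """Assumed spatial vector field for Y = (u, p, v, q)."""
    u, p, v, q = Y
    return np.array([p, (f(u) - v)/delta**2, q, eps*(u - a)])

def G(Y):
    """A first integral of the system above (derived by hand for this particular form)."""
    u, p, v, q = Y
    return 0.5*delta**2*p**2 - q**2/(2*eps) + (u - a)*v - (u**4/12 - u**2/2)

def gradG(Y):
    u, p, v, q = Y
    return np.array([v - f(u), delta**2*p, u - a, -q/eps])

g_level = G(np.array([a, 0.0, f(a), 0.0]))  # fixed level of G (here: its homogeneous value)

def rhs(y, Y, pars):
    T, eta = pars
    cols = [np.append(T*(F(Y[:4, j]) + eta*gradG(Y[:4, j])), G(Y[:4, j]))
            for j in range(Y.shape[1])]
    return np.array(cols).T                  # shape (5, m): derivatives of (u, p, v, q, w)

def bc(Ya, Yb, pars):
    return np.array([Ya[0]-Yb[0], Ya[1]-Yb[1], Ya[2]-Yb[2], Ya[3]-Yb[3],  # periodicity
                     Ya[4], Yb[4] - g_level,  # integral constraint: int_0^1 G dy = g
                     Ya[1]])                  # phase condition p(0) = 0

y = np.linspace(0.0, 1.0, 400)
Y0 = np.vstack((a + 0.1*np.cos(2*np.pi*y), -0.1*np.sin(2*np.pi*y),
                f(a) + 0.05*np.cos(2*np.pi*y), -0.05*np.sin(2*np.pi*y),
                g_level*y))
sol = solve_bvp(rhs, bc, y, Y0, p=[3.5, 0.0], tol=1e-8, max_nodes=50000)
print(sol.status, sol.message, " T =", sol.p[0], " eta =", sol.p[1])
```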
§.§ Computation of saddle slow manifolds and maximal canards
The saddle slow manifolds shown in Fig. <ref> were computed following the method developed in <cit.>. More specifically, the saddle slow manifolds were computed in two parts: one for the solutions enclosed by the true and faux canards of the folded saddle and one for the solutions that lie outside the region enclosed by the true and faux canards of the folded saddle.
The subsets of S_s,δ^+ outside the region enclosed by the true and faux canards of the folded saddle were computed by solving the system (<ref>), where u = (u,p,v,q) and F is the vector field in (<ref>), and subject to the boundary conditions
u(0) ∈{ v = 1/3u^3-u, q = q_0 : q_0 < 0 } and u(1) ∈{ p = 0, u = 1 }.
The left-end condition enforces the constraint that solutions enter the saddle slow manifold along the p-nullcline at a fixed q-distance from the folded singularity. The right-end condition is a statement that solutions leave the neighbourhood of the slow manifold along the u-nullcline and terminate at the fold.
The subsets of S_s,δ^+ enclosed by the true and faux canards of the folded saddle were computed by solving the system (<ref>), where u = (u,p,v,q) and F is the vector field in (<ref>), and subject to the boundary conditions
u(0) ∈{ v = 1/3u^3-u, u = u_0 : u_0 > 1 },
u(1) ∈{ v = 1/3u^3-u },
and { q(0) + q(1) = 0 }.
The left-end condition specifies that solutions enter the saddle slow manifold along the p-nullcline at a fixed u-distance from the fold. The remaining boundary conditions ensure that the solutions turn away from the fold and stay on the slow manifold.
The maximal canards were then computed as saddle-node bifurcations of the above two-point boundary value problems.
§.§ Continuation of canard orbits in chart K_2
For the results shown in Section <ref>, solutions of the blown-up system (<ref>) on the zero level set H_2(u_2,p_2,v_2,q_2,r_2) = 0 were computed and numerically continued by solving the auxiliary problem (<ref>), where u = (u_2,p_2,v_2,q_2), F is the vector field in (<ref>), and the conserved quantity is 𝒢 = H_2. These equations were solved subject to the boundary conditions
p_2(0) + p_2(1) = 0, and q_2(0) + q_2(1) = 0,
which enforces the symmetry (u_2,p_2,v_2,q_2) → (u_2,-p_2,v_2,-q_2), together with the integral constraint
∫_0^1 H_2(u_2,p_2,v_2,q_2,r_2) dy = 0
which constrains the solution to the zero level contour of the Hamiltonian.
§.§ ODE and PDE Simulations
Finally, direct numerical simulations of solutions of the spatial dynamics of (<ref>) were carried out using Mathematica's in-built ODE solvers.
Direct numerical solutions of the full PDE (<ref>) were performed using
Mathematica's NDSolveValue-package with the “MethodOfLines” and “SpatialDiscretization” prescribed by {“TensorProductGrid”, “MinPoints” → 10000}.
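For comparison, a minimal Python method-of-lines sketch (not the authors' Mathematica setup) might look as follows. It assumes the PDE in the form u_t = v - (u^3/3-u) + δ^2 u_xx, v_t = ε(a-u) + v_xx, which is the form suggested by the general system (<ref>) below with d_u=δ^2 and d_v=1; the parameter values, domain, boundary treatment, and plateau initial data are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, delta, a = 0.1, 0.1, 0.95           # assumed parameter values
L, N = 100.0, 1024                        # half-length of the domain and number of grid points
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

def laplacian(w):
    # second-order finite differences with homogeneous Neumann boundaries
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2*w[1:-1] + w[:-2]) / dx**2
    lap[0] = 2*(w[1] - w[0]) / dx**2
    lap[-1] = 2*(w[-2] - w[-1]) / dx**2
    return lap

def rhs(t, y):
    u, v = y[:N], y[N:]
    dudt = v - (u**3/3 - u) + delta**2 * laplacian(u)
    dvdt = eps*(a - u) + laplacian(v)
    return np.concatenate((dudt, dvdt))

# localized plateau perturbation of the homogeneous state (a, f(a)), as described in the text
u0 = a + 0.5*(np.tanh(x + 5) - np.tanh(x - 5))/2
v0 = (a**3/3 - a)*np.ones_like(x)
sol = solve_ivp(rhs, (0.0, 2000.0), np.concatenate((u0, v0)),
                method="LSODA", t_eval=np.linspace(0, 2000, 201), rtol=1e-8, atol=1e-10)
u_final = sol.y[:N, -1]                   # final u-profile, e.g. for plotting
```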
§ THE PROOF OF PROPOSITION <REF>
In this appendix, we prove Proposition <ref> by applying Theorem 3.21 from Chapter 4.3.3 of <cit.>.
We work with the spatial ODE system in a general form,
0 = v - f(u) + d_u u_xx
0 = (a-u) + d_v v_xx,
where the diffusivities, d_u and d_v are positive.
To derive the normal form, we rewrite these spatial ODEs as the following fourth-order system:
u_x = p
p_x = 1/d_u (f(u)-v)
v_x = q
q_x = /d_v (u-a).
We translate the variables so that the equilibrium (a,0,f(a),0) is at the origin and use the notation u=(u_1,u_2,u_3,u_4) of Chapter 4.3.3 for the dependent variables:
u=a+u_1, p = u_2, v = f(a) + u_3, and q=u_4.
Hence, the system is
u_1_x = u_2
u_2_x = 1/d_u[ (a^2-1) u_1 + a u_1^2 + 1/3 u_1^3 - u_3 ]
u_3_x = u_4
u_4_x = /d_v u_1.
The quartet of eigenvalues is given by
±1/√(2)d_u d_v√( (a^2-1)d_u d_v^2 ± i d_v^3/2√( 4 d_u - (a^2-1)^2 d_v)).
At a=a_T=√(1 - 2 √( d_u/d_v)), the quartet degenerates into two coincident pairs of pure imaginary eigenvalues
± i ω = ± i (/d_u d_v)^1/4.
The eigenvalues ± i ω are algebraically double and geometrically simple.
This is the non-degenerate, reversible, 1:1 resonant Hopf bifurcation identified in Section <ref>, recall also Fig. <ref> (where we note that d_u=d=δ^2 and d_v=1 in (<ref>)).
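As a quick numerical sanity check of this degeneracy, one can evaluate the Jacobian of the fourth-order system above at the origin and watch the quartet collapse onto two identical purely imaginary pairs at a=a_T. In the sketch below the parameter values are illustrative, and the symbol eps stands for the small parameter that is suppressed in some of the displayed formulas.

```python
import numpy as np

eps, d_u, d_v = 0.1, 0.01, 1.0                      # assumed parameter values
a_T = np.sqrt(1 - 2*np.sqrt(eps*d_u/d_v))           # Turing value, with the small parameter restored

def jacobian(a):
    # linearization of the fourth-order spatial system at (u_1, u_2, u_3, u_4) = 0
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [(a**2 - 1)/d_u, 0.0, -1.0/d_u, 0.0],
                     [0.0, 0.0, 0.0, 1.0],
                     [eps/d_v, 0.0, 0.0, 0.0]])

for a in (a_T - 0.02, a_T, a_T + 0.02):
    print(f"a = {a:.5f}:", np.sort_complex(np.linalg.eigvals(jacobian(a))))
# at a = a_T the four eigenvalues coincide in two purely imaginary pairs +/- i*(eps/(d_u*d_v))**0.25
```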
In order to unfold this point, we set
a = a_T + μ,
where the parameter μ here is different from the spatial eigenvalue μ used in Section <ref>,
and we write the system (<ref>) as
u_x = ℱ ( u, μ) = L u
+ R_20( u, u)
+ R_30( u, u, u)
+ μ R_11 ( u)
+ μ R_21( u, u)
+ μ^2 R_12( u),
where the operators are defined as
L = [
[ 0 1 0 0; -2 ω^2 0 -1/d_u 0; 0 0 0 1; /d_v 0 0 0 ]], R_20( u, v) = [
[ 0; a_T/d_u u_1 v_1; 0; 0 ]], R_30( u, v, w) = [
[ 0; 1/3d_u u_1 v_1 w_1; 0; 0 ]],
R_11( u) = [
[ 0; 2a_T/d_u u_1; 0; 0 ]], R_21( u, v) = [
[ 0; 1/d_u u_1 v_1; 0; 0 ]], R_12( u) = [
[ 0; 1/d_u u_1; 0; 0 ]].
The terms represent, respectively L: the Jacobian at the origin at a_T;
R_20:
the quadratic terms in u;
R_30:
the cubic terms in u; R_11 and R_21: the terms that are linearly proportional to the unfolding parameter μ, and R_12: the term proportional to μ^2. (We follow the general notation in <cit.> for the unfolding of this bifurcation, labeled as (iω)^2 in Chapter 4.3.3. The subscripts on R indicate the powers of u and μ, respectively, in each term in (<ref>); for example, the term involving R_21 is quadratic in u and linear in μ.)
For L, the Jacobian at a_T, we use the following eigenvector and generalized eigenvector associated to iω:
ζ_0 = [
[ 1; i ω; -/ω^2 d_v; -i /ω d_v ]],
ζ_1 = [
[ -i/ω; 2; -i/ω^3 d_v; 0 ]].
These satisfy ( L-iω)ζ_0=0 and ( L-iω)ζ_1 = ζ_0.
Moreover, since L is real, one also has
( L + i ω) ζ̅_0 = 0
and ( L + i ω) ζ̅_1 = ζ̅_0,
where the overbar denotes the complex conjugate.
The set {ζ_0, ζ_1, ζ̅_0,ζ̅_1 } is used as a basis for ℝ^4, and we represent the vector u = A ζ_0 + B ζ_1 + A̅ζ̅_0 + B̅ζ̅_1 by u=(A,B,A̅,B̅), where A, B ∈ℂ.
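The eigenvector and generalized eigenvector can be verified numerically as well; again the parameter values below are illustrative, and eps stands for the small parameter that is suppressed in the displayed vectors.

```python
import numpy as np

eps, d_u, d_v = 0.1, 0.01, 1.0
omega = (eps/(d_u*d_v))**0.25
L = np.array([[0, 1, 0, 0],
              [-2*omega**2, 0, -1/d_u, 0],
              [0, 0, 0, 1],
              [eps/d_v, 0, 0, 0]], dtype=complex)

zeta0 = np.array([1, 1j*omega, -eps/(omega**2*d_v), -1j*eps/(omega*d_v)])
zeta1 = np.array([-1j/omega, 2, -1j*eps/(omega**3*d_v), 0])

print(np.allclose((L - 1j*omega*np.eye(4)) @ zeta0, 0))        # (L - i*omega) zeta_0 = 0
print(np.allclose((L - 1j*omega*np.eye(4)) @ zeta1, zeta0))    # (L - i*omega) zeta_1 = zeta_0
```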
Now, we derive the normal form of (<ref>).
It may be obtained directly by applying Lemma 3.17 in Chapter 4.3.3 of <cit.>, as follows.
The linear part, L, is conjugate to the block Jordan matrix,
J = [
[ i ω 1 0 0; 0 i ω 0 0; 0 0 -i ω 1; 0 0 0 -i ω ]].
For the nonlinear part, which also includes the μ-dependent terms, we observe that Hypotheses 3.1, 3.2, and 3.14 from <cit.> are satisfied
for every integer k ≥ 3, i.e., the vector field is C^k for every k ≥ 3 since it is polynomial. Hence, Lemma 3.17 establishes that, for any integer p with 2 ≤ p ≤ k, there exist neighborhoods 𝒱_1 of the origin in ℝ^4 and 𝒱_2 of the origin in ℝ and a real-valued polynomial Φ of degree p such that the near-identity coordinate change,
u = A ζ_0 + B ζ_1 + A̅ζ̅_0 + B̅ζ̅_1 + Φ(A,B,A̅,B̅,μ) defined on 𝒱_1 ×𝒱_2, transforms (<ref>) into the following normal form:
A_x = i ω A + B + i A P(μ, A A̅, i/2(A B̅ - A̅ B) ) + ρ_A
B_x = i ω B
+ i B P(μ,AA̅, i/2(AB̅-A̅B)) + A Q(μ, A A̅, i/2(A B̅-A̅ B)) + ρ_B.
Moreover, Φ(0,0,0,0,0)=0, ∂_(A,B,A̅,B̅)Φ(0,0,0,0,0)=0, the coefficients of the monomials of degree q in Φ(·,μ) are C^k-q in μ, and
Φ satisfies the reversibility symmetry
ℛΦ(A,B,A̅,B̅,μ) = Φ(A̅,-B̅, A, -B, μ).
Here, P and Q are real-valued polynomials of degree p-1.
The remainder terms ρ_A(A,B,A̅,B̅,μ) and ρ_B(A,B,A̅,B̅,μ) are C^k functions, satisfy the estimate
|ρ_A| + |ρ_B | = o( (|A| + |B|)^p), and have the following symmetries:
ℛρ_A(A,B,A̅,B̅,μ) = -ρ̅_A(A̅,B̅, A, B, μ) and
ℛρ_B(A,B,A̅,B̅,μ) = ρ̅_B(A̅,B̅, A, B, μ) for each μ.
(See the general normal form for reversible 1:1 resonant Hopf bifurcations given by equations (3.25) in Chapter 4.3.3 of <cit.>.)
As shown in <cit.>, the existence of equilibria, spatially periodic orbits, quasi-periodic orbits, and homoclinic orbits of (<ref>) may be determined by working with the principal parts of the polynomials P and Q. That is, it suffices to work with p=2.
Hence, throughout the rest of this appendix, we set
P= α̂μ + β̂ A A̅ + i/2γ̂ (A B̅ - A̅ B) and Q = âμ + b̂ A A̅ + i/2ĉ(AB̅-A̅B).
After lengthy calculations using the invariance equations, one finds:
â = a_T/2 d_u, b̂=25-8√(d_v/ d_u)/36d_u, ĉ= 109 d_u (d_v/)^1/4 - 32(d_v/)^3/4/72 d_u^5/4,
α̂ = -a_T/4 ω d_u, β̂= -1/16( d_v/ d_u^3)^1/4, γ̂= -32 d_v +37√( d_u d_v)/216 d_u.
(Note that these six parameters are the same as those in Hypothesis 3.18 in <cit.>, except that we have added the hats.)
Next, we turn to the signs of these six parameters in the principal parts of P and Q. Since a_T>0, we know â>0 and α̂<0.
Also, β̂<0.
The sign of b̂ depends on the ratios of the diffusivities:
b̂ < 0 for d_u/d_v < 64/625,
and b̂ > 0 for d_u/d_v > 64/625.
This sign analysis of b̂ determines the parameter conditions in Proposition <ref>. The former is referred to as the focusing/sub-critical case, and the latter as the defocusing/super-critical case.
The sign of ĉ is given by:
ĉ > 0 for d_u > 32/109√(d_v/), and ĉ < 0 for d_u < 32/109√(d_v/).
Finally, the sign of γ̂ is given by
γ̂ > 0 for d_u > (32/37)^2d_v/, and γ̂ < 0 for d_u < (32/37)^2 d_v/.
§ THE PROOF OF PROPOSITION <REF>
In this appendix, we prove Proposition <ref>.
We analyze the dynamics around the nilpotent equilibrium (0,0,0) by performing the blow-up transformation
u_2 = r u_2, p_2 = r p_2, and q_2 = r q_2,
with (u_2,p_2,q_2)∈𝕊^2 and r≥ 0, which inflates the nilpotent equilibrium to the unit sphere. For our analysis, we restrict attention to the half-space {u_2 ≥ 0 } which completely contains the algebraic solutions Γ_0^±. We will examine the dynamics in three coordinate charts:
K_21: {u_2=1 }, K_22: {p_2=1 }, and K_23: {q_2 = 1},
and then appeal to the symmetry (<ref>) to obtain the dynamics on the remainder of the blown-up hemisphere. We denote an object Φ of (<ref>) in the blown-up coordinates by Φ and in the charts K_2j by Φ_2j for j=1,2,3.
Dynamics in chart K_21: {u_2=1 }:
The blow-up transformation in chart K_21 is
u_2 = r_21, p_2 = r_21 p_21, and q_2 = r_21 q_21.
Transformation and desingularization (dζ_21 = r_21 dη_2) gives
ṙ_21 = √() r_21 p_21
ṗ_21 = 2/3 r_21 - 1/2√()( p_21^2+q_21^2 )
q̇_21 = 1-√() p_21 q_21
where the overdot denotes the derivative with respect to the rescaled spatial coordinate ζ_21.
The set { r_21 = 0 } is invariant. The dynamics in the { r_21 = 0 } subspace are given by
ṗ_21 = -1/2√()( p_21^2+q_21^2 )
q̇_21 = 1-√() p_21 q_21.
The coordinate change (z_21,w_21) = 1/2( p_21-q_21, p_21+q_21) transforms the system (<ref>) to the decoupled system of Riccati equations
ż_21 = -√() z_21^2 - 1/2
ẇ_21 = -√() w_21^2 + 1/2.
From this, we find that solutions of (<ref>) are given by
p_21(ζ_21) = 1/(√(2)^1/4)[ tanh( (^1/4/√(2))ζ_21 + tanh^-1( √(2)^1/4w̃_21) ) - tan( (^1/4/√(2))ζ_21 - tan^-1( √(2)^1/4z̃_21) ) ]
q_21(ζ_21) = 1/(√(2)^1/4)[ tanh( (^1/4/√(2))ζ_21 + tanh^-1( √(2)^1/4w̃_21) ) + tan( (^1/4/√(2))ζ_21 - tan^-1( √(2)^1/4z̃_21) ) ]
where z̃_21 and w̃_21 are the values of z_21 and w_21 at ζ_21=0. Among these solutions, we distinguish two particular solutions:
ℓ_21^± := { p_21+q_21 = ±√(2)^-1/4}.
The solutions with initial conditions above ℓ_21^- are forward asymptotic (i.e., ζ_21→ +∞) to the line ℓ_21^+
and the solutions with initial conditions below ℓ_21^+ are backward asymptotic (i.e., ζ_21→ -∞) to the line ℓ_21^-.
The dynamics in the invariant subspace {r_21=0} are shown in Fig. <ref>(a).
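The tan/tanh structure of these solutions can be confirmed symbolically. In the sketch below, which is our own check, s stands for the square root of the small parameter (the factor written √(·) above) and c is an arbitrary constant of integration.

```python
import sympy as sp

zeta, c = sp.symbols('zeta c', real=True)
s = sp.symbols('s', positive=True)     # s stands for the square root of the small parameter

z = sp.tan(-sp.sqrt(s)*zeta/sp.sqrt(2) + c) / (sp.sqrt(2)*sp.sqrt(s))
w = sp.tanh(sp.sqrt(s)*zeta/sp.sqrt(2) + c) / (sp.sqrt(2)*sp.sqrt(s))

# the decoupled Riccati equations:  z' = -s z^2 - 1/2,   w' = -s w^2 + 1/2
print(sp.simplify(sp.diff(z, zeta) - (-s*z**2 - sp.Rational(1, 2))))   # -> 0
print(sp.simplify(sp.diff(w, zeta) - (-s*w**2 + sp.Rational(1, 2))))   # -> 0
```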
Dynamics in chart K_22: {p_2 = 1 }: The blow-up transformation in chart K_22 is
u_2 = r_22 u_22, p_2 = r_22, and q_2 = r_22 q_22.
Transformation and desingularization (dζ_22 = r_22 dη_2) gives
ṙ_22 = r_22( 1/2√()(1-q_22^2)+2/3 r_22 u_22^3 )
u̇_22 = 1/2√()u_22(1+q_22^2)-2/3r_22 u_22^4
q̇_22 = -1/2√()q_22(1-q_22^2)+u_22^2-2/3r_22u_22^3 q_22,
where the overdot has been recycled to denote the derivative with respect to ζ_22.
The line { r_22 = 0, u_22 = 0 } is invariant with dynamics governed by
q̇_22 = -1/2√() q_22(1-q_22^2).
The equilibrium at q_22=0 is stable, and the equilibria at q_22 = ± 1 are unstable.
The plane { u_22 = 0 } is also invariant. The dynamics restricted to this plane are given by
ṙ_22 = 1/2√()r_22(1-q_22^2)
q̇_22 = -1/2√()q_22(1-q_22^2).
The system possesses a pair of lines of equilibria, ℒ_22,+^u = { q_22 = 1 } and ℒ_22,-^u = { q_22 = -1 }, which are both center-unstable with eigenvalues λ_u = √() and λ_c = 0. The associated eigenspaces are given by
𝔼^u( ℒ_22,±^u ) = [ ∓ r_22; 1 ] and 𝔼^c( ℒ_22,±^u ) = [ 1; 0 ].
The lines ℒ_22,±^u correspond to the center-unstable lines of equilibria ℒ_±^u.
The plane { r_22 = 0 } is invariant. The dynamics restricted to this subspace are given by
u̇_22 = 1/2√()u_22(1+q_22^2)
q̇_22 = -1/2√()q_22(1-q_22^2)+u_22^2.
As shown in Fig. <ref>(b), there is a saddle equilibrium at the origin, with stable eigenvalue λ_s = -1/2√() and stable eigendirection aligned with the q_22-axis, and unstable eigenvalue λ_u = 1/2√() and unstable eigendirection aligned with the u_22-axis.
Moreover, there is a pair of unstable degenerate nodes at (u_22,q_22)=(0,± 1) with spectra given by σ_u = {√(), √()} and corresponding eigenspaces
𝔼^u( 0,± 1 ) = span{[ 1; 0 ], [ 0; 1 ]}.
Thus, the equilibria (r_22,u_22,q_22)=(0,0,± 1) correspond to the intersections of the center-unstable lines, ℒ_±^u, of equilibria (see (<ref>) and (<ref>)) with the blown-up hemisphere.
We conclude with the following two key observations. First, there are two straight line solutions,
ℓ_22^± := { r_22=0, u_22 = ±1/√(2)^1/4( q_22+1 ) }.
Second, the unstable manifold, W^u(0,0)=:Γ_22^+, of the saddle equilibrium at the origin splits the (u_22,q_22) phase space into two regions. Solutions with initial conditions to the left of Γ_22^+ are backward asymptotic to the equilibrium at (r_22,u_22,q_22)=(0,0,-1), and solutions with initial conditions to the right of Γ_22^+ are backward asymptotic to the equilibrium at (r_22,u_22,q_22)=(0,0,1). The dynamics in the { r_22=0 } subspace are shown in Fig. <ref>(b).
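The separatrix role of Γ_22^+ is also easy to see numerically. The short sketch below (with an illustrative value for s, the square root of the small parameter) integrates two initial conditions on opposite sides of Γ_22^+ backward and shows that they limit onto the two different degenerate nodes (u_22,q_22)=(0,±1).

```python
import numpy as np
from scipy.integrate import solve_ivp

s = np.sqrt(0.1)    # assumed value for the square root of the small parameter

def rhs(t, y):
    # planar dynamics in the invariant subspace {r_22 = 0} of chart K_22
    u, q = y
    return [0.5*s*u*(1 + q**2), -0.5*s*q*(1 - q**2) + u**2]

# Two initial conditions on opposite sides of the unstable manifold Gamma_22^+ of the saddle
# at the origin (which leaves the origin tangent to the u_22-axis and curves upward):
# integrated backward, they limit onto the two different degenerate nodes (0, +1) and (0, -1).
for q0 in (0.5, -0.5):
    back = solve_ivp(rhs, (0.0, -60.0), [0.1, q0], rtol=1e-10, atol=1e-12)
    print(f"q_22(0) = {q0:+.1f}  ->  backward limit ~ ({back.y[0, -1]:.4f}, {back.y[1, -1]:.4f})")
```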
Dynamics in chart K_23: {q_2 = 1 }: The blow-up transformation in chart K_23 is given by
u_2 = r_23 u_23, p_2 = r_23 p_23, and q_2 = r_23.
Transformation and desingularization (dζ_23=r_23dη_2) gives
ṙ_23 = r_23u_23^2
u̇_23 = u_23( √()p_23-u_23^2 )
ṗ_23 = 1/2√()( p_23^2-1 )-u_23^2p_23+2/3 r_23 u_23^3,
where the overdot now denotes derivatives with respect to ζ_23.
The line { r_23=0, u_23=0 } is invariant. The dynamics on this line are governed by
ṗ_23 = 1/2√()( p_23^2-1 ).
There is a stable equilibrium at p_23=-1 and an unstable equilibrium at p_23=1.
The plane { u_23=0 } is invariant. The dynamics in the (r_23,p_23) plane are given by
ṙ_23 = 0
ṗ_23 = 1/2√()( p_23^2-1 ).
The line ℒ_23,-^s = { p_23=-1 } of equilibria is center-stable and corresponds to the line ℒ_-^s.
The line ℒ_23,+^u = { p_23=1 } of equilibria is center-unstable and corresponds to the line ℒ_+^u.
The plane { r_23=0 } is invariant. The dynamics on this subspace are governed by
u̇_23 = u_23( √()p_23-u_23^2 )
ṗ_23 = 1/2√()( p_23^2-1 )-u_23^2p_23.
As shown in Fig. <ref>(c), there is a stable degenerate node at (u_23,p_23)=(0,-1) with spectrum σ_s={ -√(),-√()} and stable subspace
𝔼^s( 0,-1 ) = span{[ 1; 0 ], [ 0; 1 ]}.
There is also an unstable degenerate node at (u_23,p_23)=(0,1) with spectrum σ_u={√(),√()} and unstable subspace
𝔼^u( 0,1 ) = span{[ 1; 0 ], [ 0; 1 ]}.
Solutions converge to the equilibrium at (u_23,p_23)=(0,-1) along the lines
ℓ_23^± := { u_23=±1/√(2)^1/4(p_23+1) }.
The dynamics in the {r_23=0} subspace are shown in Fig. <ref>(c).
The equilibrium (r_23,u_23,p_23)=(0,0,1) corresponds to the intersection of the center-unstable line, ℒ_+^u, of equilibria with the blown-up hemisphere. Similarly, the equilibrium (r_23,u_23,p_23)=(0,0,-1) corresponds to the intersection of the center-stable line, ℒ_-^s, of equilibria with the blown-up hemisphere.
Transition between charts K_21:{u_2=1 } and K_22:{p_2=1 }: The coordinate change, φ_12(r_21,p_21,q_21), from chart K_21 to chart K_22 is given by
φ_12(r_21,p_21,q_21) = (r_22, u_22, q_22) = ( r_21 p_21, 1/p_21, q_21/p_21), for p_21 >0.
The inverse map, φ_21(r_22,u_22,q_22), which transports coordinates from chart K_22 to K_21, is given by
φ_21(r_22,u_22,q_22) = (r_21,p_21,q_21) = ( r_22 u_22, 1/u_22, q_22/u_22), for u_22>0.
Under this coordinate change, we find that
lim_ζ_21→ -∞φ_12( ℓ_21^±) = lim_ζ_22→ -∞ℓ_22^± = { (r_22,u_22,q_22) = (0,0,-1) }.
Thus, solutions emanate from the point ℒ_-^u ∩𝕊^2 on the equator of the blown-up hemisphere.
See Fig. <ref>.
Transition between K_21:{u_2=1 } and K_23:{q_2=1 }: The coordinate change, φ_13(r_21,p_21,q_21), from chart K_21 to chart K_23 is given by
φ_13(r_21,p_21,q_21) = (r_23,u_23,p_23) = ( r_21 q_21, 1/q_21, p_21/q_21), for q_21 >0.
The inverse map, φ_31(r_23,u_23,p_23), which transports coordinates from chart K_23 to K_21, is given by
φ_31(r_23,u_23,p_23) = (r_21,p_21,q_21) = ( r_23 u_23, p_23/u_23, 1/u_23), for u_23 >0.
From these transition maps, we find that the image of the attracting line ℓ_21^+ is the line ℓ_23^+ and is forward asymptotic to the equilibrium (r_23,u_23,p_23)=(0,0,-1).
That is,
lim_ζ_21→∞φ_13( ℓ_21^+ ) = lim_ζ_23→∞ℓ_23^+ = { (r_23,u_23,p_23)=(0,0,-1) }.
Thus, solutions terminate at the point ℒ_-^a ∩𝕊^2 on the blown-up hemisphere.
See Fig. <ref>.
Transition between K_22:{p_2=1 } and K_23:{q_2=1 }:
The coordinate change, φ_23(r_22,u_22,q_22), from chart K_22 to chart K_23 is given by
φ_23(r_22,u_22,q_22) = (r_23,u_23,p_23) = ( r_22 q_22, u_22/q_22, 1/q_22), for q_22 > 0.
The inverse map, φ_32(r_23,u_23,p_23), which transports coordinates from chart K_23 to K_22, is given by
φ_32(r_23,u_23,p_23) = (r_22,u_22,q_22) = ( r_23 p_23, u_23/p_23, 1/p_23), for p_23 >0.
The images of the lines ℓ_22^± are the lines ℓ_23^± and they are forward asymptotic to the equilibrium at (r_23,u_23,p_23)=(0,0,-1). That is,
lim_ζ_22→∞φ_23( ℓ_22^±) = lim_ζ_23→∞ℓ_23^± = { (r_23,u_23,p_23)=(0,0,-1) }.
Thus, solutions terminate at the point ℒ_-^a ∩𝕊^2 on the equator of the blown-up hemisphere.
See Fig. <ref>.
Symmetry: With the above analysis in hand, we can obtain the dynamics in the coordinate charts K_24:={p_2 = -1 } and K_25:={q_2 = -1 } via the symmetry (<ref>). In particular, the image of the unstable manifold, Γ_22^+, of the saddle equilibrium at the origin in chart K_22: {p_2=1} under the symmetry transformation (<ref>) is the stable manifold, W^s(0,0,0) =: Γ_24^-, of the saddle equilibrium at the origin in chart K_24: {p_2=-1}.
Dynamics on the hemisphere: From our analysis of the dynamics in the charts K_21, K_22, and K_23, together with the transition maps φ_ij between them, we conclude the following.
* Solutions emanate from the point ℒ_-^u ∩𝕊^2 on the equator of the blown-up hemisphere, travel over the top of the hemisphere in the region enclosed by Γ_0^- and Γ_0^+, and terminate at the point ℒ_-^s ∩𝕊^2 on the equator of the blown-up hemisphere. These are the class 1 heteroclinics.
* Solutions emanate from the point ℒ_+^u ∩𝕊^2 on the equator of the blown-up hemisphere, travel over the top of the hemisphere in the region enclosed by Γ_0^+ and the equator, and terminate at the point ℒ_-^s ∩𝕊^2 on the equator of the blown-up hemisphere. These are the class 2 heteroclinics.
* The unstable manifold, Γ_0^+, of the equilibrium point corresponding to the intersection of the positive p_2-axis with 𝕊^2 is the separatrix that divides between class 1 and class 2 heteroclinics.
* Solutions emanate from the point ℒ_-^u ∩𝕊^2 on the equator of the blown-up hemisphere, travel over the top of the hemisphere in the region enclosed by Γ_0^- and the equator, and terminate at the point ℒ_+^s. These are the class 3 heteroclinics.
* The stable manifold, Γ_0^-, of the equilibrium point corresponding to the intersection of the negative p_2-axis with 𝕊^2 is the separatrix that divides between class 1 and class 3 heteroclinics.
§ ANALYSIS OF THE GOVERNING EQUATIONS IN THE ENTRY/EXIT CHART K_1
In this appendix, we present the analysis of (<ref>) in the sequence of invariant hyperplanes:
{δ_1=0}∩{ a_1=0},
{ r_1=0}∩{ a_1 = 0 },
{ r_1 = 0 }∩{δ_1 = 0 },
{δ_1=0 },
and { r_1=0},
respectively.
These are the intermediate results used in Section <ref> to obtain the invariant sets and dynamics of the full system (<ref>).
In the invariant hyperplane
{δ_1=0}∩{ a_1=0 }, system (<ref>) reduces to
ṙ_1 = 1/2√() p_1 r_1
ṗ_1 = 1 - v_1 - 3/2√() p_1^2
+ 1/3√()r_1^2
v̇_1 = -2 √() p_1 v_1
q̇_1 = -3/2√() p_1 q_1.
The first three components of this vector field are independent of q_1, and hence the fourth equation decouples from the others.
The line I given by (<ref>)
is also invariant for this larger system,
and E_± are again fixed points on I,
recall (<ref>).
The equilibrium E_+ is a saddle with three stable eigenvalues
-√(6)^1/4,
-2√(2/3)^1/4,
and -√(3/2)^1/4,
and one unstable eigenvalue
1/√(6)^1/4.
The stable subspace is given by
span{[ [ 0; 1; 0; 0 ]],
[ [ 0; -√(3); √(2)^1/4; 0 ]],
[ [ 0; 0; 0; 1 ]]
},
and the unstable subspace is in the r_1-direction,
span{[ [ 1; 0; 0; 0 ]]
}.
The equilibrium E_- is a saddle with three unstable eigenvalues
√(3/2)^1/4,
2√(2/3)^1/4,
and √(6)^1/4,
and one stable eigenvalue
-1/√(6)^1/4.
The unstable subspace is given by
span{[ [ 0; 0; 0; 1 ]],
[ [ 0; √(3); √(2)^1/4; 0 ]],
[ [ 0; 1; 0; 0 ]]
},
and the stable subspace is in the r_1-direction,
span{[ [ 1; 0; 0; 0 ]]
}.
There is also a two-dimensional surface of equilibria
in the hyperplane {δ_1=0 }∩{a_1=0}.
It is given by
𝒮={ r_1 ∈ℝ, δ_1=0, p_1=0,
v_1=1 + 1/3√()r_1^2,
q_1 ∈ℝ, a_1=0}.
The eigenvalues
of the Jacobian at points on 𝒮
are
λ_s = -√(2√() + r_1^2 ), λ_u = √(2√() + r_1^2 ), λ_c = 0,0.
Hence, 𝒮
is a surface of saddle points,
and it corresponds to the saddle branches
S_s^± of the critical manifold S.
The associated stable, unstable, and center subspaces are
𝔼_s = span{[ [ -√() r_1; 2√(2√() + r_1^2 ); 4/3√()(3 + r_1^2 √()); 3 √() q_1 ]] },
𝔼_u = span{[ [ -√() r_1; -2√(2√() + r_1^2 ); 4/3√()(3 + r_1^2 √()); 3 √() q_1 ]] },
𝔼_c
= {[ [ 0; 0; 0; 1 ]],
[ [ 3; 0; 2 √() r_1; 0 ]] }.
The center manifold W^c(𝒮) is two-dimensional
in this hyperplane, and it emanates from ℓ.
Next, we examine the invariant hyperplane
{r_1=0}∩{a_1=0}.
Here, system (<ref>) reduces to
δ̇_1 = -√() p_1 δ_1
ṗ_1 = 1 - v_1 - 3/2√() p_1^2
v̇_1 = √() (-2 p_1 v_1 + δ_1 q_1)
q̇_1 = -3/2√() p_1 q_1 + δ_1.
This system is fully coupled,
in contrast to the above system. The line I given by (<ref>) is also invariant for this larger system, and E_± are again fixed points on I, recall (<ref>).
The equilibrium E_+ is stable with eigenvalues
-√(6)^1/4,
-2√(2/3)^1/4,
-√(3/2)^1/4,
and -√(2/3)^1/4.
The associated eigenvectors are
[ [ 0; 1; 0; 0 ]],
[ [ 0; -√(3); √(2)^1/4; 0 ]],
[ [ 0; 0; 0; 1 ]],
[ [ ^1/4; 0; 0; √(6) ]].
The equilibrium E_- is unstable with eigenvalues
√(2/3)^1/4,
√(3/2)^1/4,
2√(2/3)^1/4,
and √(6)^1/4, and eigenvectors
[ [ -^1/4; 0; 0; √(6) ]],
[ [ 0; 0; 0; 1 ]],
[ [ 0; √(3); √(2)^1/4; 0 ]],
[ [ 0; 1; 0; 0 ]].
Also, the line ℓ (recall (<ref>))
is a line of saddle equilibria
of (<ref>),
since the eigenvalues are
λ_s = -√(2)^1/4, λ_u = √(2)^1/4, λ_c = 0,0.
The associated stable, unstable, and center subspaces are
𝔼_s = span{[ [ 0; 2√(2)^-1/4; 4; 3 q_1 ]] },
𝔼_u = span{[ [ 0; -2√(2)^-1/4; 4; 3 q_1 ]] },
𝔼_c = span{[ [ 0; 0; 0; 1 ]], [ [ 4/4-3√()q_1^2; 2q_1/4-3√()q_1^2; 0; 0 ]] }.
Hence, (<ref>) has a two-dimensional center manifold
N ≡ W^c(ℓ) ∩{ r_1=0 }.
Moreover, for δ_1>0, N is unique in the half-space { p_1 < 0 }.
Also, δ_1 increases on N
in this half-space.
Proceeding, we present the main results for the dynamics in the { r_1=0}∩{δ_1=0} hyperplane. The equations here are given by system (<ref>) with the equation ȧ_1= -3/2√() p_1 a_1 appended. Since the (p_1,v_1,q_1) subsystem is independent of a_1 on this hyperplane, the dynamics in these three variables are the same as the dynamics for (<ref>), and then one may solve for a_1 using quadrature.
The line I given by (<ref>) is invariant, and E_± are fixed points on it.
The equilibrium E_+ is stable with eigenvalues
-√(6)^1/4,
-2√(2/3)^1/4,
-√(3/2)^1/4, and -√(3/2)^1/4,
and eigenvectors
w_1^+ = [ [ 1; 0; 0; 0 ]],
w_2^+ = [ [ -√(3); √(2)^1/4; 0; 0 ]],
w_3^+ = [ [ 0; 0; 1; 0 ]],
w_4^+ = [ [ 0; 0; 0; 1 ]].
The equilibrium E_- is unstable with eigenvalues
√(3/2)^1/4,
√(3/2)^1/4,
2√(2/3)^1/4, √(6)^1/4,
and eigenvectors
w_1^- = [ [ 0; 0; 1; 0 ]],
w_2^- = [ [ 0; 0; 0; 1 ]],
w_3^- = [ [ √(3); √(2)^1/4; 0; 0 ]],
w_4^- = [ [ 1; 0; 0; 0 ]].
There is also a two-dimensional surface of equilibria
𝒮̂={ r_1=0, δ_1=0, p_1=0,
v_1=1,
q_1 ∈ℝ, a_1 ∈ℝ}.
The eigenvalues of the Jacobian at points on 𝒮̂ are
λ_s = -√(2)^1/4, λ_u = √(2)^1/4, λ_c = 0,0.
Hence, 𝒮̂
is a surface of saddle points.
The associated stable, unstable, and center subspaces are
𝔼_s = span{[ [ 2√(2); 4^1/4; 3 ^1/4q_1; 3 ^1/4 a_1 ]] },
𝔼_u = span{[ [ -2√(2); 4^1/4; 3 ^1/4q_1; 3 ^1/4 a_1 ]] },
𝔼_c
= span{[ [ 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 1 ]] }.
Therefore, this subsystem has a two-dimensional center manifold W^c(𝒮̂) ∩{ r_1=0 }∩{δ_1=0 }.
It emanates from ℓ, and W^c(𝒮̂) ∩{ r_1=0 }∩{δ_1 =0 } contains
W^c(ℓ) ∩{ r_1=0 }∩{δ_1 =0 }∩{ a_1=0 }.
Up next is the invariant hyperplane {δ_1 = 0 }.
The full system (<ref>) is fifth-order here:
ṙ_1 = 1/2√() p_1 r_1
ṗ_1 = 1 - v_1 - 3/2√() p_1^2
+ 1/3√() r_1^2
v̇_1 = -2 √() p_1 v_1
q̇_1 = -3/2√() p_1 q_1
ȧ_1 = 3/2√() p_1 a_1.
The line I given by (<ref>) is invariant, and E_± are fixed points on it.
The equilibrium E_+ is a saddle with stable eigenvalues
-√(6)^1/4,
-2√(2/3)^1/4,
-√(3/2)^1/4, and -√(3/2)^1/4, and unstable eigenvalue 1/√(6)^1/4.
The associated stable and unstable eigenspaces are
𝔼_s = span{[ [ 0; 1; 0; 0; 0 ]],
[ [ 0; -√(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]]
}, 𝔼_u = span{[ [ 1; 0; 0; 0; 0 ]] }.
The equilibrium E_- is a saddle with stable eigenvalue
-1/√(6)^1/4
and unstable eigenvalues √(3/2)^1/4,
√(3/2)^1/4,
2√(2/3)^1/4, √(6)^1/4,
The associated stable and unstable eigenspaces are
𝔼_s =span{[ [ 1; 0; 0; 0; 0 ]] },
𝔼_u = span{[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]],
[ [ 0; √(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 1; 0; 0; 0 ]]
}.
There is also a two-dimensional surface of equilibria
𝒮̃={ r_1=0, δ_1=0, p_1=0,
v_1=1 + 1/3√() r_1^2,
q_1 ∈ℝ, a_1 ∈ℝ}
The eigenvalues of the Jacobian at points on 𝒮̃ are
λ_s = -√(2√() + r_1^2), λ_u = √(2√() + r_1^2), λ_c = 0, 0, 0.
Hence, 𝒮̃
is a surface of saddle points.
The associated stable and unstable subspaces are
𝔼_s = span{[ [ -√() r_1; 2√(2√() + r_1^2); 4/3√()(3+√()r_1^2); 3 √()q_1; 3 √() a_1 ]] },
𝔼_u = span{[ [ -√() r_1; -2√(2√() + r_1^2); 4/3√()(3 + √()r_1^2); 3 √() q_1; 3 √() a_1 ]] }.
The associated center subspace is
𝔼_c
= span{[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]],
[ [ 3; 0; 2√()r_1; 0; 0 ]]
}.
Therefore, this subsystem has a three-dimensional center manifold W^c(𝒮̃) ∩{ r_1=0 }∩{δ_1=0 }.
It emanates from ℓ, coincides with the critical manifold S, and W^c(𝒮̃) ∩{δ_1 =0 } contains
W^c(ℓ) ∩{δ_1 =0 }∩{ a_1=0 }.
In the invariant hyperplane { r_1=0 }, system (<ref>) is the following fifth-order set of equations:
d δ_1/dy_1 = -√()p_1 δ_1
dp_1/dy_1 = 1-v_1-3/2√()p_1^2
dv_1/dy_1 = √()(-2p_1 v_1 + δ_1 q_1 )
dq_1/dy_1 = -3/2√() p_1 q_1 +δ_1
da_1/dy_1 = -3/2√() p_1 a_1.
The line I given by (<ref>) is still invariant for this system, and E_± remain as fixed points on I, recall (<ref>).
The equilibrium E_+ is stable with spectrum
σ^s_+ = { -√(6)^1/4,
-2√(2/3)^1/4,
-√(3/2)^1/4,
-√(3/2)^1/4,
-√(2/3)^1/4}
At E_+, the stable subspace is
𝔼^s = span {[ [ 0; 1; 0; 0; 0 ]],
[ [ 0; -√(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]],
[ [ ^1/4; 0; 0; √(6); 0 ]]
}
The other equilibrium, E_-, is unstable with spectrum given by
σ_-^u = {√(2/3)^1/4,
√(3/2)^1/4,
√(3/2)^1/4,
2√(2/3)^1/4, √(6)^1/4}.
At E_-, the unstable eigenspace is
𝔼^u = span {[ [ -^1/4; 0; 0; √(6); 0 ]],
[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]],
[ [ 0; √(3); √(2)^1/4; 0; 0 ]],
[ [ 0; 1; 0; 0; 0 ]]
}.
There is also a manifold of equilibria
𝒮_0 = { r_1=0, δ_1=0, p_1=0, v_1=1, q_1 ∈ℝ, a_1 ∈ℝ}.
It is a manifold of saddle fixed points, since the eigenvalues are
λ_s = -√(2)^1/4, λ_u = √(2)^1/4,
λ_c = 0, 0, 0.
The associated eigenspaces are
𝔼^s = span {[ [ 0; 2 √(2); 4^1/4; 3 ^1/4 q_1; 3^1/4a_1 ]]
},
𝔼^u = span {[ [ 0; - 2 √(2); 4^1/4; 3^1/4 q_1; 3^1/4a_1 ]] },
𝔼^c = span {[ [ 0; 0; 0; 1; 0 ]],
[ [ 0; 0; 0; 0; 1 ]],
[ [ 4/4-3√()q_1^2; 2q_1/4-3√()q_1^2; 0; 0; 0 ]]
}.
Therefore, in the hyperplane { r_1=0 }, there is
a three-dimensional
center manifold
N = W^c(ℓ).
This manifold contains the surface 𝒮_0 of equilibria.
Moreover, as shown in Section <ref>, these key manifolds persist in the full system (<ref>) for 0< r_1 ≪ 1.
§ THE PROOF OF LEMMA <REF>
The proof is split into two steps. First, we calculate a regular perturbation expansion in powers of r_2 of the solution Γ_0 in the H_2=0 level set. This expansion is valid in K_2 on arbitrary finite intervals of the independent variable y_2. Second, we identify the fixed points on the equator of the hemisphere to which this perturbed solution limits as y_2 →±∞, thereby establishing the persistence of the heteroclinic connection, which corresponds to the intersection of the two invariant manifolds.
All calculations in these steps were performed both by hand and symbolically.
Step 1.
Let
Γ_δ = Γ_0 + r_2 Γ_1 + r_2^2 Γ_2 + r_2^3 Γ_3 + 𝒪(r_2^4), with Γ_k=(u_2k,p_2k,v_2k,q_2k), k=1,2,….
We substitute this expansion into (<ref>) and solve for Γ_k, order by order in powers of r_2. At 𝒪(r_2^0), we recover Γ_0, as given by (<ref>).
At 𝒪(r_2),
the equations are
u_21' = √() p_21
p_21' = 2 u_20u_21 - v_21
v_21' = √() q_21
q_21' = u_21 - a_2.
In the algebraic solution, there is a free parameter χ_21∈ℝ:
u_21(y_2) = χ_21 y_2 + 3/2 a_2,
p_21(y_2)=1/√()χ_21,
v_21(y_2) =1/6√()χ_21y_2^3 + 1/4√() a_2 y_2^2,
q_21(y_2) =1/2χ_21 y_2^2 +1/2 a_2 y_2.
At 𝒪(r_2^2), the equations are
u_22' = √() p_22
p_22' = 2 u_20u_22 + u_21^2 - v_22 + 1/3√() u_20^3
v_22' = √() q_22
q_22' = u_22.
Here, there is also a free parameter χ_22∈ℝ
in the algebraic solution. We find
u_22(y_2) = 3/√()χ_21^2 + 5/96√() + χ_22 y_2 - 5^3/2/3,456 y_2^4,
p_22(y_2) =1/√()χ_22 - 5/864 y_2^3,
v_22(y_2) =9/4 a_2^2 + 3 a_2 χ_21 y_2 + ( 3/2χ_21^2 + 5/192) y_2^2 + 1/6√()χ_22y_2^3 - ^2/20,736 y_2^6,
q_22(y_2) =3/√() a_2 χ_21
+(
3/√()χ_21^2 + 5√()/96) y_2 + 1/2χ_22 y_2^2 - ^3/2/3,456 y_2^5.
At 𝒪(r_2^3), the equations are
u_23' = √() p_23
p_23' = 2 u_20u_23 + 2u_21u_22 - v_23 + √() u_20^2 u_21
v_23' = √() q_23
q_23' = u_23.
The algebraic solution at this order has the free parameter χ_23∈ℝ:
u_23(y_2) = 6/√()χ_21χ_22 + χ_23 y_2 - 7/96 a_2 y_2^2 - 5/144χ_21 y_2^3,
p_23(y_2) =1/√()χ_23 - 7√()/48 a_2 y_2 - 5√()/48χ_21 y_2^2,
v_23(y_2) = a_2 ( 9/√()χ_21^2 + 29 √()/96)
+( 3 a_2 χ_22 + 6/√()χ_21^3 + 5√()/16χ_21) y_2
+ 3 χ_21χ_22 y_2^2 + √()/6χ_23 y_2^3 - 7 ^3/2/1,152a_2 y_2^4
-^3/2/576χ_21 y_2^5,
q_23(y_2) =1/√()( 3 a_2 χ_22 + 6/√()χ_21^3 + 5√()/16χ_21)
+6/√()χ_21χ_22 y_2
+ 1/2χ_23 y_2^2 - 7/288 a_2 y_2^3 - 5/576χ_21 y_2^4.
Step 2.
We substitute the expansions for (u_2,p_2,v_2,q_2) from Step 1 into the Hamiltonian H_2 given by (<ref>). After lengthy calculations, we find that all of the terms that depend on powers of y_2 vanish, and one is left with:
H_2 |_Γ_δ =
-1/12 a_2 r_2 - 5√()/576 r_2^2
-1/6√()a_2^2 r_2^4
+𝒪(r_2^5).
Next, we multiply H_2 by 12√() r_2^2, recall that
ã=√()r_2^3 a_2 by (<ref>),
and impose H_2 |_Γ_δ = 0. This yields
ã = - 5/48 r_2^4 + 𝒪(r_2^6).
Finally, we recall that δ=r_2^2 in chart K_2 and a=1+ã by (<ref>).
Therefore, we have derived the formula a_c(δ) = 1 - 5/48δ^2 + 𝒪(δ^3), given in (<ref>).
AACS2024
A. Asch, M. Avery, A. Cortez, and A. Scheel,
Slow passage through the Busse balloon
–Predicting steps on the Eckhaus staircase,
Eur. J. Appl. Math. (2024) in press.
ADK2017 D. Avitabile, M. Desroches, and E. Knobloch, Spatiotemporal canards in neural field equations, Phys. Rev. E 95 (2017), 042205.
ADKK2017
D. Avitabile, M. Desroches, E. Knobloch, and M. Krupa, Ducks in space: From nonlinear absolute instability to noise-sustained structures in a pattern-forming system, Proc. Roy. Soc. A 473 (2017), 20170018.
ADVW2020
D. Avitabile, M. Desroches, R. Veltz, and M. Wechselberger, Local Theory for Spatio-Temporal Canards and Delayed Bifurcations,
SIAM J. Math. An. 52 (2020), 5703–5747.
BCDD1981
E. Benoît, J.-L. Callot, F. Diener, and M. Diener,
Chasse au canard,
Collectanea Mathematica 31-32 (1981), 37–119.
B2013
M. Brøns,
An iterative method for the canard explosion in general planar systems,
Discrete
and Continuous Dynamical Systems Supplement 2013 (2013), 77–83.
BBE1991
M. Brøns and K. Bar-Eli,
Canard explosion and excitation in a model of the Belousov-Zhabotinskii reaction,
J. Phys. Chem., 95 (1991), 8706-8713.
Brons2006
M. Brøns, M. Krupa, and M. Wechselberger,
Mixed Mode Oscillations Due to the Generalized Canard Phenomenon,
Fields Institute Communications 49 (2006), 39–63.
BDHL2023
C. Brown, G. Derks, P. van Heijster, and D.J.B. Lloyd,
Analysing transitions from a Turing
instability to large periodic patterns
in a reaction-diffusion system,
Nonlinearity 36 (2023), 6839–6878.
Buchholtz1995
F. Buchholtz, M. Dolnik, and I.R. Epstein,
Diffusion-induced instabilities near a canard,
J. Phys. Chem. 99 (1995), 15093–15101.
Buric2006
L. Buřič, A. Klíč, and L. Purmová,
Canard solutions and travelling waves in the spruce budworm population model,
Appl. Math. Comput. 183 (2006), 1039–1051.
B1978
F.H. Busse,
Nonlinear properties of thermal convection,
Rep. Prog. Phys. 41 (1978), 1929–1967.
BC1979
F.H. Busse and R.M. Clever
Instabilities of convection rolls in a fluid of moderate Prandtl number,
J. Fluid Mech. 91 (1979), 319–335.
BW1971
F.H. Busse and J.A. Whitehead,
Instabilities of convection rolls in a high Prandtl number fluid,
J. Fluid Mech. 47 (1971), 305–320.
C1981
J. Carr,
Applications of centre manifold theory, Applied Mathematical Sciences series, 35, Springer Verlag,
New York (1981).
CKW2017
P. Carter, E. Knobloch, and M. Wechselberger,
Transonic canards and stellar wind,
Nonlinearity 30 (2017), 1006–1033.
CH1993
M.C. Cross and P.C. Hohenberg,
Pattern formation outside of equilibrium,
Rev. Mod. Phys. 65 (1993) 851–1112.
CN1984
M.C. Cross and A.C. Newell
Convection patterns in large aspect ratio systems,
Physica D 10 (1984), 299–328.
DPK2009
P. de Maesschalck, N. Popovic, and T.J. Kaper,
Canards and bifurcation delays of spatially homogeneous and inhomogeneous types in reaction-diffusion equations,
Advances Differential Equations 14 (2009) 943–962.
DKO2010
M. Desroches, B. Krauskopf, and H. M. Osinga, Numerical continuation
of canard orbits in slow-fast dynamical systems,
Nonlinearity 23 (2010), 739–765.
D1984
M.J. Diener, The canard unchained or how fast/slow dynamical systems
bifurcate,
Math. Intell. 6, (1984), 38–49.
D2019
A. Doelman,
Pattern formation in reaction-diffusion systems – an
explicit approach,
in Complexity Science, M. Peletier, R. van
Santen, and E. Steur (eds.), 129-182 (2019)
World Scientific.
DRS12
A. Doelman, J.D.M. Rademacher and S. van der Stelt,
Hopf dances near the tips of Busse balloons,
Disc. Cont. Dyn. Syst. Series S
5 (2012) 61–92.
AUTO
E.J. Doedel, A.R. Champneys, T.F. Fairgrieve, Y.A. Kuznetsov, K.E. Oldeman,
R.C. Paffenroth, B. Sandstede, X.J. Wang, and C. Zhang,
AUTO-07P: Continuation and bifurcation software for ordinary differential equations, Technical
Report, Concordia University, Montreal, Canada (2007).
DR1996
F. Dumortier and R. Roussarie,
Canard cycles and center manifolds, Memoirs of the AMS 557 (1996), Amer. Math. Soc, Providence, Rhode Island.
E1965
W. Eckhaus,
Studies in Non-Linear Stability Theory,
Springer Tracts in Natural Philosophy 6,
Springer, Berlin (1965).
E1983
W. Eckhaus,
Relaxation oscillations including a standard chase on French ducks, in Asymptotic Analysis II, F. Verhulst (ed.) Springer, Berlin (1983), 449–497.
E1993
W. Eckhaus,
The Ginzburg-Landau manifold is an attractor,
J. Nonlin. Sci. 3 (1993), 329–348.
EK2005
L. Edelstein-Keshet,
Mathematical Models in Biology, Classics in Applied Mathematics, Series Number 46, Society Industrial Applied Mathematics, Philadelphia (2005).
EHKPPZ2022
M. Engel, F. Hummel, C. Kuehn, N. Popović, M. Ptashnyk and T. Zacharis,
Geometric analysis of fast-slow PDEs with fold singularities,
preprint (2022), arXiv:2207.06134
EP1998
I.R. Epstein and J.A. Pojman,
An Introduction to Nonlinear Chemical Dynamics: Oscillations, Waves, Patterns, and Chaos,
Oxford University Press, Oxford, UK (1998).
Fenichel1979
N. Fenichel,
Geometric singular perturbation theory for ordinary differential equations,
J. Differ. Eq.
31 (1979), 53–98.
GV2007
J. Galan-Vioque and A. Vanderbauwhede, Continuation of periodic orbits in symmetric Hamiltonian systems,
in Numerical Continuation Methods for Dynamical Systems,
B. Krauskopf, H.M. Osinga, and J. Galan-Vioque (eds.), Springer (2007) 269–299.
GZK2018
P. Gandhi, Y.R. Zelnik, and E. Knobloch,
Spatially localized structures in the Gray–Scott model,
Philos. Trans. A: Math. Phys. Eng. Sci.
376
(2018), 20170375.
GKSV2023
R. Goh, T.J. Kaper, A. Scheel, and T. Vo,
Fronts in the wake of a parameter ramp: slow passage through pitchfork and
fold bifurcations,
SIAM J. Appl. Dyn. Sys. 22 (2023),
2312–2356.
GKS2024
R. Goh, T.J. Kaper, and A. Scheel,
Pitchfork bifurcation along a slow parameter
ramp: Coherent structures in the critical scaling,
Studies Appl. Math. (2024), 1–21.
GKV2022
R. Goh, T.J. Kaper, and T. Vo,
Delayed Hopf Bifurcation and Space–Time Buffer Curves in the Complex Ginzburg–Landau Equation,
IMA J. Appl. Math. 87 (2022), 131–186.
H1980
J. Hale,
Ordinary Differential Equations,
Krieger Pub., Malabar, Florida (1980).
HI2011
M. Haragus and G. Iooss,
Local Bifurcations, Center Manifolds, and Normal Forms in Infinite-dimensional Dynamical Systems,
Springer (2011).
H1991
A. van Harten,
On the validity of the Ginzburg-Landau equation,
J. Nonlin. Sci. 1 (1991), 397–422.
Hasan2018
C.R. Hasan, B. Krauskopf and H.M. Osinga,
Saddle Slow Manifolds and Canard Orbits in ℝ^4 and Application to the Full Hodgkin-Huxley Model,
J. Math. Neurosci. 8 (2018), 5.
HJK2022
F. Hummel, S. Jelbart and C. Kuehn,
Geometric blow-up of a dynamic Turing instability in the Swift-Hohenberg equation, preprint
(2022) arXiv:2207.03967.
IMD1989
G. Iooss, A. Mielke, and Y. Demay,
Theory of Ginzburg-Landau problems in hydrodynamic stability problems,
European J. Mech. B Fluids 8
(1989), 229–268.
IP1993
G. Iooss and M.C. Peroueme,
Perturbed homoclinic solutions in reversible 1:1 resonance vector fields,
J. Diff. Eq. 103
(1993), 62–88.
JK2024
S. Jelbart and C. Kuehn,
A formal geometric blow-up method for pattern forming systems,
Contemp. Math. (2024), to appear.
Jones1995
C.K.R.T. Jones,
Geometric singular perturbation theory,
in Dynamical Systems, R. Johnson, ed., Lecture Notes in Math. 1609, Springer, New York, 1995, pp. 44–118.
KV2018
T.J. Kaper and T. Vo,
Delayed loss of stability due to the slow passage through Hopf bifurcations in reaction-diffusion equations.
Chaos 28 (2018), 091103.
KV2021
T.J. Kaper and T. Vo,
A new class of chimeras in locally coupled oscillators with small-amplitude, high-frequency asynchrony and large-amplitude, low-frequency synchrony,
Chaos 31 (2021), 123111.
KLSBE2021
C. Konow, Z. Li, S. Shepherd, D. Bullara, and I.R. Epstein,
Influence of survival, promotion, and growth on pattern formation in zebrafish skin, Scientific Reports 11 (2021), 9864.
KS2001
M. Krupa and P. Szmolyan,
Extending geometric singular perturbation theory to nonhyperbolic points–fold and canard points in two dimensions,
SIAM J. Math. Anal. 33 (2001), 286–314.
KW2010
M. Krupa and M. Wechselberger,
Local analysis near a folded saddle-node singularity,
J. Diff. Eq.
248 (2010), 2841–2888.
M1982
H. Meinhardt,
Models of Biological Pattern Formation,
Academic Press, New York (1982).
MG2000
H. Meinhardt and A. Gierer,
Pattern formation by local self-activation and lateral inhibition,
Bioessays 22 (2000), 753–760.
MKKR1984
E.F. Mishchenko, Yu.S. Kolesov, A.Yu. Kolesov, and N.Kh. Rozov,
Asymptotic Methods in Singularly Perturbed Systems,
Monographs in Contemporary Mathematics; Consultants Bureau, Plenum Publishing, NY (1984).
Mitry2017
J. Mitry and M. Wechselberger,
Folded saddles and faux canards,
SIAM J. Appl. Dyn. Syst. 16 (2017), 546–596.
Moehlis
J. Moehlis,
Canards in a surface oxidation reaction,
J. Nonlinear Sci. 12 (2002), 319–345.
MDK2001
D.S. Morgan, A. Doelman, and T.J. Kaper,
Stationary periodic patterns in the 1D Gray–Scott model,
Meth. Appl. An. 7 (2001), 105–150.
M1993
J.D. Murray,
Mathematical Biology,
Biomathematics Texts, Springer, Berlin 19
(1993).
P1921
B. van der Pol,
A theory of the amplitude of free and forced triode vibrations,
Rad. Rev. 1 (1920), 701–710.
P1926
B. van der Pol,
On relaxation-oscillations,
Philos. Mag. Ser. VII 2 (1926), 978.
RKZE2003
H.G. Rotstein, N. Kopell, A.M. Zhabotinsky, and I.R. Epstein,
Canard phenomenon and localization of oscillations in the Belousov-Zhabotinsky reaction with global feedback,
J. Chem. Phys. 119 (2003), 8824–8832.
Roussel1990
M.R. Roussel and S.J. Fraser,
Geometry of the steady-state approximation: Perturbation and accelerated convergence methods,
J. Chem. Phys. 93 (1990), 1072–1081.
S2003
W. van Saarloos,
Front propagation into unstable states,
Physics Reports
386 (2003), 29–222.
SS01
B. Sandstede and A. Scheel,
Essential instabilities of fronts: bifurcation, and bifurcation failure,
Dynamical Systems
16 (2001) 1–28.
SU2017
G. Schneider and H. Uecker,
Nonlinear PDEs: A Dynamical Systems Approach,
Grad. Studies Math. 182,
American Mathematical Society, Providence (2017).
SW2022
G. Schneider and M. Winter,
The amplitude system for a simultaneous short-wave Turing and long-wave Hopf instability,
Disc. Cont. Dyn. Sys. S
15 (2022) 2657–2672.
Szmolyan2001
P. Szmolyan and M. Wechselberger,
Canards in ℝ^3,
J. Differ. Eq.
177 (2001) 419–453.
T1952
A.M. Turing,
The chemical basis of morphogenesis,
Phil. Trans. Royal Soc. London B Bio. Sciences 237 (641) (1952), 37–72.
VSC2023
E. Vilar-Sepulveda and A. Champneys, Degenerate Turing bifurcation and the birth of localized patterns in activator-Inhibitor systems, SIAM J. Appl. Dyn. Sys. 22 (2023), 1673–1709.
VBK2020
T. Vo, R. Bertram, and T.J. Kaper,
Multi-mode attractors and spatio-temporal canards,
Physica D 411 (2020), 132544.
W1997
D. Walgraef,
Spatio-temporal Pattern Formation: With examples from physics, chemistry, and material science,
Springer, New York (1997).
ZS1984
A.K. Zvonkin and M.A. Shubin,
Non-standard analysis and singular perturbations of ordinary differential equations,
Russ. Math. Surveys
39 (1984), 69–132.
|
http://arxiv.org/abs/2409.03489v1 | 20240905125639 | Sparsifying Parametric Models with L0 Regularization | [
"Nicolò Botteghi",
"Urban Fasel"
] | cs.LG | [
"cs.LG"
] |
Sparsifying Parametric Models with L0 Regularization
Nicolò Botteghi and Urban Fasel
September 9, 2024
=====================================================
This document contains an educational introduction to the problem of sparsifying parametric models with L_0 regularization <cit.>. We utilize this approach together with dictionary learning to learn sparse polynomial policies for deep reinforcement learning to control parametric partial differential equations <cit.>.
The document is organized as follows: in Section <ref> we introduce the L_0 regularization that we use in our method introduced in <cit.>. In Section <ref>, we introduce the general problem setting for sparsifying parametric models. In Section <ref>, we discuss in more detail all the critical steps to derive the L_0 regularization, and in Section <ref>, we show different ways to use the L_0 regularization in deep reinforcement learning. The code and a tutorial are provided here: <https://github.com/nicob15/Sparsifying-Parametric-Models-with-L0>.
§ SPARSIFYING NEURAL NETWORK LAYERS WITH L_0 REGULARIZATION
To sparsify the weight/coefficient matrix Ξ, the differentiable L_0 regularization method introduced in <cit.> can be used. The method relaxes the discrete nature of L_0 to allow efficient and continuous optimization.
Let d be a continuous random variable distributed according to a distribution p(d| ψ), where ψ indicates the parameters of p(d| ψ). Given a sample from d∼ p(d|ψ), we can define the hard-sigmoid rectification:
z = min(1, max(0, d)).
Equation (<ref>) allows z, i.e., the learnable binary gate, to be exactly zero.
Additionally, we can still compute the probability of the gate being active, i.e., non-zero, by utilizing the cumulative distribution function P:
p(z≠ 0| ψ) = 1 - P(d≤ 0|ψ).
We choose as candidate distribution a binary concrete <cit.>. Thus, the random variable d is distributed in (0,1) with probability density p(d|ψ), cumulative distribution function P(d|ψ), and learnable parameters ψ=[logα, β], with logα the location and β the temperature parameter. The distribution can be stretched to the interval (γ, ζ), where γ < 0 and ζ > 1. Then, the hard-sigmoid can be applied to the stretched samples, analogously to Equation (<ref>):
u ∼𝒰(0, 1) ,
d = σ((logu - log (1-u) + logα)/ β) ,
d̅ = d(ζ - γ) + γ ,
z = min(1, max(0, d̅)) ,
where σ corresponds to the sigmoid activation function.
We can now optimize the parameters ψ of the distribution by minimizing the probability of the gate being active (see Equation (<ref>)). This optimization problem can be seen as stretching the distribution in (0,1). Using Equation (<ref>) and the binary concrete distribution in Equation (<ref>), we can conveniently introduce the L_0 regularization loss as:
L_0(ψ) = ∑_j=1^|ξ| (1-P_d̅_j(0|ψ)) = ∑_j=1^|ξ|σ(logα_j - βlog(-γ/ζ)),
where ξ are the parameters of the model we want to sparsify. At test time, we can estimate the sparse parameters ξ^0 by:
z = min(1, max(0, σ(logα)(ζ - γ) + γ)) ,
ξ^0 = ξ⊙z.
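To make the recipe above concrete, the following minimal PyTorch sketch implements the hard-concrete gates and the penalty of Equation (<ref>). The class name L0Gate, the initialization from a desired initial drop rate, and the toy error term in the usage example are illustrative choices, not the L0Dense layer used later in this document.
[language=Python, caption=Minimal hard-concrete gate sketch (illustrative).]
import math
import torch
import torch.nn as nn

class L0Gate(nn.Module):
    # hard-concrete gates z for a parameter vector of size num_params
    def __init__(self, num_params, droprate_init=0.5, beta=2./3., gamma=-0.1, zeta=1.1):
        super().__init__()
        self.beta, self.gamma, self.zeta = beta, gamma, zeta
        # location parameter log(alpha), initialized from the desired initial drop rate
        init = math.log(1 - droprate_init) - math.log(droprate_init)
        self.log_alpha = nn.Parameter(init * torch.ones(num_params))

    def sample_gates(self):
        # reparametrized sample: uniform noise -> binary concrete -> stretch -> hard-sigmoid
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        d = torch.sigmoid((torch.log(u) - torch.log(1 - u) + self.log_alpha) / self.beta)
        d_bar = d * (self.zeta - self.gamma) + self.gamma
        return torch.clamp(d_bar, 0.0, 1.0)

    def l0_penalty(self):
        # sum_j P(z_j != 0) = sum_j sigmoid(log_alpha_j - beta * log(-gamma / zeta))
        return torch.sigmoid(self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)).sum()

    def deterministic_gates(self):
        # test-time estimate of the gates (no sampling)
        d_bar = torch.sigmoid(self.log_alpha) * (self.zeta - self.gamma) + self.gamma
        return torch.clamp(d_bar, 0.0, 1.0)

# usage sketch: sparsify a toy parameter vector xi
gate = L0Gate(num_params=10)
xi = nn.Parameter(torch.randn(10))
z = gate.sample_gates()                                    # stochastic gates during training
xi_sparse = xi * z                                         # xi^0 = xi (element-wise) z
loss = (xi_sparse ** 2).mean() + 1e-2 * gate.l0_penalty()  # toy error term + lambda * L_0
loss.backward()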
In the following two sections, we introduce the above idea and derive the concepts in more detail.
§ PROBLEM SETTINGS
Suppose we have a dataset 𝒟 of input and output pairs {(x_1, y_1), ⋯, (x_N, y_N)}. We consider a standard supervised learning setting, e.g., regression or classification, with an L_0 regularization to promote sparsity of the parameters ξ of a generic parametric model h:𝒳→𝒴; x ↦ h(x;ξ), e.g., a neural network. We can write the loss function for such a problem as:
ℒ(ξ) = 1/N∑_i=1^N F(h(x_i;ξ), y_i) + λ ||ξ||_0, ||ξ||_0 = ∑_j=1^|ξ|𝕀[ξ_j ≠ 0] ,
= ℒ_E + λℒ_C ,
where F(·) corresponds to a generic loss function, e.g., mean-squared error or cross-entropy, |ξ| is the dimensionality of the parameter vector ξ, λ is a weighting/scaling factor, and 𝕀 is the indicator function. The first term of the loss function ℒ_E corresponds to the error loss that measures how well the model fits the training data, while ℒ_C corresponds to the complexity loss that measures the complexity (or sparsity) of the model. The optimal parameters ξ^* can be found as:
ξ^*=_ξℒ(ξ).
The L_0 norm penalizes nonzero entries of the parameter vector and encourages sparsity in ξ^*.
Unfortunately, the optimization constitutes an intractable brute force 2^|ξ| combinatorial search due to the nondifferentiability of the L_0 complexity loss function. Usually, the L_1 (Lasso) or L_2 norms are used as proxies for the L_0 norm, as they are differentiable and can be used with gradient-based optimization techniques. Examples of these norms are visualized in Figure <ref>.
However, L_1 and L_2 induce an undesirable shrinkage of the parameter values that is not introduced when using L_0.
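A one-dimensional least-squares problem illustrates this difference (a toy example with arbitrary numbers, not part of the original formulation): the L_1 solution soft-thresholds and therefore biases the surviving weight towards zero by λ/2, while the L_0 solution hard-thresholds and leaves surviving weights untouched.
[language=Python, caption=Toy comparison of L_1 and L_0 penalties (illustrative).]
import numpy as np

# minimize (w - a)^2 + lam * |w|        -> soft threshold (shrinks surviving weights)
# minimize (w - a)^2 + lam * 1[w != 0]  -> hard threshold (no shrinkage)
def l1_solution(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam / 2.0, 0.0)

def l0_solution(a, lam):
    return np.where(a ** 2 > lam, a, 0.0)

a, lam = np.array([0.05, 0.5, 2.0]), 0.3
print(l1_solution(a, lam))  # small weights are zeroed, surviving weights are shrunk by lam/2
print(l0_solution(a, lam))  # small weights are zeroed, surviving weights are kept exactly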
§ A GENERAL FRAMEWORK FOR L_0 REGULARIZATION
Consider the L_0 norm under reparametrization of ξ:
ξ^0_j = ξ_j z_j, z_j ∈{0, 1}, ξ_j ≠ 0, ||ξ||_0 = ∑_j=1^|ξ|z_j ,
where z_j corresponds to the binary gate that denotes whether a parameter is present or not. Under this reparametrization, the L_0 norm corresponds to the number of gates being active.
Using the Bernoulli distribution, we can reformulate the loss function in Equation (<ref>) as:
ℒ(ξ, π) = 𝔼_q(z|π)[1/N∑_i=1^N F(h(x_i;ξ⊙z), y_i)] + λ ||ξ||_0, ||ξ||_0 = ∑_j=1^|ξ|π_j ,
where ⊙ indicates the element-wise product.
Analogously to Equation (<ref>), we can find the optimal parameters ξ^* and π^* by solving:
ξ^*,π^* =_ξ, πℒ(ξ, π).
We could think of choosing a Bernoulli distribution over each gate z_j:
q(z_j|π_j) = Bern(π_j) .
[enhanced jigsaw,breakable,pad at break*=1mm,
colback=gray!5!white,colframe=gray!75!black,title=Bernoulli Distribution]
A Bernoulli distribution Bern(π) describes the distribution of a random variable X taking value 1 with probability π or value 0 with probability 1-π, with π being the (learnable) parameter of the distribution:
p(X=1) = 1-p(X=0) = π.
The optimization problem in Equations (<ref>) and (<ref>) is a special case of the variational lower bound over the parameters of the neural network involving a spike and slab prior <cit.>.
§.§ Spike and Slab Distribution and Relation to Variational Inference
The spike and slab distribution, shown in Figure <ref>, is considered the gold standard in sparsity-promoting Bayesian inference/linear regression.
It is defined as a mixture of a delta spike at 0 and a continuous distribution over the real line, e.g., a standard Gaussian:
p(z) = Bern(π), p(ξ|z=0)=δ(ξ), p(ξ|z=1)=𝒩(ξ|0,1).
The true posterior distribution under the spike and slab prior is intractable. However, we can rely on variational inference <cit.>.
Let q(ξ, z) be a spike and slab approximate posterior over the parameters ξ and the gate variables z. We can write the variational-free energy under the spike and slab prior and approximated posterior over a parameter vector ξ as:
ℱ = -𝔼_q(z)q(ξ|z)[log p(𝒟|ξ)] + ∑_j=1^|ξ|KL(q(z_j)||p(z_j))
+ ∑_j=1^|ξ|(q(z_j=1)KL(q(ξ_j | z_j=1)||p(ξ_j|z_j=1)))
+ ∑_j=1^|ξ|(q(z_j=0)KL(q(ξ_j | z_j=0)||p(ξ_j|z_j=0))) ,
where KL(q(ξ_j | z_j=0)||p(ξ_j|z_j=0))=0, since both the approximate posterior and the prior conditioned on z_j=0 are the same delta spike at zero, and the KL divergence between two identical distributions vanishes. Therefore, Equation (<ref>) can be rewritten as:
ℱ = -𝔼_q(z)q(ξ|z)[log p(𝒟|ξ)] + ∑_j=1^|ξ|KL(q(z_j)||p(z_j))
+ ∑_j=1^|ξ|(q(z_j=1)KL(q(ξ_j | z_j=1)||p(ξ_j|z_j=1))) ,
where the term KL(q(z_j)||p(z_j)) corresponds to the KL from the Bernoulli prior p(z_j) and the Bernoulli approximate posterior q(z_j), and KL(q(ξ_j | z_j=1)||p(ξ_j|z_j=1)) can be interpreted as the amount of information the parameter ξ_j contains about the data 𝒟, measured by the KL divergence from the prior p(ξ_j|z_j=1).
We can further simplify Equation (<ref>) by assuming, from an empirical Bayesian procedure, the existence of a hypothetical prior p(ξ_j|z_j=1) for each parameter ξ_j that adapts to q(ξ_j|z_j=1) in a way that we need approximately λ NATs (natural units of information) to transform p(ξ_j|z_j=1) to that particular q(ξ_j|z_j=1). Those λ NATs are thus the amount of information that q(ξ_j|z_j=1) can encode about the data if we had used p(ξ_j|z_j=1) as the prior. This assumption translates into KL(q(ξ_j | z_j=1)||p(ξ_j|z_j=1))=λ. The coefficient λ can be viewed as the amount of flexibility of that hypothetical prior.
Eventually, if we consider optimizing over ξ, instead of integrating, we can write the variational-free energy as:
ℱ = -𝔼_q(z)[log p(𝒟|ξ⊙z)] + ∑_j=1^|ξ|KL(q(z_j)||p(z_j)) + λ∑_j=1^|ξ|q(z_j=1) ,
where ξ corresponds to the optimized ξ. Additionally, by using the positivity of the KL divergence, we can obtain the variational lower bound as:
ℱ≥ -𝔼_q(z)[log p(𝒟|ξ⊙z)] + λ∑_j=1^|ξ|π_j ,
which is equivalent to Equation (<ref>) if we take the negative log-probability of the data to be equal to the loss ℒ(·). This shows that the minimization of the L_0 norm is very close to the variational lower bound involving a spike and slab distribution over the parameters and a fixed cost/penalty for the parameters when the gates are active. Note that, if we are interested in quantifying uncertainties over the gate variables z, we should optimize Equation (<ref>) (rather than (<ref>)) as this will properly penalize the entropy of q(z). Equation (<ref>) also allows to incorporate prior information about the behaviour of the gates (e.g., being active 10% of the time on average).
§.§ Efficient Gradient-based Optimization of the L_0 Norm
Minimizing Equation (<ref>) is still not straightforward. While the second term of the loss is easy to minimize, the first term is still challenging due to the discrete nature of the gates z, which does not allow for efficient gradient-based optimization. However, we can replace the Bernoulli distribution to smooth the objective function in Equation (<ref>) and allow for efficient gradient-based optimization of the expected L_0 norm along with zeros in the parameters ξ.
[enhanced jigsaw,breakable,pad at break*=1mm,
colback=gray!5!white,colframe=gray!75!black,title=Gradient Estimators]
In principle, it is possible to estimate the gradient using REINFORCE <cit.>. However, REINFORCE suffers from high variance of the estimates, requires auxiliary models, and multiple evaluations <cit.>. An alternative is to use the straight-through estimator <cit.> as done in <cit.> or the concrete distribution <cit.>. Unfortunately, the first method provides biased gradients (due to ignoring the Heaviside function in the likelihood during the gradient evaluation), while the second one does not allow for the gates to be exactly zero during the optimization (thus precluding the benefits of conditional computation <cit.>).
Let d be a continuous random variable with a distribution q(d) of parameters ψ. We can now define the gates z as hard-sigmoid rectifications of d such that:
d ∼ q(d|ψ) ,
z = min(1, max(0, d)) = g(d).
In this way, the gate is allowed to be exactly zero. Due to the underlying continuous random variable d, we can still compute the probability of the gate being nonzero, i.e., active, from the cumulative distribution function (CDF) Q(d|ψ):
q(z≠ 0| ψ) = 1 - Q(d≤ 0|ψ) ,
where 1 - Q(d≤ 0|ψ) corresponds to the probability of d being positive. This allows us to replace the binary Bernoulli gates in Equation (<ref>) with the CDF in Equation (<ref>):
ℒ(ξ,ψ) = 𝔼_q(d|ψ)[1/N∑_i=1^N F(h(x_i; ξ⊙ g(d)), y_i)] + λ∑_j=1^|ξ|(1-Q(d_j≤0|ψ_j)).
Analogously to Equation (<ref>), we can find the optimal parameters as:
ξ^*, ψ^* = _ξ, ψℒ(ξ,ψ)
For a continuous distribution q(d|ψ) that allows for the reparametrization trick <cit.>, we can express Equation (<ref>) as the expectation over a parameter-free noise distribution p(ϵ) and a deterministic and differentiable transformation f(·) of the parameters ψ and ϵ:
ℒ(ξ,ψ) = 𝔼_p(ϵ)[1/N∑_i=1^N F(h(x_i; ξ⊙ g(f(ψ, ϵ))), y_i)] + λ∑_j=1^|ξ|(1-Q(d_j≤0|ψ_j)) .
This allows for a Monte Carlo approximation to the generally intractable expectation over the noise distribution p(ϵ):
ℒ(ξ,ψ) = 1/L∑_l=1^L (1/N∑_i=1^N F(h(x_i; ξ⊙z^(l)), y_i)) + λ∑_j=1^|ξ|(1-Q(d_j≤0|ψ_j)) ,
z^(l) = g(f(ψ, ϵ^(l))) , ϵ^(l)∼ p(ϵ) .
Equation (<ref>) is differentiable with respect to ψ and can be used with (stochastic) gradient-based optimization, while still allowing the parameters to be exactly zero. Additionally, we can choose an appropriate smoothing distribution q(d|ψ) for our problem.
A choice that works well in practice is the binary concrete <cit.>, with CONCRETE derived from CONtinuous relaxation of disCRETE distribution <cit.>.
§.§.§ The Hard-Concrete Distribution
Assume we have a binary concrete random variable d distributed in (0, 1) with probability density function (PDF) q(d|ψ):
q(d|ψ) = βα d^-β-1(1-d)^-β-1/(α d^-β+(1-d)^-β)^2 ,
and a cumulative distribution function Q(d|ψ):
Q(d|ψ) = σ(β(log d - log(1-d)) - logα) ,
where the distribution has parameters (logα, β), with logα the location and β the temperature, and σ indicates the sigmoid activation function.
We can stretch the distribution to the interval (γ, ζ), with γ < 0 and ζ > 1, to obtain:
d̅ = d(ζ - γ) + γ.
The stretching of d induces the following probability density function and cumulative distribution function q(d̅|ψ) and Q(d̅|ψ):
q(d̅|ψ) = 1/|ζ - γ| q((d̅-γ)/(ζ - γ)|ψ) ,
Q(d̅|ψ) = Q((d̅-γ)/(ζ - γ)|ψ).
We can further rectify d̅ with the hard-sigmoid such that:
z = min(1, max(0, d̅)).
We obtain the following distribution over z, which is referred to as the hard-concrete distribution:
q(z|ψ) = Q_d̅(0|ψ) δ(z) + (1- Q_d̅(1|ψ))δ(z-1) + (Q_d̅(1|ψ) - Q_d̅(0|ψ)) q_d̅(z|d̅∈ (0,1), ψ) .
This distribution is composed of a delta peak at zero with probability Q_d̅(0|ψ), a delta peak at one with probability 1-Q_d̅(1|ψ), and a truncated version of q_d̅(d̅|ψ) in the range (0, 1).
In practice, we can sample a gate z from the hard-concrete distribution by first applying the reparametrization trick, then sampling from a uniform distribution 𝒰(0,1), feeding the sample through the inverse CDF of the binary concrete, stretching it, and finally passing it to the hard-sigmoid rectification:
u ∼𝒰(0, 1)
d = σ((log u - log(1-u) + logα)/β)
d̅ = d(ζ - γ) + γ
z = min(1, max(0, d̅)).
An example of a resulting gate z is shown in Figure <ref>.
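The point masses at zero and one can also be checked numerically. The short Monte Carlo sketch below (with arbitrary illustrative parameter values) samples gates following the four steps above and compares the empirical probability of an active gate with the closed-form expression σ(logα - βlog(-γ/ζ)) used in the L_0 penalty.
[language=Python, caption=Monte Carlo check of the hard-concrete gate (illustrative).]
import numpy as np

rng = np.random.default_rng(0)
log_alpha, beta = 0.0, 2.0 / 3.0
gamma_, zeta = -0.1, 1.1                 # stretch interval (gamma, zeta)

u = rng.uniform(1e-9, 1 - 1e-9, size=1_000_000)
d = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
d_stretched = d * (zeta - gamma_) + gamma_
z = np.clip(d_stretched, 0.0, 1.0)

print("P(z = 0)  ~", (z == 0.0).mean())  # point mass at zero
print("P(z = 1)  ~", (z == 1.0).mean())  # point mass at one
print("P(z != 0) ~", (z > 0.0).mean())   # empirical probability of an active gate

p_active = 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma_ / zeta))))
print("closed form:", p_active)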
We can derive the CDF in Equation (<ref>) as:
Q(d|ψ) = ∫_0^d q(x|ψ) dx ,
= ∫_0^d βα x^-β-1(1-x)^-β-1/(α x^-β+(1-x)^-β)^2 dx.
Solving the integral is very challenging due to the form of the PDF. However, we can rely on the Gumbel-Max trick <cit.>. We start from the observation that Bernoulli random variables are a special case of discrete distributions with states in {0, 1}. Consider a discrete distribution D ∼Discrete(α), with α∈ (0, ∞) the parameter of the discrete distribution, and a two-state discrete random variable on {0, 1}^2 such that D_1 + D_2 = 1:
Q(D_1=1) = α_1/α_1 + α_2.
[enhanced jigsaw,breakable,pad at break*=1mm,
colback=gray!5!white,colframe=gray!75!black,title=The Gumbel-Max Trick]
The Gumbel-Max trick proceeds as follows:
* sample U_k ∼𝒰(0, 1) with k=1, 2,
* find the index k maximizing -log(-log U_k) + logα_k, and
* set D_k=1 and the remaining D_i≠ k=0.
Therefore, if we apply the Gumbel-Max trick, the case of D_1=1 corresponds to the event:
-log(-log U_1) + logα_1 > -log(-log U_2) + logα_2 ,
G_1 + logα_1 > -log(-log U_2) + logα_2 .
The difference between -log(-log U_1)=G_1 and -log(-log U_2)=G_2 follows a logistic distribution L with CDF the logistic function σ(x) = 1/(1+exp(-x)):
G_1 - G_2 ∼ L(U) = log U - log(1-U) ,
where U ∼𝒰(0,1).
Therefore, we can rewrite Equation (<ref>) as:
Q(D_1=1) = Q(-log(-log U_1) + logα_1 > -log(-log U_2) + logα_2)
= Q(G_1 - G_2 + logα_1 - logα_2 > 0)
= Q( log U - log(1-U) + logα > 0) ,
where logα = logα_1 - logα_2.
Eventually, we can define the binary concrete random variable as:
Z = σ((log U - log(1-U) + logα)/β)
and its CDF as:
Q(z) = Q(Z ≤ z) = Q(σ((log U - log(1-U) + logα)/β) ≤ z)
= Q((log U - log(1-U) + logα)/β ≤ σ^-1(z))
= Q((log U - log(1-U) + logα)/β ≤ log(z/(1-z)))
= Q(log U - log(1-U) + logα ≤ βlog(z/(1-z)))
where σ^-1(z) = log(z/(1-z)) is the inverse of σ(z). Equation (<ref>) is equivalent to (<ref>).
§ DEEP REINFORCEMENT LEARNING WITH L_0 REGULARIZATION
To conclude the tutorial, we show how to use the differentiable L_0 regularization in the context of deep reinforcement learning to learn transition models, reward models, and control policies. We use as test case the pendulum environment from OpenAI Gym <cit.> <https://www.gymlibrary.dev/environments/classic_control/pendulum/>.
§.§ Learning Transition and Reward Models
§.§.§ Dataset Collection
We start by creating a training set of 1000 episodes and a test set of 100 episodes. These datasets are obtained by applying a random policy to the environment for 200 steps per episode. The code is shown and explained in Code <ref> and <ref>.
[language=Python, caption=Creation of training set., label=lst:train_set]
import gymnasium as gym
from utils.replay_buffer import ReplayBuffer
# initialize gym environment
env = gym.make('Pendulum-v1', g=9.81)
# set maximum number of episodes and steps
max_episodes = 1000
max_steps = 200
# get dimension of the observations, actions, and memory buffer
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
buf_dim = int(max_episodes*max_steps)
# initialize training buffer (we will store the training data here)
training_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=buf_dim)
# create training set
for episode in range(max_episodes):
# reset the environment at the beginning of each episode
observation, _ = env.reset()
for steps in range(max_steps+1):
# select action according to a random policy
action = env.action_space.sample()
# apply the action to the environment
next_observation, reward, terminated, truncated, _ = env.step(action)
done = terminated or truncated
# store data tuple in the training buffer
training_buffer.store(observation, action, reward, next_observation, done)
# set next observation as the current observation
observation = next_observation
if done:
break
[language=Python, caption=Creation of test set., label=lst:test_set]
# set maximum number of episodes
max_episodes_test = 100
# set dimension of memory buffer
buf_dim = int(max_episodes*max_steps)
# initialize testing buffer (we will store the test data here)
testing_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=buf_dim)
# create test set
for episode in range(max_episodes_test):
# reset the environment at the beginning of each episode
observation, _ = env.reset()
for steps in range(max_steps + 1):
# select action according to a random policy
action = env.action_space.sample()
# apply the action to the environment
next_observation, reward, terminated, truncated, _ = env.step(action)
done = terminated or truncated
# store data tuple in the testing buffer
testing_buffer.store(observation, action, reward, next_observation, done)
# set next observation as the current observation
observation = next_observation
if done:
break
§.§.§ Transition and reward models
We first show how to learn:
* a transition model T: 𝒮×𝒜→𝒮, mapping a state-action pair (s_t, a_t) to the next state s_t+1, and
* a reward model R: 𝒮×𝒜→ℝ, mapping a state-action pair (s_t, a_t) to the reward r_t.
We approximate T and R in three different ways:
* using a fully-connected neural network with parameters ξ_T and ξ_R, respectively,
* using a fully-connected neural network trained with L_0 regularization with parameters ξ_T and ψ_T and ξ_R and ψ_R, and
* using a SINDy-like model <cit.> trained with L_0 regularization with parameters ξ_T and ψ_T and ξ_R and ψ_R (this is analogous to the sparse policy introduced in <cit.>). We utilize a polynomial-feature library, a Fourier library, and a generalized library composed of polynomial and Fourier features.
The fully-connected neural network model is shown in Code <ref>. The model is composed of three layers, with ELU activations after the first two. We use the same architecture for learning both the transition and the reward model.
[language=Python, caption=Fully-connected neural network model., label=lst:FCNN_model]
import torch
import torch.nn as nn
import torch.nn.functional as F
class FCNN(nn.Module):
def __init__(self, input_dim=3, output_dim=1, h_dim=256, use_bias=True):
super(FCNN, self).__init__()
self.use_bias = use_bias
self.fc = nn.Linear(input_dim, h_dim, bias=use_bias)
self.fc1 = nn.Linear(h_dim, h_dim, bias=use_bias)
self.fc2 = nn.Linear(h_dim, output_dim, bias=use_bias)
def forward(self, obs, act):
# concatenate observation and action before feeding them to the input layer
x = torch.cat([obs, act], dim=1)
x = F.elu(self.fc(x))
x = F.elu(self.fc1(x))
# the output is either the next observation or the reward
out = self.fc2(x)
return out
Similarly to the fully-connected neural network, the sparse fully-connected neural network is composed of three layers with ELU activations. However, the layers include the L_0 mask introduced in <cit.>.
[language=Python, caption=Sparse fully-connected neural network model., label=lst:sparseFCNN_model]
class SparseFCNN(nn.Module):
def __init__(self, input_dim=3, output_dim=1, h_dim=256, weight_decay=0., droprate_init=0.5, temperature=2./3., lambda_coeff=1.):
super(SparseFCNN, self).__init__()
self.fc = L0Dense(in_features=input_dim, out_features=h_dim, bias=True, weight_decay=weight_decay, droprate_init=droprate_init, temperature=temperature, lamba=lambda_coeff, local_rep=False)
self.fc1 = L0Dense(in_features=h_dim, out_features=h_dim, bias=True, weight_decay=weight_decay, droprate_init=droprate_init, temperature=temperature, lamba=lambda_coeff, local_rep=False)
self.fc2 = L0Dense(in_features=h_dim, out_features=output_dim, bias=True, weight_decay=weight_decay, droprate_init=droprate_init, temperature=temperature, lamba=lambda_coeff, local_rep=False)
def forward(self, obs, act):
# concatenate observation and action before feeding them to the input layer
x = torch.cat([obs, act], dim=1)
x = F.elu(self.fc(x))
x = F.elu(self.fc1(x))
# the output is either the next observation or the reward
out = self.fc2(x)
return out
Finally, in Code <ref>, we show the structure of the L_0 SINDy-like model with three different feature libraries. The model makes use of the feature libraries of the PySINDy package <https://github.com/dynamicslab/pysindy> and of a single sparse layer to learn the coefficient of each feature. The sparse layer allows for utilizing the L_0 regularization introduced in <cit.> to learn a sparse linear combination of the nonlinear library features.
[language=Python, caption=L_0 SINDy-like model., label=lst:L0SINDy_model]
import numpy as np
from pysindy.feature_library import PolynomialLibrary, FourierLibrary, GeneralizedLibrary
# L0Dense is the sparse fully-connected layer with hard-concrete gates provided with the accompanying repository
class L0SINDy_model(nn.Module):
def __init__(self, input_dim=3, output_dim=1, weight_decay=0., droprate_init=0.5, temperature=2. / 3., lambda_coeff=1., degree=3, frequency=1, lib_type='polynomial'):
super(L0SINDy_model, self).__init__()
if lib_type == 'polynomial':
self.lib = PolynomialLibrary(degree=degree, include_bias=True, include_interaction=True)
x = np.ones((1, input_dim))
self.lib.fit(x)
xf = self.lib.transform(x)
coef_dim = xf.shape[1]
if lib_type == 'fourier':
self.lib = FourierLibrary(n_frequencies=frequency, include_sin=True, include_cos=True, interaction_terms=True)
x = np.ones((1, input_dim))
self.lib.fit(x)
xf = self.lib.transform(x)
coef_dim = xf.shape[1]
if lib_type == "polyfourier":
poly_lib = PolynomialLibrary(degree=degree, include_bias=True, include_interaction=True)
fourier_lib = FourierLibrary(n_frequencies=frequency, include_sin=True, include_cos=True, interaction_terms=True)
self.lib = GeneralizedLibrary([poly_lib, fourier_lib])
x = np.ones((1, input_dim))
self.lib.fit(x)
xf = self.lib.transform(x)
coef_dim = xf.shape[1]
# single sparse layer learning the coefficients of each dictionary feature
self.fc = L0Dense(in_features=coef_dim, out_features=output_dim, bias=False, weight_decay=weight_decay, droprate_init=droprate_init, temperature=temperature, lamba=lambda_coeff, local_rep=False)
def forward(self, obs, act):
# concatenate observation and action before feeding them to the input layer
x = torch.cat([obs, act], dim=1)
# compute the features of x using the chosen library functions
xf = torch.from_numpy(self.lib.transform(x.cpu().numpy())).float().to(x.device)
# feed the features to a single layer of sparse fully-connected neural network
out = self.fc(xf)
return out
§.§.§ Training the Models
We indicate with ξ the parameters of the different models and with ψ the L_0-mask parameters.
We train the models to minimize the mean-squared error between the predictions and the ground truth. In particular, we optimize the fully-connected neural network parameters with:
ℒ(ξ_T) = 𝔼_s_t, a_t, s_t+1[||s_t+1 - ŝ_t+1||^2_2] ,
and
ℒ(ξ_R) = 𝔼_s_t, a_t, r_t[||r_t - r̂_t||^2] ,
where ŝ_t+1=T(s_t, a_t, ξ) is the prediction of the transition model T and r̂_t=R(s_t, a_t, ξ) is the prediction of the reward model R, respectively.
The models using the L_0 regularization are instead optimized according to Equation (<ref>):
ℒ(ξ_T, ψ_T) = 𝔼_s_t, a_t, s_t+1[||s_t+1 - ŝ_t+1||^2_2] + λL_0(ψ_T) ,
and
ℒ(ξ_R, ψ_R) = 𝔼_s_t, a_t, r_t[||r_t - r̂_t||^2] + λL_0(ψ_R).
In Code <ref> and <ref>, we show the training loops for the transition and reward models.
[language=Python, caption=Transition model training loop., label=lst:dyn_model_training]
def train_dynamics_model(model, optimizer, train_loader, batch_size, num_training_iterations, l0=False):
for _ in range(num_training_iterations):
# random sample a batch of data
data = train_loader.sample_batch(batch_size)
obs = torch.from_numpy(data['obs']).cuda()
next_obs = torch.from_numpy(data['next_obs']).cuda()
act = torch.from_numpy(data['act']).cuda()
optimizer.zero_grad()
# predict the next observation using the parametric model
pred_next_obs = model(obs, act)
# If we do not rely on the L0 regularization, we simply utilize the MSE loss
if l0 == False:
loss = torch.nn.functional.mse_loss(next_obs, pred_next_obs)
total_loss = loss
# If we rely on the L0 regularization, we add the L0 penalty to the MSE loss
else:
loss = torch.nn.functional.mse_loss(next_obs, pred_next_obs)
reg = -(model.fc.regularization())
total_loss = loss + reg
# compute the loss gradients wrt the model parameters
total_loss.backward()
# update the parameters
optimizer.step()
[language=Python, caption=Reward model training loop., label=lst:rew_model_training]
def train_reward_model(model, optimizer, train_loader, batch_size, num_training_iterations, l0=False):
for _ in range(num_training_iterations):
# random sample a batch of data
data = train_loader.sample_batch(batch_size)
obs = torch.from_numpy(data['obs']).cuda()
rew = torch.from_numpy(data['rew']).cuda().reshape(-1, 1)
act = torch.from_numpy(data['act']).cuda()
optimizer.zero_grad()
# predict the reward using the parametric model
pred_rew = model(obs, act)
# If we do not rely on the L0 regularization, we simply utilize the MSE loss
if l0 == False:
loss = torch.nn.functional.mse_loss(rew, pred_rew)
total_loss = loss
# If we rely on the L0 regularization, we add the L0 penalty to the MSE loss
else:
loss = torch.nn.functional.mse_loss(rew, pred_rew)
reg = -(model.fc.regularization())
total_loss = loss + reg
# compute the loss gradients wrt the model parameters
total_loss.backward()
# update the parameters
optimizer.step()
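Before looking at the results, the snippet below sketches how the pieces above fit together for the transition model; the optimizer, learning rate, batch size, and number of iterations are arbitrary illustrative choices.
[language=Python, caption=Usage sketch of the training loop (illustrative hyperparameters).]
# sparse dictionary-based transition model trained on the buffer collected above
model = L0SINDy_model(input_dim=obs_dim + act_dim, output_dim=obs_dim,
                      lambda_coeff=1., degree=3, lib_type='polynomial').cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
train_dynamics_model(model, optimizer, train_loader=training_buffer,
                     batch_size=256, num_training_iterations=500, l0=True)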
§.§.§ Results
In Figure <ref>, we show the accuracy of the different transition and reward models on the training and test sets after 500 training epochs. While the fully-connected neural network appears to train faster than the sparse models, we need to keep in mind that the L_0 regularization makes the minimization of the loss slightly more challenging, since not only the MSE but also the number of active parameters is minimized. The results, especially for the sparse models, can therefore be further improved with longer training and hyperparameter optimization.
§.§ Learning Control Policies
We utilize the method proposed in <cit.> to learn sparse and interpretable policies for the pendulum example. We replace the neural network-based policy of the twin-delayed deep deterministic policy gradient (TD3) algorithm <cit.> with polynomial, Fourier, and combined polynomial and Fourier policies that are sparsified over training using the L_0 regularization (all the details can be found in our paper <cit.>). It is worth mentioning that our method is independent of the deep reinforcement learning algorithm chosen, and other algorithms can be used.
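As an illustration of what such a dictionary-based actor can look like, the sketch below combines a polynomial feature library with a single sparse L_0 layer, mirroring the L_0 SINDy-like model above. The class name, the tanh squashing with a max_action bound, and the default hyperparameters are assumptions made for the sake of the example, not necessarily the exact implementation used in <cit.>.
[language=Python, caption=Sketch of a sparse polynomial policy (illustrative).]
import numpy as np
import torch
import torch.nn as nn
from pysindy.feature_library import PolynomialLibrary

class SparsePolynomialPolicy(nn.Module):
    # deterministic actor: polynomial features of the observation, combined by one sparse L0Dense layer
    def __init__(self, obs_dim, act_dim, max_action=2.0, degree=3,
                 weight_decay=0., droprate_init=0.5, temperature=2./3., lambda_coeff=1.):
        super().__init__()
        self.max_action = max_action
        self.lib = PolynomialLibrary(degree=degree, include_bias=True, include_interaction=True)
        self.lib.fit(np.ones((1, obs_dim)))
        coef_dim = self.lib.transform(np.ones((1, obs_dim))).shape[1]
        self.fc = L0Dense(in_features=coef_dim, out_features=act_dim, bias=False,
                          weight_decay=weight_decay, droprate_init=droprate_init,
                          temperature=temperature, lamba=lambda_coeff, local_rep=False)

    def forward(self, obs):
        # evaluate the polynomial features and take a sparse linear combination, squashed to the action range
        feats = torch.from_numpy(self.lib.transform(obs.cpu().numpy())).float().to(obs.device)
        return self.max_action * torch.tanh(self.fc(feats))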
§.§.§ Results
In Figure <ref>, we show the training and evaluation rewards collected by the different agents in the simple task of stabilizing the inverted pendulum at its unstable equilibrium. Due to the simplicity of the task, we do not see substantial differences among the agents, with the exception of the Fourier agent. This is due to the fact that a policy composed only of sines and cosines is not suitable for stabilization at a single point. However, such a policy may be useful for periodic tasks. Due to the limited number of learnable parameters, the agent with the polynomial feature library learns to solve the task slightly faster than the fully-connected neural network agent. In addition, the agents relying on the sparse dictionaries allow for deriving a closed-form equation of the learned policies, opening the door to a-posteriori stability and robustness analysis.
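For instance, once training has converged, the surviving dictionary terms can be read off to obtain the closed-form policy. The helper below is a sketch: how the gated coefficient vector is extracted from the trained sparse layer depends on the L0Dense implementation, and the feature names chosen for the pendulum observation are assumptions.
[language=Python, caption=Reading off the closed-form policy (illustrative).]
def print_symbolic_policy(lib, coeffs, input_names=("cos_th", "sin_th", "th_dot"), tol=1e-4):
    # lib: fitted feature library; coeffs: gated coefficients of the sparse layer, one per feature
    names = lib.get_feature_names(list(input_names))
    terms = [f"{c:+.3f}*{n}" for c, n in zip(coeffs, names) if abs(c) > tol]
    print("a(s) =", " ".join(terms) if terms else "0")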
|
http://arxiv.org/abs/2409.02419v1 | 20240904035014 | Diffusion-limited settling of highly porous particles in density-stratified fluids | [
"Robert Hunt",
"Roberto Camassa",
"Richard M. McLaughlin",
"Daniel M. Harris"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
[email protected], [email protected]
Center for Fluid Mechanics, School of Engineering, Brown University
Department of Mathematics, The University of North Carolina at Chapel Hill
Department of Mathematics, The University of North Carolina at Chapel Hill
[email protected], [email protected]
Center for Fluid Mechanics, School of Engineering, Brown University
§ ABSTRACT
The vertical transport of solid material in a stratified medium is fundamental to a number of environmental applications, with implications for the carbon cycle and nutrient transport in marine ecosystems. In this work, we study the diffusion-limited settling of highly porous particles in a density-stratified fluid through a combination of experiment, analysis, and numerical simulation. By delineating and appealing to the diffusion-limited regime wherein buoyancy effects due to mass adaptation dominate hydrodynamic drag, we derive a simple expression for the steady settling velocity of a sphere as a function of the density, size, and diffusivity of the solid, as well as the density gradient of the background fluid. In this regime, smaller particles settle faster, in contrast with most conventional hydrodynamic drag mechanisms. Furthermore, we outline a general mathematical framework for computing the steady settling speed of a body of arbitrary shape in this regime and compute exact results for the case of general ellipsoids. Using hydrogels as a highly porous model system, we validate the predictions with laboratory experiments in linear stratification for a wide range of parameters. Lastly, we show how the predictions can be applied to arbitrary slowly varying background density profiles and demonstrate how a measured particle position over time can be used to reconstruct the background density profile.
Diffusion-limited settling of highly porous particles in density-stratified fluids
Daniel M. Harris
September 3, 2024
==================================================================================
§ INTRODUCTION
The settling of solid particles in a fluid is one of the most fundamental problems in fluid dynamics, with applications spanning many scales and scientific fields. One important application of recent interest involves particles settling in the ocean: marine snow, which refers to the transport of organic material introduced near the surface, plays an essential role in the carbon cycle and in the ocean ecosystem. Further, human generated waste and microplastics are found in all corners of the Earth, yet the mechanisms that affect their dispersion are not well understood <cit.>.
Understanding the distribution and transport of these materials is essential for predicting and controlling carbon sequestration and microplastic dispersion. Many organic materials are well represented as porous particles, and particles which float at the ocean surface are generally known to develop coatings of porous and organic material due to biofouling before sinking <cit.>. Smaller particles and particles with higher surface area to volume ratios are speculated to leave the surface sooner and sink faster, but this has not been studied in depth. The distribution of these porous particles is also closely linked with ocean ecology and has been shown to be associated with increased biological activity <cit.>.
The seminal work by G.G. Stokes <cit.> provided an expression for the drag on a sphere in a viscous fluid, which, when balanced with gravity, yields the celebrated Stokes settling law, U_μ = 2 ρ_s g R^2/9 μ, where U_μ is the particle settling velocity, ρ_s is the relative density difference of the particle to the uniform background fluid, g is the gravitational acceleration, R is the particle radius, and μ is the dynamic viscosity of the fluid. This work was expanded experimentally through the 1900s across a large range of Reynolds numbers and has continued to find applications across scales and disciplines. This characterization was extended by <cit.> to include effects of ambient density stratification, a topic which has remained an active area of research and was summarized in a recent review article <cit.>. Stratification is known to strongly influence individual particle settling behavior as well as drive particle aggregation due to diffusion-induced flow <cit.>, with the predicted behaviors depending on porosity. In addition, the total particle mass can change in time as it moves through the stratified environment, dramatically affecting the particle's steady settling behavior in certain regimes. However, there is very limited work on porous particles settling through stratification in cases where the background stratification
changes over length scales much larger than the size of the settling particles despite this representing the regime most relevant to the aforementioned environmental applications. Before moving to the specific focus of the present work, we briefly review some of the most relevant literature in what follows.
In early field work, Alldredge and Gottschalk <cit.> measured the in-situ settling of marine snow particles off of the Southern coast of California, with attempts to measure excess density, porosity, and volume, and noted that the settling did not obey Stokes' law. They also noted the shapes of the settling aggregates were often non-spherical, with a tendency to be axisymmetric and elongated along the settling direction. In similar regions, MacIntyre et al. <cit.> documented large accumulations of marine snow at pycnoclines and proposed diffusive mass exchange into the highly porous flocs as a potential mechanism for such increased retention.
In the lab, Li et al. <cit.> considered the effect of mass-exchange on the settling of biological aggregates through a sharp, two-layer stratification. They noted that, in the bulk of each layer, the aggregated particles were well-approximated by Stokes' settling law. They also investigated the residence time of particles at the interface between the two layers and compared that with a proposed theoretical model for the retention time. Their model predicted the retention time to scale as the particle radius squared, although they had mixed agreement with experiments.
Several years later, Kindler et al. <cit.> performed similar settling experiments of spherical porous hydrogel particles in a two-layer fluid and confirmed the previously posited quadratic scaling of retention time with respect to particle radius. They also proposed a quasi-static model for the particle position versus time, accounting for particle inertia, mass adaptation, and form drag, which correctly predicted the strong deceleration and shape of the particle trajectory but somewhat overpredicted the settling velocity and mass adaptation away from the center of the pycnocline. In the discussion, a scaling argument for the steady settling velocity of a porous sphere in the diffusion-limited regime was derived, by suggesting that the density rate of change of the particle must match that of the background fluid. This scaling law was not directly tested.
In closely related work, Camassa et al. <cit.> also performed experiments of porous particles settling through a two-layer fluid and showed good agreement with a first-principles model that includes the effects of mass exchange and Stokes drag. They accounted for the effect of the entrained lighter fluid, initially investigated experimentally as a competing effect by Prarie et al. <cit.>, by introducing a single adjustable parameter that represents a thin uniform shell surrounding the particle. They noted that for small particles, the settling time is dominated by viscous drag (with settling time scaling as R^2), whereas for large particles the settling time is dominated by mass diffusion into the sphere (with settling time scaling as R^-2). Additional numerical simulations of this problem were completed by Panah et al. <cit.>, who proposed various empirical laws for the settling time in a two-layer system, with additional characterization on the influence of the thickness of the transition layer separating the two fluids.
While the prior modeling and laboratory studies discussed above predominantly focused on spheres in relatively sharply stratified two-layer systems, there has only been very limited attention to the case of linear stratification, or more generally, in cases where the stratification changes over length scales that are significantly larger than the particle. Very recently, Ahmerkamp et al. <cit.> considered the settling of spherical porous particles in linear stratification both numerically and experimentally, motivating their exploration by the large length scales of stratification typically found in environmental settings. In their simulations, they considered a relatively general set of parameters and noted that the expected settling velocities, as one might estimate from hydrodynamic drag alone, can be notably reduced by the effect of mass exchange between the background, boundary layer, and particle interior. Various empirical correlations for the drag force were determined by fitting to their numerical results. However, there is only limited comparison to experimental results and with mixed success, and their study focuses strictly on spherical particles. Nevertheless, this work clearly highlights the richness and complexity of the general problem due to the highly coupled multi-physics involved and the importance of considering mass exchange in estimating steady settling velocities.
In the present work, we study the steady settling of highly porous particles in linear stratification. We consider these particles as solids that allow for diffusion of a solute through their interior but are inpenetrable to fluid flow. This focus is similarly motivated by marine systems where aggregates are generally of very high porosity (≫ 95%) <cit.> and density gradients may vary on the order of meters to kilometers. However, we focus herein on the diffusion-limited regime, which we demonstrate can be delineated by an appropriately defined non-dimensional Rayleigh number. In this regime, we are able to derive an exact formula for the particle settling speed from first principles as well as generalize the result to bodies of arbitrary geometry. Extensions and applications to nonlinear stratifications are also explored.
The theoretical predictions are validated by controlled laboratory experiments that directly measure the steady settling velocities of agar particles immersed in a linear stratification of salt (sodium chloride). Agar is a highly porous material (composed of mostly interstitial water) that permits diffusive transport of salt into its bulk but is essentially impermeable to fluid flow. The agar particles are cast in 3D-printed molds, placed gently in a stably stratified tank of fluid, and visualized from the side as they slowly fall. Further details on the experimental methods are provided in the Methods section.
§ RESULTS
§.§ Model
Consider a spherical particle of radius R that settles vertically in a stable background stratification profile ρ_b(z) with constant density gradient γ=dρ_b/dz, as depicted in Figure <ref>(a,b). The particle is assumed impermeable to fluid flow but diffusively permeable to the stratifying agent such that its mean density can change in time. The equations describing the fully coupled fluid-solute-particle system have been previously outlined in prior work <cit.> but will be described briefly here. The interior of the particle is described by a diffusion equation for the solute, whereas the exterior fluid domain is modeled by the Navier-Stokes equations and an advection-diffusion equation for the stratifying solute. The interior and exterior domains are coupled by enforcing continuity of stress, velocity, solute concentration, and solute flux at the particle boundary. For the general case, the fully coupled system requires numerical solution. However, we seek to simplify the full system to a reduced model by appealing to specific parametric regimes, following Camassa et al. <cit.>. In this formulation, the boundary conditions associated with the exterior problem are greatly simplified by assuming the fluid stresses to be approximated by classical Stokes flow and buoyancy due to the undisturbed background stratification, with the exterior solute concentration approximated by the value of the background concentration profile evaluated at the center height of the particle (the so-called “heat bath” approximation). Under these assumptions, the problem can be simplified to an equation for the position of the particle's center Z(t), representing a force balance between Stokes drag and buoyancy, coupled with a diffusion equation for the evolution of the solute in the particle interior whose boundary condition is given by the background density evaluated at the height corresponding to the particle center. For the case of a sphere of radius R, this corresponds to
6 πμ R d/dt Z(t) = g V [ρ_p(t)-ρ_b(z=Z(t))]
where μ is the dynamic viscosity of the fluid, g is the acceleration due to gravity, and V=4/3π R^3 is the particle volume. The density ρ_b(z=Z(t)) is the density of the stratified background fluid at the height of the particle center, whereas ρ_p(t) is the average total density of the particle itself. For a fixed particle density, this represents the Stokes settling of an impermeable solid particle in a background linear stratification and has been the subject of prior analysis <cit.>. In this work, the particle is considered to be a very low volume fraction solid material (i.e. highly porous) that allows for diffusion of the stratifying solute with an effective diffusivity κ_p throughout its bulk. The extent of fluid flow through the particle itself can be characterized by the Darcy number, defined as Da = K/R^2, where K represents the particle permeability. Here, we appeal to the limit of small K and Da and neglect fluid flow through the solid medium as in prior studies <cit.>. Thus the total density of the particle can be considered as
ρ_p(t) = ρ_s + ρ̅(t)
where ρ_s is a fixed material parameter representing the contribution to the total density from the solid material and
ρ̅(t) = 1/V∫_Ωρ(r,t)
is the average density of the solute field ρ(r,t) in the particle interior, given by the solution of the diffusion equation
d/d tρ(r,t) = κ_p ∇^2 ρ(r,t)
with boundary condition
ρ(r,t)|_δΩ = ρ_b(Z(t))
where δΩ represents the particle boundary.
Furthermore, we will consider the case of a linear background density stratification
ρ_b(z) = γ z
where the parameter γ>0 defines the constant density gradient, representing a stable density configuration with the density increasing with depth. Note that ρ_b(z) represents the excess fluid density associated with the presence of the stratifying agent, such that in a homogeneous fluid ρ_b(z)=0. Equation <ref> can be recast in non-dimensional form as
9/(2 Ra) d/dt^* Z^*(t) = ρ^*_p(t) - Z^*(t)
where Z^*(t) = Z(t)/R, t^* = t κ_p / R^2, ρ^*_p = ρ_p/(γ R), and Ra = g γ R^4/(κ_p μ). The solutal Rayleigh number Ra represents a balance of buoyant to viscous forces. We can find steady solutions to this reduced system by assuming a constant speed U. In the frame of the sphere z̃ = z - U t, ρ_b(z̃) = (z̃ + U t) γ, and ρ = γ U t + f(𝐫), so that f(𝐫) satisfies the Poisson equation
γ U = κ_p ∇^2 f(𝐫),
with homogeneous boundary condition
f(𝐫)|_δΩ = 0.
For the case of the sphere, the solution is f(𝐫) = γ U/6 κ_p(r^2-R^2). By averaging over the volume of the sphere, we can find an exact expression for the mean excess density due to the solute, ρ̅(t) = γ U t - γ U R^2 / 15 κ_p. The linearly growing part of this expression is balanced with the background density, and the constant term represents a stable contribution due to the lower mean solute concentration in the particle interior, which is balanced by the extra solid density ρ_s and fluid drag.
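For completeness, the volume average leading to this constant offset follows from a direct integration, f̄ = (1/V)∫_0^R γ U/(6 κ_p)(r^2-R^2) 4π r^2 dr = γ U/(6 κ_p)(3R^2/5 - R^2) = -γ U R^2/(15 κ_p).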
Then returning to Equation <ref>, we find
U = 15 κ_p ρ_s/γ R^2 (1 + 135/(2 Ra))^-1.
This solution can also be represented as a parallel sum between settling velocities at the two limiting behaviors of the system:
U = (1/U_γ+1/U_μ)^-1
where U_γ = 15 κ_p ρ_s/γ R^2 is the diffusion-limited settling velocity for Ra ≫ 135/2 and U_μ = 2 g ρ_s R^2/9 μ is the Stokes settling velocity for Ra ≪ 135/2. For more details regarding the model and derivation, see the Supplementary Information. In this work, we specifically focus on the diffusion-limited regime corresponding to large Ra, where the fluid drag can be effectively ignored and the settling dynamics are governed by mass exchange. This limit ultimately leads to a simple expression for the steady settling speed of a spherical particle in the diffusion-limited regime:
U = 15 κ_p ρ_s/γ R^2.
This equation suggests that larger particles will settle slower than otherwise equivalent small particles, in direct contrast to the Stokes regime. While a similar scaling was proposed by Kindler <cit.>, to the best of our knowledge this is the first analytical expression derived from first principles for the diffusion-limited settling velocity of a particle falling in linear stratification.
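As a rough numerical illustration of the regime considered here (the parameter values below are representative assumptions, not the measured values reported in this work), a centimeter-scale hydrogel sphere in a typical laboratory salt gradient gives Ra well above 135/2, so the full expression reduces to the diffusion-limited speed:
[language=Python, caption=Back-of-the-envelope evaluation of the settling speed (representative values).]
g      = 981.0    # cm/s^2
mu     = 1.0e-2   # g/(cm s), water
kappa  = 1.3e-5   # cm^2/s, salt diffusivity (order of magnitude)
rho_s  = 5.0e-3   # g/cm^3, assumed excess solid density of the particle
gamma  = 1.0e-3   # g/cm^4, assumed background density gradient
R      = 0.5      # cm, particle radius

Ra      = g * gamma * R**4 / (kappa * mu)         # solutal Rayleigh number
U_gamma = 15.0 * kappa * rho_s / (gamma * R**2)   # diffusion-limited speed
U_mu    = 2.0 * g * rho_s * R**2 / (9.0 * mu)     # Stokes speed
U       = 1.0 / (1.0 / U_gamma + 1.0 / U_mu)      # parallel sum

print(f"Ra = {Ra:.2e} (diffusion-limited when Ra >> 135/2)")
print(f"U_gamma = {U_gamma:.2e} cm/s, U_mu = {U_mu:.2e} cm/s, U = {U:.2e} cm/s")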
The solutal Rayleigh number Ra can also be interpreted as a ratio of timescales t_γ/t_μ, where t_γ = R^2/κ_p represents the characteristic diffusion time and t_μ = ρ_s/γ U_μ represents the time for a Stokes particle to settle a characteristic distance ρ_s/γ. This distance can be interpreted as the characteristic vertical distance the particle must fall in order for the excess background density to become comparable to the excess solid density of the particle. In the diffusion-limited regime (Ra ≫ 135/2), these timescales are well separated, with the initial transient dynamics depending both on the characteristic diffusion time of the particle t_γ = R^2/κ_p and on the initial position of the particle. At short times, the particle density will not change appreciably (as diffusion has not had time to act) and, to first order, the particle will exponentially approach its equilibrium height (assuming Stokes drag or asymptotic corrections thereof, as discussed in <cit.>). Transient dynamics due to the initial condition of the solute then decay on a timescale set by t_γ as the particle begins its steady diffusion-limited descent (for details see the Supplementary Information).
Our initial assumption of the external fluid stresses arising principally from Stokes drag neglects effects of fluid inertia and the possibility of additional buoyancy due the solute evolution in the background fluid that can take the form of a density boundary layer. Although these effects are not negligible in all cases, we expect the diffusion-limited behavior characterized in this work to apply to any scenario where hydrodynamic drag can be neglected and the density boundary layer surrounding the particle is small relative to the particle size. By restricting focus to this purely diffusion-limited regime, we can in fact readily generalize Equation <ref> to any arbitrary geometry via
U = 15 κ_p ρ_s/γ R_e^2
where R_e is the effective radius of the particle. For a general solid volume, R_e=√(-15ϕ̅), where ϕ̅ is the volume average of the solution to the Poisson problem ∇^2 ϕ=1, ϕ|_δΩ=0 (for more details see the Supplementary Information). For a sphere, R_e is simply the sphere radius R, consistent with Equation <ref>. While the problem can be readily solved numerically for an arbitrary three-dimensional geometry, certain geometries admit analytical solutions. For instance, in the case of a general ellipsoid, one can find a steady solution for the density field in the interior of the particle as
f(𝐫) = γ U/2
κ_p(1/a^2+1/b^2+1/c^2)^-1(x^2/a^2+y^2/b^2+z^2/c^2-1),
which corresponds to an effective radius of R_e of
R_e = √(3(1/a^2+1/b^2+1/c^2)^-1)
where a, b, and c, are the semi-axis lengths of the ellipsoid. More details regarding the general ellipsoid and other geometries are provided in the Supplementary Information.
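For shapes without a closed-form solution, R_e can be obtained numerically by solving the interior Poisson problem above on a grid. The finite-difference sketch below (grid size and iteration count are illustrative) recovers R_e ≈ R for a sphere and can be adapted to any voxelized interior mask.
[language=Python, caption=Numerical effective radius from the interior Poisson problem (illustrative).]
import numpy as np

R, N = 1.0, 48                        # sphere radius and grid points per axis
x = np.linspace(-1.1 * R, 1.1 * R, N)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
inside = X**2 + Y**2 + Z**2 < R**2    # interior mask (replace with any other shape)

phi = np.zeros((N, N, N))
for _ in range(4000):                 # Jacobi iterations for laplacian(phi) = 1 with phi = 0 on the boundary
    nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
          np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
          np.roll(phi, 1, 2) + np.roll(phi, -1, 2))
    phi = np.where(inside, (nb - h**2) / 6.0, 0.0)

R_e = np.sqrt(-15.0 * phi[inside].mean())
print(f"numerical R_e = {R_e:.3f} (approaches the exact value R = {R} as the grid is refined)")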
The predictions for the diffusion-limited settling velocities for both spheres and ellipsoids will be tested experimentally in what follows. Specific trials are compared to simulations of the fully-coupled system developed in COMSOL. For more details regarding experimental and simulation methods, see Methods.
§.§ Spheres
A typical trial is shown in Figure <ref>(a), with raw data included in Movie S1. Here, five agar spheres of varying radius are measured settling in a linearly stratified tank. Other than the particle radius, all other parameters are held fixed. One can clearly see that the settling speed depends inversely on the size, with the smallest sphere settling most rapidly. The measured velocities also agree reasonably well with the prediction of Equation <ref>, although are consistently overpredicted. The discrepancy between the analytical solution and experiment is relatively small and reduces as the particle size increases. Note that the shaded band surrounding theoretical prediction represents the propagation of uncertainty on the measured parameters that contribute to the prediction in Equation <ref>. The simulation results of the fully coupled system are in excellent quantitative agreement with experiment.
We suggest that the remaining discrepancy is due to the convective fluid boundary layer of reduced density fluid just outside of the solid particle <cit.>, which we have neglected in the simplifications leading up to Equation <ref>. In our regime, this layer has the primary effect of furthering delaying the mass adaptation of the solid. To explore this difference, we roughly estimate the thickness δ of this convective layer using classical results for free convection from a vertically oriented heated plate <cit.>, known to have a boundary layer thickness δ that scales along the length of the plate x as δ∼(4 ν^2 x/g Δρ)^1/4. By assuming a characteristic length scale x ∼ R and characteristic density difference Δρ∼ρ_s, we find an approximate scaling for this boundary layer thickness in our problem as
δ∼(4 ν^2 R/g ρ_s)^1/4.
In general, the solute can be transported by free and forced convection. The Richardson number represents the relative magnitudes of free to forced convection and is large for our experimental regime (Ri =g ρ_s R/U_γ^2 = O(10^5)), justifying the assumption of free convection in our estimate. Thus, we might introduce a `virtual' boundary in the simplified model, as in <cit.>, by extending the effective solid boundary so that it includes the convective layer (i.e. R→ R + αδ). The prefactor α is unknown but presumably of O(1). A best fit correction to our experimental velocity data for this trial yields a prefactor of α=0.60. More general empirical relations on this buffering boundary layer are detailed in prior work <cit.>. For all of our experimental trials δ/R < 0.21, suggesting that the influence of the boundary layer on the diffusive transport of solute into the sphere is a secondary effect, and thus we choose to neglect it henceforth. Furthermore, its estimated influence is of similar order to our prediction uncertainty associated with the propagated uncertainty of the parameters. This boundary layer is fully resolved in our COMSOL simulations and is also small relative to the sphere size in all cases.
In Figure <ref>(b), all experimental trials conducted for spheres in linear stratification are plotted and compared with the theoretical prediction of Equation <ref>. Good agreement is demonstrated across orders of settling velocity, for multiple values of sphere density, diffusivity, and background gradients. For all cases considered here, the observed settling velocity is within 27% of the prediction for U_γ, with a mean error of 13%. On average, the spheres move 5.9% slower than predicted.
§.§ Spheroids
Although spheres are a natural starting point and thus represent the overwhelming focus of the literature thus far, there are many cases where the shape is highly non-spherical, for example in thin aggregate discs as often observed in marine snow. As discussed prior, one advantage of the diffusion-limited limit is the ease of extending the prediction to bodies of arbitrary shape.
In Figure <ref>(a), we consider the settling of an oblate spheroid whose vertical semiaxis c, corresponding to its axis of rotational symmetry, is fixed at 0.3 cm, while the horizontal semiaxis a=b is varied from 0.3 to 1.2 cm, spanning a range of aspect ratios a/c from 1 to 4. The measured settling velocities are compared to the prediction of Equation <ref> using the effective radius defined in Equation <ref>. As in Figure <ref>(a), we observe a slight but consistent overprediction of the settling velocity that may be due to the presence of the convective boundary layer just outside of the solid particle boundary. The predictions from the full numerical simulations again show excellent agreement with the experimentally measured velocities. For large aspect ratio, the oblate spheroids asymptotically approach a constant settling velocity, with the predicted asymptotic value pictured as a dotted line (see Supplementary Information for details). In Figure <ref>(b), aggregate settling data for non-spherical spheroids is presented with aspect ratios a/c ranging from 1/4 to 4 and over a range of agar percentages and linear stratifications. Good overall collapse to the theoretical prediction is found, with a maximum error of 35% and a mean error of 15%, moving 13% slower than predicted on average.
As diffusive transport is inherently a surface-dominated phenomenon, it is anticipated that for a fixed particle volume, a sphere will settle at the slowest rate as it represents the shape of minimal surface area. In Figure <ref>, we replot and summarize the data from Figures <ref>(b) and <ref>(b) as a function of particle aspect ratio a/c. The settling speeds are normalized by the theoretical settling speed U_0= 15 κ_p ρ_s/γ R_0^2 of an otherwise equivalent sphere of the same volume (corresponding to R_0 = (a^2c)^1/3). All particles of the same aspect ratio a/c are summarized by the mean and standard deviation of this normalized velocity measure over all parameters. Consistent with the intuitive argument, a minimum normalized velocity is predicted when the object is spherical, and the overall trend is well supported by experiments.
§.§ Slender bodies
Our formula also allows for predictions of the settling of asymptotically slender bodies in the diffusion-limited regime (although only tested to aspect ratio 4). The settling speed for objects characterized by a thin dimension d relative to a sphere of the same diameter is summarized in Table <ref>. For a pancake-like oblate spheroid with long horizontal semiaxes of length a=b=l and short vertical semiaxis c=d/2 ≪ l, the settling speed asymptotically approaches 1/3 that of a sphere of diameter d. For a fiber-like prolate spheroid with long vertical semiaxis of length c=l and short horizontal semiaxes of length a=b=d/2 ≪ l, the settling speed approaches 2/3 that of a sphere of diameter d. Furthermore, for the case of a long cylindrical fiber of diameter d or a thin uniform sheet of thickness of d, it can be shown that the settling speed approaches 8/15 and 1/5 that of a sphere of diameter d, respectively. In all of these extreme cases, the settling velocity simply scales as U ∼ρ_s κ_p/γ d^2, where the dependence on the long dimension is lost, as the relevant diffusion timescale is driven by the small dimension. It is anticipated that any high-aspect ratio body will follow a similar scaling, with the smallest dimension dictating the settling speed. A discussion regarding these limits and criteria for validity of the diffusion-limited regime can be found in the Supplementary Information.
§.§ Non-linear stratifications
Although our primary results focus on steady settling in linear stratifications, for systems where the local background density gradient changes slowly relative to the particle diffusion timescale, these results are also applicable to non-linear stratifications. A particle settling at the diffusion-limited velocity U_γ travels a distance ρ_s/γ over one particle diffusion time (t_γ = R^2/κ). Thus for the prior analysis to remain valid, the local density gradient should be approximately constant over such a length scale. From this argument, one can arrive at a condition for the second derivative of the density profile:
|d^2/d z^2ρ_b(z)| ≪γ^2/ρ_s.
More details regarding this derivation can be found in the Supplementary Information. Thus, in the regime where fluid drag is negligible (large Ra) and in regions where the above criterion is satisfied, we anticipate the local depth-dependent settling velocity to be described by the natural extension of Equation <ref>:
U(Z) = 15 κ_p ρ_s/γ(Z) R^2.
§.§ Reconstruction of density profile from particle trajectory
The condition that the diffusion-induced settling velocity is satisfied locally for an arbitrary density profile (Equation <ref>) further implies that the local background density rate of change (dρ_b/dt=Uγ) is constant in time. This consequence can be seen readily from the relation
dρ_b/dt = U(Z(t))γ(Z(t)) = 15 ρ_s κ_p/R_e^2,
which depends only on the particle's extra solid density, diffusivity, and radius. For a given particle, the value of this rate of change (ψ=15 ρ_s κ_p/R_e^2) can be estimated directly using the particle size and material parameters. Alternatively, if one knows the fluid density at the initial (ρ_i) and final (ρ_f) particle positions, along with the total transit time (t_r), ψ can be computed directly as ψ = (ρ_f - ρ_i)/t_r. Given knowledge of this density rate of change and the initial density, one can then reconstruct the density profile directly from the particle trajectory, assuming the background stratification itself is evolving slowly relative to the measurement timescale. In practice, this reconstruction is done by exploiting the linear relationship between time and density (ρ_b(t) = ρ_b(0) + ψ t) to reparameterize the position as a function of density. This framework allows for a simple, cheap, highly resolved, and minimally invasive method for density profile reconstruction using only two density measurements and a camera.
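As an illustration, a minimal Python sketch of this reconstruction is given below; the function name and the assumption that diffusion-limited settling holds along the whole trajectory are ours, and only the two endpoint densities and the tracked depths are required.

    import numpy as np

    def reconstruct_density_profile(t, z, rho_i, rho_f):
        """Reconstruct rho_b(z) from a tracked trajectory z(t) of a particle
        settling in the diffusion-limited regime, given only the fluid
        densities rho_i and rho_f at the first and last observed depths."""
        t = np.asarray(t, dtype=float)
        z = np.asarray(z, dtype=float)
        psi = (rho_f - rho_i) / (t[-1] - t[0])      # density rate of change at the particle
        rho_at_particle = rho_i + psi * (t - t[0])  # rho_b(t) is linear in time
        order = np.argsort(z)                       # reparameterize: depth -> density
        return z[order], rho_at_particle[order]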
An example experiment, as depicted in Figure <ref>,
involves dropping a small agar particle in a tank that was stratified with a hyperbolic tangent density profile using the method described in the Methods section. As can be seen in Figure <ref>(a), the particle does not settle at a constant speed in this case, due to the fact that the background gradient is no longer constant. Using the initial density, final density, and measured transit time, one can reconstruct the density profile as shown in Figure <ref>(b). This result shows excellent agreement with the density profile measured directly using a conductivity probe mounted to a motorized linear positioning stage. One can also faithfully estimate the local density gradient using such data, as shown in Figure <ref>(c).
As a related aside, should the sphere size and properties be known independently, one can calculate the transit time t_r across a pycnocline in the diffusion-limited regime as
t_r = R_e^2 (ρ_f - ρ_i)/(15 κ_p ρ_s).
Notably, this time is predicted to depend only on the particle properties (embodied in ψ) and the traversed density difference (ρ_f - ρ_i) but is independent of the overall thickness of the pycnocline and the details of its functional form. As discussed prior, this result should hold provided the particle is in the diffusion-limited regime and when Equation <ref> is satisfied. The actual transit time is anticipated to deviate from this prediction when the stratification is sharp relative to the particle size, for instance. Although not explicitly tested in this work, the quadratic dependence of the transit time on the particle radius is consistent with prior laboratory measurements and analyses for the case of spheres <cit.>. Our result goes beyond the scaling and also provides a prediction for bodies of arbitrary shape via the suitably defined effective radius R_e.
§ DISCUSSION
In this work, we have derived a simple analytical formula for the diffusion-limited settling speed of highly porous particles in a linear background stratification. The assumption of a linear density gradient is in contrast with most previous laboratory studies and simulations that consider variations in the density gradient on the order of the particle size. However, in many systems that motivate the current investigation, there is a vast separation of scales between the particle size and background density variations, motivating the study of particles in a locally linear gradient. Diffusion-limited settling corresponds to scenarios where mass exchange between the particle and background fluid is the dominant mechanism determining the particle position, with hydrodynamic drag being negligible. For the case of simple Stokes drag, this regime can be delineated by a suitably defined Rayleigh number. Despite the assumptions and simplicity of the final result, the prediction shows good agreement with new experiments on spherical hydrogel particles for a range of particle radii, compositions, and background stratifications. On average, the results slightly overpredict the measured settling speeds, which we expect is predominantly due to the neglect of the small density boundary layer surrounding the particle that additionally buffers mass transport into the sphere.
In oceans, salt lakes, and estuaries, vertical salinity stratifications are commonly on the order of γ≈ 10^-5 - 10^-9 g/cm^4 <cit.>. Thus, assuming a settling law of the form of equation <ref>, diffusion-limited settling represents a dominant retention mechanism for particles with R ≳ 0.16 - 1.6 cm (given μ = 0.01 g cm^-1 s^-1, g = 981 cm s^-2, κ_p = 10^-5 cm^2 s^-1). Despite the idealized scenario considered herein, it appears plausible that such a mechanism may play an important role in environmental systems, with the effect being most relevant for larger aggregates. As particle size has important implications for bioavailability and microbial respiration <cit.>, the differential settling due to size imposed by this mechanism may be relevant to these chemical and ecological processes.
A distinguishing feature of the diffusion-limited regime involves the dependence on size and shape. For a given solid density, the settling velocity of a sphere in a uniform viscous fluid scales with the square of the radius, as described by Stokes' settling law. In the diffusion-limited regime, the settling velocity for a sphere scales inversely with the square of the radius, as previously proposed and documented for diffusion-limited retention <cit.>. Although more complex settling behaviors for non-porous particles have been investigated to include effects of density stratification <cit.>, these results retain a settling velocity that increases monotonically with particle size, in contrast with the current work.
Although a sphere gives the simplest geometrical representation of a particle, in many motivating systems of interest particles are highly non-spherical. Essentially all prior related laboratory and modeling studies have restricted attention to spherical particles. The role of shape has been identified as essential for understanding particle transport in the atmosphere and oceans <cit.>. In this work, we have also experimentally validated an analogous prediction for spheroids and outlined a mathematical framework for computing the diffusion-limited settling velocity of an arbitrary geometry. As a boundary-flux driven phenomenon, for a given particle volume, a sphere settles with the slowest speed, as it represents the shape with minimal surface area for a given volume. Our model also provides insight into the settling dynamics of particles with very large aspect ratios that may be harder to access experimentally.
We have also derived a criterion under which our main results can be readily applied to non-linear stratifications. Through this analysis, we developed a simple and accessible method for using particle trajectories settling in the diffusion-limited regime to reconstruct highly resolved background density profiles.
In our model, particles are considered as homogeneous diffusive materials which admit an effective diffusivity κ_p and carry an additional component of density ρ_s due to the presence of solid material. In general, porous materials may contain regions which are inaccessible to the solute, modeled as a solid volume fraction. The consideration of this effect may be added through a partition function <cit.>, causing a discontinuity in the averaged solute concentration. This can be accounted for in the framework of our study by a prefactor in the settling velocity expression which is captured when measuring ψ directly, as in the previous section (for details see Supplementary Information). Other considerations, including tortuosity of the medium, hydrodynamic effects, fluid flow through the porous medium, and other complex phenomena relevant to diffusion in porous materials may invalidate the assumption of homogeneous and isotropic diffusivity considered here. Nevertheless, we generally expect our model's assumptions to remain valid in the limit of low solid volume fraction and small pore size.
§ METHODS
§.§ Fluid preparation
To create the background density stratification, we use two programmable pulsatile pumps driven by stepper motors (Kamoer PHM400-ST3B25) that are controlled by an Arduino Uno running the AccelStepper library which sends step and direction information to two TMC2209 drivers. One pump supplies dense saline solution, and the other pump supplies less dense saline solution or fresh deionized water. These two input streams are combined through a 3D-printed Y junction and fed into the experiment tank through a single tube. The experiment tank (REPTIZOO B07CV8L7BK) is made of glass with interior dimensions of 19 cm x 19 cm x 28 cm width, depth, and height, respectively. The tank is rinsed with deionized water and dried before pouring the stratification. A diffuser, consisting of a sponge surrounded by a foam border, floats at the water surface and allows for incident saline solution to not disturb the density-stratified layer structure below.
To prepare the saline solution, we first rinse sodium chloride (Morton Pure and Natural Water Softener Crystals) with deionized, reverse-osmosis filtered water. To avoid introducing dissolved gases into the saline solution during mixing, the deionized water is boiled before pouring the stratification. The sodium chloride is dissolved and the stratification is poured typically within one hour of boiling. A small sample of both the dense and fresh reservoirs is set aside and allowed to cool before measuring the density using an Anton Parr DMA35 densitometer.
To confirm the accuracy of our stratification pouring method, we use a Mettler Toledo SevenCompact S230 conductivity meter and InLab 731-ISM probe, which collects conductivity measurements as a function of depth. The depth is controlled automatically through a stepper driver and motorized stage (HoCenWay DM556, Befenbay BE069-4), triggering a camera to record the measured conductivity. The relation between conductivity and density was calibrated by preparing 22 saline solutions whose density and conductivity was measured. Direct measurements of density gradients for prepared linear stratifications were within the reported uncertainty of 10% from the predicted value.
§.§ Particle preparation
To prepare the agar particles, agar powder (Acros Organics 400405000) was combined with boiling, deionized water at a known weight ratio (typically 2-4 wt.%) and mixed using an immersion blender until well-mixed, typically around 30 seconds. The particles are cast using a 2-part 3D-printed mold. The mold is cleaned and dried, then prepared by clamping the upper and lower section together using spring clamps. A 5 ml syringe is filled with hot agar solution before a 20G needle is attached. The hot agar solution is injected into the mold through a small hole at the top. The solution is inspected to confirm the absence of large bubbles. After allowing for the cast agar to solidify, typically for 15-30 minutes, the clamps are removed and the mold is carefully separated. Particles are placed in deionized water and allowed to rest, typically overnight, before being used in experiments. The particle size uncertainty was estimated from images as ± .025 cm.
§.§ Particle characterization
To measure the particle density, we use a force balance method. A small measuring stage is held by a thin wire and a long horizontal support arm whose base is rested on a scale (U.S. Solid USS-DBS5). The measurement stage is immersed in a small container of deionized water which is placed on a second scale. The scales are zeroed, then a small sample of particles is placed on the measuring stage. Due to this configuration, the scale on which the water cup is resting measures the weight corresponding to the volume of water displaced by the particle sample. The scale which supports the horizontal support arm that the measuring stage is attached to measures only the extra weight that is due to the excess density of the particle sample (the component of the density that exceeds the deionized water in which they are immersed).
To measure the diffusivity of the agar particles, we similarly used particles immersed in a solution of a known density. Typically, a selection of five spheres of different diameter were submerged in a uniform density saline solution and, although they are initially positively buoyant, held underwater using a laser cut acrylic frame and 200 μm diameter nylon monofilament structure. The time at which the particle begins to settle is recorded, and it is assumed at this time that the mean density of the particle is equal to that of the background saline solution. To estimate the effective diffusivity, we then employ a mathematical model, assuming that the sphere can be approximated as a material with a uniform diffusivity whose external boundary condition for density is set by that of the uniform background saline solution density, neglecting potential variations in the external density field. The transient density of the sphere is modeled using an analytic solution for the salt concentration distribution ∑ _n=1^∞6 Δρ/π ^2 n^2 e^-π ^2 n^2 κ_p t / R^2 which is truncated at 1000 terms. We use a bisection search method to find the effective diffusivity of salt in the particle which corresponds to the observed settling time given the above assumptions.
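A minimal Python sketch of this fit is shown below; it assumes (our reading of the procedure) that the particle begins to settle when the remaining mean salt deficit of the sphere equals its excess solid density ρ_s, and the bracketing values for the bisection are illustrative.

    import numpy as np

    def mean_salt_deficit(kappa, t, R, delta_rho, nterms=1000):
        """Mean salt-density deficit of a uniform sphere of radius R and
        diffusivity kappa at time t after immersion, for an exterior salt
        density exceeding the pore fluid by delta_rho (series truncated)."""
        n = np.arange(1, nterms + 1)
        return delta_rho * (6 / np.pi**2) * np.sum(
            np.exp(-np.pi**2 * n**2 * kappa * t / R**2) / n**2)

    def effective_diffusivity(t_settle, R, delta_rho, rho_s, lo=1e-8, hi=1e-3):
        """Bisection for the diffusivity at which the sphere is neutrally
        buoyant at the observed settling time (deficit equals rho_s).
        The bracket [lo, hi] must straddle the root."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            # larger kappa -> faster salt uptake -> smaller remaining deficit
            if mean_salt_deficit(mid, t_settle, R, delta_rho) > rho_s:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)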
Due to this measurement process’ dependence on the extra solid density measurement, the uncertainty in the diffusivity value is highly correlated with the measured extra density. For this reason, we employ a Monte Carlo method to estimate the variance of the combined product of extra density and diffusivity as used in the empirical settling velocity estimate. First, we collect a set of excess density measurements and find the mean and variance of this distribution. Then, we collect a set of settling times from the diffusivity measurement procedure described above and record the normalized settling time, which is given by the settling time divided by the sphere radius squared. From this, we can deduce the mean and variance of the normalized settling time distribution. At this step, we create synthetic data by modeling the excess density and normalized settling time as normally distributed random variables with mean and variance as measured from experiment. We randomly sample an extra density and a normalized settling time, then calculate the effective diffusivity and the product of the extra density and effective diffusivity. We repeat this procedure for 10,000 sample pairs and then calculate the mean and variance of the generated population of products of extra density and effective diffusivity. The standard deviation due to the extra density and diffusivity was found to be 1.8% for an agar weight ratio of 3.44% and was taken as typical for all agar weight ratios.
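The following short sketch illustrates the sampling step of this Monte Carlo procedure; for brevity it inverts only the leading (n = 1) term of the series rather than rerunning the full bisection fit, which is our simplification.

    import numpy as np

    rng = np.random.default_rng(0)

    def kappa_from_settling(tau, delta_rho, rho_s):
        """Leading-mode inversion of the series (valid for long settling
        times and rho_s well below 6*delta_rho/pi^2); tau = t_settle / R^2."""
        return -np.log(np.pi**2 * rho_s / (6 * delta_rho)) / (np.pi**2 * tau)

    def mc_product_spread(rho_s_mean, rho_s_std, tau_mean, tau_std,
                          delta_rho, n_samples=10_000):
        """Monte Carlo mean and standard deviation of the product
        rho_s * kappa_p used in the empirical settling-velocity estimate."""
        rho_s = rng.normal(rho_s_mean, rho_s_std, n_samples)
        tau = rng.normal(tau_mean, tau_std, n_samples)
        prod = rho_s * kappa_from_settling(tau, delta_rho, rho_s)
        return prod.mean(), prod.std()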
§.§ Particle kinematics
To measure the particle position in experiments, a camera (Nikon D850 equipped with a Nikon Nikkor 105mm lens) views the experiment tank from the side and records images typically every 20 seconds, resolved at 53 μm per pixel. The tank is fitted with a dark colored background and illuminated from the side, causing the particles to appear bright relative to the image background. The image is binarized, and the center of the largest bright region is taken as the particle center. To deduce the settling velocity, the center position as a function of time is linearly approximated by least squares over a 5 cm region near the center of the tank. The conversion from image to real space coordinates involves measuring a length scale directly from the image of the tank. Due to parallax, this scale varies for a particle at the front or rear of the tank. The nominal scaling factor is taken as the mean of these two limiting scaling factors, and the uncertainty bounds are taken as the minimum and maximum values (± 6.3%).
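A simplified Python sketch of the tracking and fitting steps is given below; it takes the centroid of all above-threshold pixels rather than isolating the largest connected bright region, and all names are ours.

    import numpy as np

    def particle_depth(image, threshold):
        """Mean row index (depth, in pixels) of all pixels brighter than
        threshold -- a simplified stand-in for the centre of the largest
        bright region in the binarized image."""
        rows = np.nonzero(image > threshold)[0]
        return rows.mean()

    def settling_velocity(times, depths_cm, z_lo, z_hi):
        """Least-squares settling speed from tracked depths (in cm),
        restricted to the analysis window z_lo < z < z_hi."""
        times = np.asarray(times, dtype=float)
        depths_cm = np.asarray(depths_cm, dtype=float)
        sel = (depths_cm > z_lo) & (depths_cm < z_hi)
        slope, _ = np.polyfit(times[sel], depths_cm[sel], 1)
        return slope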
§.§ Simulations
The fluid-solute system is modeled in COMSOL 5.6 using the Laminar Flow and Transport of Diluted Species packages as a 2D axisymmetric geometry. The simulation is performed in the frame of the sphere or spheroid, where the mesh is fixed in time. The linear solute concentration is prescribed at the inlet, lateral boundary, and outlet. To simulate the process of settling in the frame of the sphere, the concentration boundary conditions are advected in time in conjunction with the imposed velocity. A uniform velocity is prescribed at the inlet, and the static pressure is prescribed at the outlet. The velocity at the inlet is linearly ramped to a constant velocity U over 1/5 of the diffusion time t_γ. Simulations are run for 2 t_γ, until the density perturbation relative to the background is constant, utilizing a BDF time-stepping scheme with maximum stepsize t_γ/100. The fluid parameters are matched with saline water, with dynamic viscosity μ = .011219 g cm^-1 s^-1, background density .997 g cm^-3, and solute diffusivity in the fluid 1.5 × 10^-5 cm^2 s^-1. This model is coupled to a solid particle region with a no-slip boundary condition for the fluid velocity, continuity of solute concentration, and continuity of solute flux. Inside the solid, the solute diffusivity κ_p is determined from experiment as described in the Particle Characterization section.
The cylindrical domain radius is chosen to be 10 times the horizontal semiaxis, and the height is 20 times the vertical semiaxis. The mesh is generated with a minimum element size of 0.0001 times the horizontal semiaxis, with a maximum element growth rate of 1.0005 and a curvature factor of 0.08. To resolve the solute and velocity boundary layer, a boundary layer mesh is prescribed along the spheroid surface and upper vertical axis with Number of boundary layers equal to 10 and Boundary layer stretching factor equal to 0.8.
At steady-state, the stress on the solid boundary is integrated to calculate the total force acting on the particle. For the particle to have reached a force equilibrium in this configuration, the extra body force due to the solid density ρ_s must offset this value. Thus in essence we impose a settling velocity, which then defines the solid density assuming equilibrium (opposite of the experiments). To compare with specific experimental trials, we iterate this procedure by updating the simulated settling velocity until the force on the particle is equal to the experimentally measured particle solid density ρ_s. This iteration procedure is accelerated by assuming the diffusion-limited velocity relationship between ρ_s and U_γ as described in Results to update the velocity used in the simulation. This iteration scheme is repeated until the calculated solid density from simulations matches the experimentally measured ρ_s within 1.2% tolerance.
§ ACKNOWLEDGEMENTS
This work was partially funded by NSF DMS-1909521, NSF DMS-1910824, NSF DMS-2308063, ONR N00014-18-1-2490, and ONR N00014-23-1-2478. We would like to thank Professors Monica Martinez-Wilhelmus and Roberto Zenit for loaned equipment, Rebecca Rosen for support with preliminary experiments, and John Antolik for discussions.
§ AUTHOR CONTRIBUTIONS
R.H. proposed research. R.H. and D.M.H. designed research. R.H. performed research. R.H., R.C., R.M.M., and D.M.H. discussed and interpreted results. R.H. and D.M.H. wrote the paper. R.H., R.C., R.M.M., and D.M.H. edited the paper.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2409.02789v1 | 20240904150418 | The entries of the Sinkhorn limit of an $m \times n$ matrix | [
"Eric Rowland",
"Jason Wu"
] | math.NT | [
"math.NT",
"math.AC",
"math.CO"
] |
The entries of the Sinkhorn limit of an m × n matrix
Department of Mathematics, Hofstra University, Hempstead, NY, USA
Half Hollow Hills High School East, Dix Hills, NY, USA
Department of Mathematics, Cornell University, Ithaca, NY, USA
§ ABSTRACT
We use Gröbner bases to compute the Sinkhorn limit of a positive 3 × 3 matrix A, showing that the entries are algebraic numbers with degree at most 6.
The polynomial equation satisfied by each entry is large, but we show that it has a natural representation in terms of linear combinations of products of minors of A.
We then use this representation to interpolate a polynomial equation satisfied by an entry of the Sinkhorn limit of a positive 4 × 4 matrix.
Finally, we use the PSLQ algorithm and 1.5 years of CPU time to formulate a conjecture, up to certain signs that we have not been able to identify, for a polynomial equation satisfied by an entry of the Sinkhorn limit of a positive m × n matrix.
In particular, we conjecture that the entries are algebraic numbers with degree at most \binom{m+n-2}{m-1}.
This degree has a combinatorial interpretation as the number of minors of an (m - 1) × (n - 1) matrix, and the coefficients reflect new combinatorial structure on sets of minor specifications.
§ INTRODUCTION
For a video introduction to this paper, see <https://youtu.be/-uIwboK4nwE>.
In a 1964 paper, Sinkhorn <cit.> considered the following iterative scaling process.
Let A be a square matrix with positive entries.
Scale the rows so that each row sum is 1.
Then scale the columns so that each column sum is 1; generically, this changes the row sums.
To restore the row sums 1, scale the rows again, then scale the columns, and so on.
Sinkhorn showed that the sequence of matrices obtained through this process converges to a matrix whose row and column sums are 1 (in other words, a doubly stochastic matrix).
We call this matrix the Sinkhorn limit of A and denote it (A).
Sinkhorn also showed that (A) is the unique doubly stochastic matrix S with the same size as A such that S = R A C for some diagonal matrices R and C with positive diagonal entries.
Here R can be taken to be the product of the row-scaling matrices and C the product of the column-scaling matrices.
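For concreteness, the iterative scaling process is only a few lines of code; the following Python sketch (ours, with an arbitrary tolerance and iteration cap) converges to (A) for any positive square matrix.

    import numpy as np

    def sinkhorn_limit(A, tol=1e-12, max_iter=10_000):
        """Alternately rescale rows and columns of a positive square matrix
        until it is (numerically) doubly stochastic."""
        S = np.array(A, dtype=float)
        for _ in range(max_iter):
            S /= S.sum(axis=1, keepdims=True)   # make every row sum 1
            S /= S.sum(axis=0, keepdims=True)   # make every column sum 1
            if np.allclose(S.sum(axis=1), 1.0, atol=tol):
                break
        return S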
The literature on questions related to the iterative scaling process is large;
Idel's extensive survey <cit.> covers results as of 2016.
Mathematical applications include preconditioning linear systems to improve numerical stability, approximating the permanent of a matrix, and determining whether a graph has a perfect matching.
In a number of other areas, iterative scaling was discovered independently multiple times <cit.>, and it is used in machine learning to efficiently compute optimal transport distances <cit.>.
As a result of its ubiquity and importance, many authors have been interested in fast algorithms for approximating Sinkhorn limits numerically <cit.>.
However, until recently, nothing was known about the exact values of the entries of Sinkhorn limits.
For a 2 × 2 matrix
A =
[ a b; c d ]
with positive entries, Nathanson <cit.> showed that
(A) =
1/√(a d) + √(b c)[ √(a d) √(b c); √(b c) √(a d) ].
In particular, the top left entry x of (A) satisfies
(a d - b c) x^2 - 2 a d x + a d = 0.
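This closed form is easy to check numerically against the iterative scaling process; the sketch below, with an arbitrarily chosen positive matrix, is illustrative only.

    import numpy as np

    a, b, c, d = 2.0, 7.0, 3.0, 5.0
    S = np.array([[a, b], [c, d]])
    for _ in range(200):                       # iterative scaling
        S /= S.sum(axis=1, keepdims=True)
        S /= S.sum(axis=0, keepdims=True)

    t = np.sqrt(a * d) + np.sqrt(b * c)        # Nathanson's closed form
    closed = np.array([[np.sqrt(a * d), np.sqrt(b * c)],
                       [np.sqrt(b * c), np.sqrt(a * d)]]) / t
    print(np.max(np.abs(S - closed)))          # should be at machine precision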
For 3 × 3 matrices, analogous descriptions of the entries of (A) were only known in special cases.
In the case that A is a symmetric 3 × 3 matrix containing exactly 2 distinct entries, a formula for (A) was obtained by Nathanson <cit.>.
For a symmetric 3 × 3 matrix, Ekhad and Zeilberger <cit.> used Gröbner bases to compute, for each entry x of (A), a degree-4 polynomial of which x is a root.
For general 3 × 3 matrices, Chen and Varghese <cit.> used numeric experiments to conjecture that the entries of (A) generically have degree 6 over the field generated by the entries of A.
Applying the iterative scaling process to
A =
[ 3 9 1; 3 2 9; 5 3 4 ]
gives the approximation
(A) ≈[ 0.27667 0.64804 0.07527; 0.25194 0.13113 0.61692; 0.47138 0.22081 0.30780 ].
Let x be the top left entry of (A).
The PSLQ integer relation algorithm <cit.> can be used to recognize an algebraic number, given a sufficiently high-precision approximation.
For the approximation
x ≈ 0.2766771162103280503525099931476512576251224460918253185145079454
with target degree 6, PSLQ produces the guess
374752 x^6 - 220388 x^5 - 844359 x^4 - 125796 x^3 + 210897 x^2 + 14346 x - 12312 = 0.
This guess remains stable when we increase the precision or the target degree.
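The same experiment can be reproduced with any arbitrary-precision library; the sketch below uses Python's mpmath, whose findpoly routine wraps PSLQ (the working precision, iteration count, and coefficient bound are illustrative choices of ours). If successful, findpoly returns the coefficient list of an integer polynomial proportional to the one displayed above.

    from mpmath import mp, matrix, findpoly

    mp.dps = 120                 # working precision in decimal digits

    def sinkhorn_entry(A, n_iter=5000):
        """High-precision top left entry of the Sinkhorn limit of A."""
        M = matrix(A)
        for _ in range(n_iter):
            for i in range(M.rows):                       # scale rows
                s = sum(M[i, j] for j in range(M.cols))
                for j in range(M.cols):
                    M[i, j] /= s
            for j in range(M.cols):                       # scale columns
                s = sum(M[i, j] for i in range(M.rows))
                for i in range(M.rows):
                    M[i, j] /= s
        return M[0, 0]

    x = sinkhorn_entry([[3, 9, 1], [3, 2, 9], [5, 3, 4]])
    # PSLQ search for an integer polynomial of degree at most 6 with root x
    print(findpoly(x, 6, maxcoeff=10**7))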
In Section <ref>, we prove the conjecture of Chen and Varghese by carrying out a Gröbner basis computation to obtain an explicit polynomial equation satisfied by an entry of (A) for a general 3 × 3 matrix A with positive entries.
In particular, we obtain formulas for the coefficients in Equation (<ref>).
We show that each coefficient in this polynomial can be written as a linear combination of products of minors of A (that is, determinants of square submatrices); this substantially reduces the amount of information required to specify the coefficients.
In Section <ref>, we use the form of the equation for 3 × 3 matrices to infer the form of the equation for general n × n matrices A.
In particular, we conjecture that the entries of (A) generically have degree \binom{2n-2}{n-1}.
For a general 4 × 4 matrix, the Gröbner basis computation is infeasible, so instead we use PSLQ to recognize entries of Sinkhorn limits for enough integer matrices to solve for each coefficient in the equation.
We also obtain several coefficients in the equation for 5 × 5 matrices with this method.
In Section <ref>, we generalize to non-square matrices by defining the Sinkhorn limit of an m × n matrix A to be the matrix obtained by iteratively scaling so that each row sum is 1 and each column sum is m/n.
Again we use PSLQ and solve for coefficients in equations for matrices of various sizes.
We then interpolate formulas for these coefficients as functions of m and n.
By identifying combinatorial structure in these formulas, we build up to the following conjecture.
Let D(m, n) be the set of all pairs (R, C) where R ⊆{2, 3, …, m}, C ⊆{2, 3, …, n}, and |R| = |C|.
The set D(m, n) consists of the specifications of all minors of an m × n matrix A that do not involve the first row or first column.
For each S ⊆ D(m, n), let M(S) be the product of minors of A defined in Section <ref>.
Let _S, σ(S)(m, n) be the |S| × |S| matrix defined in Section <ref>;
this matrix resembles an adjacency matrix, and its entries are linear functions of m and n with signs determined by the sign alteration σ(S).
Let m ≥ 1 and n ≥ 1.
For each S ⊆ D(m, n), there exists a sign alteration σ(S) such that, for every m × n matrix A with positive entries, the top left entry x of (A) satisfies
∑_S ⊆ D(m, n)(_S, σ(S)(m, n)) M(S) x^{|S|} = 0.
In particular, x is algebraic over the field generated by the entries of A, with degree at most \binom{m+n-2}{m-1}.
We conclude in Section <ref> with several open questions.
In particular, we mention that a more general process of iteratively scaling was introduced in 1937 by Kruithof in the context of predicting telephone traffic <cit.>.
We conjecture that the entries of this more general limit also have degree at most \binom{m+n-2}{m-1}.
Our Mathematica package SinkhornPolynomials <cit.> uses the results of this paper to compute Sinkhorn limits rigorously for 3 × 3 matrices and conjecturally for 2 × n, 3 × 4, 3 × 5, and 4 × 4 matrices as well as their transposes.
§ THE SINKHORN LIMIT OF A 3 × 3 MATRIX
We refer to a matrix with positive entries as a positive matrix.
In this section, we determine (A) for a general positive 3 × 3 matrix
A =
[ a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33 ],
first in Theorem <ref> as an explicit function of the entries of A and then in Theorem <ref> in a form that shows more structure.
We also introduce notation that will be used throughout the rest of the paper.
It suffices to describe the top left entry of (A).
This is because the iterative scaling process isn't sensitive to the order of the rows or the order of the columns.
Therefore, if R is the permutation matrix swapping rows 1 and i and C is the permutation matrix swapping columns 1 and j, then R (A) C = (R A C).
In particular, the (i, j) entry of (A) is equal to the (1, 1) entry of (R A C).
For example, the (2, 3) entry of (A) is equal to the (1, 1) entry of
([ a_23 a_22 a_21; a_13 a_12 a_11; a_33 a_32 a_31 ]).
To compute a polynomial equation satisfied by the top left entry of (A), we set up three matrices
S =
[ s_11 s_12 s_13; s_21 s_22 s_23; s_31 s_32 s_33 ],
R =
[ r_1 0 0; 0 r_2 0; 0 0 r_3 ],
C =
[ c_1 0 0; 0 c_2 0; 0 0 c_3 ].
The matrix equation S = R A C gives the 9 equations
s_11 = r_1 a_11 c_1 s_12 = r_1 a_12 c_2 s_13 = r_1 a_13 c_3
s_21 = r_2 a_21 c_1 s_22 = r_2 a_22 c_2 s_23 = r_2 a_23 c_3
s_31 = r_3 a_31 c_1 s_32 = r_3 a_32 c_2 s_33 = r_3 a_33 c_3,
and we obtain 6 equations from the requirement that S is doubly stochastic:
s_11 + s_12 + s_13 = 1 s_11 + s_21 + s_31 = 1
s_21 + s_22 + s_23 = 1 s_12 + s_22 + s_32 = 1
s_31 + s_32 + s_33 = 1 s_13 + s_23 + s_33 = 1.
We would like to eliminate the 14 variables s_12, s_13, …, s_33, r_1, r_2, r_3, c_1, c_2, c_3 from this system of 15 polynomial equations, resulting in a single equation in the variables s_11, a_11, a_12, …, a_33.
In principle, this can be done by computing a suitable Gröbner basis.
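For a matrix with specific numeric entries, the elimination can be sketched in any computer algebra system; the SymPy version below is ours, with the scaling freedom between R and C removed by normalizing c_1 = 1, and it is far slower than a tuned implementation. Up to possible extraneous factors, the univariate basis element it prints is a scalar multiple of the polynomial for the top left entry.

    import sympy as sp

    a = sp.Matrix([[3, 9, 1], [3, 2, 9], [5, 3, 4]])   # matrix from the example above
    r1, r2, r3, c2, c3, x = sp.symbols('r1 r2 r3 c2 c3 x')
    r = [r1, r2, r3]
    c = [sp.Integer(1), c2, c3]   # R and C are determined only up to a reciprocal scaling

    # Substituting s_ij = r_i a_ij c_j into the row and column sum constraints
    eqs = [r[i] * sum(a[i, j] * c[j] for j in range(3)) - 1 for i in range(3)]
    eqs += [c[j] * sum(a[i, j] * r[i] for i in range(3)) - 1 for j in range(3)]
    eqs += [x - r[0] * a[0, 0] * c[0]]                 # the top left entry of the limit

    # In lex order with x last, the Groebner basis contains a polynomial in x alone.
    G = sp.groebner(eqs, r1, r2, r3, c2, c3, x, order='lex')
    print([g for g in G.exprs if g.free_symbols == {x}])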
In practice, the runtime is significantly affected by the algorithm used.
Mathematica's function <cit.> with certain settings[Namely, .] computes a single polynomial in a couple seconds, whereas with other settings the computation does not finish after several hours.
The output gives the following result.
Let A be a positive 3 × 3 matrix.
The top left entry x of (A) satisfies b_6 x^6 + … + b_1 x + b_0 = 0, where the coefficients b_k appear in Table <ref> in factored form.
In particular, the degree of each entry of (A) is at most 6.
For the matrix A in Example <ref>, Theorem <ref> states that the top left entry x of (A) satisfies
81 (374752 x^6 - 220388 x^5 - 844359 x^4 - 125796 x^3 + 210897 x^2 + 14346 x - 12312) = 0.
This agrees with Equation (<ref>).
This polynomial is irreducible, and this confirms the conjecture of Chen and Varghese <cit.> that the entries of (A) for positive 3 × 3 matrices A generically have degree 6.
Let f(x) = b_6 x^6 + b_5 x^5 + … + b_1 x + b_0 be the polynomial in Theorem <ref>.
Project A to a symmetric matrix by setting a_21 = a_12, a_31 = a_13, and a_32 = a_23.
Then the projection of f(x) factors as -((a_11 a_23 - a_12 a_13) x - a_11 a_23)^2 g(x), where g(x) is the degree-4 polynomial computed by Ekhad and Zeilberger <cit.> for the top left entry.
An obvious question is whether there is a better way to write the polynomial f(x) in Theorem <ref>.
In fact there is, using determinants.
We first observe that b_k contains the factor a_11^{5-k} for each k ∈{0, 1, …, 5}.
This suggests that the scaled polynomial a_11 f(x) is more natural than f(x), since the coefficient of x^k in a_11 f(x) contains the factor a_11^{6-k} not just for k ∈{0, 1, …, 5} but for all k ∈{0, 1, …, 6} (where in fact the coefficient of x^6 also contains a_11^1).
Now the leading coefficient a_11 b_6 of a_11 f(x) is the product of 6 minors of A.
More specifically, it is the product of all the minors of A that involve a_11.
Furthermore, the constant coefficient a_11 b_0 of a_11 f(x) is the product of a_11^6 and the 6 minors of A that do not involve the first row or first column (including the determinant 1 of the 0 × 0 matrix).
To rewrite the other coefficients, we would like to interpolate between these products for a_11 b_6 and a_11 b_0.
One possibility is that a_11 b_k is a linear combination of products of minors, where each product consists of
* k minors that involve the first row and first column,
* 6 - k minors that do not involve the first row or first column, and
* a_11^{6-k} (equivalently, one factor a_11 for each of the latter).
We have seen that this is the case for a_11 b_6 and a_11 b_0.
Remarkably, it turns out that a_11 b_5, a_11 b_4, …, a_11 b_1 can be written this way as well.
Some notation for these minors will be useful.
Define
D(m, n) = {(R, C) : R ⊆{2, 3, …, m} and C ⊆{2, 3, …, n} and |R| = |C|}.
Define A_R, C to be the submatrix of A obtained by extracting the rows indexed by R and the columns indexed by C.
For each (R, C) ∈ D(m, n), define
Δ([ R; C ])
= det A_{1}∪ R, {1}∪ C
Γ([ R; C ])
= a_11 det A_R, C.
The minor
Δ([ R; C ])
involves a_11, and
Γ([ R; C ])
is the product of a_11 and a minor that does not involve the first row or first column.
This notation does not reflect the dependence on A, but the matrix will be clear from context.
Each subset S ⊆ D(m, n) specifies a monomial in the expressions
Δ([ R; C ])
and
Γ([ R; C ]), namely
M(S)
= ∏_(R, C) ∈ SΔ([ R; C ])
·∏_(R, C) ∈ D(m, n) ∖ SΓ([ R; C ]).
Each element of D(m, n) contributes to the monomial M(S) as the argument of either Δ or Γ; indeed we specify each monomial by the elements that appear as arguments of Δ.
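For later computations it is convenient to have these monomials in executable form; the following SymPy sketch (our helper functions, with index sets represented as tuples of 1-based indices) evaluates M(S) for a given matrix.

    from itertools import combinations
    import sympy as sp

    def D(m, n):
        """All pairs (R, C) with R ⊆ {2,...,m}, C ⊆ {2,...,n}, |R| = |C|."""
        return [(R, C) for k in range(min(m, n))
                for R in combinations(range(2, m + 1), k)
                for C in combinations(range(2, n + 1), k)]

    def minor(A, rows, cols):
        """Determinant of the submatrix with the given 1-based rows and
        columns; the empty minor is 1."""
        if not rows:
            return sp.Integer(1)
        return sp.Matrix(A).extract([i - 1 for i in rows],
                                    [j - 1 for j in cols]).det()

    def M(A, S, m, n):
        """The monomial M(S): a Delta factor for each (R, C) in S and a
        Gamma factor (a_11 times the complementary minor) for the rest."""
        a11 = sp.Matrix(A)[0, 0]
        prod = sp.Integer(1)
        for (R, C) in D(m, n):
            if (R, C) in S:
                prod *= minor(A, (1,) + R, (1,) + C)    # Delta
            else:
                prod *= a11 * minor(A, R, C)            # Gamma
        return prod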
To make a subset S = {(R_1, C_1), (R_2, C_2), …, (R_k, C_k)} easier to read, we format it as
S =
R_1 R_2 ⋯ R_k
C_1 C_2 ⋯ C_k.
For m = 3 and n = 3, we have
D(3, 3) =
{} {2} {2} {3} {3} {2, 3}
{} {2} {3} {2} {3} {2, 3}.
Moreover, from the expressions for b_6 and b_0 in Table <ref>, we have
a_11 b_6 =
Δ([ {}; {} ])
Δ([ {2}; {2} ])
Δ([ {2}; {3} ])
Δ([ {3}; {2} ])
Δ([ {3}; {3} ])
Δ([ {2, 3}; {2, 3} ])
=
M({} {2} {2} {3} {3} {2, 3}
{} {2} {3} {2} {3} {2, 3})
a_11 b_0 =
Γ([ {}; {} ])
Γ([ {2}; {2} ])
Γ([ {2}; {3} ])
Γ([ {3}; {2} ])
Γ([ {3}; {3} ])
Γ([ {2, 3}; {2, 3} ])
=
M(
).
Each of these monomials involves Δ or Γ but not both, whereas the monomial
M({} {3}
{} {2})
=
Δ([ {}; {} ])
Γ([ {2}; {2} ])
Γ([ {2}; {3} ])
Δ([ {3}; {2} ])
Γ([ {3}; {3} ])
Γ([ {2, 3}; {2, 3} ])
involves both Δ and Γ, for example.
We continue to let A be the general 3 × 3 matrix in Equation (<ref>), and let x be the top left entry of (A).
To rewrite the coefficients a_11 b_k, we look for coefficients c_S, indexed by subsets S ⊆ D(3, 3), such that
a_11 b_k = ∑_{S ⊆ D(3, 3), |S| = k} c_S M(S)
for each k ∈{0, 1, …, 6}.
This will give the polynomial equation
∑_{k = 0}^{6}(∑_{S ⊆ D(3, 3), |S| = k} c_S M(S)) x^k = 0.
The expressions for a_11 b_6 and a_11 b_0 in Example <ref> allow us to choose c_D(3, 3) = 1 and c_{} = 1.
For each k ∈{1, 2, 4, 5}, one checks that a_11 b_k can be written uniquely as a linear combination of the monomials M(S) where |S| = k.
This determines c_S for such subsets S.
For example,
a_11 b_1 =
-3 M({}
{})
- M({2}
{2})
- M({2}
{3})
- M({3}
{2})
- M({3}
{3})
+ M({2, 3}
{2, 3}).
However, the coefficient a_11 b_3 has multiple representations.
The family of such representations is 1-dimensional, due to the relation
M({} {2} {3}
{} {2} {3})
+ M({} {2} {2, 3}
{} {3} {2, 3})
+ M({} {3} {2, 3}
{} {2} {2, 3})
+ M({2} {2} {3}
{2} {3} {2})
+ M({2} {3} {2, 3}
{2} {3} {2, 3})
+ M({2} {3} {3}
{3} {2} {3})
=
M({} {2} {2, 3}
{} {2} {2, 3})
+ M({} {2} {3}
{} {3} {2})
+ M({} {3} {2, 3}
{} {3} {2, 3})
+ M({2} {2} {3}
{2} {3} {3})
+ M({2} {3} {3}
{2} {2} {3})
+ M({2} {3} {2, 3}
{3} {2} {2, 3}).
Therefore the expression for b_3 in Table <ref> does not uniquely determine the coefficients c_S where S = 3.
However, there is additional information we can use to obtain uniqueness.
The polynomial f(x) possesses symmetries arising from the following invariance properties.
Since the iterative scaling process isn't sensitive to row order, the top left entry of (A) is invariant under row permutations of A that fix the first row.
Similarly for column permutations.
Additionally, the top left entry of (A) is invariant under transposition of A; this follows from Sinkhorn's result that (A) is the unique doubly stochastic matrix S such that S = R A C for some diagonal matrices R and C.
This suggests the following equivalence relation.
Let S and T be subsets of D(m, n).
We write T ≡ S if the set of minors specified by T is transformed into the set of minors specified by S by some composition of
* row permutations that fix the first row,
* column permutations that fix the first column, and
* transposition if m = n.
For each S ⊆ D(m, n), define the class sum
Σ(S) = ∑_{T ⊆ D(m, n), T ≡ S} M(T)
to be the sum of the monomials corresponding to the elements in the equivalence class of S.
Let m = 3 and n = 3, and consider the subset
S =
{2}
{2} of size 1.
The equivalence class of S is
{{2}
{2}, {2}
{3}, {3}
{2}, {3}
{3}}
since these 4 specifications of 1 × 1 minors can be obtained from each other by row and column permutations.
This equivalence class is reflected in the linear combination (<ref>) for a_11 b_1.
Namely, the four monomials
M({2}
{2}),
M({2}
{3}),
M({3}
{2}),
M({3}
{3})
all have the same coefficient -1.
In particular, their contribution to a_11 b_1 is
-Σ({2}
{2})
=
-M({2}
{2})
-
M({2}
{3})
-
M({3}
{2})
-
M({3}
{3}).
Motivated by Example <ref>, we add the constraint that c_T = c_S when T ≡ S, so that the coefficients c_S reflect the symmetries of f(x).
With this constraint, the coefficient a_11 b_3 has a unique representation as a linear combination of monomials M(S) where |S| = 3.
Writing each coefficient in Table <ref> using class sums gives the following improvement of Theorem <ref>.
Here d_k = a_11 b_k.
Let A be a positive 3 × 3 matrix.
The top left entry x of (A) satisfies d_6 x^6 + d_5 x^5 + d_4 x^4 + d_3 x^3 + d_2 x^2 + d_1 x + d_0 = 0, where
d_6 =
Σ({} {2} {2} {3} {3} {2, 3}
{} {2} {3} {2} {3} {2, 3})
d_5 =
-3 Σ({} {2} {2} {3} {3}
{} {2} {3} {2} {3})
- Σ({} {2} {2} {3} {2, 3}
{} {2} {3} {2} {2, 3})
+ Σ({2} {2} {3} {3} {2, 3}
{2} {3} {2} {3} {2, 3})
d_4 =
4 Σ({} {2} {2} {3}
{} {2} {3} {2})
+ Σ({} {2} {3} {2, 3}
{} {2} {3} {2, 3})
- 3 Σ({2} {2} {3} {3}
{2} {3} {2} {3})
d_3 =
-4 Σ({} {2} {2}
{} {2} {3})
- 5 Σ({} {2} {3}
{} {2} {3})
+ Σ({} {2} {2, 3}
{} {2} {2, 3})
+ Σ({2} {2} {3}
{2} {3} {2})
- Σ({2} {3} {2, 3}
{2} {3} {2, 3})
d_2 =
4 Σ({} {2}
{} {2})
- 3 Σ({} {2, 3}
{} {2, 3})
+ Σ({2} {3}
{2} {3})
d_1 =
-3 Σ({}
{})
- Σ({2}
{2})
+ Σ({2, 3}
{2, 3})
d_0 =
Σ(
).
Several class sums do not appear in Theorem <ref>, namely
Σ({} {2} {2} {2, 3}
{} {2} {3} {2, 3}), Σ({2} {2} {3} {2, 3}
{2} {3} {2} {2, 3}), Σ({2} {2} {2, 3}
{2} {3} {2, 3}), Σ({2} {2}
{2} {3}), Σ({2} {2, 3}
{2} {2, 3}).
These class sums get assigned the coefficient 0 when the coefficients d_4, d_3, d_2 are written as linear combinations of Σ(S).
In total, there are 24 equivalence classes of subsets S ⊆ D(3, 3), so we have compressed the information in Table <ref> down to a function from the set of these 24 equivalence classes to the set {-5, -4, -3, -1, 0, 1, 4}.
For the particular matrix A in Example <ref>, Theorem <ref> gives 91064736 x^6 - 53554284 x^5 - 205179237 x^4 - 30568428 x^3 + 51247971 x^2 + 3486078 x - 2991816 = 0 for the top left entry.
This equation can be computed quickly with SinkhornPolynomials <cit.>.
Dividing this equation by 243 produces Equation (<ref>).
Some special cases can be obtained from Theorem <ref>.
For example, the following corollary shows that the degree drops if a minor is 0.
(If multiple minors are 0, the degree can drop further.)
Let A be a positive 3 × 3 matrix, and let a_11 be the (1, 1) entry of A.
If one of the 2 × 2 minors involving a_11 is 0 and all minors not involving a_11 are not 0, then the top left entry of (A) has degree at most 5.
The coefficient of x^6 in Theorem <ref> is
d_6
=
Σ({} {2} {2} {3} {3} {2, 3}
{} {2} {3} {2} {3} {2, 3})
=
M({} {2} {2} {3} {3} {2, 3}
{} {2} {3} {2} {3} {2, 3})
=
∏_(R, C) ∈ D(3, 3)Δ([ R; C ]).
Since one of the minors involving a_11 is 0, this product is 0, so d_6 = 0.
On the other hand, since all minors not involving a_11 are not 0, we have
d_0
=
Σ(
)
=
M(
)
=
∏_(R, C) ∈ D(3, 3)Γ([ R; C ])
≠ 0.
Therefore the polynomial in Theorem <ref> is not the 0 polynomial, and its degree in x is at most 5.
If a matrix is sufficiently degenerate, then Theorem <ref> is vacuously true and does not immediately give any information about (A).
For example, let
A =
[ 4 5 6; 1 2 3; 2 4 6 ].
Here rows 2 and 3 are scalar multiples of each other, so
Δ([ {2, 3}; {2, 3} ]) = det A = 0
and
Γ([ {2, 3}; {2, 3} ]) = a_11 det A_{2, 3}, {2, 3} = 0.
Since each monomial M(S) contains one of these two determinants as a factor, we have M(S) = 0 for all S ⊆ D(3, 3).
Therefore d_k = 0 for each k.
However, we can still use Theorem <ref> to determine the entries of (A), as the following result shows.
Let A be a positive 3 × 3 matrix, and let a_ij be the (i, j) entry of A.
If rows 2 and 3 are scalar multiples of each other, then the top left entry x of (A) satisfies e_3 x^3 + e_2 x^2 + e_1 x + e_0 = 0, where
e_3 = a_11 (a_11 a_22 - a_12 a_21) (a_11 a_23 - a_13 a_21)
e_2 = a_11 (a_11 a_12 a_21 a_23 + a_11 a_13 a_21 a_22 - 3 a_11^2 a_22 a_23 + a_12 a_13 a_21^2)
e_1 = 3 a_11^3 a_22 a_23
e_0 = -a_11^3 a_22 a_23.
In particular, x is independent of row 3.
The analogous result holds if columns 2 and 3 are scalar multiples of each other.
For the matrix in Equation (<ref>), Corollary <ref> gives 24 (3 x^3 - 25 x^2 + 48 x - 16) = 0.
The idea is to carefully factor out the minors that are 0.
First we describe how to obtain the polynomial e_3 x^3 + e_2 x^2 + e_1 x + e_0, and then we justify it.
Begin with a general 3 × 3 matrix A with symbolic entries;
in particular, there are no algebraic relations between the entries.
Let r, s be symbols; eventually we will set r to be the scalar factor a_31/a_21.
Apply Theorem <ref> to A to obtain a polynomial equation satisfied by the top left entry of (A).
Then replace each instance of det A in this polynomial with s det A_{2, 3}, {2, 3}.
By the definition of M(S), each monomial now contains det A_{2, 3}, {2, 3} as a factor; divide by this factor.
Finally, replace a_3,j with r a_2,j for each j ∈{1, 2, 3}.
The resulting polynomial factors as a product of two cubic polynomials in x.
One of these cubic factors is independent of r and s;
this is the polynomial e_3 x^3 + e_2 x^2 + e_1 x + e_0.
To justify the construction, let A now be the matrix in the statement of the corollary, let r = a_31/a_21 be the scalar factor, and fix a real number s ≠ 0.
We approximate A by matrices with no 0 minors.
Namely, let B(t) be a continuous (3 × 3 matrix)-valued function such that lim_t → 0^+ B(t) = A and, for all t > 0, no minor of B(t) is 0.
Further, we assume for all t > 0 that det B(t)/det B(t)_{2, 3}, {2, 3} = s.
Apply Theorem <ref> to B(t) where t > 0.
Since det B(t) = s det B(t)_{2, 3}, {2, 3} by assumption, the polynomial given by Theorem <ref> is of the form (det B(t)_{2, 3}, {2, 3}) g(t, x).
The entries of (B(t)) are continuous functions of the entries of B(t), and the roots of a polynomial are continuous functions of its coefficients, so g(x) := lim_t → 0^+ g(t, x) is a polynomial for the top left entry of (A).
Moreover, since only one of the two cubic factors of g(x) is independent of s, the top left entry of (A) is a root of that cubic factor.
§ SQUARE MATRICES AND SOLVING FOR COEFFICIENTS
In this section, we use Theorem <ref> to infer the form of an equation satisfied by the top left entry of the Sinkhorn limit of a positive n × n matrix.
We then interpolate the coefficients in the equation for 4 × 4 matrices to obtain Conjecture <ref>.
For a positive 2 × 2 matrix A, the top left entry x of (A) satisfies Equation (<ref>), namely (a_11 a_22 - a_12 a_21) x^2 - 2 a_11 a_22 x + a_11 a_22 = 0.
Multiplying by a_11, this can be written using Δ and Γ as
Δ([ {}; {} ])
Δ([ {2}; {2} ])
x^2
- 2
Δ([ {}; {} ])
Γ([ {2}; {2} ])
x
+
Γ([ {}; {} ])
Γ([ {2}; {2} ])
= 0
or, equivalently,
M({} {2}
{} {2})
x^2
- 2
M({}
{})
x
+
M(
)
= 0.
Along with Theorem <ref>, this suggests that, for an n × n matrix, the coefficient of x^k is a linear combination of the monomials M(S) where |S| = k.
In particular, we expect the degree of the equation to be |D(n, n)| = ∑_{k = 0}^{n - 1}\binom{n-1}{k}^2 = \binom{2n-2}{n-1}, so that generically the entries of (A) for a 4 × 4 matrix have degree \binom{6}{3} = 20 and for a 5 × 5 matrix have degree \binom{8}{4} = 70.
We generalize the notation for the coefficients c_S from the previous section to c_S(n) for an n × n matrix, where c_S(3) = c_S.
Let n ≥ 1.
There exist integers c_S(n), indexed by subsets S ⊆ D(n, n), such that, for every positive n × n matrix A, the top left entry x of (A) satisfies
∑_{k = 0}^{\binom{2n-2}{n-1}}(∑_{S ⊆ D(n, n), |S| = k} c_S(n) M(S)) x^k = 0
or, equivalently,
∑_{S ⊆ D(n, n)} c_S(n) M(S) x^{|S|} = 0.
This conjecture predicts that, since both
Δ([ R; C ]) and Γ([ R; C ])
are homogeneous degree-(|R| + 1) polynomials in the entries of A, each monomial M(S) (and therefore also the coefficient of each x^k in Equation (<ref>)) is a homogeneous polynomial with degree
∑_{(R, C) ∈ D(n, n)} (|R| + 1)
= ∑_{i = 0}^{n - 1}\binom{n-1}{i}^2 (i + 1)
= \frac{n + 1}{2}\binom{2n-2}{n-1}.
For n = 1, 2, 3, …, this degree is 1, 3, 12, 50, 210, 882, 3696, 15444, … A092443.
We mention two surprising properties of the polynomial in Theorem <ref> that we expect to generalize to the polynomial in Conjecture <ref>.
The first is a symmetry.
Each coefficient d_k in Theorem <ref> can be obtained from d_{6-k} by replacing
Δ([ R; C ])
↦Γ([ {2, 3}∖ R; {2, 3}∖ C ])
and Γ([ R; C ])
↦Δ([ {2, 3}∖ R; {2, 3}∖ C ]).
The second is that, for each k, the sum of the coefficients c_S(3) for |S| = k is a signed binomial coefficient, namely
∑_{S ⊆ D(3, 3)} c_S(3) x^{|S|} = (x - 1)^6.
Analogous statements also hold for the quadratic polynomial in Equation (<ref>);
each coefficient d_k is related to d_{2-k} by a symmetry, and
∑_{S ⊆ D(2, 2)} c_S(2) x^{|S|} = (x - 1)^2.
We do not have explanations for either of these properties.
It remains to determine the coefficients c_S(n).
They are not uniquely determined by the conditions in Conjecture <ref> since we can scale the polynomial.
To remove this source of non-uniqueness, we define c_{}(n) = 1 for all n ≥ 1.
Based on the polynomials for n = 2 and n = 3, we conjecture that c_D(n, n)(n) = 1 for all n ≥ 2.
For n = 3 we determined the coefficients c_S(3) from the output of a Gröbner basis computation, but for n ≥ 4 this computation seems to be infeasible.
For a general 4 × 4 matrix, we aborted the computation after 1 week.
Instead, we generate many pseudorandom n × n matrices A, identify the top left entry of (A) as an algebraic number for each, and set up systems of linear equations in c_S(n).
We describe these three steps next.
For the first step, we generate matrices with entries from {1, 2, …, 20}.
Since the entries are integers, each M(S) is also an integer; this will be important in the third step.
For each matrix, we check that none of its minors are 0, since a 0 minor implies M(S) = 0 for several S, removing the dependence on the corresponding coefficients c_S(n).
If any minors are 0, we discard that matrix.
In the second step, for each matrix A generated in the first step, we determine a polynomial equation satisfied by the top left entry x of (A).
There are two possible methods.
One method is to use Gröbner bases;
this is faster than the Gröbner basis computation for a matrix with symbolic entries, but for n ≥ 5 it is still slow.
Therefore we use another method, which is to apply the iterative scaling process to obtain a numeric approximation to (A) and then guess a polynomial for its top left entry.
We begin by numericizing the integer entries of A to high precision.
For n = 4 we use precision 2^12, and for n = 5 we use precision 2^15.
Then we iteratively scale until we reach a fixed point.
The precision of the entries drops during the scaling process, but with the initial precisions 2^12 and 2^15 we get entries with sufficiently high precision that we can reliably recognize them using PSLQ.
The expected degree of the polynomial is d = \binom{2n-2}{n-1} according to Conjecture <ref>.
Building in redundancy, we use Mathematica's <cit.> to approximate x by an algebraic number with target degree d + 2.
When the output has degree d, which is almost always the case, this is strong evidence that the approximation is in fact the exact algebraic number we seek.
Occasionally the output has degree less than d, in which case we discard the matrix; for example, one 4 × 4 matrix produced a polynomial with degree 8 rather than 20, presumably because the general degree-20 polynomial, when evaluated at the entries of the matrix, is reducible.
Finally, we perform a check on the output by computing the ratio of its leading coefficient to its constant coefficient.
This ratio should be M(D(n, n))/M({}), assuming Conjecture <ref> is correct and c_D(n, n)(n) = 1.
All outputs passed this test.
We record the matrix A along with the polynomial equation satisfied by x, and this will give us 1 equation in the third step.
(In fact we can get n^2 equations by applying PSLQ to each entry of the numeric approximation to (A) and recording each polynomial along with the matrix obtained by swapping the appropriate rows and columns of A.
For n = 5 this is worthwhile, since our implementation of the iterative scaling process takes roughly 3 minutes to reach a fixed point.)
The third step is to determine the coefficients c_S(n) in the coefficient of x^k in Conjecture <ref>.
For a given k, we do this by solving a system of linear equations involving the coefficients c_S(n) where |S| = k.
We assume c_T(n) = c_S(n) if T ≡ S, so it suffices to determine c_S(n) for one representative S of each equivalence class, analogous to Theorem <ref>.
This reduces the number of unknown coefficients, which reduces the number of equations we need, which reduces the number of matrices A we apply the iterative scaling process to in the second step above.
However, first we must partition {S ⊆ D(n, n) : |S| = k} into its equivalence classes under ≡.
Some care must be taken to do this efficiently;
we make use of the fact that row permutations commute with column permutations, so it suffices to apply row permutations first.
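A naive brute-force version of this partition (not the optimized procedure just described) fits in a few lines of Python and is adequate for the 3 × 3 case; all names and the canonical-form convention are ours.

    from itertools import combinations, permutations

    def D(m, n):
        return [(R, C) for k in range(min(m, n))
                for R in combinations(range(2, m + 1), k)
                for C in combinations(range(2, n + 1), k)]

    def canonical(S, m, n):
        """Lexicographically smallest image of the subset S under row
        permutations fixing row 1, column permutations fixing column 1,
        and (for square matrices) transposition."""
        best = None
        for p in permutations(range(2, m + 1)):
            rho = dict(zip(range(2, m + 1), p))
            for q in permutations(range(2, n + 1)):
                gam = dict(zip(range(2, n + 1), q))
                for transpose in ([False, True] if m == n else [False]):
                    img = []
                    for (R, C) in S:
                        R2 = tuple(sorted(rho[i] for i in R))
                        C2 = tuple(sorted(gam[j] for j in C))
                        img.append((C2, R2) if transpose else (R2, C2))
                    img = tuple(sorted(img))
                    if best is None or img < best:
                        best = img
        return best

    def classes(m, n, k):
        """Partition the size-k subsets of D(m, n) into equivalence classes."""
        reps = {}
        for S in combinations(D(m, n), k):
            reps.setdefault(canonical(S, m, n), []).append(S)
        return list(reps.values())

    # For example, sum(len(classes(3, 3, k)) for k in range(7)) should give
    # the 24 equivalence classes mentioned in Section 2.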
Once we have computed the equivalence classes, we take each polynomial computed in the second step above, scale it by an integer so that its leading coefficient is M(D(n, n)) (and therefore its constant coefficient is M({})), extract the coefficient of x^k, and set this coefficient equal to ∑_S c_S(n) Σ(S) where the sum is over one representative from each equivalence class of size-k subsets.
The number of equivalence classes tells us how many such equations we need in order to solve for the unknown coefficients c_S(n).
We include more equations than necessary in the system, building in redundancy, so that if a solution is found then we can be confident that the conjectured form is correct.
Then we solve the system.
In practice, even setting up the system can be computationally expensive.
For n = 5 and k = 4, our initial implementation took 6 days to set up the system (before solving!) because there are 1518 unknown coefficients c_S(n), so we need at least that many equations, and each equation involves \binom{70}{4} = 916895 monomials, each of which is a product of 70 determinants.
For small k, a more efficient way to construct each equation is to take advantage of the fact that, for each pair S, T of subsets of D(5, 5), the products M(S) and M(T) have most factors in common, and almost all are Γ factors.
Therefore, we form an equivalent but substantially simpler equation by dividing both sides by
M({})
= ∏_(R, C) ∈ D(5, 5)Γ([ R; C ]).
On the left, we compute M({}) once and divide the extracted coefficient of x^k by M({}).
On the right, we divide each M(S) by M({}).
Instead of computing M(S)/M({}) from definitions, we precompute the ratio
Δ([ R; C ])
/
Γ([ R; C ])
for each (R, C) ∈ D(5, 5); then, for each S with |S| = k, we use the precomputed ratios to compute
M(S)/M({})
=
∏_(R, C) ∈ SΔ([ R; C ])/Γ([ R; C ]).
Here we're using the fact that no minor is 0 to divide by
Γ([ R; C ]).
This method is also used by SinkhornPolynomials to compute polynomials more quickly when no minor is 0.
We now carry out these three steps for n = 4 to obtain a polynomial for the top left entry x of (A) for 4 × 4 matrices A.
This polynomial is analogous to the polynomial in Theorem <ref> for 3 × 3 matrices.
The number of equivalence classes of size-k subsets of D(4, 4) for k = 0, 1, …, 20 is
1, 4, 12, 40, 123, 324, 724, 1352, 2108, 2760, 3024, 2760, …, 4, 1.
The number of unknown coefficients c_S(4) is the sum of these numbers, which is 17920.
We use the definition c_{}(4) = 1 and the conjecture c_D(4, 4)(4) = 1, and we solve the remaining 19 systems of linear equations, the largest of which requires 3024 equations.
Unfortunately, for each k ∈{4, 5, …, 16} the system has multiple solutions.
For example, when we solve the system for k = 4, only 104 of the 123 unknown coefficients c_S(4) with |S| = 4 are uniquely determined;
the other 19 are parameterized by 2 free variables.
For k = 10, the solution space has dimension 1141.
This is not a weakness of the interpolation strategy but rather implies that the coefficients of x^4, x^5, …, x^16 have multiple representations as linear combinations of class sums Σ(S) and therefore that relations exist among these class sums.
However, by setting the free variables to 0 (or any other values), we obtain variable-free coefficients c_S(4).
The 17920 coefficients
c_D(4, 4)(4) = 1
⋮
c_{}
{}(4) = -4
c_{2}
{2}(4) = -2
c_{2, 3}
{2, 3}(4) = 0
c_{2, 3, 4}
{2, 3, 4}(4) = 2
c_{}(4) = 1
determine a polynomial equation satisfied by the top left entry x of the Sinkhorn limit of a 4 × 4 matrix, namely
∑_k = 0^20(∑ c_S(4) Σ(S)) x^k = 0
where the inner sum is over one representative S from each equivalence class of size-k subsets of D(4, 4).
The full list of coefficients is included in SinkhornPolynomials <cit.>.
Consider the matrix
A =
[ 3 1 2 2; 2 2 2 1; 1 2 3 2; 1 4 2 3 ],
and let x be the top left entry of (A).
Conjecture <ref> gives
382625520076800 x^20 - 15753370260418560 x^19 + 224644720812019200 x^18
- 1949693785825830912 x^17 + 11625683820163305984 x^16 - 50547801347982259200 x^15
+ 165827284134596798976 x^14 - 419342005165888558080 x^13 + 828111699533723747328 x^12
- 1284220190788992755712 x^11 + 1558933050581256001536 x^10 - 1456458194243244008448 x^9
+ 999710159534823121920 x^8 - 435645828109071673344 x^7 + 31060141423020794880 x^6
+ 122853118332060905472 x^5 - 110123924197151416320 x^4 + 53612068706701295616 x^3
- 16383341182381572096 x^2 + 2975198930601246720 x - 246790694704250880
= 0.
The values of the coefficients c_S(4) in Conjecture <ref> are not all canonical, since we don't have natural conditions under which they are uniquely determined.
However, we continue under the assumption that there is a unique natural function c_S(n) and seek to identify it.
In principle, we can use the same method to interpolate a polynomial equation for n × n matrices for any given n.
However, the computation is formidable.
For 5 × 5 matrices, we were able to compute the coefficients for k = 4 and k = 5 (where the families of representations are respectively 8-dimensional and 44-dimensional), but for k = 6 we could not get the computation of equivalence classes to finish (aborting after 2 weeks).
Completing the computation up to the halfway point k = \frac{1}{2}\binom{8}{4} = 35, after which we could use the conjectural symmetry, seems to be infeasible.
For 6 × 6 matrices, PSLQ must recognize algebraic numbers with degree \binom{10}{5} = 252.
We couldn't get any of these computations to finish with sufficiently high precision.
§ RECTANGULAR MATRICES AND COMBINATORIAL STRUCTURE
The coefficients c_S(n) for n ∈{2, 3, 4, 5} that we have computed so far are not enough to guess the general formula for c_S(n).
In this section, we expand the scope to include matrices that are not necessarily square.
This allows us to compute enough coefficients to identify formulas in special cases.
These formulas reveal the relevant combinatorial structure on subsets S, and this allows us to piece together all but one detail of the general formula for the coefficients, resulting in Conjecture <ref>.
Doubly stochastic matrices are necessarily square.
This is because, in every matrix, the sum of the row sums is equal to the sum of the column sums.
Therefore we must generalize how we scale.
Let A be a positive m × n matrix.
The Sinkhorn limit of A is the matrix obtained by iteratively scaling so that each row sum is 1 and each column sum is m/n.
Its existence was established (in a more general form) by Sinkhorn in a 1967 paper <cit.>.
Conjecture <ref> generalizes to m × n matrices as follows.
We extend the coefficients c_S(n) in the previous section to coefficients c_S(m, n), where c_S(n, n) = c_S(n).
For each m and n, we scale the coefficients so that c_{}(m, n) = 1.
We have |D(m, n)| = ∑_{k = 0}^{min(m, n) - 1}\binom{m - 1}{k}\binom{n - 1}{k} = \binom{m + n - 2}{m - 1}.
Let m ≥ 1 and n ≥ 1.
There exist rational numbers c_S(m, n), indexed by subsets S ⊆ D(m, n), such that, for every positive m × n matrix A, the top left entry x of the Sinkhorn limit of A satisfies
∑_{S ⊆ D(m, n)} c_S(m, n) M(S) x^{|S|} = 0.
In particular, x has degree at most \binom{m + n - 2}{m - 1}.
To interpolate values of c_S(m, n), we use the method described in Section <ref>.
Overall, we used 1.5 years of CPU time to iteratively scale matrices of various sizes and recognize 102000 algebraic numbers.
An additional month of CPU time was spent setting up and solving systems of linear equations in the coefficients c_S(m, n), which resulted in the identification of 63000 rational coefficients (and an additional 56000 coefficients parameterized by free variables).
Rather than fixing m and n and varying S as in previous sections, we change our perspective now by fixing S and working to identify c_S(m, n) as a function of m and n.
Accordingly, we define c_S(m, n) to be 0 if S ⊈ D(m, n) (that is, S contains row or column indices that are larger than an m × n matrix supports).
We begin with subsets S where |S| = 1.
Let S = {({}, {})}.
Equation (<ref>) implies c_S(2, 2) = -2, Theorem <ref> implies c_S(3, 3) = -3, and Conjecture <ref> implies c_S(4, 4) = -4.
The values of c_S(m, n) we computed for several additional matrix sizes appear in the following table.
          n = 1   2   3   4
  m = 1      -1  -2  -3  -4
      2      -1  -2  -3  -4
      3      -1  -2  -3  -4
      4      -1  -2  -3  -4
This suggests the formula c_S(m, n) = -n.
There is an asymmetry in our definition of the Sinkhorn limit for non-square matrices, since the Sinkhorn limit of A has row sums 1 and column sums m/n.
However, we expect some symmetry in the coefficients c_S(m, n), since m times the Sinkhorn limit of A^⊤ equals n times the transpose of the Sinkhorn limit of A.
We can obtain this symmetry by considering 1/m times the Sinkhorn limit of A, which has row sums 1/m and column sums 1/n.
The form of the polynomial for the top left entry y of this rescaled limit can be obtained from Conjecture <ref> by substituting x = m y.
Therefore, the coefficients c_S(m, n) should satisfy m^{|S|} c_S(m, n) = n^{|S|} c_{S^⊤}(n, m), where S^⊤ is obtained from S by swapping the roles of rows and columns in each minor specification:
S^⊤ = {(C, R) : (R, C) ∈ S}.
As in the previous example, let S = {({}, {})}.
Since S^⊤ = S, we have m c_S(m, n) = n c_S(n, m).
In other words, m c_S(m, n) is a symmetric function of m and n.
Indeed, the table of values suggests m c_S(m, n) = -m n.
The coefficients for other size-1 subsets S also seem to be simple polynomial functions of m and n.
For S = {({2}, {2})}, the data suggests m c_S(m, n) = m + n - m n (for all m ≥ 2 and n ≥ 2).
For S = {({2}, {3})}, the data suggests the same formula m c_S(m, n) = m + n - m n; indeed this is expected, since ({2}, {2}) ≡ ({2}, {3}).
Since each subset {(R, C)} with |R| = i is equivalent to S = {({2, 3, …, i + 1}, {2, 3, …, i + 1})}, it suffices to consider the latter.
For S = {({2, 3}, {2, 3})}, the data suggests m c_S(m, n) = 2 m + 2 n - m n.
For S = {({2, 3, 4}, {2, 3, 4})}, it suggests m c_S(m, n) = 3 m + 3 n - m n.
These formulas lead to the following.
Let m ≥ 1 and n ≥ 1, and let S = {(R, C)} ⊆ D(m, n).
Then m c_S(m, n) = |R| (m + n) - m n.
Next we consider subsets S where |S| = 2.
Let S = {({}, {}), ({2}, {2})}.
Several values of m^2 c_S(m, n) appear in the following table.
          n = 2    3    4    5
  m = 2       4   12   24   40
      3      12   36   72  120
      4      24   72  144  240
      5      40  120  240  400
This suggests that m^2 c_S(m, n) = (m - m n) (n - m n).
Let S = {({2}, {2}), ({2}, {3})}.
The two minor specifications in S involve the same row but different columns.
Therefore there is no subset T in the equivalence class of S such that T^⊤ = T, so we do not expect m^2 c_S(m, n) to be a symmetric function of m and n.
Here are several of its values (where the bottom right entry is not included because we have no data for 6 × 6 matrices):
          n = 3    4    5    6
  m = 2      -3    0    5   12
      3       0   16   40   72
      4       9   48  105  180
      5      24   96  200  336
      6      45  160  325
This suggests that m^2 c_S(m, n) = (n - m n) (2 m + n - m n).
By interpolating polynomial formulas for additional subsets S, one conjectures that the scaled coefficient m^{|S|} c_S(m, n) is a polynomial function of m and n with degree |S| in each variable.
Moreover, there seem to be two basic relationships between minor specifications in S that play a central role in the form of the polynomial function.
These are illustrated by the previous two examples.
We say that two minor specifications (R_1, C_1), (R_2, C_2) ∈ D(m, n) are linked if either of the following conditions holds.
* Their sizes differ by 1, and the smaller is a subset of the larger.
That is,
* R_1 = R_2 ∖{i} and C_1 = C_2 ∖{j} for some i ∈ R_2 and j ∈ C_2, or
* R_2 = R_1 ∖{i} and C_2 = C_1 ∖{j} for some i ∈ R_1 and j ∈ C_1.
In this case, we say that they form a type-1 link.
* Their sizes are the same, and they differ in exactly 1 row index or 1 column index.
That is,
* R_1 = R_2 and C_1 ∖{i} = C_2 ∖{j} for some i ∈ C_1 and j ∈ C_2 such that i ≠ j, or
* C_1 = C_2 and R_1 ∖{i} = R_2 ∖{j} for some i ∈ R_1 and j ∈ R_2 such that i ≠ j.
In this case, we say that they form a type-2 link.
The minor specifications ({}, {}) and ({2}, {2}) in Example <ref> form a type-1 link, and the minor specifications ({2}, {2}) and ({2}, {3}) in Example <ref> form a type-2 link.
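For concreteness, the case analysis in this definition can be written as a short Python function (an illustrative sketch of our own, not code from the authors' package); each minor specification is represented as a pair (R, C) of index sets.

```python
# Sketch: classify the link (if any) between two minor specifications (R, C).
def link_type(spec1, spec2):
    (R1, C1), (R2, C2) = spec1, spec2
    R1, C1, R2, C2 = map(frozenset, (R1, C1, R2, C2))
    # Type 1: sizes differ by 1 and the smaller specification is contained in the larger.
    if abs(len(R1) - len(R2)) == 1:
        (Rs, Cs), (Rl, Cl) = sorted([(R1, C1), (R2, C2)], key=lambda rc: len(rc[0]))
        if Rs <= Rl and Cs <= Cl:
            return 1
    # Type 2: equal sizes, and exactly one row index or one column index differs.
    if len(R1) == len(R2):
        if R1 == R2 and C1 != C2 and len(C1 & C2) == len(C1) - 1:
            return 2
        if C1 == C2 and R1 != R2 and len(R1 & R2) == len(R1) - 1:
            return 2
    return None   # not linked

print(link_type((set(), set()), ({2}, {2})))          # 1
print(link_type(({2}, {2}), ({2}, {3})))              # 2
print(link_type(({2}, {2}), ({2, 3, 4}, {2, 3, 4})))  # None (not linked)
```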
Let S = {({2}, {2}), ({2, 3}, {2, 3})}.
The two minor specifications ({2}, {2}) and ({2, 3}, {2, 3}) form a type-1 link.
The values of m^2 c_S(m, n) suggest m^2 c_S(m, n) = (m + 2 n - m n) (2 m + n - m n).
Let S = {({2, 3}, {2, 3}), ({2, 4}, {2, 3})}, which consists of a type-2 link.
The data suggests m^2 c_S(m, n) = (2 m + n - m n) (2 m + 3 n - m n).
Let S = {({2}, {2}), ({2, 3, 4}, {2, 3, 4})}, whose two elements are not linked.
The data suggests m^2 c_S(m, n) = (m + n - m n) (3 m + 3 n - m n).
These examples, along with others, suggest general formulas for size-2 subsets.
Let m ≥ 1 and n ≥ 1, and let S = {(R_1, C_1), (R_2, C_2)} ⊆ D(m, n).
* If (R_1, C_1) and (R_2, C_2) form a type-1 link with |R_1| + 1 = |R_2|, then
m^2 c_S(m, n) = (|R_1| (m + n) + n - m n) (|R_1| (m + n) + m - m n).
* If (R_1, C_1) and (R_2, C_2) form a type-2 link with R_1 = R_2, then
m^2 c_S(m, n) = (|R_1| (m + n) - m - m n) (|R_1| (m + n) + m - m n).
* If (R_1, C_1) and (R_2, C_2) form a type-2 link with C_1 = C_2, then
m^2 c_S(m, n) = (|R_1| (m + n) - n - m n) (|R_1| (m + n) + n - m n).
* If (R_1, C_1) and (R_2, C_2) are not linked, then
m^2 c_S(m, n) = (|R_1| (m + n) - m n) (|R_2| (m + n) - m n).
Conjectures <ref> and <ref> imply that, if |S| ≤ 2, then the polynomial m^{|S|} c_S(m, n) is a product of factors that are linear in m and linear in n.
When we consider subsets S with S≥ 3, we find additional polynomials with this property.
In particular, when S has no linked pairs, there seems to be a simple description of m^{|S|} c_S(m, n) as follows.
Let m ≥ 1 and n ≥ 1, and let S = {(R_1, C_1), (R_2, C_2), …, (R_k, C_k)} ⊆ D(m, n).
If S contains no linked pairs, then
m^k c_S(m, n) = ∏_{i = 1}^{k} (|R_i| (m + n) - m n).
This conjecture suggests more generally that the structure of the polynomial m^k c_S(m, n) is determined by the connected components of linked pairs in S and, moreover, that the contributions of the components are independent of each other.
This is the content of Conjecture <ref> below.
We make this precise by defining the following graph.
For each S ⊆ D(m, n), let G_S be the graph whose vertex set is S and whose edges connect pairs of linked vertices.
Let S = {({}, {}), ({2}, {2}), ({2, 3}, {3, 4}), ({2, 4}, {3, 4})}.
The first two elements of S form a type-1 link, and the last two form a type-2 link.
These are the only links, so G_S is the graph on 4 vertices with two non-adjacent edges.
The two connected components are equivalent to the subsets in Examples <ref> and <ref>, respectively.
The values we computed of m^4 c_S(m, n) are as follows.
          n = 4      5      6      7
  m = 4   -2304  -5040  -7200  -6552
      5   -2880      0
      6       0
      7   10080
This is consistent with m^4 c_S(m, n) = (m - m n) (n - m n) (2 m + n - m n) (2 m + 3 n - m n), which is the product of the formulas in Examples <ref> and <ref>.
Let m ≥ 1 and n ≥ 1.
For every S ⊆ D(m, n),
m^{|S|} c_S(m, n) = ∏_T m^{|T|} c_T(m, n)
where the product is over the connected components T of S.
Moreover, since |S| = ∑_T |T|, this implies c_S(m, n) = ∏_T c_T(m, n).
Conjecture <ref> is supported by all the rational coefficients we computed.
Assuming it is true, it suffices to determine m^{|S|} c_S(m, n) for subsets S consisting of a single connected component.
When a connected component consists of more than one linked pair, the corresponding polynomial is not necessarily a product of linear factors.
Let S = {({2}, {2}), ({3}, {3}), ({2, 3}, {2, 3})}.
The minor specifications ({2}, {2}) and ({3}, {3}) are not linked, but each of the other two pairs is.
Therefore G_S consists of a single connected component.
Here are several values of m^3 c_S(m, n):
          n = 3      4      5      6
  m = 3     -27    -70   -161   -324
      4     -70   -256   -682  -1456
      5    -161   -682  -1875  -4028
      6    -324  -1456  -4028
Without the value for 6 × 6 matrices, we cannot interpolate a cubic polynomial in m and n.
However, by interpolating cubic polynomials in n for the first three rows, we see that m + n - m n is likely a factor.
Dividing each value in the table by this factor, we then interpolate a quadratic polynomial to obtain m^3 c_S(m, n) = (m + n - m n) (2 m^2 + 2 n^2 + 6 m n - 3 m^2 n - 3 m n^2 + m^2 n^2).
This quadratic factor is irreducible.
An obvious question is whether there is a better way to write such polynomials, so that we can see the general structure.
In fact there is, using determinants.
Since the determinant of a block diagonal matrix is the product of the determinants of the blocks, determinant formulas are good candidates for functions that decompose as products over connected components.
In this direction, we next rewrite Conjecture <ref> using determinant formulas; these are likely more natural than the factorizations in Conjecture <ref>.
Let m ≥ 1 and n ≥ 1, and let S = {(R_1, C_1), (R_2, C_2)} ⊆ D(m, n).
* If (R_1, C_1) and (R_2, C_2) form a type-1 link with |R_1| + 1 = |R_2|, then
m^2 c_S(m, n) = det[ |R_1| (m + n) - m n   m;   -n   |R_2| (m + n) - m n ].
* If (R_1, C_1) and (R_2, C_2) form a type-2 link with R_1 = R_2, then
m^2 c_S(m, n) = det[ |R_1| (m + n) - m n   -m;   -m   |R_2| (m + n) - m n ].
* If (R_1, C_1) and (R_2, C_2) form a type-2 link with C_1 = C_2, then
m^2 c_S(m, n) = det[ |R_1| (m + n) - m n   -n;   -n   |R_2| (m + n) - m n ].
* If (R_1, C_1) and (R_2, C_2) are not linked, then
m^2 c_S(m, n) = det[ |R_1| (m + n) - m n   0;   0   |R_2| (m + n) - m n ].
For size-1 subsets S = {(R, C)} ⊆ D(m, n), we can rewrite Conjecture <ref> as the 1 × 1 determinant m c_S(m, n) = det[ |R| (m + n) - m n ].
For the size-0 subset S = {}, the definition c_{}(m, n) = 1 is consistent with c_{}(m, n) being the determinant of the 0 × 0 matrix.
Along with Conjecture <ref>, this suggests a determinant formula for m^S c_S(m, n) for an arbitrary subset S.
Since the matrix in this determinant formula resembles an adjacency matrix, we introduce the following notation.
For each S = {(R_1, C_1), (R_2, C_2), …, (R_k, C_k)}, define _S(m, n) to be the k × k matrix with the property that, for all i, j satisfying 1 ≤ i < j ≤ k, the 2 × 2 submatrix (_S(m, n))_{{i, j}, {i, j}} is the matrix in Conjecture <ref> for (R_i, C_i) and (R_j, C_j).
In particular, the ith diagonal entry of _S(m, n) is |R_i| (m + n) - m n, and the off-diagonal entries are elements of {-m, -n, 0, m}.
As in Example <ref>, let S = {({2}, {2}), ({3}, {3}), ({2, 3}, {2, 3})}.
We have
_S(m, n)
= [ m + n - m n 0 m; 0 m + n - m n m; -n -n 2 m + 2 n - m n ].
Indeed, the determinant of _S(m, n) agrees with the formula in Example <ref> for m^3 c_S(m, n).
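One can confirm this symbolically; the following sketch (using sympy, with the matrix above written as M_S in code) expands the determinant and subtracts the interpolated formula.

```python
# Sketch: the determinant of the matrix above matches the interpolated cubic formula.
import sympy as sp

m, n = sp.symbols('m n')
M_S = sp.Matrix([[m + n - m*n, 0,           m],
                 [0,           m + n - m*n, m],
                 [-n,          -n,          2*m + 2*n - m*n]])
target = (m + n - m*n) * (2*m**2 + 2*n**2 + 6*m*n - 3*m**2*n - 3*m*n**2 + m**2*n**2)
print(sp.expand(M_S.det() - target))   # 0, so the two expressions agree
```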
Unfortunately, this construction doesn't always work.
Let S = {({2, 3}, {2, 3}), ({2, 3}, {2, 4}), ({2, 3}, {2, 5})}.
Each pair of elements in S forms a type-2 link, and the common column is the same for all three links.
We have
_S(m, n)
= [ 2 m + 2 n - m n -m -m; -m 2 m + 2 n - m n -m; -m -m 2 m + 2 n - m n ].
The determinant of _S(m, n) does not reproduce the following values we computed for m^3 c_S(m, n).
          n = 5    6    7    8    9   10   11
  m = 3      28   54   80  100  108   98   64
      4     216  256  200    0
      5     500  338
      6     784
However, if we alter the signs of the off-diagonal terms, we can get a determinant formula that produces these values, namely
m^3 c_S(m, n) = det[ 2 m + 2 n - m n   m   m;   m   2 m + 2 n - m n   m;   m   m   2 m + 2 n - m n ].
We conjecture below that the signs of the off-diagonal terms in _S(m, n) can always be altered in such a way, independent of m and n, that its determinant gives the value of m^{|S|} c_S(m, n).
Exactly how to alter them is the detail we have not been able to determine.
This alteration is not unique, since if we pick a set V ⊆{1, 2, …, k} of indices and negate the rows and columns indexed by V then the determinant does not change.
An equivalence class of correct sign alterations is generated in this way, all of which lead to the value of m^{|S|} c_S(m, n).
This equivalence class seems to depend only on the link structure of S and not on the sizes of its elements.
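The invariance itself is elementary: negating the rows and columns indexed by V is conjugation by a diagonal ±1 matrix, whose determinant squares to 1. A minimal numerical illustration (our own, with an arbitrary test matrix):

```python
# Sketch: negating the rows and columns indexed by V leaves the determinant unchanged.
import sympy as sp
import random

k = 4
M = sp.Matrix(k, k, lambda i, j: random.randint(-5, 5))
V = {1, 3}                                             # any subset of indices
D = sp.diag(*[-1 if i in V else 1 for i in range(k)])  # diagonal +-1 matrix
print(sp.expand((D * M * D).det() - M.det()))          # 0
```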
Let S = {({2, 3, 4}, {2, 3, 4}), ({2, 3, 4}, {2, 3, 5}), ({2, 3, 4}, {2, 3, 6})}.
The elements of S are related to each other in the same way as in the previous example;
each pair forms a type-2 link, and the common columns are the same for all three links.
Using the same signs as in the previous example, the determinant formula
m^3 c_S(m, n) = det[ 3 m + 3 n - m n   m   m;   m   3 m + 3 n - m n   m;   m   m   3 m + 3 n - m n ]
agrees with the 4 values we computed.
These same signs also work for S = {({2}, {2}), ({2}, {3}), ({2}, {4})}, which also has the same link structure.
Our most general result is therefore Conjecture <ref>, which appears in Section <ref> and which we restate below with more information.
We introduce one last bit of notation to enable sign alterations.
We also take the opportunity to scale the entries of _S(m, n) by 1/m; this allows us to dispense with the factor m^{|S|} that would otherwise appear in the summand in Conjecture <ref>.
Let S ⊆ D(m, n), and let k = |S|.
Let σ : {1, 2, …, k}^2 → {-1, 1}.
Define _{S, σ}(m, n) to be the k × k matrix whose (i, j) entry is (1/m) σ((i, j)) times the (i, j) entry of _S(m, n).
Let m ≥ 1 and n ≥ 1.
For each S ⊆ D(m, n), there exists a function σ(S) : {1, 2, …, |S|}^2 → {-1, 1} with σ(S)((i, i)) = 1 for all i such that, for every positive m × n matrix A, the top left entry x of the Sinkhorn limit of A satisfies
∑_{S ⊆ D(m, n)} det(_{S, σ(S)}(m, n)) M(S) x^{|S|} = 0.
Moreover, the sign alterations σ(S) can be chosen so that they
* satisfy σ(T) = σ(S) for all T ≡ S,
* are independent of m and n, and
* depend only on the link structure of S and not on the sizes of its elements.
We conclude this section by identifying the correct sign alterations for 2 × n matrices for general n, leading to an explicit formula for the equation satisfied by x.
For each exponent k, there are at most 2 equivalence classes of subsets — an equivalence class containing subsets S such that ({}, {}) ∈ S and another containing the rest.
We consider them separately.
Let m = 2 and S = {({}, {}), ({2}, {2}), ({2}, {3}), …, ({2}, {k})}.
Then
_S(2, n) =
[ -2n    2     2    ⋯    2
   -n   2-n   -2    ⋯   -2
   -n   -2    2-n   ⋯   -2
    ⋮    ⋮     ⋮    ⋱    ⋮
   -n   -2    -2    ⋯   2-n ].
If k ∈ {1, 2}, then 2^{|S|} c_S(2, n) = det _S(2, n), so these signs are correct.
If k ≥ 3, they are not, but negating each -2 entry (that is, the off-diagonal entries that aren't in the first row or first column) gives a matrix whose determinant is 2^{|S|} c_S(2, n).
Let m = 2 and T = {({2}, {2}), ({2}, {3}), ({2}, {4}), …, ({2}, {k + 1})}.
Then
_T(2, n) =
[ 2-n   -2    ⋯   -2
   -2   2-n   ⋯   -2
    ⋮     ⋮    ⋱    ⋮
   -2   -2    ⋯   2-n ].
If k ∈ {0, 1, 2}, then these signs are correct.
If k ≥ 3, then (again) negating the off-diagonal entries that aren't in the first row or first column gives a matrix whose determinant is 2^{|T|} c_T(2, n).
The determinants of the two previous sign-corrected matrices evaluate as follows.
Let n ≥ 1.
Let S_k = {({}, {}), ({2}, {2}), ({2}, {3}), …, ({2}, {k})} for each k ∈ {1, …, n - 1, n}, let T_k = {({2}, {2}), ({2}, {3}), ({2}, {4}), …, ({2}, {k + 1})} for each k ∈ {0, 1, …, n - 1}, and define the associated coefficients by
2^k c_{S_k}(2, n) = (-n)^{k - 1} (2 k - 2 n - 2) if 1 ≤ k ≤ n, and 0 if k = 0;
2^k c_{T_k}(2, n) = (-n)^{k - 1} (2 k - n) if 0 ≤ k ≤ n - 1, and 0 if k = n.
(We have defined c_S_0(2, n) = 0 and c_T_n(2, n) = 0 despite S_0 and T_n being undefined; this allows us to write the following sum simply.)
For every positive 2 × n matrix A, the top left entry x of the Sinkhorn limit of A satisfies
∑_{k = 0}^{n} ( c_{S_k}(2, n) Σ(S_k) + c_{T_k}(2, n) Σ(T_k) ) x^k = 0.
Gröbner basis computations are feasible for 2 × n matrices and establish that the previous conjecture is true for n ≤ 12.
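The two determinant evaluations above can also be checked symbolically for small k; the sketch below (our own illustration, using sympy) builds the sign-corrected matrices just described and compares their determinants with the stated closed forms.

```python
# Sketch: check the closed-form determinants for small k (n is kept symbolic).
# Off-diagonal entries outside the first row and column are negated when k >= 3.
import sympy as sp

n = sp.symbols('n')

def det_S(k):   # matrix for S_k: corner -2n, first row 2's, first column -n's
    def entry(i, j):
        if i == j:
            return -2*n if i == 0 else 2 - n
        if i == 0:
            return 2
        if j == 0:
            return -n
        return 2 if k >= 3 else -2
    return sp.Matrix(k, k, entry).det()

def det_T(k):   # matrix for T_k: diagonal 2 - n, off-diagonal -2
    def entry(i, j):
        if i == j:
            return 2 - n
        if i == 0 or j == 0:
            return -2
        return 2 if k >= 3 else -2
    return sp.Matrix(k, k, entry).det()

for k in range(1, 7):
    assert sp.expand(det_S(k) - (-n)**(k - 1)*(2*k - 2*n - 2)) == 0
    assert sp.expand(det_T(k) - (-n)**(k - 1)*(2*k - n)) == 0
print("determinant formulas confirmed for k = 1, ..., 6")
```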
§ OPEN QUESTIONS
The main open question is to identify the equivalence class of correct sign alterations σ(S) in Conjecture <ref> for each S, ideally in the form of a canonical representative.
Since _S(m, n) resembles an adjacency matrix, it seems likely that _S, σ(S)(m, n) has a combinatorial interpretation, and this may suggest the correct signs.
A natural candidate is a signed graph, which is a graph along with a function assigning -1 or 1 to each edge.
The switching class of a signed graph is the set of all signed graphs that can be obtained by choosing a subset V of its vertices and negating the signs of all edges incident to a vertex in V (with multiplicity if an edge is incident to multiple vertices in V) <cit.>.
The equivalence class of correct sign alterations for S corresponds to a switching class on the graph G_S defined in Section <ref>.
We haven't been able to use this to identify the correct signs, however.
Second, how can we prove Conjecture <ref>?
In the absence of a combinatorial proof, one could interpolate polynomial equations representing the entries of diagonal matrices R and C, as we did for the entries of the Sinkhorn limit of A, and hope to prove that all three matrices are correct by checking that R A C equals the Sinkhorn limit of A.
This is how Nathanson established the Sinkhorn limit of a 2 × 2 matrix <cit.>.
However, this works for 2 × 2 matrices because we have explicit expressions for the entries of the Sinkhorn limit, rather than specifications as roots.
For 3 × 3 matrices, the diagonal entries of R and C are roots of degree-6 polynomials, and generically the product of two such roots has degree 6^2;
this is too big, since the entries of the Sinkhorn limit have degree 6.
Therefore we would need to find a single degree-6 field extension that contains all entries of the Sinkhorn limit, R, and C so that we can perform arithmetic on them symbolically.
What is this extension?
Third, as we saw in Section <ref>, the lack of a unique representation of the coefficient of x^3 for 3 × 3 matrices is due to a relation among 12 monomials M(S) with S = 3.
What is the combinatorial structure of such relations?
Similarly, is there structure in relations among Σ(S)?
For example, the coefficient of x^5 for 3 × 4 matrices has a 1-dimensional family of representations due to the relation
Σ({({}, {}), ({2}, {2}), ({2}, {3}), ({3}, {4}), ({2, 3}, {2, 4})})
+ Σ({({}, {}), ({2}, {2}), ({2}, {3}), ({2, 3}, {2, 3}), ({2, 3}, {2, 4})})
+ Σ({({2}, {2}), ({2}, {3}), ({3}, {2}), ({3}, {4}), ({2, 3}, {2, 3})})
+ 2 Σ({({}, {}), ({2}, {2}), ({3}, {3}), ({2, 3}, {2, 4}), ({2, 3}, {3, 4})})
+ 2 Σ({({2}, {2}), ({2}, {3}), ({2}, {4}), ({3}, {2}), ({2, 3}, {3, 4})})
+ 2 Σ({({2}, {2}), ({2}, {3}), ({3}, {4}), ({2, 3}, {2, 4}), ({2, 3}, {3, 4})})
=
Σ({({}, {}), ({2}, {2}), ({3}, {3}), ({2, 3}, {2, 3}), ({2, 3}, {2, 4})})
+ Σ({({2}, {2}), ({2}, {3}), ({2}, {4}), ({3}, {2}), ({2, 3}, {2, 3})})
+ Σ({({2}, {2}), ({2}, {3}), ({3}, {4}), ({2, 3}, {2, 3}), ({2, 3}, {2, 4})})
+ 2 Σ({({}, {}), ({2}, {2}), ({2}, {3}), ({3}, {4}), ({2, 3}, {2, 3})})
+ 2 Σ({({}, {}), ({2}, {2}), ({2}, {3}), ({2, 3}, {2, 4}), ({2, 3}, {3, 4})})
+ 2 Σ({({2}, {2}), ({2}, {3}), ({3}, {2}), ({3}, {4}), ({2, 3}, {3, 4})}).
It would be interesting to understand these better.
Fourth, how does Corollary <ref> generalize to m × n matrices with linear dependencies among their rows or columns?
The coefficients in Corollary <ref> have unique representations as linear combinations of class sums Σ(S) where S ⊆ D(2, 3), namely
e_3 = Σ({({}, {}), ({2}, {2}), ({2}, {3})})
e_2 = -2 Σ({({}, {}), ({2}, {2})}) + Σ({({2}, {2}), ({2}, {3})})
e_1 = 3 Σ({({}, {})})
e_0 = -Σ({}).
What is the general formula?
Finally, there is a further generalization of Conjecture <ref> whose form is not known.
Decades before Sinkhorn's paper, the iterative scaling process was introduced by Kruithof <cit.> in the context of predicting telephone traffic.
In this application, rather than scaling to obtain row and column sums of 1, each row and column has a potentially different target sum.
Sinkhorn <cit.> showed that the limit exists.
We call this limit the Kruithof limit.
Kruithof <cit.> considered the matrix
A =
[ 2000 1030 650 320; 1080 1110 555 255; 720 580 500 200; 350 280 210 160 ]
with target row sums V = [ 6000 4000 2500 1000 ] and target column sums W = [ 6225 4000 2340 935 ].
Let x be the top left entry of the Kruithof limit.
Numerically, x ≈ 3246.38700234.
A Gröbner basis computation gives an equation
62 11170485642866385308015185014605806684592592997303612 x^20
- 1911288675240357642608985257264441863326549355081446688219995 x^19
+ ⋯
- 980316295756763597938629190043558577216660563425441394040234375 · 10^72 x
+ 60077293526471262201893650291744622440239260152893558984375 · 10^79
= 0
satisfied by x.
In particular, x has degree 20.
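For illustration, the Kruithof limit in this example can be approximated by iterative proportional fitting in a few lines of Python (our own sketch; the iteration count is arbitrary):

```python
# Sketch: iterative proportional fitting (Kruithof scaling) for the example above.
import numpy as np

A = np.array([[2000., 1030., 650., 320.],
              [1080., 1110., 555., 255.],
              [ 720.,  580., 500., 200.],
              [ 350.,  280., 210., 160.]])
V = np.array([6000., 4000., 2500., 1000.])   # target row sums
W = np.array([6225., 4000., 2340.,  935.])   # target column sums

M = A.copy()
for _ in range(10000):
    M *= (V / M.sum(axis=1))[:, None]        # scale rows to the target row sums
    M *= (W / M.sum(axis=0))[None, :]        # scale columns to the target column sums
print(M[0, 0])                               # approximately 3246.38700234
```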
The previous example suggests that the degrees of entries of Kruithof limits are the same as those of Sinkhorn limits of the same size.
Let m ≥ 1 and n ≥ 1.
Let A be a positive m × n matrix, let V be a positive m × 1 matrix, and let W be a positive 1 × n matrix such that the sum of the entries of V equals the sum of the entries of W.
The top left entry x of the Kruithof limit of A with target row sums V and target column sums W is algebraic over the field generated by the entries of A, V, and W, with degree at most \binom{m + n - 2}{m - 1}.
Since the Kruithof limit specializes to the Sinkhorn limit when V = [ 1 1 ⋯ 1 ] and W = [ m/n m/n ⋯ m/n ], we suspect that the entries in the determinant formulas in Conjecture <ref> generalize somehow to involve entries of V and W, and this should give a generalization of Conjecture <ref> to Kruithof limits.
Moreover, Gröbner basis computations suggest that the surprising property we mentioned in Section <ref> regarding the coefficients for square matrices satisfying
∑_{S ⊆ D(n, n)} c_S(n, n) x^{|S|} = (x - 1)^{\binom{2n - 2}{n - 1}}
generalizes as follows.
With the notation of Conjecture <ref>, let V = [ r_1 r_2 ⋯ r_m ] and W = [ c_1 c_2 ⋯ c_n ].
If the general equation satisfied by the top left entry x of the Kruithof limit of A with target row sums V and target column sums W is
∑_{S ⊆ D(m, n)} c_S(V, W) M(S) x^{|S|} = 0,
then
∑_{S ⊆ D(m, n)} c_S(V, W) x^{|S|} = (x - r_1)^{\binom{m + n - 3}{n - 1}} (x - c_1)^{\binom{m + n - 3}{m - 1}}.
In particular, this sum is independent of r_2, …, r_m and c_2, …, c_n.
99
Allen-Zhu–Li–Oliveira–Wigderson
Zeyuan Allen-Zhu, Yuanzhi Li, Rafael Oliveira, and Avi Wigderson,
Much faster algorithms for matrix scaling,
58th Annual IEEE Symposium on Foundations of Computer Science (2017) 890–901.
Brown
David T. Brown,
A note on approximations to discrete probability distributions,
Information and Control 2 (1959) 386–392.
Chen–Varghese
Kevin Chen and Abel Varghese, personal communication, Summer 2019.
Cohen–Madry–Tsipras–Vladu
Michael B. Cohen, Aleksander Madry, Dimitris Tsipras, and Adrian Vladu,
Matrix scaling and balancing via box constrained Newton's method and interior point methods,
58th Annual IEEE Symposium on Foundations of Computer Science (2017) 902–913.
Cuturi
Marco Cuturi,
Sinkhorn distances: lightspeed computation of optimal transportation distances,
Advances in Neural Information Processing Systems 26 (2013) 2292–2300.
Deming–Stephan
W. Edwards Deming and Frederick F. Stephan,
On a least squares adjustment of a sampled frequency table when the expected marginal totals are known,
The Annals of Mathematical Statistics 11 (1940) 427–444.
Ekhad–Zeilberger
Shalosh B. Ekhad and Doron Zeilberger,
Answers to some questions about explicit Sinkhorn limits posed by Mel Nathanson,
<https://arxiv.org/abs/1902.10783> (6 pages).
Ferguson–Bailey
Helaman R. P. Ferguson and David H. Bailey,
A polynomial time, numerically stable integer relation algorithm,
RNR Technical Report RNR-91-032 (1992) (14 pages).
Franklin–Lorenz
Joel Franklin and Jens Lorenz,
On the scaling of multidimensional matrices,
Linear Algebra and its Applications 114–115 (1989) 717–735.
Idel
Martin Idel,
A review of matrix scaling and Sinkhorn's normal form for matrices and positive maps,
<https://arxiv.org/abs/1609.06349> (101 pages).
Kalantari–Khachiyan 1
Bahman Kalantari and Leonid Khachiyan,
On the rate of convergence of deterministic and randomized RAS matrix scaling algorithms,
Operations Research Letters 14 (1993) 237–244.
Kalantari–Khachiyan 2
Bahman Kalantari and Leonid Khachiyan,
On the complexity of nonnegative-matrix scaling,
Linear Algebra and its Applications 240 (1996) 87–103.
Kalantari–Lari–Ricca–Simeone
B. Kalantari, I. Lari, F. Ricca, and B. Simeone,
On the complexity of general matrix scaling and entropy minimization via the RAS algorithm,
Mathematical Programming 112 (2008) 371–401.
Kruithof
J. Kruithof,
Telefoonverkeersrekening,
De Ingenieur 52 (1937) E15–E25.
English translation by Pieter-Tjerk de Boer:
<https://wwwhome.ewi.utwente.nl/ ptdeboer/misc/kruithof-1937-translation.html>
Linial–Samorodnitsky–Wigderson
Nathan Linial, Alex Samorodnitsky, and Avi Wigderson,
A deterministic strongly polynomial algorithm for matrix scaling and approximate permanents,
Combinatorica 20 (2000) 545–568.
Nathanson 2x2
Melvyn B. Nathanson,
Alternate minimization and doubly stochastic matrices,
Integers 20A (2020) Article #A10 (17 pages).
Nathanson 3x3
Melvyn B. Nathanson,
Matrix scaling and explicit doubly stochastic limits,
Linear Algebra and its Applications 578 (2019) 111–132.
SinkhornPolynomials
Eric Rowland and Jason Wu,
SinkhornPolynomials,
a Mathematica package available as an ancillary file associated with this arXiv submission.
Sinkhorn
Richard Sinkhorn,
A relationship between arbitrary positive matrices and doubly stochastic matrices,
The Annals of Mathematical Statistics 35 (1964) 876–879.
Sinkhorn 1967
Richard Sinkhorn,
Diagonal equivalence to matrices with prescribed row and column sums,
The American Mathematical Monthly 74 (1967) 402–405.
OEIS
Neil Sloane et al.,
The On-Line Encyclopedia of Integer Sequences,
<http://oeis.org>.
GroebnerBasis
Wolfram Research,
GroebnerBasis,
Wolfram Language function,
<https://reference.wolfram.com/language/ref/GroebnerBasis.html> (1991, updated 2007).
RootApproximant
Wolfram Research,
RootApproximant,
Wolfram Language function,
<http://reference.wolfram.com/language/ref/RootApproximant.html> (2007, updated 2008).
Zaslavsky
Thomas Zaslavsky,
Characterizations of signed graphs,
Journal of Graph Theory 5 (1981) 401–406.
|
http://arxiv.org/abs/2409.02268v1 | 20240903195614 | Lissajous dynamics of a quantum particle in a tilted two-dimensional discrete lattice | [
"Grzegorz Jaczewski",
"Tomasz Sowiński"
] | quant-ph | [
"quant-ph"
] |
Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02093 Warsaw, Poland
Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, Poland
Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, Poland
§ ABSTRACT
The quantum dynamics of a single particle in a discrete two-dimensional tilted lattice is analyzed from the perspective of the classical-quantum correspondence. Utilizing the fact that tilting the lattice results in oscillatory dynamics, we show how the parameters of the lattice and the initial state of the particle can be tuned so that during evolution the probability distribution does not change its shape while its center follows the trajectory known in classical mechanics as Lissajous curves.
Lissajous dynamics of a quantum particle in a tilted two-dimensional discrete lattice
Tomasz Sowiński
September 9, 2024
=====================================================================================
§ INTRODUCTION
The question of the relationship between quantum dynamics and its classical counterpart has been analyzed in many different ways since the birth of quantum mechanics. In the simplest cases of systems described by quadratic Hamiltonians (both time-dependent and time-independent) in any number of dimensions, e.g., such as the harmonic oscillator, it can be shown straightforwardly that the evolution of the expectation values of certain operators is exactly the same as the evolution of the classical counterparts <cit.>. This observation has been elegantly generalized by Ehrenfest to all mechanical systems, with his theorem showing exactly where deviations of the quantum mechanical description from the classical one occur <cit.>. These approaches have allowed also the discovery of approximate solutions of the Schrödinger equation describing a particle moving in an arbitrary electromagnetic field and conclusively to introduce so-called trajectory‐coherent states, i.e., specific-shape wave packets following trajectories determined by the classical equation of motion <cit.>.
In our work, we explore a slightly different aspect of the correspondence between classical and quantum dynamics. We base this on the well-known observation that a quantum particle moving in a periodic potential subjected to the additional influence of a constant force performs specific oscillations, the so-called Bloch oscillations <cit.>. Extending this, we perform a detailed analysis of the two-dimensional dynamics of a localized wave packet in a discrete two-dimensional lattice and show that the parameters of the system can be tuned so that the trajectory traced by the packet has the shape of the Lissajous curves known from classical physics, i.e., trajectories drawn by a classical oscillating particle in two perpendicular directions. In this way, we organise previously known results <cit.>
and give a clear classification of possible curves and the parameters realizing them. In contrast to previous attempts, we consider dynamics in a purely discrete system. In this way, we extract the simplest generic system manifesting quantum Lissajous dynamics.
It is worth mentioning that we consider a scenario with a coin-free walker, i.e., a quantum particle hopping to neighboring sites with equal probabilities independent of its internal quantum state. From this perspective, our system is essentially different from generic discrete quantum walk systems studied extensively in the literature in many different contexts in which spatial dynamics is substantially entangled with the internal degree of freedom of the walker <cit.>.
§ THE MODEL
In our work, we consider the simplest possible scenario of a quantum particle moving in a two-dimensional square lattice, additionally subjected to the external constant force. Assuming that a vector |𝗑,𝗒⟩ describes a quantum state of a particle occupying lattice site with coordinates (𝗑,𝗒) (where 𝗑 and 𝗒 are integers) and the tunneling is allowed only to the neighboring lattice sites, one can write the Hamiltonian of the system as
Ĥ=Ĥ_0 + Ĥ_F,
where
Ĥ_0 = -∑_𝗑,𝗒J_x(|𝗑-1,𝗒⟩+|𝗑+1,𝗒⟩)⟨𝗑,𝗒|
-∑_𝗑,𝗒J_y(|𝗑,𝗒-1⟩+|𝗑,𝗒+1⟩)⟨𝗑,𝗒|
describes the dynamics in the homogenous lattice unaffected by external force, while
Ĥ_F = ∑_𝗑,𝗒(𝗑 F_x+y F_y)|𝗑,𝗒⟩⟨𝗑,𝗒|
introduces external linear potential energy shift with slopes F_x and F_y. These parameters have a direct interpretation of mutually perpendicular components of the external constant force acting on the particle (for example external electric field). Although in general, tunneling in perpendicular directions can be tuned independently, in the following we will set both amplitudes equal J_x=J_y=J.
At any moment t the quantum state of moving particle |Ψ(t)⟩ can be decomposed into basis states {|𝗑,𝗒⟩} as |Ψ(t)⟩=∑_𝗑𝗒ψ_𝗑𝗒(t)|𝗑,𝗒⟩, where ψ_𝗑𝗒(t) has a natural interpretation of the probability amplitude of finding a particle at lattice site (𝗑,𝗒). In this framework, evolution is provided by a set of dynamical equations derived directly from the Schrödinger equation
iħd/dtψ_𝗑𝗒(t) = ∑_𝗑'𝗒'⟨𝗑,𝗒|Ĥ|𝗑'𝗒'⟩ψ_𝗑',𝗒'(t).
To form a direct bridge with a classical concept of the Lissajous curves, in the following, we consider situations in which quantum particle is initially localized in space and momentum domains. Formally it means that variances of expectation values of corresponding operators are small enough and they can be used to introduce the concept of trajectory of a quantum particle, that later can be compared with its classical counterpart. In our approach we consider the simplest possible initial state fulfilling these requirements in the form of Gaussian wave-packet
ψ_𝗑𝗒(0)= Nexp[-(𝗑- X)^2+(𝗒- Y)^2/4σ^2+i(𝗑 P_x + 𝗒 P_y)],
where X, Y and P_x, P_y are average position and momentum
of the wave-packet, σ defines its spatial width, and N is a numerical constant guaranteeing appropriate normalization, ∑_𝗑𝗒|ψ_𝗑𝗒(0)|^2=1. One of our aims is to determine widths σ for which classical-quantum similarity of the dynamics is clearly visible.
§ ONE-DIMENSIONAL DYNAMICS
Before we start to analyze the dynamics in a two-dimensional scenario let us first recall known results in the one-dimensional case. This is simply the limiting case of the original problem (<ref>) when J_y is set to 0. In this case, the Hamiltonian reduces simply to the following form:
Ĥ = -∑_𝗑(J|𝗑-1⟩+J|𝗑+1⟩- 𝗑 F|𝗑⟩)⟨𝗑|,
where J and F are tunneling and slope along the chain, respectively. It is known that the Hamiltonian (<ref>) is diagonal in the following basis of eigenstates enumerated with integer n <cit.>
|n⟩=∑_𝗑𝒥_𝗑-n(2J/F)|𝗑⟩,
where 𝒥_k denotes the Bessel function of the first kind. Corresponding eigenvalues are expressed simply as λ_n=n F. Thus, if initially the particle's wave function has the form |Ψ(0)⟩=∑_𝗑ψ_𝗑(0)|𝗑⟩,
one straightforwardly finds the wave function at an arbitrary moment
|Ψ(t)⟩=∑_n,𝗑,𝗑'ψ_𝗑'(0)𝒥_𝗑'-n(2J/F)𝒥_𝗑-n(2J/F)e^-inFt/ħ|𝗑⟩.
This expression can be simplified further by utilizing the generalized sum rule for Bessel functions
∑_n𝒥_n(z)𝒥_𝗑+n(z)e^inϕ=𝒥_𝗑[2zsin(ϕ/2)]e^i𝗑(π-ϕ)/2.
After applying this identity and appropriate shifting of sums one finds temporal probability amplitudes of finding a particle at individual lattice sites
ψ_𝗑(t) =⟨𝗑|Ψ(t)⟩ =
∑_𝗑'ψ_𝗑'(0)𝒥_𝗑-𝗑'[4J/Fsin(Ft/2ħ)]e^i/2[π(𝗑-𝗑')-F(𝗑+𝗑')t/ħ].
In the case of the particle initially prepared in the Gaussian state (<ref>) (reduced to one dimension along X axis) we can explicitly write the final expression
ψ_𝗑(t) =⟨𝗑|Ψ(t)⟩ =
N∑_𝗑'𝒥_𝗑-𝗑'[4J/Fsin(Ft/2ħ)] ×
exp[-(𝗑'- X)^2/4σ^2+i𝗑' P+iπ(𝗑-𝗑')/2-i(𝗑+𝗑')Ft/2ħ].
Let us now consider several interesting limiting cases of this result. First, we note that in the limit of vanishing external force F→ 0, one straightforwardly restores a well-known formula for the evolution of probability amplitudes for a quantum diffusion of Gaussian wave packet <cit.>
ψ_𝗑(t)=
N∑_𝗑'𝒥_𝗑-𝗑'(2Jt/ħ)
×exp[-(𝗑'- X)^2/4σ^2+i𝗑' P+iπ(𝗑-𝗑')/2].
Another interesting case is obtained when the external force is present and the particle is initially exactly localized at one of the lattice sites 𝗑_0, i.e., ψ_x(0)=δ_𝗑,𝗑_0. Then the general solution (<ref>) is significantly simplified and reduces to
ψ_𝗑(t)=𝒥_𝗑-𝗑_0[4J/Fsin(Ft/2ħ)] e^i/2[π(𝗑-𝗑_0)-F(𝗑+𝗑_0)t/ħ].
This solution shows that in the case of an initially localized particle, its spatial distribution periodically changes in time and alternately expands and contracts around the initial position 𝗑_0. This behavior is displayed in Fig. <ref> for different F. Again, in the limit of vanishing force F/J→ 0, we restore a well-known diffusive solution manifesting characteristic interference pattern
ψ_𝗑(t)=𝒥_𝗑-𝗑_0(2Jt/ħ)e^iπ(𝗑-𝗑_0)/2.
Finally, let us examine the most important for further analysis case of particle initially significantly spread, i.e., when the spatial width of the initial wave packet is large when compared to the distance between lattice sites, σ≫ 1. Then, it is very convenient to express the general solution (<ref>) in terms of the Fourier transform of the initial wave function
ψ̃(k) = 1/2π∑_𝗑ψ_𝗑(0)e^ik𝗑
since in the momentum domain, the wave function is well-localised. After substituting an inverse relation
ψ_𝗑(0) = ∫dk ψ̃(k)e^-ik𝗑
into general solution (<ref>) and performing some straightforward algebraic transformations one finds
ψ_𝗑(t) = ∫dk ψ̃(k) e^i𝗑(k-Ft/ħ)
×exp[4iJ/Fsin(Ft/2ħ)cos(k-Ft/2ħ)].
Let us emphasize that expression (<ref>) is exactly equivalent to relation (<ref>) since any approximation has not been introduced so far. Now, we exploit the fact that for large σ amplitude ψ_k(0) is well-localized around initial momentum P and therefore we expand the last term in the difference (k- P) and keep only constant and linear term
cos(k-Ft/2ħ) ≈
cos( P-Ft/2ħ)-sin( P-Ft/2ħ)(k- P).
Then, after using explicit expression for the initial wave function and performing integration, one finds the time dependence of probability amplitudes in the position representation
ψ_𝗑(t)= Nexp[-(𝗑-Δ(t))^2/4σ^2+i𝗑Γ(t)+iΦ(t)],
where
Δ(t) = X+2J/F[cos(Ft/ħ- P)-cos( P)],
Γ(t) = P-Ft/ħ,
Φ(t) =2J/F[sin(Ft/ħ- P)+sin( P)].
This result clearly shows that a sufficiently wide Gaussian wave packet preserves its spatial shape during the evolution. At the same time, its center harmonically oscillates with frequency proportional to the force F and amplitude proportional to the ratio J/F. This counterintuitive behavior of a quantum wave packet in a tilted periodic potential is known as Bloch oscillations <cit.> and was not experimentally confirmed until 1992 <cit.>. The exact evolution of a wave packet with σ=10 for various tilting and initial momentum P are presented in Fig. <ref>. For clarity, in all plots, we mark with a dashed red line a trace of the center of the wavepacket as predicted by approximation (<ref>). Agreement between the two results is clearly visible.
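The oscillation is easy to reproduce numerically. The sketch below (our own illustration, with ħ = 1 and arbitrarily chosen parameters and chain length) diagonalizes the tilted-lattice Hamiltonian on a finite chain, evolves a wide Gaussian packet, and compares its centre with the expression for Δ(t) above.

```python
# Sketch: Bloch oscillation of a wide Gaussian packet on a 1D tilted lattice (hbar = 1).
import numpy as np

J, F, sigma, P, X0 = 1.0, 0.2, 10.0, np.pi / 2, 0.0
L = 201
x = np.arange(L) - L // 2                       # lattice sites

H = np.diag(F * x) - J * (np.eye(L, k=1) + np.eye(L, k=-1))   # tilted-lattice Hamiltonian
E, U = np.linalg.eigh(H)

psi0 = np.exp(-(x - X0) ** 2 / (4 * sigma ** 2) + 1j * P * x)  # Gaussian initial packet
psi0 /= np.linalg.norm(psi0)
c0 = U.conj().T @ psi0

for t in np.linspace(0.0, 2 * np.pi / F, 9):    # one Bloch period
    psi_t = U @ (np.exp(-1j * E * t) * c0)
    mean_x = np.sum(x * np.abs(psi_t) ** 2)
    delta = X0 + 2 * J / F * (np.cos(F * t - P) - np.cos(P))   # packet-centre formula above
    print(f"t = {t:6.2f}   <x> = {mean_x:8.3f}   Delta(t) = {delta:8.3f}")
```

For a wide packet (sigma well above the lattice spacing) the two columns agree closely, while narrow packets deviate, as discussed next.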
For completeness of the discussion, in Fig. <ref> we also show how the behavior of the wave function changes from breathing mode to oscillations when one varies the width of the initial wave packet σ from very small to very large.
§ DYNAMICS IN TWO DIMENSIONS
After an extended discussion of the dynamics in one dimension let us now switch to the initial problem of our work. The general aim is to adjust physical parameters of the system, i.e., parameters of the Hamiltonian as well as the particle's initial state, to make the wave packet move along trajectories being counterparts of the classical Lissajous figures. To make a first step in this direction let us recall one-dimensional result (<ref>) explaining why a sufficiently broad wave packet moves in the lattice in an oscillatory manner without changing its shape. In full analogy, due to the separability of motions in perpendicular directions guaranteed by the Hamiltonian (<ref>), in a two-dimensional case the resulting wave function of a sufficiently wide Gaussian state has a form
ψ_𝗑𝗒(t)= Nexp[-(𝗑-Δ_x(t))^2+(𝗒-Δ_y(t))^2/4σ^2]
×e^i𝗑Γ_x(t)+i𝗒Γ_y(t)+iΦ(t),
where
Δ_x(t) = X+2J/F_x[cos(F_x t/ħ- P_x)-cos( P_x)],
Δ_y(t) = Y+2J/F_y[cos(F_y t/ħ- P_y)-cos( P_y)],
Γ_x(t) = P_x-F_xt/ħ,
Γ_y(t) = P_y-F_yt/ħ,
Φ(t) =2J/F_x[sin(F_xt- P_x)+sin( P_x)]
+2J/F_y[sin(F_yt- P_y)+sin( P_y)].
By setting the initial position and momenta as
X=2J/F_xcos( P_x), Y=2J/F_y, P_y=0
we find that the center of the wave packet moves along the celebrated Lissajous curve parametrically described by
Δ_x(t) =Acos(Ω_x t + φ),
Δ_y(t) =Bcos(Ω_y t),
where A=2J/F_x, B=2J/F_y, Ω_x=F_x/ħ, Ω_y=F_y/ħ, and φ=- P_x. Of course, the above reasoning is valid as long as the wave packet remains localized during a whole evolution. To clarify this, let us focus on the simplest Lissajous curve – a circle obtained for Ω_x=Ω_y, A=B, and φ=π/2. In Fig. <ref> we show the time evolution of the density distribution for parameters tailored to this scenario, i.e., 2J/F_x=2J/F_y=25, P_x=π/2. The left panel presents the evolution for a sufficiently wide packet with σ=5 and the distribution nicely follows the corresponding Lissajous curve (red line). On the contrary, in the case of the too-narrow wave packet with σ=1 (right panel), the density significantly changes its shape during evolution and the reasoning clearly breaks up. For convenience, movies showing these evolutions are accessible online <cit.>.
It is straightforward to adjust lattice and initial state parameters to make the density distribution moving along different Lissajous curves. In Fig. <ref> we present several quantum mechanical scenarios in which the density distribution follows the most popular curves recognized in classical mechanics. Typically they are labeled by rational ratios of frequencies Ω_x/Ω_y (controlled by the ratio F_x/F_y) and relative phase shifts φ (controlled by initial momentum P_x). Corresponding movies presenting a whole evolution are accessible online <cit.>.
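Because the Hamiltonian separates into independent x and y parts and the initial state is a product, the two-dimensional packet centre can be obtained from two independent one-dimensional computations; the sketch below (same conventions and caveats as the previous one, with illustrative parameter choices) samples the resulting Lissajous curve.

```python
# Sketch: sample the 2D packet centre from two independent 1D evolutions (hbar = 1).
import numpy as np

def centre_1d(J, F, sigma, P, X0, times, L=201):
    x = np.arange(L) - L // 2
    H = np.diag(F * x) - J * (np.eye(L, k=1) + np.eye(L, k=-1))
    E, U = np.linalg.eigh(H)
    psi0 = np.exp(-(x - X0) ** 2 / (4 * sigma ** 2) + 1j * P * x)
    psi0 /= np.linalg.norm(psi0)
    c0 = U.conj().T @ psi0
    return np.array([np.sum(x * np.abs(U @ (np.exp(-1j * E * t) * c0)) ** 2) for t in times])

J, sigma = 1.0, 5.0
Fx, Fy = 0.08, 0.04                 # frequency ratio Omega_x / Omega_y = 2
Px, Py = np.pi / 2, 0.0             # phase shift phi = -Px
times = np.linspace(0.0, 2 * np.pi / Fy, 400)
cx = centre_1d(J, Fx, sigma, Px, 2 * J / Fx * np.cos(Px), times)
cy = centre_1d(J, Fy, sigma, Py, 2 * J / Fy, times)
# (cx, cy) traces the corresponding Lissajous figure; plot it with, e.g., matplotlib.
```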
§ CONCLUSIONS
In this work, we discussed the possibility of observing the manifestations of classical dynamics in the motion of a quantum particle on a two-dimensional tilted lattice. We originated from the well-known observation that in a periodic potential with a constant force, a quantum particle oscillates, and its probability density distribution does not change over time (for a sufficiently wide wave packet). We then pointed out that in the case of a two-dimensional lattice, the combination of two oscillatory motions can result in the motion of the packet along a Lissajous curve well known in classical mechanics. We have presented in detail the reasoning that allows us to predict the parameters for which the desired Lissajous curve can be obtained, and thus we have given a precise recipe for how to prepare the system to realize this.
The analysis presented here can be straightforwardly extended to the three-dimensional case, in which the wave packet will follow a three-dimensional Lissajous trajectory. One of the open questions we leave for further analysis concerns the stability of the presented solutions for wave packets formed by many interacting particles, for example, bosons with contact interactions or multi-component mixtures. Exploring quantum correlations induced by interactions during this kind of motion in systems of several particles (bosons as well as fermions) is also an interesting direction to extend previous results in one-dimensional systems <cit.>.
This research was supported by the (Polish) National Science Centre within OPUS Project No. 2023/49/B/ST2/03744.
|
http://arxiv.org/abs/2409.02993v1 | 20240904180003 | An effective framework for strange metallic transport | [
"Benoit Doucot",
"Ayan Mukhopadhyay",
"Giuseppe Policastro",
"Sutapa Samanta",
"Hareram Swain"
] | hep-th | [
"hep-th",
"cond-mat.str-el"
] |
( |
http://arxiv.org/abs/2409.03306v1 | 20240905072219 | Towards training digitally-tied analog blocks via hybrid gradient computation | [
"Timothy Nest",
"Maxence Ernoult"
] | cs.LG | [
"cs.LG"
] |
Towards training digitally-tied analog blocks via hybrid gradient computation
Timothy Nest, Maxence Ernoult
=====================================================
§ ABSTRACT
⧫ Equal contribution.
Power efficiency is plateauing in the standard digital electronics realm such that novel hardware, models, and algorithms are needed to reduce the costs of AI training. The combination of energy-based analog circuits and the Equilibrium Propagation (EP) algorithm constitutes one compelling alternative compute paradigm for gradient-based optimization of neural nets. Existing analog hardware accelerators, however, typically incorporate digital circuitry to sustain auxiliary non-weight-stationary operations, mitigate analog device imperfections, and leverage existing digital accelerators.This heterogeneous hardware approach calls for a new theoretical model building block. In this work, we introduce Feedforward-tied Energy-based Models (ff-EBMs), a hybrid model comprising feedforward and energy-based blocks accounting for digital and analog circuits. We derive a novel algorithm to compute gradients end-to-end in ff-EBMs by backpropagating and “eq-propagating” through feedforward and energy-based parts respectively, enabling EP to be applied to much more flexible and realistic architectures. We experimentally demonstrate the effectiveness of the proposed approach on ff-EBMs where Deep Hopfield Networks (DHNs) are used as energy-based blocks. We first show that a standard DHN can be arbitrarily split into any uniform size while maintaining performance. We then train ff-EBMs on ImageNet32 where we establish new SOTA performance in the EP literature (46 top-1 %). Our approach offers a principled, scalable, and incremental roadmap to gradually integrate self-trainable analog computational primitives into existing digital accelerators.
§ INTRODUCTION
Gradient-based optimization, the cornerstone and most energy-greedy component of deep learning, fundamentally relies upon three factors: i) highly parallel digital hardware such as GPUs, ii) feedforward models and iii) backprop (BP). With skyrocketing demands of AI compute, cutting the energy consumption of AI systems' learning has become an economical, societal and environmental stake <cit.> and calls for the exploration of novel compute paradigms <cit.>.
One promising path towards this goal is analog in-memory computing <cit.>: when mapping weights onto a crossbar of resistive devices, Kirchoff current and voltage laws inherently achieve matrix-vector multiplications in constant time complexity <cit.>.
Stacking multiple such crossbars, an entire neural network can be mapped onto a physical system. An important formalism for such systems is that of energy-based (EB) analog circuits <cit.> which are “self-learning” systems that compute loss gradients through two relaxations to equilibrium (i.e. two “forward passes”), a procedure falling under the umbrella of energy-based learning (EBL) algorithms <cit.>.
One of these learning algorithms, Equilibrium Propagation (EP) <cit.>, particularly stands out with strong theoretical guarantees, relative scalability in the realm of backprop alternatives <cit.> and experimental demonstrations on small analog systems which are 10,000× more energy-efficient and substantially faster than their GPU-based counterpart <cit.>. This suggests an alternative triad as a new compute paradigm for gradient-based optimization: i) analog hardware, ii) EBMs, iii) EP.
Figure: Illustrating BP-EP backward gradient chaining through feedforward (red) and energy-based (yellow) blocks, accounting for digital and analog circuits respectively.
In this paper, we propose a theoretical framework to extend end-to-end gradient computation to a realistic setting where the system at use may or may not be fully analog. Such a setting is plausible in the near term, due to by two major limitations. First, analog circuits exhibit many non-ideal physical behaviors which affect both the inference pathway <cit.> and parameter optimization <cit.> , in-turn compromising performance. Second, owing to the latency and energy-consumption of resistive devices' write operations, such analog circuits should be fully weight stationary – weights must be written before the inference procedure begins – which excludes many operations used conventionally in machine learning such as activation functions, normalization, attention <cit.>. Therefore, analog systems are likely to be used in combination with auxiliary digital circuitry, resulting in hybrid mixed precision systems <cit.>. While the design of purely inferential engines made of analog and digital parts is nearing commercial maturity <cit.>, in-situ learning of such systems has barely been explored. One final challenge lies in proving EBL algorithms can scale in a manner comparable to backprop, given the requirement of simulating EB systems on GPUs. Given the necessity of convergence, this amounts in practice in performing lengthy root finding algorithms to simulate physical equilibrium, limiting proof-of-concepts thereof to relatively shallow models <cit.>.
Our work contends that the best of both worlds can be achieved with the following triad: i) hybrid digital and analog hardware, ii) feedforward and EB models, iii) BP and EP. Namely, by modelling digital and analog parts as feedforward and EB modules respectively, the core contribution of our paper is to show how backprop and EP error signals can be chained end-to-end through feedforward and EB blocks respectively in a principled fashion. Rather than opposing digital and analog, or backprop and “alternative” learning algorithms as often done in the literature, we propose a novel hardware-aware building block which can, in principle, leverage advances from both digital and analog hardware in the near-term. More specifically:
* We propose Feedforward-tied Energy-based Models (ff-EBMs, Section <ref>) as high-level models of mixed precision systems whose inference pathway read as the composition of feedforward and EB modules (Eq. (<ref>), Alg. <ref>).
* We show that gradients in ff-EBMs can be computed in an end-to-end fashion (Section <ref>), backpropagating through feedforward blocks and “eq-propagating” through EB blocks (Theorem <ref>, Alg. <ref>) and that this procedure is rooted in a deeply-nested optimization problem (Section <ref>).
* Finally, we experimentally demonstrate the effectiveness of our algorithm on ff-EBMs where EBM blocks are Deep Hopfield Networks (DHNs) (Section <ref>). We show that i) gradient estimates computed by our algorithm (Alg. <ref>) near perfectly match gradients computed by end-to-end automatic differentiation (Section <ref>), ii) a standard DHN model can be arbitrarily split into a ff-DHN with the equivalent layers and architectural layers while maintaining performance and remaining on par with automatic differentiation (Section <ref>), iii) the proposed approach yields 46 % top-1 (70% top-5) validation accuracy on ImageNet32 when training a ff-EBM of 16 layers, thereby significantly beating EP current performance state-of-the-art by a large margin without using holomorphic transformations inside EBM blocks <cit.>
§ BACKGROUND
Notations. Denoting A: ℝ^n →ℝ^m a differentiable mapping, we denote its total derivative with respect to s_j as d_s_j A(s) := d A(s)/ ds_j ∈ℝ^m, its partial derivative with respect to s_j as ∂_jA(s) := ∂ A(s)/ ∂ s_j ∈ℝ^m. When A takes scalar values (m=1), its gradient with respect to s_j is denoted as ∇_j A(s) := ∂_j A(s)^⊤.
§.§ Energy-based models (EBMs)
For a given static input and set of weights, Energy-based models (EBMs) implicitly yield a prediction through the minimization of an energy function – as such they are a particular kind of implicit model. Namely, an EBM is defined by a (scalar) energy function E: s, θ, x → E(s, θ, x) ∈ℝ where x, s, and θ respectively denote a static input, hidden and output neurons and model parameters, and each such tuple defines a configuration with an associated scalar energy value. Amongst all configurations for a given input x and some model parameters θ, the model prediction s_⋆ is implicitly given as an equilibrium state which minimizes the energy function:
s_⋆ := min_s E(s, θ, x).
§.§ Standard bilevel optimization
Assuming that ∇_s^2 E(x, s_⋆, θ) is invertible, note that the equilibrium state s_⋆ implicitly depends on x and θ by virtue of the implicit function theorem <cit.>. Therefore our goal when training an EBM, for instance in a supervised setting, is to adjust the model parameters θ such that s_⋆(x, θ) minimizes some cost function ℓ: s, y →ℓ(s, y) ∈ℝ where y is some ground-truth label associated to x. More formally, this learning objective can be stated with the following bilevel optimization problem <cit.>:
min_θ𝒞(x, θ, y) := ℓ(s_⋆, y) s.t. s_⋆ = min_s E(s, θ, x) .
Solving Eq. (<ref>) in practice amounts to computing the gradient of its outer objective 𝒞(x, θ) with respect to θ (d_θ𝒞(x, θ)) and then perform gradient descent over θ.
§.§ Equilibrium Propagation (EP)
An algorithm used to train an EBM model in the sense of Eq. (<ref>) may be called an EBL algorithm <cit.>. Equilibrium Propagation (EP) <cit.> is an EBL algorithm which computes an estimate of d_θ𝒞(x, θ) with at least two phases. During the first phase, the model is allowed to evolve freely to s_⋆ = min_s E(s, θ, x). Then, the model is slightly nudged towards decreasing values of cost ℓ and settles to a second equilibrium state s_β. This amounts to augment the energy function E by an additional term βℓ(s, y) where β∈ℝ^⋆ is called the nudging factor. Then the weights are updated to increase the energy of s_⋆ and decrease that of s_β, thereby “contrasting” these two states. More formally,
<cit.> prescribe in the seminal EP paper:
s_β := min_s [ E(s, θ, x) + βℓ(s, y)], Δθ ^ EP := α/β(∇_2 E(s_⋆, θ, x) - ∇_2 E(s_β, θ, x) ),
where α denotes some learning rate. EP comes in different flavours depending on the sign of β inside Eq. (<ref>) or on whether two nudged states of opposite nudging strengths (±β) are contrasted, a variant called Centered EP (C-EP) which was shown to work best in practice <cit.> and reads as:
Δθ ^ C-EP := α/2β(∇_2 E(s_-β, θ, x) - ∇_2 E(s_β, θ, x) ),
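As a minimal illustration of these two phases (our own toy example with a quadratic energy, not the Deep Hopfield Networks used later), take E(s, θ, x) = ½‖s‖² − sᵀθx and ℓ(s, y) = ½‖s − y‖²; the cost gradient is then (θx − y)xᵀ, and the centered difference of ∇_θ E at the two nudged states recovers it as β → 0 (the update Δθ^C-EP above is −α times this estimate).

```python
# Sketch: centered EP on a toy quadratic EBM; both phases are run by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 5, 3
x, y = rng.normal(size=d_in), rng.normal(size=d_out)
theta = rng.normal(size=(d_out, d_in))

def settle(beta, steps=2000, lr=0.05):
    # Relax s on E(s, theta, x) + beta * l(s, y), which is strongly convex here.
    s = np.zeros(d_out)
    for _ in range(steps):
        s -= lr * ((s - theta @ x) + beta * (s - y))
    return s

beta = 1e-3
s_plus, s_minus = settle(+beta), settle(-beta)
# grad_theta E(s, theta, x) = -s x^T, so the centered estimate of dC/dtheta is:
g_ep = ((-np.outer(s_plus, x)) - (-np.outer(s_minus, x))) / (2 * beta)
g_true = np.outer(theta @ x - y, x)      # analytic gradient of C = l(s_*, y), s_* = theta x
print(np.max(np.abs(g_ep - g_true)))     # small: O(beta^2) plus relaxation error
```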
§ TYING ENERGY-BASED MODELS WITH FEEDFORWARD BLOCKS
This section mirrors the background section by introducing a new model, the naturally associated optimization problem and a new learning algorithm. We first introduce Feedforward-tied EBMs (ff-EBMs, section <ref>) which read as composition of feedforward and EB transformations (Alg. <ref>). We then show how optimizing ff-EBMs amounts to solving a multi-level optimization problem (Section <ref>) and propose a BP-EP gradient chaining algorithm as a solution (Section <ref>, Theorem <ref>, Alg. <ref>). We highlight as an edge case that ff-EBMs reduce to standard feedforward nets (Lemma <ref>) and the proposed BP-EP gradient chaining algorithm to standard BP (Corollary <ref>) when each EB block comprises a single hidden layer.
§.§ Feedforward-tied Energy-based Models (ff-EBMs)
Inference procedure. We define Feedforward-tied Energy-based Models (ff-EBMs) as compositions of feedforward and EB transformations. Namely, an data sample x is fed into the first feedforward transformation F^1 parametrized by some weights ω^1, which yields an output x^1_⋆. Then, x^1_⋆ is fed as a static input into the first EB block E^1 with parameters θ^1, which relaxes to an equilibrium state s^1_⋆. s^1_⋆ is in turn fed into the next feedforward transformation F^1 with weights ω^1 and the above procedure repeats until reaching the output layer ô. More formally, denoting F^k and E^k the k^ th feedforward and EB blocks parametrized by the weights ω^k and θ^k respectively, the inference pathway of a ff-EBM reads as:
{[ s^0 := x; x^k_⋆ := F^k(s^k-1_⋆, ω^k), s^k_⋆ := min_s E^k(s, θ^k, x^k_⋆) ∀ k = 1 ⋯ N-1; ô_⋆ := F^N(s^N-1_⋆, ω^N) ].
ff-EBM inference procedure is depicted more compactly inside Fig. <ref> (left) and Alg. <ref>.
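A literal transcription of this inference pathway for two toy quadratic blocks (illustrative shapes and energies of our own choosing; the actual EB blocks in Section <ref> are Deep Hopfield Networks) reads as follows, with each EB block relaxed by gradient descent on its energy.

```python
# Sketch: forward pass of a two-block ff-EBM with toy quadratic energies.
import numpy as np

rng = np.random.default_rng(1)
dims = [8, 6, 4]                                       # input -> block 1 -> block 2
omegas = [0.3 * rng.normal(size=(dims[k + 1], dims[k])) for k in range(2)]
thetas = [0.1 * rng.normal(size=(d, d)) for d in dims[1:]]
thetas = [0.5 * (t + t.T) for t in thetas]             # symmetric intra-block couplings

def relax(theta, x_in, steps=500, lr=0.1):
    # Minimize E(s) = 0.5 ||s||^2 - s.x_in - 0.5 s^T theta s by gradient descent.
    s = np.zeros_like(x_in)
    for _ in range(steps):
        s -= lr * (s - x_in - theta @ s)
    return s

s = rng.normal(size=dims[0])                           # input sample
for omega, theta in zip(omegas, thetas):
    x_k = omega @ s                                    # feedforward block F^k
    s = relax(theta, x_k)                              # energy-based block E^k
o_hat = rng.normal(size=(3, dims[-1])) @ s             # final feedforward readout F^N
print(o_hat)
```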
Figure: Depiction of the forward (left) and backward (right) pathways through a ff-EBM, with yellow and pink blocks denoting EB and feedforward transformations.
Form of the energy functions. We specify further the form of the energy of the k^ th EB block of a ff-EBM as defined per Eq. (<ref>). The associated energy function E^k takes some static input x^k from the output of the preceding feedforward transformation, has hidden neurons s^k and is parametrized by weights θ^k and more precisely defined as:
E^k(s^k, θ^k, x^k) := G^k(s^k) - s^k^⊤· x^k + U^k(s^k, θ^k)
Eq. (<ref>) reveals three different contributions to the energy. The first term determines the non-linearity applied inside the EB block <cit.>: for a given invertible and continuous activation function σ, G is defined such that ∇ G = σ^-1 (see Appendix <ref>).
The second term inside Eq. (<ref>) accounts for a purely feedforward contribution from the previous feedforward block F^k. Finally, the third term accounts for internal interactions within the layers of the EB block.
Recovering a feedforward net. When taking the gradient of E^k as defined in Eq. (<ref>) with respect to s^k and zeroing it out, it can be seen that s^k_⋆ is implicitly defined as:
s^k_⋆ := σ(x^k - ∇_1 U^k(s^k_⋆, θ^k) )
An interesting edge case highlighted by Eq. (<ref>) is when U^k = 0 for all k's, i.e. when there are no intra-block layer interactions, or equivalently when the EB block comprises a single layer only. In this case, s_⋆^k is simply a feedforward mapping x^k through σ and in turn the ff-EBM is simply a standard feedforward architecture (see Lemma <ref> inside Appendix <ref>).
§.§ Multi-level optimization of ff-EBMs
In the same way as learning EBMs can naturally be cast into a bilevel optimization problem, learning ff-EBMs can be inherently be mapped into a multi-level optimization problem where the variables being optimized over in the inner subproblems are the EB block variables s^1, ⋯, s^N-1. To make this clearer, we re-write the energy function of the k^ th block E^k from Eq. (<ref>) to highlight the dependence between two consecutive EB block states:
E^k(s^k, θ^k, s^k-1_⋆, ω^k) := E^k(s^k, θ^k, F^k(s^k-1_⋆, ω^k))
It can be seen from Eq. (<ref>) that the equilibrium state s^k_⋆ obtained by minimizing E^k will be dependent upon the equilibrium state s^k-1_⋆ of the previous EB block, which propagates back through prior EB blocks. Denoting W := {θ^1, ⋯, θ^N-1, ω^1, ⋯, ω^N}, the learning problem for a ff-EBM can therefore be written as:
min_W 𝒞(x, W, y) :=ℓ(ô_⋆ = F^N(s^N-1_⋆, ω^N), y)
s.t. s^N-1_⋆ = smin E^N-1(s, θ^N-1, s^N-2_⋆, ω^N-1) ⋯ s.t. s^1_⋆ = smin E^1(s, θ^1, x, ω^1)
Here again and similarly to bilevel optimization, solving Eq. (<ref>) in practice amounts to computing g_θ^k := d_θ^k𝒞 and g_ω^k := d_ω^k𝒞 and performing gradient descent on θ^k and ω^k.
§.§ A BP–EP gradient chaining algorithm
Main result: explicit BP-EP chaining. Based on the multilevel optimization formulation of ff-EBMs learning in Eq. (<ref>), we state the main theoretical result of this paper in Theorem <ref> (see proof in Appendix <ref>).
Assuming a model of the form Eq. (<ref>), we denote s^1_⋆, x^1_⋆, ⋯, s^N-1_⋆, ô_⋆ the states computed during the forward pass as depicted in Alg. <ref>.
We define the nudged state of block k, denoted as s^k_β, implicitly through ∇_1 ℱ^k(s^k_β, θ^k, x^k_⋆, δ s^k, β) = 0 with:
ℱ^k(s^k, θ^k, x^k_⋆, δ s^k, β) := E^k(s^k, θ^k, x^k_⋆) + β s^k^⊤·δ s^k
Denoting δ s^k and Δ x^k the error signals computed at the input of the feedforward block F^k and of the EB block E^k respectively, then the following chain rule applies:
δ s^N-1 := ∇_s^N-1ℓ(ô_⋆, y), g_ω^N = ∇_ω^Nℓ(ô_⋆, y)
∀ k=1 ⋯ N-1:
{[ Δ x^k = .d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0, g_θ^k = .d_β(∇_2E^k (s^k_β, θ^k, x^k_⋆))|_β=0; δ s^k-1 = ∂_1 F^k(s^k-1_⋆, ω^k )^⊤·Δ x^k, g_ω^k = ∂_2 F^k(s^k-1_⋆, ω^k)^⊤·Δ x^k ].
Proposed algorithm: implicit BP-EP chaining. Theorem <ref> reads intuitively: it prescribes an explicit chaining of EP error signals passing backward through E^k (δ s^k →Δ x^k) and BP error signals passing backward through ∂ F^k^⊤ (Δ x^k →δ s^k-1), which directly mirrors the ff-EBM inference pathway as depicted in Fig. <ref>. Yet noticing that:
{[ δ s^k-1 = ∂_1 F^k(s^k-1_⋆, ω^k )^⊤·Δ x^k = d_β.(∇_3 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k))|_β=0,; g_ω^k = ∂_2 F^k(s^k-1_⋆, ω^k )^⊤·Δ x^k = d_β.(∇_4 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k))|_β=0, ].
the same error signal can be passed through E^k (δ s^k →δ s^k-1) where BP and EP are implicitly chained inside E^k (see Appendix <ref>). This insight, along with a centered scheme to estimate derivatives with respect to β around 0 as done for the C-EP algorithm (Eq. (<ref>)), motivates the implicit BP-EP gradient chaining algorithm in Alg. <ref> that we used for our experiments (see Alg. <ref> inside Appendix <ref> for its explicit counterpart). For simplicity, from here onwards and as the proposed algorithm appears to be a generalization of EP, we may refer to Alg. <ref> as “EP” in the experimental section.
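For concreteness, a possible PyTorch-style realization of this implicit chaining is sketched below. It is an illustration of the gradient flow rather than our exact implementation: it assumes the cross-entropy readout used in our experiments, nn.Module wrappers for the F^k and E^k (with E^k returning a scalar energy), and a hypothetical relax routine performing the nudged relaxation of Alg. <ref>.

```python
import torch
import torch.nn.functional as F

def ep_bp_backward(x, ff_blocks, eb_blocks, s_star, x_star, y, beta, relax):
    # ff_blocks[k]: nn.Module for F^{k+1}; eb_blocks[k]: nn.Module whose
    # forward(s, x_k) returns the scalar energy E^{k+1}(s, theta^{k+1}, x_k).
    # s_star[k], x_star[k]: free equilibria cached during the forward pass.
    # relax(E, x_k, delta_s, beta, s0): returns the nudged equilibrium s_beta.
    n_eb = len(eb_blocks)
    # top of the network: readout gradients and error signal delta s^{N-1}
    s_top = s_star[-1].detach().requires_grad_()
    loss = F.cross_entropy(ff_blocks[-1](s_top), y)
    grads = torch.autograd.grad(loss, [s_top] + list(ff_blocks[-1].parameters()))
    delta_s = grads[0]
    for p, g in zip(ff_blocks[-1].parameters(), grads[1:]):
        p.grad = g
    # backward through the blocks: EP inside E^k, BP through F^k
    for k in reversed(range(n_eb)):
        E, F_k = eb_blocks[k], ff_blocks[k]
        s_pos = relax(E, x_star[k], delta_s, +beta, s0=s_star[k])
        s_neg = relax(E, x_star[k], delta_s, -beta, s0=s_star[k])
        x_req = x_star[k].detach().requires_grad_()
        g_pos = torch.autograd.grad(E(s_pos, x_req), [x_req] + list(E.parameters()))
        g_neg = torch.autograd.grad(E(s_neg, x_req), [x_req] + list(E.parameters()))
        delta_x = (g_pos[0] - g_neg[0]) / (2 * beta)         # error entering F^k
        for p, gp, gn in zip(E.parameters(), g_pos[1:], g_neg[1:]):
            p.grad = (gp - gn) / (2 * beta)                  # EP estimate of g_theta^k
        s_prev = (s_star[k - 1] if k > 0 else x).detach().requires_grad_()
        out = F_k(s_prev)                                    # BP through F^k
        grads = torch.autograd.grad(out, [s_prev] + list(F_k.parameters()),
                                    grad_outputs=delta_x)
        delta_s = grads[0]                                   # delta s^{k-1}
        for p, g in zip(F_k.parameters(), grads[1:]):
            p.grad = g                                       # g_omega^k
```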
Recovering backprop. When the ff-EBM under consideration is purely feedforward (U^k=0), we show that Eqs. (<ref>)–(<ref>) reduce to standard BP through a feedforward net (Corollary <ref>, Alg. <ref> and Alg. <ref> in Appendix <ref>). Therefore, since this case is extremely close to standard BP through feedforward nets, we do not consider this setting in our experiments.
§ EXPERIMENTS
In this section, we first present the ff-EBMs at use in our experiments (Section <ref>) and carry out static gradient analysis – computing and analyzing ff-EBM parameter gradients for some x and y (Section <ref>). We extend the observation made by <cit.> to ff-EBMs that transient EP parameter gradients throughout the second phase match those computed by automatic differentiation through equilibrium and across blocks (Fig. (<ref>)), with resulting final gradient estimates near perfectly aligned (Fig. <ref>). We then show on the CIFAR-10 task that performance of ff-EBMs can be maintained across various block splits and on par with automatic differentiation while keeping the same number of layers (Section <ref>). Finally, we perform further ff-EBM training experiments on CIFAR-100 and ImageNet32 where we establish a new performance state-of-the-art in the EP literature (Section <ref>).
§.§ Setup
Model. Using the same notations as in Eq. (<ref>), the ff-EBMs at use in this section are defined through:
{[ U^k_ FC(s^k, θ^k) := -1/2s^k⊤·θ^k · s^k,; U^k_ CONV(s^k, θ^k) := -1/2s^k ∙(θ^k ⋆ s^k) ]. , F^k(s^k-1, ω^k) := BN(𝒫(ω^k_ CONV⋆ s^k-1_L); ω^k_α, ω^k_β)
with BN(·; ω_α^k, ω_β^k), 𝒫 and ⋆ the batchnorm, pooling and convolution operations, ∙ the generalized dot product for tensors and s^k := (s^k^⊤_1, ⋯ s^k^⊤_L)^⊤ the state of block k comprising L layers. The EB blocks are Deep Hopfield Networks (DHNs): the weight matrix θ^k is symmetric and has a sparse, block-wise structure such that each layer s^k_ℓ is bidirectionally connected to its neighboring layers s^k_ℓ - 1 and s^k_ℓ + 1 through connections θ^k_ℓ -1 and θ^k^⊤_ℓ respectively (see Appendix <ref>), with either fully connected (U^k_ FC) or convolutional (U^k_ CONV) operations. Finally, the non-linearity σ applied within EB blocks is σ(x) := min(max(x/2, 0), 1).
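As an illustration, the hard-sigmoid non-linearity, a fully connected DHN energy of the above form and a convolution-pooling-batchnorm feedforward block could be written as follows; layer shapes, kernel size and pooling choices in this sketch are assumptions rather than a description of our exact code.

```python
import torch
import torch.nn.functional as F

def hard_sigmoid(x):
    # sigma(x) = min(max(x/2, 0), 1)
    return torch.clamp(x / 2.0, 0.0, 1.0)

def dhn_energy_fc(states, thetas):
    # U_FC(s, theta) = -1/2 sum_l s_{l+1}^T . theta_l . s_l (summed over the batch)
    # states[l]: [batch, d_l]; thetas[l]: [d_{l+1}, d_l]
    u = 0.0
    for l, th in enumerate(thetas):
        u = u - 0.5 * torch.einsum('bi,ij,bj->', states[l + 1], th, states[l])
    return u

class FFBlock(torch.nn.Module):
    # F^k: convolution -> pooling -> batch-norm applied to the last layer of block k-1
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = torch.nn.BatchNorm2d(out_ch)

    def forward(self, s_last):
        return self.bn(F.max_pool2d(self.conv(s_last), 2))
```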
Equilibrium computation. As depicted in Alg. <ref>, the steady states s_±β may be computed with any energy minimization algorithm. Here, as done in most past works on EP <cit.>, we employ a fixed-point iteration scheme to compute the EB blocks' steady states. Namely, we iterate Eq. (<ref>) until reaching equilibrium (the same scheme is used for ff-EBM inference, Alg. <ref>, with β=0):
s^k_±β, t + 1σ(x^k - ∇_1 U^k(s^k_±β, t, θ^k) ∓βδ s^k )
We employ a scheme to asynchronously update even (s^k_2ℓ') and odd (s^k_2ℓ' + 1) layers <cit.> – see Appendix <ref>.
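A possible implementation of this relaxation is sketched below; grad_u (the gradient of U^k with respect to one layer) and the per-layer lists x_k and delta_s are assumed helpers, and the even/odd split refers to list positions.

```python
import torch

def relax_block(x_k, grad_u, states=None, delta_s=None, beta=0.0, n_iters=60):
    # Fixed-point iterations of the equation above:
    # s <- sigma(x^k - grad U^k(s) -/+ beta * delta_s),
    # alternating updates over the two halves of the layers (asynchronous scheme).
    # x_k: per-layer static drives (the feedforward input enters the first layer,
    # the other entries may be zero); delta_s: optional per-layer nudging currents.
    sigma = lambda v: torch.clamp(v / 2.0, 0.0, 1.0)
    if states is None:
        states = [torch.zeros_like(x) for x in x_k]
    for _ in range(n_iters):
        for parity in (0, 1):                   # even-indexed, then odd-indexed layers
            for l in range(parity, len(states), 2):
                drive = x_k[l] - grad_u(states, l)
                if delta_s is not None:
                    drive = drive - beta * delta_s[l]
                states[l] = sigma(drive)
    return states
```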
Algorithm baseline. As an algorithmic baseline, we simply use automatic differentiation (AD) backward through the fixed-point iteration scheme of Eq. (<ref>) with β=0, directly initializing s^k_t=0 = s^k_⋆. This version of AD, where we backpropagate through equilibrium, is known as “Recurrent Backpropagation” <cit.> or Implicit Differentiation (ID).
§.§ Static comparison of EP and ID on ff-EBMs
In order to study the transient dynamics of ID and EP, we define, with W^k := {θ^k, ω^k}:
{[ g^ ID_W^k(t) := ∑_τ=0^t-1 d_W^k(T - τ)𝒞(x, W, y),; g^ EP_W^k(t) := 1/2β(∇_W^kE^k(s^k_β(t), W^k, s^k-1_⋆) - ∇_W^kE^k(s^k_-β(t), W^k, s^k-1_⋆)), ].
where s^k_±β(t) is computed from Eq. (<ref>) with the nudging error current δ s^k computed with Alg. <ref>, and T is the total number of iterations used for both ID and EP in the gradient computation phase.
For a given block k, d_W^k(T - τ)𝒞(x, W, y) is the “sensitivity” of the loss 𝒞 to parameter W^k at timestep T - τ, so that g^ ID_W^k(t) is an ID gradient truncated at T-t. Similarly, g^ EP_W^k(t) is an EP gradient truncated at t steps forward through the nudged phase. When T is sufficiently large, g^ ID_W^k(T) and g^ EP_W^k(T) converge to d_W^k𝒞(x, W, y). Fig. <ref> displays (g^ ID_W^k(t))_t ≥ 0 and (g^ EP_W^k(t))_t ≥ 0 on a heterogeneous ff-EBM of 6 blocks and 15 layers, with blocks comprising 2 or 3 layers, for a randomly selected sample x and its associated label y – see caption for a detailed description. It can be seen that EP and ID weight gradients match very well qualitatively throughout time, across layers and blocks. More quantitatively, we display the cosine similarity between the final EP and ID weight gradient estimates g^ ID_W^k(T) and g^ EP_W^k(T) for each layer and observe that EP and ID weight gradients are near perfectly aligned.
§.§ Splitting experiment
For a given EBM (standard, single block) and a fixed number of layers, we ask whether splitting this EBM into a ff-EBM with multiple EB blocks affects training performance. We address this question with two different depths (L=6 and L=12 layers in total) and various block splits maintaining the total number of layers (e.g. for L=6: 1 block of 6 layers, 2 blocks of 3 layers, etc.) and display the results obtained on the CIFAR-10 task inside Table <ref>. We observe that the performance achieved by EP on the 6-layer deep EBM is maintained between 89% and 90% across 4 different block splits, and is consistently on par with the ID baseline for each ff-EBM and with the literature on EBMs of the same depth <cit.>. Similarly, we observe that the performance achieved by EP on ff-EBMs with a total of 12 layers is maintained around 92% with three different block sizes, matches ID performance on each of these, and surpasses the EP state-of-the-art on CIFAR-10 <cit.>. Overall, these results suggest that ff-EBMs are agnostic to EB block size and are therefore flexible in design.
§.§ Scaling experiment
Finally, unlike Section <ref>, we now consider ff-EBMs of fixed block size 2 and train them with two different depths (L=12 and L=15) on CIFAR-100 and ImageNet32 by EP and ID, and show the results obtained in Table <ref>. Here again, we observe that EP matches ID performance on all models and tasks, that ff-EBMs benefit from depth, and that the performance obtained by training the deepest ff-EBM by EP exceeds state-of-the-art performance on ImageNet32 by around 10% top-1 validation accuracy <cit.> and by around 5% the best performance reported on this benchmark among backprop alternatives <cit.>.
§ DISCUSSION
Related work. Since fixed-point iteration schemes were proposed to facilitate EP experiments <cit.>, there is a growing body of work revolving around algorithmic extensions of EP and assessments of its scalability on vision tasks. Most notably, <cit.> introduced a holomorphic version of EP where loss gradients are computed with adiabatic oscillations of the model through nudging in the complex plane, and
was very recently extended to more general implicit models <cit.>. Moving further towards physical implementations of EP, <cit.> proposed a fully black-box version of EP where details about the system may not be known. All these advances could be readily applied to EB blocks inside our EP-BP chaining algorithm. The work closest to ours, albeit with a purely theoretical motivation and without clear algorithmic prescriptions, is that of <cit.>, where feedforward model learning is cast into a deeply nested optimization problem in which consecutive layers are tied by elemental pair-wise energy functions, and which more recently inspired the Dual Propagation algorithm <cit.>. This setting can be construed as a particular case of ff-EBM learning by EP where each EB block comprises a single layer (U^k = 0 inside Eq. (<ref>)), which, however, remains extremely similar to BP (see the last paragraph of Section <ref>).
Limitations and future work. Since our recipe advocates EP–BP chaining by construction, it is fair to say that ff-EBM learning partially inherits the pitfalls of BP.
Fortunately, nothing prevents feedforward modules inside ff-EBMs from being trained by any BP alternative to mitigate specific issues. For instance: BP can be parameterized by feedback weights to obviate weight transport from the inference circuit to the gradient computation circuit <cit.>; BP gradients can be approximated as finite differences of feedback operators <cit.>, or computed via implicit forward-mode differentiation by applying random weight perturbations in the inference circuit <cit.>; local layer-wise self-supervised or supervised loss functions can be used to prevent “backward locking” <cit.>. This insight may help explore many variants of ff-EBM training.
Pursuing the core motivation of this work, one natural extension of this study is to incorporate more hardware realism into ff-EBMs. Beyond Deep Hopfield networks, Deep Resistive Nets (DRNs) – concurrently developed by <cit.> and strongly inspired by <cit.> – are exact models of idealized analog circuits, are fast to simulate and were shown to be trainable by EP. As such, using DRNs as EB blocks inside ff-EBMs is an exciting research direction. Yet, going further into analog hardware modeling for ff-EBMs comes with new challenges when taking into account device non-idealities which may affect the inference pathway, such as analog-to-digital and digital-to-analog noise <cit.>.
Finally, considerable work is needed to further validate ff-EBMs at scale on more difficult tasks (e.g. standard ImageNet), on considerably deeper architectures, and beyond vision tasks. One other exciting research direction would be the design of ff-EBM-based transformers, with attention layers chained with energy-based fully connected layers inside attention blocks.
Concluding remarks and broader impact. We show that ff-EBMs constitute a novel framework for deep-learning in heterogeneous hardware settings. We hope that the algorithm proposed can help to move beyond the typical division between digital versus analog or BP versus BP-free algorithms and that the greater energy-efficiency afforded by this framework provides a more pragmatic, near-term blueprint to mitigate the dramatic carbon footprint of AI training <cit.>. Being still a long way from fully analog training accelerators at commercial maturity, we believe this work offers an incremental and sustainable roadmap to gradually integrate analog, energy-based computational primitives as they are developed into existing digital accelerators.
§ ACKNOWLEDGEMENTS AND DISCLOSURE OF FUNDING
The authors warmly thank Irina Rish, Jack Kendall and Suhas Kumar for their support of the project idea from the very start, as well as Gregory Kollmer and Mohammed Fouda for useful feedback on the manuscript. TN acknowledges the support from the Canada Excellence Research Chairs Program, as well as CIFAR and Union Neurosciences et Intelligence Artificielle Quebec (UNIQUE). This research was enabled by the computational resources provided by the Summit supercomputer, awarded through the Frontier DD allocation and INCITE 2023 program for the project "Scalable Foundation Models for Transferable Generalist AI" and SummitPlus allocation in 2024. These resources were supplied by the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, with support from the Office of Science of the U.S. Department of Energy. ME acknowledges funding from Rain AI which commercializes technologies based on brain-inspired learning algorithms, as well as Constance Castres Saint-Martin for her unwavering support at the maternity hospital where most of this manuscript was written.
§ APPENDIX
§.§ Further insights on ff-EBMs
In this section:
* We formally define Feedforward-tied Energy-based Models (ff-EBMs) with precise assumptions on the energy-based and feedforward blocks (Def. <ref>).
* We show that when energy-based blocks comprise a single layer only, the ff-EBM becomes purely feedforward (Lemma <ref>).
A Feedforward-tied Energy-based Model (ff-EBM) of size N comprises N twice differentiable feedforward mappings F^1, ⋯, F^N and N-1 twice differentiable energy functions E^1, ⋯, E^N-1. For a given x, the inference procedure reads as:
{[ s^0 := x; x^k_⋆ := F^k(s^k-1_⋆, ω^k), s^k_⋆ := smin E^k(s, θ^k, x^k_⋆) ∀ k = 1 ⋯ N-1; ô_⋆ := F^N(s^N-1_⋆, ω^N) ].
Finally, we assume that ∀ k = 1 ⋯ N-1, ∇_1^2 E^k(s^k_⋆, θ^k, x^k_⋆) is invertible.
We consider ff-EBM per Def. (<ref>) where the energy functions E^k have the form:
E^k(s^k, θ^k, x^k) := G^k(s^k) - s^k^⊤· x^k + U^k(s^k, θ^k).
We assume that U^k = 0 for k = 1 ⋯ N-1, s →∇ G(s) is invertible and we denote σ := ∇ G^-1. Then, the resulting model is a feedforward model described by the following recursive equations:
{[ s^0_⋆ = x; x^k_⋆ = F^k(s^k-1_⋆, ω^k), s^k_⋆ = σ(x^k_⋆) ∀ k = 1 ⋯ N-1; ô_⋆ := F^N(s^N-1_⋆, ω^N) ].
Let k ∈ [1, N-1]. By definition of s^k_⋆ and x^k_⋆:
∇_1 E^k(s^k_⋆, θ^k, x^k_⋆) = 0
⇔ ∇ G^k(s^k_⋆) - x^k_⋆ + ∇_1 U^k(s^k_⋆, θ^k) = 0
⇔ s^k_⋆ = σ(x^k_⋆ - ∇_1 U^k(s^k_⋆, θ^k) )
Therefore Eq. (<ref>) is immediately obtained from Eq. (<ref>) with U^k=0.
§.§ Proof of Theorem <ref>
The proof of Theorem <ref> is structured as follows:
* We directly solve the multilevel problem optimization defined inside Eq. (<ref>) using a Lagrangian-based approach (Lemma <ref>), yielding optimal Lagrangian multipliers, block states and loss gradients.
* We show that by properly nudging the blocks, EP implicitly estimates the previously derived Lagrangian multipliers (Lemma <ref>).
* We demonstrate Theorem <ref> by combining Lemma <ref> and Lemma <ref>.
* Finally, we highlight that when a ff-EBM is a feedforward net (Lemma <ref>), then the proposed algorithm reduces to BP (Corollary <ref>).
Assuming a ff-EBM (Def. <ref>), we denote s^1_⋆, x^1_⋆, ⋯, s^N-1_⋆, ô_⋆ the states computed during the forward pass as prescribed by Eq. (<ref>). Then, the gradients of the objective function 𝒞:=ℓ(ô(s^N-1_⋆), y) as defined in the multilevel optimization problem (Eq. (<ref>)), where it is assumed that ℓ is differentiable, read:
{[ d_ω^N𝒞 = ∂_2F^N(s_⋆^N-1, ω^N)^⊤·∂_1 ℓ(ô_⋆, y),; d_θ^k𝒞 = ∇^2_1, 2E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) ·λ_⋆^k ∀ k = 1 ⋯ N-1,; d_ω^k𝒞 = ∇^2_1, 4E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) ·λ_⋆^k ∀ k = 1 ⋯ N-1, ].
where λ^1_⋆, ⋯, λ^N-1_⋆ satisfy the following conditions:
{[ ∇_s^N-1ℓ(ô(s^N-1_⋆), y) + ∇^2_1 E^N-1(s^N-1_⋆, θ^N-1, s^N-2_⋆, ω^N-1) ·λ_⋆^N-1 = 0; ∀ k =N-2, ⋯, 1:; ∇^2_1, 3E^k+1(s^k+1_⋆, θ^k+1, s^k_⋆, ω^k+1) ·λ_⋆^k+1 + ∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)·λ_⋆^k = 0 ].
Denoting s := (s^1, ⋯, s^N-1)^⊤ the state variables of the energy-based blocks, λ := (λ^1, ⋯, λ^N-1)^⊤ the Lagrangian multipliers associated with each of these variables, W := {θ_1, ω_1, ⋯, θ_N-1, ω_N-1} the energy-based and feedforward parameters and ô(s^N-1) := F^N(s^N-1, ω^N-1) the logits, the Lagrangian of the multilevel optimization problem as defined in Eq. (<ref>) reads:
ℒ(s, λ, W) := ℓ(ô(s^N-1), y) + ∑_k=1^N-1λ^k^⊤·∇_1 E^k(s^k, θ^k, s^k-1, ω^k), s^0 := x
Writing the associated Karush-Kuhn-Tucker (KKT) conditions ∂_1, 2ℒ(s_⋆, λ_⋆, W) := 0 satisfied by optimal states and Lagrangian multipliers s_⋆ and λ_⋆, we get :
∇_1 E^k (s^k_⋆, θ^k, s^k-1_⋆, ω^k) = 0 ∀ k = 1, ⋯, N- 1
∇_s^N-1ℓ(ô(s^N-1_⋆), y) + ∇^2_1 E^N-1(s^N-1_⋆, θ^N-1, s^N-2_⋆, ω^N-1) ·λ_⋆^N-1 = 0
∇^2_1, 3E^k+1(s^k+1_⋆, θ^k+1, s^k_⋆, ω^k+1) ·λ_⋆^k+1 + ∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)·λ_⋆^k = 0 ∀ k =N-2, ⋯, 1
Eq. (<ref>) governs the bottom-up block-wise relaxation procedure (as depicted in Alg. <ref>), while Eq. (<ref>) and Eq. (<ref>) governs error propagation in the last block and previous blocks respectively. Given s_⋆ and λ_⋆ by Eq. (<ref>) – Eq. (<ref>), the total derivative of the loss function with respect to the model parameters read:
d_W ℓ(ô_⋆, y) = d_W (ℓ(ô_⋆, y) + ∑_k=1^N-1λ^k^⊤_⋆·∇_1 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)_=0 )
= d_W ℒ(s_⋆, λ_⋆, W)
= d_W s_⋆^⊤·∂_1 ℒ(s_⋆, λ_⋆, W)_=0 + d_W λ_⋆^⊤·∂_2 ℒ(s_⋆, λ_⋆, W)_=0 + ∂_3 ℒ(s_⋆, λ_⋆, W)
= ∂_3 ℒ(s_⋆, λ_⋆, W)
More precisely, applying Eq. (<ref>) to the feedforward and energy-based block parameters yields:
d_ω^Nℓ(ô_⋆, y) = ∂_2F^N(s_⋆^N-1, ω^N)^⊤·∇_1 ℓ(ô_⋆, y),
d_θ^kℓ(ô_⋆, y) = ∇^2_1, 2E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) ·λ_⋆^k ∀ k = 1 ⋯ N-1
d_ω^kℓ(ô_⋆, y) = ∇^2_1, 4E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) ·λ_⋆^k ∀ k = 1 ⋯ N-1
Under the same hypothesis as Lemma <ref>, we define the nudged state of block k, denoted as s^k_β, implicitly through ∇_1 ℱ^k(s^k_β, θ^k, x^k_⋆, δ s^k, β) = 0 with:
ℱ^k(s^k, θ^k, x^k_⋆, δ s^k, β) := E^k(s^k, θ^k, x^k_⋆) + β s^k^⊤·δ s^k.
Defining (δ s^k)_k=1 ⋯ N-1 recursively as:
δ s^N-1 := ∇_s^N-1ℓ(ô_⋆, y), δ s^k := d_β.(∇_3 E^k+1(s^k+1_β, θ^k+1, s^k_⋆, ω^k+1))|_β=0 ∀ k=1⋯ N-2,
then we have:
λ_⋆^k = d_β(s_β^k )|_β=0 ∀ k = 1 ⋯ N - 1,
where (λ_k)_k=1 ⋯ N-1 are the Lagrangian multipliers associated to the multilevel optimization problem defined in Eq. (<ref>).
We prove this result by backward induction on k.
Initialization (k = N-1). By definition, s^N-1_β satisfies :
β∇_s^N-1ℓ(ô_⋆, y) + ∇_1 E^N-1(s^N-1_β, θ^N-1, s^N-2_⋆, ω^N-1) = 0
Differentiating Eq. (<ref>) with respect to β and evaluating the resulting expression at β = 0, we obtain:
∇_s^N-1ℓ(ô_⋆, y) + ∇_1^2 E^N-1(s^N-1_⋆, θ^N-1, s^N-2_⋆, ω^N-1)· d_β s^N-1_β|_β=0 = 0
Subtracting out Eq. (<ref>) defining the Lagrangian multiplier λ^N-1_⋆ and Eq. (<ref>), we obtain:
∇_1^2 E^N-1(s^N-1_⋆, θ^N-1, s^N-2_⋆, ω^N-1)·( d_β s^N-1_β|_β=0 - λ_⋆^N-1) = 0
By invertibility of ∇_1^2 E^N-1(s^N-1_⋆, θ^N-1, s^N-2_⋆, ω^N-1), we therefore have that:
λ_⋆^N-1=d_β s^N-1_β|_β=0
Backward induction step (k + 1 → k). Let us assume that λ^k+1_⋆ = d_β s^k+1_β |_β=0. We want to prove that λ^k_⋆ = d_β s^k_β |_β=0. Again, s^k_β satisfies by definition:
βδ s^k + ∇_1 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k) = 0, δ s^k := d_β.(∇_3 E^k+1(s^k+1_β, θ^k+1, s^k_⋆, ω^k+1))|_β=0.
On the one hand, proceeding as for the initialization step, differentiating Eq. (<ref>) with respect to β and taking β=0 yields:
δ s^k + ∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)· d_β s^k_β |_β=0 = 0.
On the other hand, note that δ s^k rewrites :
δ s^k = d_β.(∇_3 E^k+1(s^k+1_β, θ^k+1, s^k_⋆, ω^k+1))|_β=0
=∇_1, 3^2 E^k+1(s^k+1_⋆, θ^k+1, s^k_⋆, ω^k+1) ·.d s^k+1_β|_β=0
= ∇_1, 3^2 E^k+1(s^k+1_⋆, θ^k+1, s^k_⋆, ω^k+1) ·λ^k+1_⋆,
where we used at the last step the recursion hypothesis. Therefore combining Eq. (<ref>) and Eq. (<ref>), we get:
∇_1, 3^2 E^k+1(s^k+1_⋆, θ^k+1, s^k_⋆, ω^k+1) ·λ^k+1_⋆ + ∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)· d_β s^k_β |_β=0 = 0.
Finally, we subtract out Eq. (<ref>) and Eq. (<ref>) to obtain:
∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)·( d_β s^k_β |_β=0 - λ_⋆^k)= 0.
We conclude again by invertibility of ∇_1^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) that λ^k_⋆ = d_β s^k_β |_β=0.
Assuming a model of the form Eq. (<ref>), we denote s^1_⋆, x^1_⋆, ⋯, s^N-1_⋆, ô_⋆ the states computed during the forward pass as prescribed by Alg. <ref>.
We define the nudged state of block k, denoted as s^k_β, implicitly through ∇_1 ℱ^k(s^k_β, θ^k, x^k_⋆, δ s^k, β) = 0 with:
ℱ^k(s^k, θ^k, x^k_⋆, δ s^k, β) := E^k(s^k, θ^k, x^k_⋆) + β s^k^⊤·δ s^k.
Denoting δ s^k and Δ x^k the error signals computed at the input of the feedforward block F^k and of the energy-based block E^k respectively, g_θ^k and g_ω^k the gradients of the loss function:
∀ k = 1, ⋯, N-1: g_θ^k := d_θ^k𝒞, ∀ k =1 ⋯ N: g_ω^k := d_ω^k𝒞,
then the following chain rule applies:
δ s^N-1 := ∇_s^N-1ℓ(ô_⋆, y), g_ω^N = ∂_2 F^N(s^N-1_⋆, ω^N )^⊤·∇_1ℓ(ô_⋆, y)
∀ k=1 ⋯ N-1:
{[ Δ x^k = .d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0, g_θ^k = .d_β(∇_2E^k (s^k_β, θ^k, x^k_⋆))|_β=0; δ s^k-1 = ∂_1 F^k(s^k-1_⋆, ω^k )^⊤·Δ x^k, g_ω^k = ∂_2 F^k(s^k-1_⋆, ω^k)^⊤·Δ x^k ].
Combining Lemma <ref> and Lemma <ref>, the following chain rule computes loss gradients correctly:
δ s^N-1 := ∇_s^N-1ℓ(ô_⋆, y), g_ω^N = ∂_2 F^N(s^N-1_⋆, ω^N )^⊤·∇_1ℓ(ô_⋆, y)
∀ k=1 ⋯ N-1:
{[ δ s^k-1 = d_β.(∇_3 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k))|_β=0, g_θ^k = ∇^2_1, 2E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) · d_β s^k_β |_β=0; g_ω^k = ∇_1, 4^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)· d_β s^k_β |_β=0 ].
Therefore to conclude the proof, we need to show that ∀ k = 1, ⋯, N-1:
d_β.(∇_3 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k))|_β=0 = ∂_1 F^k(s^k-1_⋆, ω^k )^⊤·.d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0
∇^2_1, 2E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) · d_β s^k_β |_β=0 =.d_β(∇_2E^k (s^k_β, θ^k, x^k_⋆))|_β=0
∇_1, 4^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)· d_β s^k_β |_β=0 = ∂_2 F^k(s^k-1_⋆, ω^k)^⊤·.d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0
Let k ∈ [1, N-1]. We prove Eq. (<ref>) as:
d_β.(∇_3 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k))|_β=0 =d_β.(∇_s^k-1 E^k(s^k_β, θ^k, F^k(s^k-1_⋆, ω^k)))|_β=0
= ∂_1 F^k(s^k-1_⋆, ω^k)^⊤·.d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0
Eq. (<ref>) can be obtained as:
∇^2_1, 2E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k) · d_β s^k_β |_β=0 = .d_β(∇_2 E^k(s^k_β, θ^k, s^k-1_⋆, ω^k) )|_β=0
= .d_β(∇_2E^k (s^k_β, θ^k, x^k_⋆))|_β=0
Finally and similarly, we have:
∇_1, 4^2 E^k(s^k_⋆, θ^k, s^k-1_⋆, ω^k)· d_β s^k_β |_β=0 = . d_β( ∇_4E^k(s^k_β, θ^k, s^k-1_⋆, ω^k)) |_β=0
= . d_β( ∇_ω^k E^k(s^k_β, θ^k, F^k(s^k-1_⋆, ω^k))) |_β=0
= . d_β(∂_2 F^k(s^k-1_⋆, ω^k )^⊤·∇_3 E^k(s^k_β, θ^k, x^k_⋆)) |_β=0
= ∂_2 F^k(s^k-1_⋆, ω^k )^⊤·. d_β(∇_3 E^k(s^k_β, θ^k, x^k_⋆)) |_β=0
Under the same hypothesis as Theorem <ref> and Lemma <ref>, then the following chain rule applies to compute error signals backward from the output layer:
{[ δ s^N-1 := ∇_s^N-1ℓ(ô_⋆, y), g_ω^N = ∇_ω^Nℓ(ô_⋆, y); Δ x^k = σ'(x^k_⋆) ⊙δ s^k; δ s^k-1 = ∂_1 F^k(s^k-1_⋆, ω^k )^⊤·Δ x^k, g_ω^k = ∂_2 F^k(s^k-1_⋆, ω^k)^⊤·Δ x^k ].
Let k ∈ [1, N-1]. As we can directly apply Theorem <ref> here, proving the result simply boils down to showing that:
Δ x^k = σ'(x^k_⋆) ⊙δ s^k
First, we notice that when E^k is of the form of Eq. (<ref>), then Δ x^k reads as:
Δ x^k = .d_β(∇_3E^k (s^k_β, θ^k, x^k_⋆))|_β=0 = -. d_β(s^k_β) |_β=0.
s^k_β satisfies, by definition and when U^k = 0:
σ^-1(s^k_β) - x^k_⋆ + βδ s^k = 0
⇔ s^k_β = σ( x^k_⋆ - βδ s^k)
Combining Eq. (<ref>) and Eq. (<ref>)
yields Eq. (<ref>), and therefore, along with Theorem <ref>, the chain-rule Eq. (<ref>).
We showcase in Alg. <ref> the resulting algorithm with finite β and implicit BP-EP chaining, with lines in blue highlighting differences with the general algorithm Alg. <ref>.
§.§ Explicit BP-EP chaining
We presented in Alg. <ref> a “pure” EP algorithm where the BP-EP gradient chaining is implicit. We show below, inside Alg. <ref>, an alternative implementation (equivalent in the limit β→ 0) where the use of BP through feedforward modules is explicit and which is the direct implementation of Theorem <ref>. We also show the resulting algorithm when the ff-EBM reduces to a feedforward net (Lemma <ref>) inside Alg. <ref>, highlighting in blue the statements that differ from the general case presented inside Alg. <ref>.
§.§ Model and algorithm details
Equilibrium computation. As mentioned in Section <ref>, the energy function of the k^ th EB block has the form:
E^k(s^k, θ^k, x^k) := G^k(s^k) - s^k^⊤· x^k + U^k(s^k, θ^k),
where x^k is the output of the preceding feedforward block. For a given choice of a continuously invertible activation function, G^k_σ is defined as:
G^k_σ(s^k) := ∑_i=1^dim(s^k)∫^s^k_iσ_i^-1(u_i) du_i, such that ∇ G^k_σ(s^k)_i = σ^-1_i(s^k_i) ∀ i = 1 ⋯dim(s^k).
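As a concrete illustration (restricting, for this example, to the interior of the clamped domain where the hard sigmoid σ(x) = min(max(x/2, 0), 1) of our experiments is invertible with σ_i^-1(s_i) = 2 s_i), the definition above gives G^k_σ(s^k) = ∑_i (s^k_i)^2 up to an additive constant, and indeed ∇ G^k_σ(s^k)_i = 2 s^k_i = σ_i^-1(s^k_i).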
To be more explicit and as we did previously, we re-write the augmented energy-function which encompasses both the k^ th EB block and the feedforward module that precedes it:
E^k(s^k, θ^k, s^k-1_⋆, ω^k) := E^k(s^k, θ^k, F^k(s^k-1_⋆, ω^k)).
We showed that when G is chosen such that ∇ G = σ^-1 for some activation function σ, then the steady state of the k^ th block reads:
s^k_⋆ := σ(x^k - ∇_1 U^k(s^k_⋆, θ^k) ),
which justifies the following fixed-point iteration scheme, when the block is influenced by some error signal δ s with nudging strength β:
s^k_±β, t + 1σ(x^k - ∇_1 U^k(s^k_±β, t, θ^k) ∓βδ s^k ).
The dynamics prescribed by Eq. <ref> are also used for the inference phase with β=0. To further refine Eq. (<ref>), let us re-write Eq. (<ref>) with a layer index ℓ where ℓ∈ [1, L_k] with L_k being the number of layers in the k^ th block, and replacing x^k by its explicit expression:
∀ℓ = 1 ⋯ L_k: s^k_ℓ, ±β, t + 1σ(F^k(s^k-1_⋆, ω^k) - ∇_s^k_ℓ U^k(s^k_±β, t, θ^k) ∓βδ s^k ).
As done in past EP works <cit.> and for notational convenience, we introduce the primitive function of the k^ th block as:
Φ^k(s^k, θ^k, s^k-1_⋆, ω^k) := s^k^⊤· F^k(s^k-1_⋆, ω^k) - U^k(s^k, θ^k)
such that Eq. (<ref>) re-writes:
∀ℓ = 1 ⋯ L_k: s^k_ℓ, ±β, t + 1σ(∇_s^k_ℓΦ(s^k_±β, t, θ^k, s^k-1_⋆, ω^k) ∓βδ s^k ).
Eq. (<ref>) depicts a synchronous scheme where all layers are simultaneously updated at each timestep. Another possible scheme, employed by <cit.>, instead prescribes to asynchronously update odd and even layers and was shown to speed up convergence in practice:
{[ ∀ ℓ even ∈{1, ⋯, L_k}: s^k_ℓ, ±β, t + 1/2σ(∇_s^k_ℓΦ(s^k_±β, t, θ^k, s^k-1_⋆, ω^k) ∓βδ s^k ),; ∀ ℓ odd ∈{1, ⋯, L_k}: s^k_ℓ, ±β, t + 1σ(∇_s^k_ℓΦ(s^k_±β, t + 1/2, θ^k, s^k-1_⋆, ω^k) ∓βδ s^k ). ].
We formally depict this procedure as the subroutine inside Alg. <ref>. In practice, we observe that it was more practical to use a fixed number of iterations rather than using a convergence criterion with a fixed threshold.
Nudging the last block. From looking at the procedure prescribed by Theorem <ref> and algorithms thereof (Alg. <ref>, Alg. <ref>), all the error signals used to nudge the EB blocks are stationary, including the top-most block where the loss error signal is fed in. Namely, the augmented energy function of the last block reads as:
ℱ^N-1(s^N-1, θ^N-1, x^N-1_⋆, β) := E^N-1(s^N-1, θ^N-1, x^N-1_⋆) + β s^N-1^⊤·∇_s^N-1ℓ(ô_⋆, y),
where ô_⋆ := F^N(s^N-1_⋆, ω^N) is constant. Up to a constant, Eq. (<ref>) uses the cost function linearized around s^N-1_⋆ instead of the cost function itself. This is, however, in contrast with most EP implementations where the nudging force acting upon the EB block is usually elastic, i.e. the nudging depends on the current state of the EB block. More precisely, instead of using Eq. (<ref>), we instead use:
ℱ^N-1(s^N-1, θ^N-1, x^N-1_⋆, β) := E^N-1(s^N-1, θ^N-1, x^N-1_⋆) + βℓ(F^N(s^N-1, ω^N), y),
This results in the following asynchronous fixed-point dynamics for the last block:
{[ ∀ ℓ even ∈{1, ⋯, L_k}: s^k_ℓ, ±β, t + 1/2σ(∇_s^k_ℓΦ(s^k_±β, t, θ^k, s^k-1_⋆, ω^k) ∓β∇_s^kℓ(s^k_±β, t, y) ),; ∀ ℓ odd ∈{1, ⋯, L_k}: s^k_ℓ, ±β, t + 1σ(∇_s^k_ℓΦ(s^k_±β, t + 1/2, θ^k, s^k-1_⋆, ω^k) ∓β∇_s^kℓ(s^k_±β, t, y) ). ].
The resulting subroutine, applying for the last block, is depicted inside Alg. <ref>.
Readout. <cit.> introduced the idea of the “readout” whereby the last linear layer computing the loss logits is not part of the EB free block dynamics but simply “reads out” the state of the penultimate block. In all our experiments we use such a readout in combination with the cross entropy loss function. Using our formalism, our readout is simply the last feedforward transformation used inside ℓ, namely F^N(·, ω^N).
Deep Hopfield Networks (DHNs). In our experiments, we used weight matrices of the form:
θ^k = [ 0 θ_1^k^⊤ 0 ; θ_1^k 0 θ_2^k^⊤ ; 0 θ_2^k ⋱ ⋱ ; ⋱ 0 θ_L^k^⊤; θ_L^k 0 ],
whereby each layer ℓ is only connected to its adjacent neighbors. Therefore, fully connected and convolutional DHNs with L layers have an energy function of the form:
U^k_ FC(s^k, θ^k) := -1/2s^k⊤·θ^k · s^k = -1/2∑_ℓ s^k^⊤_ℓ + 1·θ_ℓ^k · s^k_ℓ
U^k_ CONV(s^k, θ^k) := -1/2s^k ∙(θ^k ⋆ s^k) = -1/2∑_ℓ s^k_ℓ + 1∙( θ_ℓ^k ⋆ s^k_ℓ)
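The convolutional energy can be evaluated directly with standard convolution primitives, as in the sketch below (illustrative only: 3×3 kernels with padding 1 are assumed and pooling between layers is omitted for simplicity).

```python
import torch
import torch.nn.functional as F

def dhn_energy_conv(states, kernels):
    # U_CONV(s, theta) = -1/2 sum_l s_{l+1} . (theta_l * s_l), with * a 2-D
    # convolution and . the generalized dot product (sum over all entries).
    # states[l]: [batch, C_l, H, W]; kernels[l]: [C_{l+1}, C_l, 3, 3]
    u = 0.0
    for l, w in enumerate(kernels):
        u = u - 0.5 * (states[l + 1] * F.conv2d(states[l], w, padding=1)).sum()
    return u
```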
Detailed inference algorithm. With the aforementioned details in hand, we re-write the inference algorithm Alg. <ref> presented in the main text as a subroutine.
Detailed implicit EP-BP chaining algorithm.
We provide a detailed implementation of our algorithm presented in the main text (Alg. <ref>) in Alg. <ref>. As usually done for EP experiments, we always perform a “free phase” to initialize the block states ( subroutine, Alg. <ref>). Then, two nudged phases are applied to the last block and parameter gradients are subsequently computed, as done classically ( subroutine for the last block, Alg. <ref>), with an extra step to compute the error current to be applied to the penultimate block (δ s^N-2). Then, the same procedure is recursed backward through blocks (Alg. <ref>), until reaching the first block.
Details about the implicit differentiation baseline.
We describe our implementation of Implicit Differentiation (ID) inside Alg. <ref>. First, we relax all blocks sequentially to equilibrium following Alg. <ref>, without tracking gradients throughout this first phase and using T_ free fixed-point iteration steps per block. Then, initializing the block states with those computed at the previous step, we re-execute the same procedure (still with Alg. <ref>), this time tracking gradients and using T_ nudge fixed-point iteration steps for each block. Finally, we use automatic differentiation to backpropagate through the last T_ nudge steps for each block, namely backpropagating backward in time through equilibrium.
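The sketch below illustrates this baseline in the simplest setting of a single EB block followed by a readout (the helper names grad_u and readout are assumptions): the free phase is run without building a graph, T_ nudge tracked steps are then re-run from the equilibrium, and autograd backpropagates through them.

```python
import torch
import torch.nn.functional as F

def id_loss_single_block(grad_u, readout, x_k, y, t_free=30, t_nudge=15):
    # grad_u(s): gradient of U w.r.t. s, built from differentiable torch ops;
    # readout: maps the equilibrium state to logits (e.g. the softmax readout).
    sigma = lambda v: torch.clamp(v / 2.0, 0.0, 1.0)
    s = torch.zeros_like(x_k)
    with torch.no_grad():                  # free phase: no graph is recorded
        for _ in range(t_free):
            s = sigma(x_k - grad_u(s))
    for _ in range(t_nudge):               # tracked steps restarted from s_*
        s = sigma(x_k - grad_u(s))
    loss = F.cross_entropy(readout(s), y)
    loss.backward()                        # backprop through equilibrium
    return loss
```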
§.§ Details about the static gradient analysis
Single block case. Let us consider a single block with parameters θ and state s. Earlier, we showed that the (free) dynamics of a block read in the general form as:
s_t + 1∇_1 Φ(s_t, θ, x),
where we have ignored the activation function, as done by <cit.>. The computational graph for Eq. (<ref>) is therefore that of a recurrent neural network with a static input (x) where the parameters θ are shared across time steps, and one could view Eq. (<ref>) as:
s_t + 1∇_1 Φ(s_t, θ(t) = θ, x ).
At the end of the recurrent dynamics Eq. (<ref>), a loss is computed based on the output of the last block, namely:
𝒞 = ℓ(s_T, y),
where s_T is the state of the last block after T iterations of the recurrent dynamics Eq. (<ref>). In principle, the gradient of the loss 𝒞 with respect to θ reads as:
g_θ = d_θ(1)𝒞 + d_θ(2)𝒞 + ⋯ + d_θ(T)𝒞.
In other words, the gradient of the loss 𝒞 with respect to θ is the sum of all “sensitivities” of the loss to the same parameter taken at all timesteps where it is involved. One can define the truncated gradient at T-t, i.e. t steps backward in time from T as:
ĝ^ AD_θ(t) := ∑_k=0^t-1 d_θ(T - k)𝒞,
such that ĝ^ AD_θ(T) = g_θ. AD stands for “automatic differentiation” because this is how these gradients may be computed in practice – it can also be viewed as an instantiation of backpropagation through time (BPTT) with a static input. Then, writing the nudged dynamics prescribed by EP as:
s_t + 1^±β := ∇_1 Φ(s^±β_t, θ, x ) ∓β∇_1 ℓ(s_t^β, y),
we define the truncated EP gradient estimate at timestep t through the second phase as:
ĝ^ EP_θ(β, t) := 1/2β(∇_2 Φ(s^β_t, θ, x) - ∇_2 Φ(s^-β_t, θ, x)).
Furthermore, if we assume that s has already reached its steady state s_⋆ over the last K steps, i.e. at steps T-K, T-K+1, ..., T, so that AD here occurs through equilibrium, then the following equality holds <cit.>:
∀ t = 1 ⋯ K: lim_β→ 0ĝ^ EP_θ(β, t) = ĝ^ AD_θ(t)
This property was called the Gradient Descending Updates (GDU) property. In practice, the GDU property is a tremendously helpful sanity check to debug code to make sure EP gradients are correctly computed. Our coding work heavily relied on this property.
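A minimal version of this sanity check, comparing the final EP and ID estimates parameter by parameter through their cosine similarity, could look as follows (an illustration; the dictionaries of gradients are assumed to be collected beforehand).

```python
import torch

def gdu_check(g_ep, g_id, tol=0.99):
    # g_ep, g_id: dicts mapping parameter names to gradient tensors
    for name in g_ep:
        cos = torch.nn.functional.cosine_similarity(
            g_ep[name].flatten(), g_id[name].flatten(), dim=0)
        print(f"{name}: cos(EP, ID) = {cos.item():.4f}")
        assert cos > tol, f"GDU check failed for {name}"
```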
Multiple blocks. In ff-EBMs, the GDU property may immediately hold for the last block. From the penultimate block backwards, the sole difference compared to the previous paragraph is the nudging force applied to the model, for which Eq. (<ref>) may simply be changed into:
s_t + 1^±β∇_1 Φ(s^±β_t, θ, x ) ∓βδ s,
where δ s is a constant error signal. Therefore, so long as the error signal δ s computed by EP matches that computed by AD (which we proved inside Theorem <ref>), the GDU property should still hold.
§.§ Experimental Details
§.§.§ Datasets
Simulations were run on the CIFAR-10, CIFAR-100 and ImageNet32 datasets, all consisting of color images of size 32 × 32 pixels. CIFAR-10 <cit.> includes 60,000 color images of objects and animals. Images are split into 10 classes, with 6,000 images per class. Training data and test data include 50,000 images and 10,000 images respectively. CIFAR-100 <cit.> likewise comprises 60,000 images and features a diverse set of objects and animals split into 100 distinct classes. Each class contains 600 images. Like CIFAR-10, the dataset is divided into a training set with 50,000 images and a test set containing the remaining 10,000 images. The ImageNet32 dataset <cit.> is a downsampled version of the original ImageNet dataset <cit.> containing 1,000 classes, with 1,281,167 training images, 50,000 validation images and 100,000 test images.
§.§.§ Data Pre-processing
All data were normalized according to the statistics shown in <ref> and augmented with 50% random horizontal flips. Images were randomly cropped and padded with the last value along the edge of the image.
§.§.§ Simulation Details
Weight Initialization
Equilibrium propagation, similar to other machine learning paradigms reliant on fixed-point iteration <cit.>, is highly sensitive to initialization statistics <cit.>, hence conventionally difficult to tune, and requires many iterations for the three relaxation phases. Initializing weights as Gaussian Orthogonal Ensembles (GOE) ensures better stability (reduced variance) and, combined with other stabilizing measures discussed below, empirically yields faster convergence.
According to GOE, weights are initialized as:
W_ij∼{[ 𝒩(0, V/N), if i ≠ j; 𝒩(0, 2V/N), if i = j ].
where 𝒩(μ, σ^2) denotes a Gaussian (normal) distribution with mean μ and variance σ^2.
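A possible realization of this initialization for a square weight matrix is sketched below; the optional symmetrization is our assumption (GOE matrices are symmetric), and rectangular inter-layer weights would only use the off-diagonal statistics.

```python
import torch

def goe_init(n, variance=1.0, symmetric=True):
    # off-diagonal entries ~ N(0, V/n), diagonal entries ~ N(0, 2V/n)
    a = torch.randn(n, n) * (variance / n) ** 0.5
    w = (a + a.t()) / 2 ** 0.5 if symmetric else a
    w.diagonal().copy_(torch.randn(n) * (2 * variance / n) ** 0.5)
    return w
```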
State Initialization
All states are initialized as zero matrices.
Activation Functions
An important detail for faithful reproduction of these experiments is the choice and placement of activation functions applied during the iterative root-finding procedure. In the literature, clamping is conventionally applied at each layer, with the exception of the final layer, where it is sometimes included e.g. <cit.>, and sometimes omitted <cit.>. For these experiments we use both the standard hard activation employed in <cit.> and <cit.>, and the more conservative one given in <cit.>. Details are given in <ref>.
In practice, we find that the laborieux activation, in conjunction with GOE and the omission of clamping at the output of each block, significantly enhances gradient stability and speeds up convergence in the multiple-block setting. In the interest of multi-scale uniformity and consistency with previous literature <cit.> <cit.>, however, we apply clamping using the ernoult activation on all layers in our 6-layer architecture.
For the scaling experiments, we apply laborieux activation at every layer except the output of each block. For the 12-layer splitting experiment, we do the same, omitting clamping from the output of the final layer of each block in the block-size=4 and block-size=3 experiments. However, in the block-size=2 case we clamp the output of the second and fourth blocks to preserve dynamics of the block-size=4 split. Such consistency is not possible for the block-size=3 experiment, constituting a possible discrepancy in the scaling dynamics.
Cross-entropy Loss and Softmax Readout
Following <cit.>, all models were implemented such that the output y is removed from the system (i.e. not included in the relaxation dynamics) and is instead computed from a weight matrix W_out of size dim(y) ×dim(s), where s is the state of the final layer. For each time-step t, ŷ_t = softmax(W_outs_t).
The cross-entropy cost function associated with the softmax readout is then:
l(s, y, W_out) = -∑_c=1^C y_c log(softmax_c(W_out· s)).
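A sketch of this readout and its loss in PyTorch is given below (names are illustrative; the softmax is folded into the cross-entropy loss as usual).

```python
import torch
import torch.nn.functional as F

class SoftmaxReadout(torch.nn.Module):
    # The output is not part of the relaxation dynamics: it only reads out the
    # state s of the final layer through W_out of size dim(y) x dim(s).
    def __init__(self, dim_s, n_classes):
        super().__init__()
        self.w_out = torch.nn.Linear(dim_s, n_classes, bias=False)

    def forward(self, s):
        return self.w_out(s)           # logits; y_hat = softmax(W_out . s)

def readout_loss(readout, s, y):
    # cross-entropy of softmax(W_out . s), as in the equation above
    return F.cross_entropy(readout(s), y)
```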
It is important to note that by convention we refer to architectures throughout this text to the exclusion of the softmax readout, which is technically an additional layer, though not involved in the relaxation process.
Architecture
Details of the architectures used in the experiments reported in <ref> and <ref> are given in table <ref>. All convolutional layers used in the experiments have kernel size 3, stride 1 and padding 1. Max-pooling was applied with a window of 2 × 2 and a stride of 2. For the 6-layer model used in <ref>, batch norm was applied after the first-layer convolution and pooling operation. All other models in both experiments use batch normalization on the first layer of each block after convolution and pooling (where applied). These details exclude the linear softmax readout, whose output size equals the number of classes.
Hyperparameters
Hyperparameters and implementation details for <ref> and <ref> are given in table <ref>. Note that all architectural details for the 12-layer models are identical across splitting and scaling experiments. Additionally, identical hyperparameters were used for the CIFAR-100 and ImageNet experiments of <ref>. Unlike previous literature, the use of GOE initialization eliminates the need for separate layer-wise learning rates and initialization parameters. One noteworthy detail is that only 100 epochs were used for the larger model of <ref>, compared with 200 epochs for the smaller 12-layer model, due to the prohibitively long run-time of training the larger model. Notably, performance still improves significantly with this decreased overall runtime.
Other details
All experiments were run using the Adam optimizer <cit.> and a cosine annealing scheduler <cit.> with minimum learning rates indicated in <ref> and maximum T equal to the number of epochs (i.e. no warm restarts). Code was implemented in PyTorch 2.0 and all simulations were run on A100 SXM4 40GB GPUs.
§ NEURIPS PAPER CHECKLIST
* Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer:
* Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer:
Justification: we have a dedicated paragraph in the “Discussion” section of the paper which explicitly mentions limitations and future work.
* Theory Assumptions and Proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer:
Justification: All proofs are included in the appendix.
* Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer:
Justification: detailed information to reproduce experiment is provided in the appendix.
* Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer:
Justification: We only employed public datasets and will release our code at the end of the reviewing process. In the meanwhile, we provided very detailed pseudo-algorithms in the appendix.
* Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer:
Justification: all these details are provided in the appendix.
* Experiment Statistical Significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer:
Justification: we perform each of our training simulations on 3 different seeds and reported mean and standard deviation of the resulting performance.
* Experiments Compute Resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer:
Justification: this information is also inside our appendix.
* Code Of Ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics <https://neurips.cc/public/EthicsGuidelines>?
Answer:
* Broader Impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer:
Justification: we wrote a dedicated paragraph inside our “Discussion” section.
* Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer:
Justification: Our work poses no such risk at present as it only provides proof-of-concepts for systems which do not yet exist.
* Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer:
Justification: this work does not use existing assets.
* New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer:
Justification: this paper does not release new assets.
* Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer:
Justification: our paper does not involve crowdsourcing nor research with human subjects.
* Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer:
Justification: our paper does not involve crowdsourcing nor research with human subjects.
|